
McKinsey: Hidden flaws in strategy



Can insights from behavioral economics explain why good executives back bad strategies?

MAY 2003 • Charles Roxburgh

After nearly 40 years, the theory of business strategy is well developed and widely disseminated. Pioneering work by academics such as Michael E. Porter and Henry Mintzberg has established a rich literature on good strategy. Most senior executives have been trained in its principles, and large corporations have their own skilled strategy departments.



Yet the business world remains littered with examples of bad strategies. Why? What makes chief executives back them when so much know-how is available? Flawed analysis, excessive ambition, greed, and other corporate vices are possible causes, but this article doesn’t attempt to explore all of them. Rather, it looks at one contributing factor that affects every strategist: the human brain.


The brain is a wondrous organ. As scientists uncover more of its inner workings through brain-mapping techniques,1 our understanding of its astonishing abilities increases. But the brain isn’t the rational calculating machine we sometimes imagine. Over the millennia of its evolution, it has developed shortcuts, simplifications, biases, and basic bad habits. Some of them may have helped early humans survive on the savannas of Africa ("if it looks like a wildebeest and everyone else is chasing it, it must be lunch"), but they create problems for us today. Equally, some of the brain’s flaws may result from education and socialization rather than nature. But whatever the root cause, the brain can be a deceptive guide for rational decision making.

The basic assumption of modern economics—rationality—does not stack up against the evidence

The implications of the brain’s inadequacies have been rigorously studied by social scientists and particularly by behavioral economists, who have found that the underlying assumption behind modern economics—human beings as purely rational economic decision makers—doesn’t stack up against the evidence. As most of the theory underpinning business strategy is derived from the rational world of microeconomics, all strategists should be interested in behavioral economics.

Insights from behavioral economics have been used to explain bad decision making in the business world,2 and bad investment decision making in particular. Some private equity firms have successfully remodeled their investment processes to counteract the biases predicted by behavioral economics. Likewise, behavioral economics has been applied to personal finance,3 thereby providing an easier route to making money than any hot stock tip. However, the field hasn’t permeated the day-to-day world of strategy formulation.

This article aims to help rectify that omission by highlighting eight4 insights from behavioral economics that best explain some examples of bad strategy. Each insight illustrates a common flaw that can draw us to the wrong conclusions and increase the risk of betting on bad strategy. All the examples come from a field with which I am familiar—European financial services—but equally good ones could be culled from any industry.

Several examples come from the dot-com era, a particularly rich period for students of bad strategy. But don’t make the mistake of thinking that this was an era of unrepeatable strategic madness. Behavioral economics tells us that the mistakes made in the late 1990s were exactly the sorts of errors our brains are programmed to make—and will probably make again.



Flaw 1: Overconfidence

Our brains are programmed to make us feel overconfident. This can be a good thing; for instance, it requires great confidence to launch a new business. Only a few start-ups will become highly successful. The world would be duller and poorer if our brains didn’t inspire great confidence in our own abilities. But there is a downside when it comes to formulating and judging strategy.



The brain is particularly overconfident of its ability to make accurate estimates. Behavioral economists often illustrate this point with simple quizzes: guess the weight of a fully laden jumbo jet or the length of the River Nile, say. Participants are asked to offer not a precise figure but rather a range within which they are 90 percent confident the right answer lies—for example, that the Nile is between 2,000 and 10,000 miles long. Time and again, participants walk into the same trap: rather than playing safe with a wide range, they give a narrow one and miss the right answer. (I scored 0 out of 15 on such a test, which was one of the triggers of my interest in this field!) Most of us are unwilling and, in fact, unable to reveal our ignorance by specifying a very wide range. Unlike John Maynard Keynes, most of us prefer being precisely wrong rather than vaguely right.



We also tend to be overconfident of our own abilities.5 This is a particular problem for strategies based on assessments of core capabilities. Almost all financial institutions, for instance, believe their brands to be of "above-average" value.



Related to overconfidence is the problem of overoptimism. Other than professional pessimists such as financial regulators, we all tend to be optimistic, and our forecasts tend toward the rosier end of the spectrum. The twin problems of overconfidence and overoptimism can have dangerous consequences when it comes to developing strategies, as most of them are based on estimates of what may happen—too often on unrealistically precise and overoptimistic estimates of uncertainties.



One leading investment bank sensibly tested its strategy against a pessimistic scenario—the market conditions of 1994, when a downturn lasted about nine months—and built in some extra downturn. But this wasn’t enough. The 1994 scenario looks rosy compared with current conditions, and the bank, along with its peers, is struggling to make dramatic cuts to its cost base. Other sectors, such as banking services for the affluent and on-line brokerages, are grappling with the same problem.



There are ways to counter the brain’s overconfidence:



1. Test strategies under a much wider range of scenarios. But don’t give managers a choice of three, as they are likely to play safe and pick the central one. For this reason, the pioneers of scenario planning at Royal Dutch/Shell always insisted on a final choice of two or four options.6

2. Add 20 to 25 percent more downside to the most pessimistic scenario.7 Given our optimism, the risk of getting pessimistic scenarios wrong is greater than that of getting the upside wrong. The Lloyd’s of London insurance market—which has learned these lessons the hard, expensive way—makes a point of testing the market’s solvency under a series of extreme disasters, such as two 747 aircraft colliding over central London. Testing the resilience of Lloyd’s to these conditions helped it build its reserves and reinsurance to cope with the September 11 disaster.

3. Build more flexibility and options into your strategy to allow the company to scale up or retrench as uncertainties are resolved. Be skeptical of strategies premised on certainty.
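The "add 20 to 25 percent more downside" rule can be made concrete with a few lines of arithmetic. The figures below are hypothetical, not drawn from the article; the sketch simply shows how the rule widens a pessimistic scenario before a strategy is tested against it.

```python
# Minimal sketch of widening a pessimistic scenario before stress-testing
# a strategy. All revenue figures are hypothetical.

def widen_downside(base, worst, extra=0.25):
    """Push the worst case further below the base case by `extra` of the gap."""
    gap = base - worst          # how far the pessimistic case already falls
    return worst - extra * gap  # add 20-25 percent more downside

base_revenue = 100.0    # central scenario (hypothetical)
worst_revenue = 70.0    # original pessimistic scenario (hypothetical)

stressed = widen_downside(base_revenue, worst_revenue, extra=0.25)
print(stressed)  # 62.5 — the scenario the strategy should actually be tested against
```

The point of the adjustment is asymmetry: because forecasters are systematically optimistic, the pessimistic scenario is the one most likely to be understated.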

Flaw 2: Mental accounting

Richard Thaler, a pioneer of behavioral economics, coined the term "mental accounting," defined as "the inclination to categorize and treat money differently depending on where it comes from, where it is kept, and how it is spent."8 Gamblers who lose their winnings, for example, typically feel that they haven’t really lost anything, though they would have been richer had they stopped while they were ahead.



Mental accounting pervades the boardrooms of even the most conservative and otherwise rational corporations. Some examples of this flaw include the following:



• being less concerned with value for money on expenses booked against a restructuring charge than on those taken through the P&L

• imposing cost caps on a core business while spending freely on a start-up

• creating new categories of spending, such as "revenue-investment spend" or "strategic investment"

All are examples of spending that tends to be less scrutinized because of the way it is categorized, but all represent real costs.



These delusions can have serious strategic implications. Take cost caps. In some UK financial institutions during the dot-com era, core retail businesses faced stringent constraints on their ability to invest, however sound the proposal, while start-up Internet businesses spent with abandon. These banks have now written off much of their loss from dot-com investment and must reverse their underinvestment in core businesses.



Make sure that all investments are judged on consistent criteria, and be wary of spending that has been reclassified to make it acceptable

Avoiding mental accounting traps should be easier if you adhere to a basic rule: that every pound (or dollar or euro) is worth exactly that, whatever the category. In this way, you will make sure that all investments are judged on consistent criteria and be wary of spending that has been reclassified. Be particularly skeptical of any investment labeled "strategic."



Flaw 3: The status quo bias

In one classic experiment,9 students were asked how they would invest a hypothetical inheritance. Some received several million dollars in low-risk, low-return bonds and typically chose to leave most of the money alone. The rest received higher-risk securities—and also left most of the money alone. What determined the students’ allocation in this experiment was the initial allocation, not their risk preference. People would rather leave things as they are. One explanation for the status quo bias is aversion to loss—people are more concerned about the risk of loss than they are excited by the prospect of gain. The students’ fear of switching into securities that might end up losing value prevented them from making the rational choice: rebalancing their portfolios.



A similar bias, the endowment effect, gives people a strong desire to hang on to what they own; the very fact of owning something makes it more valuable to the owner. Richard Thaler tested this effect with coffee mugs imprinted with the Cornell University logo. Students given one of them wouldn’t part with it for less than $5.25, on average, but students without a mug wouldn’t pay more than $2.75 to acquire it. The gap implies an incremental value of $2.50 from owning the mug.



The status quo bias, the aversion to loss, and the endowment effect contribute to poor strategy decisions in several ways. First, they make CEOs reluctant to sell businesses. McKinsey research shows that divestments are a major potential source of value creation but a largely neglected one.10 CEOs are prone to ask, "What if we sell for too little—how stupid will we look when this turns out to be a great buy for the acquirer?" Yet successful turnarounds, such as the one at Bankers Trust in the 1980s, often require a determined break with the status quo and an extensive reshaping of the portfolio—in that case, selling all of the bank’s New York retail branches.



These phenomena also make it hard for companies to shift their asset allocations. Before the recent market downturn, the UK insurer Prudential decided that equities were overvalued and made the bold decision to rebalance its fund toward bonds. Many other UK life insurers, unwilling to break with the status quo, stuck with their high equity weightings and have suffered more severe reductions in their solvency ratios.



This isn’t to say that the status quo is always wrong. Many investment advisers would argue that the best long-term strategy is to buy and hold equities (and, behavioral economists would add, not to check their value for many years, to avoid feeling bad when prices fall). In financial services, too, caution and conservatism can be strategic assets. The challenge for strategists is to distinguish between a status quo option that is genuinely the right course and one that feels deceptively safe because of an innate bias.



To make this distinction, strategists should take two approaches:



1. Adopt a radical view of all portfolio decisions. View all businesses as "up for sale." Is the company the natural parent, capable of extracting the most value from a subsidiary? View divestment not as a failure but as a healthy renewal of the corporate portfolio.

2. Subject status quo options to a risk analysis as rigorous as change options receive. Most strategists are good at identifying the risks of new strategies but less good at seeing the risks of failing to change.

Flaw 4: Anchoring

One of the more peculiar wiring flaws in the brain is called anchoring. Present the brain with a number and then ask it to make an estimate of something completely unrelated, and it will anchor its estimate on that first number. The classic illustration is the Genghis Khan date test. Ask a group of people to write down the last three digits of their phone numbers, and then ask them to estimate the date of Genghis Khan’s death. Time and again, the results show a correlation between the two numbers; people assume that he lived in the first millennium, when in fact he lived from 1162 to 1227.



Anchoring can be a powerful tool for strategists. In negotiations, naming a high sale price for a business can help secure an attractive outcome for the seller, as the buyer’s offer will be anchored around that figure. Anchoring works well in advertising too. Most retail-fund managers advertise their funds on the basis of past performance. Repeated studies have failed to show any statistical correlation between good past performance and future performance. By citing the past-performance record, though, the manager anchors the notion of future top-quartile performance to it in the consumer’s mind.



Anchoring can be dangerous—particularly when it is a question of becoming anchored to the past

However, anchoring—particularly becoming anchored to the past—can be dangerous. Most of us have long believed that equities offer high real returns over the long term, an idea anchored in the experience of the past two decades. But in the 1960s and 1970s, UK equities achieved real annual returns of only 3.3 and 0.4 percent, respectively. Indeed, they achieved double-digit real annual returns during only 4 of the past 13 decades. Our expectations about equity returns have been seriously distorted by recent experience.
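The distortion is easy to quantify by compounding the real annual returns cited above over a decade. The double-digit rate used for comparison is illustrative, not a figure from the article.

```python
# Compound the article's real annual returns for UK equities over a decade
# to show how different the 1960s and 1970s were from recent experience.
# The 10 percent "recent decade" rate is an illustrative assumption.

def real_growth(annual_return, years=10):
    """Cumulative real growth factor over `years` at a constant annual rate."""
    return (1 + annual_return) ** years

print(round(real_growth(0.033), 2))  # 1960s at 3.3%: about 1.38x in real terms
print(round(real_growth(0.004), 2))  # 1970s at 0.4%: about 1.04x, essentially flat
print(round(real_growth(0.10), 2))   # a double-digit decade: about 2.59x
```

An investor anchored on the last of those decades will badly misjudge the plausible range of outcomes for the next one.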



In the insurance industry, changes in interest rates have caused major problems due to anchoring. The United Kingdom’s Equitable Life Assurance Society assumed that high nominal interest rates would prevail for decades and sold guaranteed annuities accordingly. That assumption had severe financial consequences for the company and its policyholders. The banking industry may now be entering a period of much higher credit losses than it experienced during the past decade. Some banks may be caught out by the speed of change.



Besides remaining unswayed by the anchoring tactics of others, strategists should take a long historical perspective. Put trends in the context of the past 20 or 30 years, not the past 2 or 3; for certain economic indicators, such as equity returns or interest rates, use a very long time series of 50 or 75 years. Some commentators who spotted the dot-com bubble early did so by drawing comparisons with previous technology bubbles—for example, the uncannily close parallels between radio stocks in the 1920s and Internet stocks in the 1990s.



Flaw 5: The sunk-cost effect

A familiar problem with investments is called the sunk-cost effect, otherwise known as "throwing good money after bad." When large projects overrun their schedules and budgets, the original economic case no longer holds, but companies still keep investing to complete them.



Financial institutions often face this dilemma over large-scale IT projects. There are numerous examples, most of which remain private. One of the more public cases was the London Stock Exchange’s automated-settlement system, Taurus. It took the intervention of the Bank of England to force a cancellation, write off the expenses, and take control of building a replacement.



Executives making strategic-investment decisions can also fall into the sunk-cost trap. Certain European banks spent fortunes building up large equities businesses to compete with the global investment-banking firms. It then proved extraordinarily hard for some of these banks to face up to the strategic reality that they had no prospect of ever competing successfully against the likes of Goldman Sachs, Merrill Lynch, and Morgan Stanley in the equities business. Some banks in the United Kingdom took the agonizing decision to write off their investments; other European institutions are still caught in the trap.



Why is it so hard to avoid? One explanation is based on loss aversion: we would rather spend an additional $10 million completing an uneconomic $110 million project than write off $100 million. Another explanation relies on anchoring: once the brain has been anchored at $100 million, an additional $10 million doesn’t seem so bad.
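The rational escape from the trap is to compare only incremental costs with incremental value. A minimal sketch using the article's $110 million example, with a hypothetical completion value added to make the comparison concrete:

```python
# Sunk-cost sketch: only incremental costs and revenues should matter.
# Figures follow the article's hypothetical uneconomic $110m project;
# the value of the finished project is an assumed illustrative number.

sunk = 100.0         # already spent; unrecoverable whatever we decide
to_complete = 10.0   # incremental cost of finishing
value_if_done = 8.0  # hypothetical value the finished project would deliver

# Rational comparison: incremental value versus incremental cost only.
finish_is_rational = value_if_done > to_complete

# The anchored (wrong) framing: "we've spent 100, what's another 10?"
anchored_view = (sunk + to_complete) / sunk  # 1.1 — feels like a small add-on

print(finish_is_rational)  # False: spending 10 to recover 8 destroys value
```

The $100 million is identical in both branches of the decision, so it should carry zero weight; the anchoring framing smuggles it back in.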



What should strategists do to avoid the trap?



1. Apply the full rigor of investment analysis to incremental investments, looking only at incremental prospective costs and revenues. This is the textbook response to the sunk-cost fallacy, and it is right.

2. Be prepared to kill strategic experiments early. In an increasingly uncertain world, companies will often pursue several strategic options.11 Successfully managing a portfolio of them entails jettisoning the losers. The more quickly you get out, the lower the sunk costs and the easier the exit.

3. Use "gated funding" for strategic investments, much as pharmaceutical companies do for drug development: release follow-on funding only once strategic experiments have met previously agreed targets.
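The gated-funding rule in point 3 can be sketched as a simple decision loop; the gates, metrics, and tranche sizes below are hypothetical.

```python
# Gated-funding sketch: release each tranche only if the experiment met
# its previously agreed target, and kill it at the first missed gate.
# Gates, metrics, and tranche sizes are hypothetical illustrations.

def funding_released(results, gates, tranches):
    """Total funding released as gates are passed; stops at the first miss."""
    released = 0.0
    for result, target, tranche in zip(results, gates, tranches):
        if result < target:    # target missed: the experiment ends here
            break
        released += tranche
    return released

gates    = [1_000, 5_000, 20_000]  # e.g. customer sign-ups required at each review
tranches = [2.0, 5.0, 10.0]        # follow-on funding per stage, $ millions

# Gate 1 passed, gate 2 missed: only the first tranche is ever released.
print(funding_released([1_200, 4_000, 25_000], gates, tranches))  # 2.0
```

Pre-agreeing the gates is what defeats the sunk-cost effect: the kill decision is made before anyone is emotionally or financially anchored to the project.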

Flaw 6: The herding instinct

The banking industry, like many others, shows a strong herding instinct. It tends to lend too much money to the same kinds of borrowers at the same time—to UK property developers in the 1970s, less-developed countries in the 1980s, and technology, media, and telecommunications companies more recently. And banks tend to pursue the same strategies, be it creating Internet banks with strange-sounding names during the dot-com boom or building integrated investment banks at the time of the "big bang," when the London stock market was liberalized.



This desire to conform to the behavior and opinions of others is a fundamental human trait and an accepted principle of psychology.12 Warren Buffett put his finger on this flaw when he wrote, "Failing conventionally is the route to go; as a group, lemmings may have a rotten image, but no individual lemming has ever received bad press."13 For most CEOs, only one thing is worse than making a huge strategic mistake: being the only person in the industry to make it.



We all felt the tug of the herd during the dot-com era. It was lonely being a Luddite, arguing the case against setting up a stand-alone Internet bank or an on-line brokerage. At times of mass enthusiasm for a strategic trend, pressure to follow the herd rather than rely on one’s own information and analysis is almost irresistible. Yet the best strategies break away from the trend. Some actions may be necessary to match the competition—imagine a bank without ATMs or a good on-line banking offer. But these are not unique sources of strategic advantage, and finding such sources is what strategy is all about. "Me-too" strategies are often simply bad ones.14 Seeking out the new and the unusual should therefore be the strategist’s aim. Rather than copying what your most established competitors are doing, look to the periphery15 for innovative ideas, and look outside your own industry.



Initially, an innovative strategy might draw skepticism from industry experts. They may be right, but as long as you kill a failing strategy early, your losses will be limited; when the experts are wrong, the rewards will be great.



Flaw 7: Misestimating future hedonic states

What does it mean, in plain English, to misestimate future hedonic states? Simply that people are bad at estimating how much pleasure or pain they will feel if their circumstances change dramatically. Social scientists have shown that when people undergo major changes in circumstances, their lives typically are neither as bad nor as good as they had expected—another case of how bad we are at estimating. People adjust surprisingly quickly, and their level of pleasure (hedonic state) ends up, broadly, where it was before.



This research strikes a chord with anyone who has studied compensation trends in the investment-banking industry. Ever-higher compensation during the 1990s led only to ever-higher expectations—not to a marked change in the general level of happiness on the Street. According to Tom Wolfe’s Sherman McCoy, in Bonfire of the Vanities, it was hard to make ends meet in New York on $1 million a year in 1987. Back then, that was shocking hubris from a (fictional) top bond salesman. By 2000, even adjusted for inflation, it would have seemed a perfectly reasonable lament from a relatively junior managing director.



Another illustration of our poor ability to judge future hedonic states in the business world is the way we deal with a loss of independence. More often than not, takeovers are seen as the corporate equivalent of death, to be avoided at all costs. Yet sometimes they are the right move. Two once great British banks—Midland and National Westminster—both struggled to maintain their independence. Midland gave in to HSBC’s advances in 1992; NatWest was taken over by the Royal Bank of Scotland in 2000. At both institutions, the consequences were positive for customers, shareholders, and most employees on any test of the "greatest good of the greatest number." The employees ended up being part of better-managed, stronger, more respected institutions. Morale at NatWest has gone up. Midland has achieved what was, for an independent bank, an unrealistic goal: to become part of a great global bank.



Often, top management is blamed for resisting any loss of independence. Certainly part of the problem is the desire of managements and boards to hang on to the status quo. That said, frontline staff members often resist a takeover or merger however much they are frustrated with the existing top management. Some deeper psychological factor appears to be at work: we seem very bad at estimating how we would feel if our circumstances changed dramatically, whether the change is to corporate control or to our personal health or wealth.



How can the strategist avoid this pitfall?



1. In takeovers, adopt a dispassionate and unemotional view. Easier said than done—especially for a management team with years of committed service to an institution and a personal stake in the status quo. Nonexecutives, however, should find it easier to maintain a detached view.

2. Keep things in perspective. Don’t overreact to apparently deadly strategic threats or get too excited by good news. During the high and low points of the crisis at Lloyd’s of London in the mid-1990s, the chairman used to quote Field Marshal Slim—"In battle nothing is ever as good or as bad as the first reports of excited men would have it." This is a good guide for every strategist trying to navigate a crisis, with its inevitable swings in emotion and morale.

Flaw 8: False consensus

People tend to overestimate the extent to which others share their views, beliefs, and experiences—the false-consensus effect. Research shows many causes, including these:



• confirmation bias, the tendency to seek out opinions and facts that support our own beliefs and hypotheses

• selective recall, the habit of remembering only facts and experiences that reinforce our assumptions

• biased evaluation, the quick acceptance of evidence that supports our hypotheses, while contradictory evidence is subjected to rigorous evaluation and almost certain rejection; we often, for example, impute hostile motives to critics or question their competence

• groupthink,16 the pressure to agree with others in team-based cultures

Consider how many times you may have heard a CEO say something like, "the executive team is 100 percent behind the new strategy" (groupthink); "the chairman and the board are fully supportive and they all agree with our strategy" (false consensus); "I’ve heard only good things from dealers and customers about our new product range" (selective recall); "OK, so some analysts are still negative, but those ’teenage scribblers’ don’t understand our business—their latest reports were superficial and full of errors" (biased evaluation). This hypothetical CEO might be right but more likely is heading for trouble. The role of any strategic adviser should be to provide a counterbalance to this tendency toward false consensus. CEOs should welcome the challenge.



False consensus often leads strategists to overlook important threats to their companies and to persist with doomed strategies

False consensus, which ranks among the brain’s most pernicious flaws, can lead strategists to miss important threats to their companies and to persist with doomed strategies. But it can be extremely difficult to uncover—especially if those proposing a strategy are strong role models. We are easily influenced by dominant individuals and seek to emulate them. This can be a force for good if the role models are positive. But negative ones can prove an irresistible source of strategic error.



Many of the worst financial-services strategies can be attributed to overdominant individuals. The failure of several Lloyd’s syndicates in the 1980s and 1990s was due to powerful underwriters who controlled their own agencies. And overdominant individuals are associated with several more recent insurance failures. In banking, one European institution struggled to impose effective risk disciplines because its seemingly most successful employees were, in the eyes of junior staff, cavalier in their approach to compliance. Their behavior set the tone and created a culture of noncompliance.



The dangers of false consensus can be minimized in several ways:



1. Create a culture of challenge. As part of the strategic debate, management teams should value open and constructive criticism. Criticizing a fellow director’s strategy should be seen as a helpful, not a hostile, act. CEOs and strategic advisers should understand criticisms of their strategies, seek contrary views on industry trends, and, if in doubt, take steps to assure themselves that opposing views have been well researched. They shouldn’t automatically ascribe to critics bad intentions or a lack of understanding.

2. Ensure that strong checks and balances control the dominant role models. A CEO should be particularly wary of dominant individuals who dismiss challenges to their own strategic proposals; the CEO should insist that these proposals undergo an independent review by respected experts. The board should be equally wary of a domineering CEO.

3. Don’t "lead the witness." Instead of asking for a validation of your strategy, ask for a detailed refutation. When setting up hypotheses at the start of a strategic analysis, impose contrarian hypotheses or require the team to set up equal and opposite hypotheses for each key analysis. Establish a "challenger team" to identify the flaws in the strategy being proposed by the strategy team.

An awareness of the brain’s flaws can help strategists steer around them. All strategists should understand the insights of behavioral economics just as much as they understand those of other fields of the "dismal science." Such an understanding won’t put an end to bad strategy; greed, arrogance, and sloppy analysis will continue to provide plenty of textbook cases of it. Understanding some of the flaws built into our thinking processes, however, may help reduce the chances of good executives backing bad strategies.



About the Author

Charles Roxburgh is a director in McKinsey’s London office.

McKinsey: How strategists lead

source: https://www.mckinseyquarterly.com/Strategy/Strategic_Thinking/How_strategists_lead_2993#LettersToTheEditors



A Harvard Business School professor reflects on what she has learned from senior executives about the unique value that strategic leaders can bring to their companies.

JULY 2012 • Cynthia A. Montgomery

Seven years ago, I changed the focus of my strategy teaching at the Harvard Business School. After instructing MBAs for most of the previous quarter-century, I began teaching the accomplished executives and entrepreneurs who participate in Harvard’s flagship programs for business owners and leaders.

Shifting the center of my teaching to executive education changed the way I teach and write about strategy. I’ve been struck by how often executives, even experienced ones, get tripped up: they become so interested in the potential of new ventures, for example, that they underestimate harsh competitive realities or overlook how interrelated strategy and execution are. I’ve also learned, in conversations between class sessions (as well as in my work as a board director and corporate adviser), about the limits of analysis, the importance of being ready to reinvent a business, and the ongoing responsibility of leading strategy.

All of this learning speaks to the role of the strategist—as a meaning maker for companies, as a voice of reason, and as an operator. The richness of these roles, and their deep interconnections, underscore the fact that strategy is much more than a detached analytical exercise. Analysis has merit, to be sure, but it will never make strategy the vibrant core that animates everything a company is and does.

The strategist as meaning maker

I’ve taken to asking executives to list three words that come to mind when they hear the word strategy. Collectively, they have produced 109 words, frequently giving top billing to plan, direction, and competitive advantage. In more than 2,000 responses, only 2 had anything to do with people: one said leadership, another visionary. No one has ever mentioned strategist.

Downplaying the link between a leader and a strategy, or failing to recognize it at all, is a dangerous oversight that I tried to start remedying in a Harvard Business Review article four years ago and in my new book, The Strategist, whose thinking this article extends.1 After all, defining what an organization will be, and why and to whom that will matter, is at the heart of a leader’s role. Those who hope to sustain a strategic perspective must be ready to confront this basic challenge. It is perhaps easiest to see in single-business companies serving well-defined markets and building business models suited to particular competitive contexts. I know from experience, though, that the challenge is equally relevant at the top of diversified multinationals.

What is it, after all, that makes the whole of a company greater than the sum of its parts—and how do its systems and processes add value to the businesses within the fold? Nobel laureate Ronald Coase posed the problem this way: “The question which arises is whether it is possible to study the forces which determine the size of the firm. Why does the entrepreneur not organize one less transaction or one more?”2 These are largely the same questions: do the extra layers justify the existence of this complex firm? If so, why can’t the market take care of such transactions on its own? If there’s more to a company’s story, what is it, really?

In the last three decades, as strategy has moved to become a science, we have allowed these fundamental questions to slip away. We need to bring them back. It is the leader—the strategist as meaning maker—who must make the vital choices that determine a company’s very identity, who says, “This is our purpose, not that. This is who we will be. This is why our customers and clients will prefer a world with us rather than without us.” Others, inside and outside a company, will contribute in meaningful ways, but in the end it is the leader who bears responsibility for the choices that are made and indeed for the fact that choices are made at all.

The strategist as voice of reason

Bold, visionary leaders who have the confidence to take their companies in exciting new directions are widely admired—and confidence is a key part of strategy and leadership. But confidence can balloon into overconfidence, which seems to come naturally to many successful entrepreneurs and senior managers who see themselves as action-oriented problem solvers.3

I see overconfidence in senior executives in class when I ask them to weigh the pros and cons of entering the furniture-manufacturing business. Over the years, a number of highly regarded, well-run companies—including Beatrice Foods, Burlington Industries, Champion, Consolidated Foods, General Housewares, Gulf + Western, Intermark, Ludlow, Masco, Mead, and Scott Paper—have tried to find fortune in the business, which traditionally has been characterized by high transportation costs, low productivity, eroding prices, slow growth, and low returns. It’s also been highly fragmented. In the mid-1980s, for example, more than 2,500 manufacturers competed, with 80 percent of sales coming from the biggest 400 of them. Substitutes abound, and there is a lot of competition for the customer’s dollar. Competitors quickly knock off innovations and new designs, and the industry is riddled with inefficiencies, extreme product variety, and long lead times that frustrate customers. Consumer research shows that many adults can’t name a single furniture brand. The industry does little advertising.

By at least a two-to-one margin, the senior executives in my classes typically are energized, not intimidated, by these challenges. Most argue, in effect, that where there’s challenge there’s opportunity. If it were an easy business, they say, someone else would already have seized the opportunity; this is a chance to bring money, sophistication, and discipline to a fragmented, unsophisticated, and chaotic industry. As the list above shows, my students are far from alone: with great expectations and high hopes of success, a number of well-managed companies over the years have jumped in with the intention of reshaping the industry through the infusion of professional management.

All those companies, though, have since left the business—providing an important reminder that the competitive forces at work in your industry determine some (and perhaps much) of your company’s performance. These competitive forces are beyond the control of most individual companies and their managers. They’re what you inherit, a reality you have to deal with. It’s not that a company can never change them, but in most cases that’s very difficult to do. The strategist must understand such forces, how they affect the playing field where competition takes place, and the likelihood that his or her plan has what it takes to flourish in those circumstances. Crucial, of course, is having a difference that matters in the industry. In furniture—an industry ruled more by fashion than function—it’s extremely challenging to uncover an advantage strong enough to counter the gravitational pull of the industry’s unattractive competitive forces. IKEA did it, but not by disregarding industry forces; rather, the company created a new niche for itself and brought a new economic model to the furniture industry.

A leader must serve as a voice of reason when a bold strategy to reshape an industry’s forces actually reflects indifference to them. Time and again, I’ve seen division heads, group heads, and even chief executives dutifully acknowledge competitive forces, make a few high-level comments, and then quickly move on to lay out their plans—without ever squarely confronting the implications of the forces they’ve just noted. Strategic planning has become more of a “check the box” exercise than a brutally frank and open confrontation of the facts.

The strategist as operator

A great strategy, in short, is not a dream or a lofty idea, but rather the bridge between the economics of a market, the ideas at the core of a business, and action. To be sound, that bridge must rest on a foundation of clarity and realism, and it also needs a real operating sensibility. Every year, early in the term, someone in class always wants to engage the group in a discussion about what’s more important: strategy or execution. In my view, this is a false dichotomy and a wrongheaded debate that the students themselves have to resolve, and I let them have a go at it.

I always bring that discussion up again at the end of the course, when we talk about Domenico De Sole’s tenure at Italian fashion eminence Gucci Group.4 De Sole, a tax attorney, was tapped for the company’s top job in 1995, following years of plummeting sales and mounting losses in the aftermath of unbridled licensing that had plastered Gucci’s name and distinctive red-and-green logo on everything from sneakers to packs of playing cards to whiskey—in fact, on 22,000 different products—making Gucci a “cheapened and over-exposed brand.”

De Sole started by summoning every Gucci manager worldwide to a meeting in Florence. Instead of telling managers what he thought Gucci should be, De Sole asked them to look closely at the business and tell him what was selling and what wasn’t. He wanted to tackle the question “not by philosophy, but by data”—bringing strategy in line with experience rather than relying on intuition. The data were eye-opening. Some of Gucci’s greatest recent successes had come from its few trendier, seasonal fashion items, and the traditional customer—the woman who cherished style, not fashion, and who wanted a classic item she would buy once and keep for a lifetime—had not come back to Gucci.

De Sole and his team, especially lead designer Tom Ford, weighed the evidence and concluded that they would follow the data and position the company in the upper middle of the designer market: luxury aimed at the masses. To complement its leather goods, Ford designed original, trendy—and, above all, exciting—ready-to-wear clothing each year, not as the company’s mainstay, but as its draw. The increased focus on fashion would help the world forget all those counterfeit bags and the Gucci toilet paper. It would propel the company toward a new brand identity, generating the kind of excitement that would bring new customers into Gucci stores, where they would also buy high-margin handbags and accessories. To support the new fashion and brand strategies, De Sole and his team doubled advertising spending, modernized stores, and upgraded customer support. Unseen but no less important to the strategy’s success was Gucci’s supply chain. De Sole personally drove the back roads of Tuscany to pick the best 25 suppliers, and the company provided them with financial and technical support while simultaneously boosting the efficiency of its logistics. Costs fell and flexibility rose.

In effect, everything De Sole and Ford did—in design, product lineup, pricing, marketing, distribution, manufacturing, and logistics, not to mention organizational culture and management—was tightly coordinated, internally consistent, and interlocking. This was a system of resources and activities that worked together and reinforced each other, all aimed at producing products that were fashion forward, high quality, and good value.

It is easy to see the beauty of such a system of value creation once it’s constructed, but constructing it isn’t often an easy or a beautiful process. The decisions embedded in such systems are often gutsy choices. For every moving part in the Gucci universe, De Sole faced a strictly binary decision: either it advanced the cause of fashion-forwardness, high quality, and good value—or it did not and was rebuilt. Strategists call such choices identity-conferring commitments. They are central to what an organization is or wants to be and reflect what it stands for.

When I ask executives at the end of this class, “Where does strategy end and execution begin?” there isn’t a clear answer—and that’s as it should be. What could be more desirable than a well-conceived strategy that flows without a ripple into execution? Yet I know from working with thousands of organizations just how rare it is to find a carefully honed system that really delivers. You and every leader of a company must ask yourself whether you have one—and if you don’t, take the responsibility to build it. The only way a company will deliver on its promises, in short, is if its strategists can think like operators.

A never-ending task

Achieving and maintaining strategic momentum is a challenge that confronts an organization and its leader every day of their entwined existence. It’s a challenge that involves multiple choices over time—and, on occasion, one or two big choices. Very rare is the leader who will not, at some point in his or her career, have to overhaul a company’s strategy in perhaps dramatic ways. Sometimes, facing that inevitability brings moments of epiphany: “eureka” flashes of insight that ignite dazzling new ways of thinking about an enterprise, its purpose, its potential. I have witnessed some of these moments as managers reconceptualized what their organizations do and are capable of doing. These episodes are inspiring—and can become catalytic.

At other times, facing an overhaul can be wrenching, particularly if a company has a set of complex businesses that need to be taken apart or a purpose that has run its course. More than one CEO—men and women coming to grips with what their organizations are and what they want them to become—has described this challenge as an intense personal struggle, often the toughest thing they’ve done.

Yet those same people often say that the experience was one of the most rewarding of their whole lives. It can be profoundly liberating as a kind of corporate rebirth or creation. One CEO described his own experience: “I love our business, our people, the challenges, the fact that other people get deep benefits from what we sell,” he said. “Even so, in the coming years I can see that we will need to go in a new direction, and that will mean selling off parts of the business. The market has gotten too competitive, and we don’t make the margins we used to.” He winced as he admitted this. Then he lowered his voice and added something surprising. “At a fundamental level, though, it’s changes like this that keep us fresh and keep me going. While it can be painful when it happens, in the long run I wouldn’t want to lead a company that didn’t reinvent itself.”



About the Author

Cynthia Montgomery is the Timken Professor of Business Administration at Harvard Business School, where she’s been on the faculty for 20 years, and past chair of the school’s Strategy Unit.



Elements of this article were adapted from Cynthia Montgomery’s The Strategist: Be the Leader Your Business Needs (New York, NY: HarperCollins, 2012).

[Repost] Conceptual artist On Kawara

On Kawara, master of conceptual art (Kawara On)
http://blog.daum.net/smreorjal/13712729


[Im Geun-jun's This Is Today's Art!] Sophie Calle

0. Sophie Calle's book - True Stories : http://book.daum.net/detail/book.do?bookid=KOR9788960900042&introCpID=YE

1. View her works: Everyday life as art itself, Sophie Calle's art as healing
   (http://parisart.tistory.com/20)


[Im Geun-jun's This Is Today's Art!] Sophie Calle


source: http://news.hankooki.com/lpage/culture/200808/h2008081802371584310.htm


Everyday life reconstructed from reality and fiction
Art and design critic



Sophie Calle, the Sunday entry from The Chromatic Diet, 1997

Sophie Calle (55), a French contemporary artist of Jewish descent, is known for problematizing everyday life through methods that artfully blend reality and fiction. To reconstruct daily life in a particular mode, the artist often sets "rules of the game." Through the ritual performance of these games, she experiences, and records, a process in which she, as their performer, gains and then loses control.



Representative examples are Gotham Handbook (1994) and The Chromatic Diet (1997). The story begins with Paul Auster's novel Leviathan (1992), in which the novelist introduced Maria, a character based on Sophie Calle. Like Sophie Calle, Maria lives her daily life according to rituals, but the author applied new rules of his own invention.



Not to be outdone, Sophie Calle soon responded. She proposed to Paul Auster: if you create a fictional character, I will live as that character for up to a year. The novelist slightly altered the idea. What Sophie Calle received was a set of instructions: "personal instructions for SC on how to improve life in New York (because she asked...)."



The final result, Gotham Handbook, is the work in which Sophie Calle, following the instructions, occupied a public telephone booth in New York as her personal space, stayed there to record what happened, handed out sandwiches and cigarettes to beggars and the homeless, talked with strangers, smiled at passersby, and tallied how many smiles she received in return.



What stands out in the process, however, are the arbitrary decisions (often violating the instructions) that the artist could not help making from moment to moment.



Later, to make herself more like Maria, her alter ego, Sophie Calle resolved to actually perform the rituals Maria follows in the novel. In Paul Auster's text, Maria keeps a "chromatic diet," eating food of a single color each day.





The artist carried out the diet and recorded each day's menu in color photographs. But Maria's menus, which had depended solely on Paul Auster's imagination, were revised and supplemented through Sophie Calle's concrete execution, a process that once again intriguingly reconfigures the hierarchy of reality and fiction.



For example, in the "Monday: orange" menu the artist added orange juice, noting that the novelist had forgotten a beverage. For "Wednesday: white," she found that the potato dish prepared according to the novel's recipe came out yellow, so she substituted rice and milk.



Since no colors were specified for Friday and Saturday, she assigned them yellow and pink, respectively; for Sunday she prepared a feast mobilizing all the colors of Monday through Saturday: orange, red, white, green, yellow, and pink.



But the Sunday menu carried an added note: "[...] Personally, I would rather not eat; novels are all very well, but if you live by them to the letter, things are not always quite so tasty." In the end, through Sophie Calle's process of verification, Paul Auster's Maria was rendered a more realistic character and, at the same time, disqualified for her lack of realism.

[Repost] The BI-Search Evolution

source: http://www.information-management.com/newsletters/business_intelligence_bi_search-10017332-1.html?zkPrintable=true


The BI-Search Evolution

By David Caruso


MAR 10, 2010 5:31am ET


Several years ago, I was involved in a consulting engagement with the IT operations of a multibillion dollar firm. The focus was on the areas of budgets, ROI achievement, effective system use and user satisfaction. While the IT staff did a good job of managing budgets, they often struggled to deliver strategic business benefits to the users. One of their biggest problems was the inability to deliver timely information to executives and business users who needed to make ad hoc decisions. Consequently, expensive MBA types scrambled to extract data from business intelligence reports and spreadsheets in order to prepare analyses for the managers.

A recent Forrester Research survey noted that when it came to BI environments, 59 percent of the respondents indicated that users were unable to access 100 percent of the data needed for reporting and analytic work. Additionally, 78 percent of the respondents indicated that their BI environment did not enable exploration and analysis with features such as adaptive data models, unlimited dimensionality and guided analysis.



Information access and exploration have become more challenging as more people attempt to make more decisions based on more data. The people who need answers can’t find what they need, nor can they easily use what they find. Companies have been implementing and using information systems for decades, so what makes this so difficult and expensive? The answer becomes apparent when you look at the underlying source systems and the process of bringing the data from those sources together:



Too many disparate data sources. New data sources are being introduced every day, and existing sources undergo constant changes. This complicates the task of unifying data and allowing for decision-making based on the most up-to-date information.

Evolving user needs. The information needs of users change as rapidly and continually as the business needs evolve. Users also have expanded data universes, moving beyond just spreadsheets to include all company-wide and even Web-based data.

Expensive and time-consuming data modeling. Often, users don’t know what information they need, so it’s difficult for them to articulate what data they require for decision-making. Because IT attempts to anticipate all the answers a user will ultimately want, this process often requires extensive data modeling in order to get the right answers.

Power tools for the everyday user. Many analytic tools are intended for sophisticated power users, but these users are only a small fraction of the decision-making population in a company. Today, almost every business user is expected to make informed decisions.

Fortunately, easy-to-use search and the power of BI are finally merging, so IT can now deliver on the promise of providing users with all the data necessary to make strategic business decisions, and the power to discover, explore and analyze.



Taken separately, traditional BI and enterprise search tools were each designed to solve problems other than ad hoc decision-making. BI was originally developed for reporting on structured data, while search was designed for retrieving unstructured documents. As a result, each technology falls short in different ways in several key areas: a good user experience, the types of accessible information, and the ability to respond to rapid change.

Because of their focus on reporting and structured data, BI tools are good at answering predictable questions and reporting on key performance indicators. However, they aren’t as effective at answering new or ad hoc questions, which requires the user to request custom reports and cubes from IT analysts.

This is because the rigid, hierarchical data models in BI tools only allow limited exploration and are often complicated to use. Even with rigorous data modeling, most BI tools cannot access unstructured content, and adding new data sources requires analysis and redesign of the data models as well as the reports, analytics and dashboards they drive.

On the other hand, everyone uses search today because, in many ways, it has become the simplest form of computing. Googling someone or using search on an e-commerce site or even on a company home page is the starting point of many users’ regular computer use. However, basic search, with its incomplete data model and document-centric retrieval, also allows for only limited exploration, depriving users of necessary context. Structured data is an afterthought. And, while new data sources can be easily added, providing context and exposing relationships to existing data is difficult.

However, BI and search can be combined to preserve the strengths of both and mitigate the drawbacks of each.

Enabling Discovery

To understand how combining search and BI can bring a richer solution to bear on business decisions, we have to first consider how humans actually make an ad hoc decision.

In daily decision-making, people formulate their next question based on the answer to a previous question. In the process, people often need help formulating good questions because they want to understand what the alternatives might be when making trade-offs. As they gain insight into their problem, they can use additional filters, graphs and visualizations to drill down and explore deeper.

In the business world today, people typically rely on BI systems to get an answer to a known business problem and rely on search to find information. Unfortunately, answers to either structured BI queries or text searches are only as good as the question posed. Although users might glean insight from the results returned in either case, they might never know if the question asked was the right one. Only a few business questions are simple enough to be served with a "hole-in-one" answer.

The convergence of BI and search technologies can enable a user to expose relationships in data that can often lead to an unanticipated answer or new revelation – without the necessity of the perfectly formed question.



Unification of the Data

First, we must note that BI is based on a schema-driven model. That schema holds the key to what can be searched or navigated. But BI usually doesn’t accommodate complex data or unstructured content. In addition, in order to navigate the data effectively, applications must be created specifically for the query at hand.

Data-driven exploration and query refinement, allowing for search on both structured and unstructured data, is important when the data is heterogeneous and hard to understand for users – such as when unstructured content is being included in the search process. This flexibility allows IT to unify heterogeneous, changing data and content from multiple sources without the headaches and expense of traditional data modeling. Likewise, it enables IT to incorporate data from any format, structured or unstructured, and makes it possible to navigate across unstructured documents by automatically extracting structure from them.

Because the data is self-describing, the system can build a dynamic data model, in effect automating the data modeling task. This enables faceted search and navigation (or guided navigation), which allows the user to elaborate a query progressively, seeing the effect of each choice in the result set. The real power in this kind of search is that someone can expose new data relationships that drive an unanticipated answer without having created the perfect query. For example, guided navigation enables users to find products or categories via attributes such as part numbers, commodity groups, size, and weight. Users enter a few keywords to locate information; the returned information is organized by category (or dimension), and then, using a graphical interface, users navigate the data and its relationships to locate the answer they need.
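As a rough illustration of the faceted approach described above, here is a minimal in-memory sketch in Python. The catalog records, field names, and part numbers are invented for the example; real guided-navigation engines index far larger and more heterogeneous sources.

```python
from collections import Counter

# Tiny faceted-search sketch over an in-memory "catalog".
# All records and field names are illustrative, not from a real system.
CATALOG = [
    {"part_number": "MP-100", "commodity_group": "machined parts", "size": "small", "desc": "machined steel bracket"},
    {"part_number": "MP-200", "commodity_group": "machined parts", "size": "large", "desc": "machined aluminum housing"},
    {"part_number": "FS-300", "commodity_group": "fasteners", "size": "small", "desc": "hex bolt"},
]

def keyword_search(records, query):
    """Return records whose description contains every query keyword."""
    terms = query.lower().split()
    return [r for r in records if all(t in r["desc"].lower() for t in terms)]

def facet_counts(records, dimension):
    """Count matching records per value of a dimension (a facet)."""
    return Counter(r[dimension] for r in records)

def refine(records, dimension, value):
    """Narrow the result set by one facet choice (guided navigation)."""
    return [r for r in records if r[dimension] == value]

hits = keyword_search(CATALOG, "machined")
print(facet_counts(hits, "size"))         # e.g. Counter({'small': 1, 'large': 1})
small = refine(hits, "size", "small")
print([r["part_number"] for r in small])  # ['MP-100']
```

The point of the sketch is the loop it supports: a few keywords start the query, the facet counts show the user what refinements are available, and each click narrows the set without the user ever writing a structured query.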



Benefits for Users and IT

With search capabilities opening up the data and bringing a new level of ease of use to the analytic power of BI, new benefits include:


•Users will be able to execute data queries via a search box using natural language. Users can start a decision process with a simple search such as “machined parts” and be able to refine the data on a point-and-click basis. This helps users who don’t know how to structure queries find all data available to them, because the faceted search will organize the dimensions and open up the data for ad hoc interaction.

•A single point of entry for multiple BI systems, operational data stores and additional content. A data-driven model simplifies the data preparation and opens up the universe of data that can be explored. Users will be able to combine and explore data from any system, making federated queries against multiple BI systems and underlying source systems easy. This enables users to go to a single entry point to access all the data and content they need, no matter where it may have originated.

•Access to all the data, regardless of type. The explosion of information sources means businesses can make better decisions. Users want to access all relevant data without regard for whether it’s structured, unstructured, new or old. By unifying structured data and unstructured content, and making it all searchable, a search-enabled BI environment allows access to more data more quickly, with more flexibility and less modeling or integration overhead.

Friday, August 3, 2012

"Googlizing" BI with Search-Based Applications

source :  http://tdwi.org/articles/2011/06/08/googlizing-bi-with-search-based-applications.aspx

"Googlizing" BI with Search-Based Applications


Unstructured data holds essential business insights. How can you get to that insight?



June 8, 2011

By Eric Rogge, Sr. Director of Marketing, Exalead



Organizations are increasingly storing vast amounts of unstructured data in new Hadoop, NoSQL, and MPP analytic databases, and business intelligence tools are getting better at connecting with them.

Still, even with improving connections between BI and unstructured data stores, the challenge with today's business intelligence deployments is that they only enable quantitative analysis of a fraction of an enterprise's information assets. That's because the majority of information available to an enterprise is unstructured content held in documents, e-mail messages, collaboration forums, and on the Web. Enterprises now realize that to have a complete, 360-degree view of their operations, they need to analyze that unstructured data. That analysis involves both qualitative assessments and quantitative analytics. The challenge of BI isn't storing the unstructured data; it is the significant back-end development work needed to gather and quantify unstructured information sources.


Missing from an enterprise's portfolio of BI tools are search and semantic processing technologies, which can efficiently process unstructured data into gists and metrics and handle large volumes of data from widely dispersed sources.

The effectiveness of today's BI solutions can be improved by working in conjunction with search-based applications (SBAs). SBAs are a new, emerging category of search and semantic technology that aim to improve operational productivity through processing, analysis, and delivery of key information drawn from internal and Web unstructured data. SBAs are a form of business intelligence and complement the highly quantitative analytics delivered by traditional BI products.



Search-based applications complement the ad hoc analysis and quantitative reporting typical of BI implementations. Where BI addresses the what questions, SBAs address the who, how, and why questions to give qualitative cause-and-effect explanations. They do this by collecting and co-displaying quantitative metrics and explanatory text in the same view. SBAs are also useful for extracting customer sentiment and other informational trends from the Internet -- a complex task beyond the capabilities of traditional BI.
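To make the "co-display" idea concrete, here is a deliberately tiny Python sketch that pairs a quantitative metric (the what) with matching text snippets (the why) in one view. The figures and notes are invented for illustration; a real SBA would draw both sides from indexed enterprise sources.

```python
# SBA-style "co-display" sketch: one view combining a metric with
# explanatory text. All data below is invented for illustration.
SALES = {"2010-Q1": 100.0, "2010-Q2": 60.0}
NOTES = [
    "Q2 shipment delays caused several cancelled orders.",
    "Q1 promotion drove strong demand.",
]

def co_display(period, keyword):
    """Pair a period's metric with text snippets that may explain it."""
    snippets = [n for n in NOTES if keyword.lower() in n.lower()]
    return {"period": period, "revenue": SALES[period], "context": snippets}

view = co_display("2010-Q2", "delays")
print(view["revenue"], view["context"][0])
# 60.0 Q2 shipment delays caused several cancelled orders.
```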

By integrating semantic search-based applications with BI information sources (sometimes called the "Googlization of BI"), companies gain a broader understanding of their business activity that enables better business decisions to be made faster. Instead of using a single source of data as with traditional BI, SBAs can simultaneously access a wide variety of information sources while combining structured and unstructured data to provide a holistic, 360-degree view of the enterprise.



SBAs handle staggering amounts of data -- petabytes in some use cases -- while simultaneously providing Web-search-style, natural-language query interfaces that appeal to ordinary users. Today's workers, accustomed to fast and easy Google searches on the Web, can now gain the same easy-to-use tools to help them unlock information in the enterprise and gain insights for better decision making.

SBAs have a different information purpose than do BI applications. Whereas internal and external accounting standards demand focused, precise numeric precision in BI applications, many operational decisions require a broad perspective, sometimes using a collection or profile of facts such as dates, contacts, impending transactions, milestones, and opinions to provide a complete understanding. Now that audio and video are becoming common information delivery mediums, the ability to transform such multimedia files into text (and then into analytic data) is becoming important, perhaps even critical in some situations. Emerging technologies, such as voice-transcription software, are adding to the deluge of unstructured data in the enterprise, which continues to grow exponentially each year.



However, not all search-engine-derived technologies are equal. Companies looking to leverage the power of SBAs to improve BI should look for several capabilities in a solution. To most effectively boost BI, SBAs must be able to structure unstructured data (not simply index it), as well as integrate that information into the corporation's existing structured datastores. The key to this ability is semantic search technology, which analyzes the content of unstructured data to make sense of the information and rapidly identify relevant data.
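As a toy stand-in for the semantic capability described above, the following Python sketch uses simple regular expressions to "structure" unstructured text into fields that could then be joined with existing structured datastores. Production semantic search relies on much richer linguistic analysis than this, and the e-mail text here is invented.

```python
import re

# Simplified stand-in for semantic extraction: pull structured fields
# (a date and a dollar amount) out of free text. Real SBA platforms
# use linguistic analysis, not just pattern matching.
DATE_RE = re.compile(r"\b(\d{4}-\d{2}-\d{2})\b")
AMOUNT_RE = re.compile(r"\$([\d,]+(?:\.\d{2})?)")

def structure(text):
    """Turn one unstructured snippet into a structured record."""
    date = DATE_RE.search(text)
    amount = AMOUNT_RE.search(text)
    return {
        "date": date.group(1) if date else None,
        "amount": float(amount.group(1).replace(",", "")) if amount else None,
        "text": text,
    }

email = "Per our call on 2010-03-01, the revised quote is $12,500.00."
record = structure(email)
print(record["date"], record["amount"])  # 2010-03-01 12500.0
```

Once fields like these are extracted, the snippet can be indexed alongside structured rows, which is what makes the unstructured content searchable and navigable rather than merely retrievable.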



In addition, companies should look for SBAs that feature service-oriented architectures (SOAs) to integrate decision tools for each user, enabling rapid deployment and simple integration within the company's information ecosystem. Effective SBA solutions for BI will also include faceted navigation, as well as the robust data security required in today's corporate environments.



SBAs combine the best of BI and enterprise search to deliver what until today has been an elusive goal for the enterprise -- that is, real-time BI with a comprehensive view of all information sources: structured and unstructured, internal and external. SBA-powered BI offers improved data scope and relevance by making better use of existing structured data and by exploiting new data channels ripe with essential information, such as e-mail messages, Office documents, and PDF files -- the vast amount of unstructured data that, until now, was beyond the capabilities of BI.



As unstructured corporate data continues to grow exponentially, traditional BI will be left further behind. The efficient scalability of SBA ensures that corporations will be able to continue leveraging their growing stores of information in order to make business decisions more intelligently and more quickly.



Search-Based BI – the Next Innovation

source: http://www.information-management.com/newsletters/business_intelligence_search_analytics_data_warehouse-10020252-1.html

Search-Based BI – the Next Innovation
By Christian Becht and Marcel de Grauw


MAY 3, 2011 6:20am ET

Since the 1970s, companies have used business intelligence to harness data, but traditional BI tools weren’t built with today’s 24x7 economy in mind. The rapid increase in data and online sources means that companies now need quicker, more user-friendly, and more flexible tools to cope with continuously evolving data.

Current BI offerings are evolving toward search-based BI, in which BI tools will integrate unstructured and external information in the same way Google indexes billions of documents daily while providing access to millions of simultaneous users.

The Early Days

The evolution of BI in large organizations goes back to the 1970s. In an increasingly competitive and global environment, business managers were looking for tools to support their decision-making processes. These early BI tools were focused on extracting data from source systems and on delivering reports displaying performance indicators; most of the time they were custom-made applications developed by internal IT specialists.



To satisfy the needs of a growing number of business managers, specific queries were integrated in the overnight batch and launched against the production systems. The objective was to get business information out of the production systems in the form of fixed-format standard reports, the so-called “print-outs.” On a regular basis, printed information was manually aggregated and keyed into presentation templates and data sheets. Some years later, the concept of the “information support database” was introduced to offload querying on the transaction systems and to improve the performance of the overall solution.



In response to the growing need for management support and reporting tools, software vendors like Pilot Software, Information Resources and Comshare jumped at the opportunity. The first generation of BI tools is often identified with the term executive information systems. The early BI tools included extract, transform and load capabilities, merged data from multiple sources, used relational databases, including what we later called star schemas, and built cubes for fast data retrieval.
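The star-schema-and-cube pattern mentioned above can be sketched in a few lines of Python: a fact table holds keys into small dimension tables, and a pre-aggregated "cube" answers common queries without rescanning the facts. The tables and figures below are invented for illustration.

```python
from collections import defaultdict

# Minimal star-schema sketch: one fact table keyed into two dimension
# tables, plus a pre-aggregated cube for fast retrieval.
# All table contents are invented for illustration.
DIM_TIME = {1: "2010-Q1", 2: "2010-Q2"}
DIM_REGION = {10: "EMEA", 20: "APAC"}

FACT_SALES = [  # (time_key, region_key, revenue)
    (1, 10, 100.0),
    (1, 20, 250.0),
    (2, 10, 300.0),
]

def build_cube(facts):
    """Pre-aggregate revenue by (quarter, region) for fast lookups."""
    cube = defaultdict(float)
    for t, r, revenue in facts:
        cube[(DIM_TIME[t], DIM_REGION[r])] += revenue
    return dict(cube)

cube = build_cube(FACT_SALES)
print(cube[("2010-Q1", "APAC")])  # 250.0
```

The same trade-off the article describes is visible even at this scale: the cube makes retrieval cheap, but it must be rebuilt whenever the facts or dimensions change, which is exactly the rigidity later waves of BI tried to escape.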



The Second Wave in BI: Data Warehousing

In the early 1990s, the EIS pioneers fell on hard times. The costs of implementing corporate EIS systems were too high, and the required technical infrastructure wasn’t there, so the EIS tools had to include their own. In addition, EIS didn’t target and serve enough end users because of the “executive” connotations. At the same time, new innovations like data warehousing and online analytical processing (OLAP) began broadening the realm of decision support and initiated a larger category of BI tools. The so-called “data warehousing” model was further popularized as a means to describe a new set of concepts and methods to improve decision-making by using fact-based decision support systems.



During the second wave of innovation in BI, the production of management information was being industrialized by means of sequentially scheduled batch processes (information logistics). The entire production process, from the extraction of source data to the generation of reports, was being automated by means of specialized BI tools. The data warehousing model, as introduced in the early 1990s, has shaped the BI landscape ever since. Today the traditional BI model is still the guiding principle for designing new BI architectures in large organizations.



The established data warehousing model is being challenged by new concepts and technologies. Modern business managers are pointing to the shortcomings and drawbacks of the current model, from both an organizational and a structural point of view. In other words: the data warehousing model as we know it has become too complex and expensive to maintain, and too rigid to deliver the speed of decision making required in today’s 24x7 economy.



Developing a traditional multilayered BI system is an expensive and labor-intensive exercise. Designing and building interfaces, ETL jobs, star schemas, data marts and reports takes a lot of time. In addition, highly qualified experts from various disciplines are required to build and deliver a new version on time. Delivery cycles ranging from six to 12 months are typical because of the various teams and tools involved.



The Next Wave in BI: Information Intelligence

To fulfill its promise and to respond to future requirements, BI needs to become more intelligent, user-friendly and flexible. Today’s BI, based on the data warehousing model, is lacking some very basic features and functionality. Adding another BI tool will only increase complexity and costs and is, therefore, not an appropriate solution. We need to reconsider the basics of the current model and identify areas and technologies with the potential to improve things structurally. Areas to be improved include:



• Predictive analytics,

• Proactive alerts and notifications,

• Event-driven/real-time access to information,

• Accelerated integration of new structured or unstructured data, either internal or external to the organization,

• Enterprise integration/closed-loop BI,

• Portal integration/mobile/ubiquitous access,

• Improved visualization/rich interfaces to empower business users,

• Management automation/decision engines, and

• Collaborative tools to leverage collective intelligence.
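Proactive alerting, the second item above, is one of the simplest of these capabilities to illustrate: instead of a user pulling a report, the system pushes a notification whenever a metric crosses a limit. The metric names and thresholds below are purely illustrative:

```python
# A minimal sketch of threshold-based proactive alerting.
# Metric names and threshold values are hypothetical examples.

THRESHOLDS = {"daily_churn_rate": 0.05, "stockout_count": 10}

def check_alerts(metrics):
    """Return a notification message for every metric over its threshold."""
    alerts = []
    for name, value in metrics.items():
        limit = THRESHOLDS.get(name)
        if limit is not None and value > limit:
            alerts.append(f"ALERT: {name}={value} exceeds threshold {limit}")
    return alerts

# In an event-driven system this check would run on every new measurement,
# not once a night in a batch window.
print(check_alerts({"daily_churn_rate": 0.07, "stockout_count": 3}))
```

The design point is the inversion of control: the BI system evaluates rules as data arrives and notifies the business user, rather than waiting for the user to open a report the next morning.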

The BI of the future is becoming the brain and central nervous system of organizations. Management information no longer finds itself locked in a data mart or a management report; instead, it is automatically reinjected into operational source systems to adapt to ever-changing market conditions. The next wave in BI, information intelligence, will be the lifeblood of organizations.



Information Intelligence, or the intelligent use of information, extends BI beyond the traditional data warehouse and query tools to include automated decision-making and real-time/event-driven technologies. Information intelligence is about building smarter business processes and making BI more user driven and flexible.



One of the technologies we believe is capable of transforming future BI architectures is enterprise search engines. Enterprise search engines have the capacity to simplify and improve BI in large organizations. This is because search engines possess the following attributes:



• Flexibility – search engines can handle both structured and unstructured information in various formats.

• The ability to cope with continuously evolving data structures. Indexing both existing and new data does not require extensive data modeling. This is in contrast with the modeling of the data warehouse, which is time-consuming not only when the model is created, but every time new data is added to the data warehouse.

• Search engines enable content-driven dimensional navigation. At each step of navigation, they propose different ways to filter results according to the content of the data sets being analyzed, in near real time. This feature makes the traditional approach based on predefined data cubes obsolete.

• They are able to analyze data without needing to know the various data types, unlike solutions based on relational database management systems.

• Search engines can work with existing information systems (e.g., data warehouses, data marts, production systems) and are able to provide a federated view of data with the required level of performance, in contrast to federation approaches based on RDBMSs that fail to meet performance requirements. At the same time, the federated business view can encompass new data sources and provide cross-domain data navigation.

• Search engines offer a familiar Google-style interface that empowers business users to retrieve data in a way that matches their questions, rather than in a prestructured way that often doesn’t suit their real business needs.

• They can fill the gaps in traditional data warehouse architectures when external and unstructured data is needed to support decision making.

• Search engines include functionality to automatically generate categories and clusters, improving the contextualization and meaning of data.

• They aggregate and analyze data, and they enable end users to expose relationships and find patterns in data without needing a perfectly formulated question or query. Search engines provide a powerful complement, or alternative, to the SQL language that remains at the heart of today’s BI solutions, even though it was created more than 35 years ago.
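Two of the traits listed above, indexing documents without a fixed schema and proposing content-driven facets, can be sketched in a few lines. The documents, fields and search terms below are invented for illustration; real engines like the ones the article names do this at vastly larger scale:

```python
from collections import defaultdict

# Toy documents: free text mixed with optional structured fields,
# with no predefined schema required before indexing.
docs = [
    {"id": 1, "text": "quarterly revenue up in france", "country": "FR"},
    {"id": 2, "text": "revenue flat in germany", "country": "DE"},
    {"id": 3, "text": "customer complaint about delivery", "country": "FR"},
]

# Inverted index: term -> set of document ids containing it.
index = defaultdict(set)
for doc in docs:
    for term in doc["text"].split():
        index[term].add(doc["id"])

def search(term, facet_field):
    """Return matching documents plus facet counts computed from the
    content of the result set itself (content-driven navigation)."""
    hits = [d for d in docs if d["id"] in index.get(term, set())]
    facets = defaultdict(int)
    for d in hits:
        if facet_field in d:
            facets[d[facet_field]] += 1
    return hits, dict(facets)

hits, facets = search("revenue", "country")
print(facets)  # {'FR': 1, 'DE': 1}
```

Note that the facet values were never modeled in advance: they are derived from whatever fields the matching documents happen to carry, which is the contrast the article draws with predefined data cubes.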





Toward a Search-Based BI

Using technologies like Exalead, Autonomy and Fast, organizations index billions of documents on a daily basis from multiple source systems such as enterprise content management, enterprise resource planning, customer relationship management, data warehouses and other legacy systems.



Information is collected in near real time and presented to end users through user-friendly interfaces that can be extended using rich Internet application standards. Because of the nature of enterprise search engines, the time required to implement a search-based BI solution is greatly reduced compared with the time needed to design and build a traditional BI system. Furthermore, performance is not an issue in search-based BI, neither in terms of the number of users nor the volume of data.



Future BI systems, integrating unstructured and external information, will benefit from the proven scalability of search engines. Search-based BI leverages rather than replaces investments in existing BI systems and is capable of extracting the long-awaited business benefits from investments in existing data warehousing environments.



While search-based BI won’t replace current BI systems in the short term, search-based applications are being used as a complement to address the shortcomings of existing BI systems, for example by answering critical business questions more rapidly and cost-effectively. In the long run we will see search-based solutions transforming the BI domain because of their inherent features. The combination of BI and search-based solutions will preserve the strengths of both and mitigate the drawbacks of each.



Christian Becht leads the Business Information Management practice of Capgemini in France. He spent the last 12 years creating and growing the Business Intelligence and Enterprise Content Management practice in France, now with up to 700 consultants. Christian is a telecoms engineer who graduated in 1986 from the French “Ecole Nationale Superieure des Telecommunications de Bretagne”. He began his career in the telecom sector, where he designed IT solutions able to handle huge amounts of data in a cost-effective way. For more information, you can reach him at christian.becht@capgemini.com.



Marcel de Grauw MSc. (1970) is a Managing Consultant at Capgemini in Paris. He has been active in Business Intelligence since the early ’90s. Before joining Capgemini in 2001, Marcel worked for Philips/Origin in The Netherlands. He is currently working on next-generation BI architectures based on search engines and AI. Marcel holds a master’s degree in Business Administration from the Erasmus University in Rotterdam. For more information, you can reach him at marcel.de-grauw@capgemini.com.