Wednesday, August 29, 2018

STRATEGY, PLANNING, AND PREPARING




Since its high tide in the 1970s, the strategic planning school, led by writers like Igor Ansoff and Peter Lorange, has fallen out of favor. They were the heirs of von Bülow. Henry Mintzberg has played the role of a polemical Clausewitz, his efforts culminating in The Rise and Fall of Strategic Planning, a book of over 400 pages devoted to a detailed critique of the planners, the final chapter of which also tried to salvage something positive from their methods.8 It is sobering to realize that this book appeared as late as 1994. It is sad that it expends so much effort on describing what you should not do, whereas Clausewitz and von Moltke concentrated on what you should do. Mintzberg points out that “formal planning does not create strategy so much as deal with the consequences of strategy created in other ways,”9 ways he describes elsewhere as “crafting strategy.”10 The order is critical: first the strategy, then the plan.
If the notion of strategy as a plan is moribund, the notion of it as a framework for decision making is gaining ground. The change is being led by practitioners. In an article published in 2001 that rediscovers some of the principles of von Moltke’s essay just 130 years later, Kathy Eisenhardt and Don Sull quote examples of companies including Yahoo! and eBay, Dell and Cisco, Miramax and Nortel as conceiving of strategy as “simple rules” which guide decision making.11 The examples make the idea sound new and modern, whereas it is merely enlightened.
Among those adopting the more enlightened view are planners themselves. Daniel Simpson spent nine years as head of strategy and planning at a $3bn consumer goods company headquartered in the US. Disillusioned by the results of planning and the need to absorb much of the literature Mintzberg toiled through (some of which, he opines, is “not very helpful” and a portion of which he describes as “complete rubbish”), Simpson concludes that the keys to success are “an overall sense of direction and an ability to be flexible.”12 The example of successful practice he quotes is Welch’s “planful opportunism,” one case in which we know for certain that von Moltke was a direct influence. Welch himself had great influence in this area, not only because of the status of GE and his record, but also because at the beginning of his tenure GE was generally recognized as the leading exponent of strategic planning. Welch’s view was that as a result strategic thinking had almost disappeared, and in 1984 he dismantled the planning system.13
Simpson adds an interesting comment after citing Welch. “I think more successful companies are developed through this sort of planful opportunism than through the vision of an exceptional CEO,” he writes. “They aren’t in the media spotlight as much as companies with the visionary CEO, but they are more common.”14 This is no surprise; exceptional CEOs are by definition uncommon. However, it also demonstrates that the original intention of the Prussian reformers, to create an intelligent organization whose performance did not depend on its being led by a genius, is just as true in business. And if there is evidence that our thinking about strategy is catching up with von Moltke’s, at this point we are still behind. We are extraordinarily reluctant to admit that luck plays a part in business success. The media create a cult of CEO heroes, and their salaries are now such that restless shareholders have become rebellious. We would do well to remember that while a leader’s reputation is ultimately based on success, “how much of it is in fact down to his own efforts is very hard to say.”
This is a serious matter. A recent scholarly article argues that the greater a CEO’s celebrity, the greater their perceived control over the actions and performance of their firm. This leads CEOs to continue to take actions associated with their own celebrity, and fosters hubris.15 This poses a double jeopardy: the delusion that one can control external events (i.e., a denial of friction); and the delusion that one is solely responsible for success, with a concomitant tendency to command a great deal more than is necessary (i.e., a reversal of a core principle of mission command). Hubris encourages a return to the deadly cycle of organizational stagnation we examined in Chapter 2. As I pointed out above, because friction is rooted in human finitude, ignoring it is to play at being God. To attribute to CEO-heroes the ability to control events and be immune to good or bad luck is at heart a metaphysical worldview reminiscent of Greek polytheism or even, at the extreme, medieval theology.
Strategy, then, demands a certain type of thinking. It sets direction and therefore clearly encompasses what von Moltke calls a “goal,” “aim,” or “purpose.” Let us call this element the aim. An aim can be an end-point or destination, and aiming means pointing in that direction, so it encompasses both “going west” and “getting to San Francisco.” The aim defines what the organization is trying to achieve with a view to gaining competitive advantage. How we set about achieving the aim depends on relating possible aims to the external opportunities offered by the market and our internal capabilities. The process of thinking strategically involves relating three points of a triangle, as in Figure 11.

A good strategy creates coherence between our capabilities, the opportunities we can detect, and our aims. Different people have a tendency to start with, and give greater weight to, one or other of these three factors. Where they start from does not matter. Where they end up does. The result must be coherence. If any one of these factors floats off on its own, dominates thinking at the expense of the others, or is simply mismatched, then in time perdition will follow.

The strategy triangle confronts us with the first observation von Moltke makes about the nature of strategy: reciprocity between ends and means. Both are ambiguous and interdependent. In most of our day-to-day problems, the end is a given. It is fixed and we just have to work out the means of achieving it. In strategy, by contrast, the two-headed arrows in Figure 11 indicate that our consideration of the means (our capabilities and the opportunities we face) codetermines the ends (our aims).
Reciprocity pervades not only strategic thinking but decision making and action. Because the effects of our actions depend not merely on what we do but on the actions of other independent wills, strategy will need to adapt to the newly created situations which result. It is thus a “system of expedients.” The task of strategy is not completed by the initial act of setting direction. Strategy develops further as action takes place, old opportunities close off, new ones arise, and new capabilities are built. The relationship between strategy development and execution is also reciprocal. Doing strategy means thinking, doing, learning, and adapting. It means going round the loop. The reappraisal of ends and means is continuous.
In assessing ends and means, we have above all to be realistic. Developing strategy is an intellectual activity. It involves discerning facts and applying rationality. Leadership is a moral activity. It involves relating to people and generating emotional commitment. Developing a strategy around pre-existing emotional commitments is courting disaster. When people convince themselves that they have the capability to do something that in fact they do not, just because a lot of other people seem to be doing so, or convince themselves that the market will love the latest thing to pop out of R&D, just because their own engineers love it, strategies fail. When companies set themselves the aim of growing from an also-ran to a market leadership position in two years simply because doing so will boost the CEO’s share options, shareholders’ money is squandered on failed acquisitions and hopeless investments.
Many of the best-known strategy development tools – such as Porter’s five forces and value chain models, the matrices for displaying competitive position used by BCG or McKinsey, cost analysis, supply curves, market segmentation, and so on – are in fact tools for analyzing the situation and trying to work out what drives success. Useful though they are, they do not produce strategies. They help to sort out information, simplify the complexities of reality, and focus attention on the essentials of the situation, internal or external. They are only effective if they generate insight into the basis of competition. A notion central to Clausewitz’s thinking about strategy was that war aims and the strategy adopted to realize them should be developed from an understanding of what I am calling the “basis of competition,” and what he called the enemy’s “center of gravity.” “Making out this centrum gravitatis in the enemy’s war effort,” he wrote, “and identifying its spheres of influence, is a central point of strategic judgment.”16 The term, like friction, is borrowed from mechanics:
Just as the center of gravity is always to be found where the greatest mass is brought together, and just as every blow delivered against the load’s center of gravity is the most effective… so it is in war. The forces of every protagonist, whether a single state or an alliance of partners, have a certain unity, and by virtue of this some coherence; it is where there is coherence that we find analogies to a center of gravity. There are therefore certain centers of gravity in these forces, the movement and direction of which govern other points, and these centers of gravity are to be found where the largest forces are gathered.17

So it is in business too. Businesses engage in a vast range of activities. The art of strategic thinking is to identify which of them is the decisive differentiator, the determinant of competitive advantage. It involves mastering and sorting through a vast range of activities and simplifying them accurately down to the essentials which make the difference. The true strategist is a simplifier of complexity. Not many people can consistently do it well.
Clausewitz knew that. Indeed, so rare did he judge the qualities leading to strategic insight to be, that he gave the chapter in which he describes them the title “Military Genius.”18 We should treat this much-abused term with caution. Clausewitz was using it in the precise sense defined by Kant: genius is a gift of nature which intuitively develops the rules of human practices such as the arts.19 Clausewitz’s comments are worth quoting:

If he is to successfully prevail in this constant struggle with the unexpected, then two qualities are essential: firstly a mind which even in this heightened darkness is not without some shafts of inner light which lead him to the truth, and then the courage to follow that dim light. The first can be characterized with the French expression coup d’oeil and the second is conviction.20

This sounds a bit dangerous. It could be an excuse for stubbornness, for not listening, for bees in the bonnet and private agendas. That is why it is rare. The key is determination based on insight. Clausewitz realized this:
There are people who possess a highly refined ability to penetrate the most demanding problems, who do not lack the courage to shoulder many burdens, but who nevertheless cannot reach a decision in difficult situations. Their courage and their insight stand apart from each other, never meet, and in consequence they cannot reach a decision. Conviction results from an act of mind which realizes that it is necessary to take a risk and by virtue of that realization creates the will to do so… the sign of a genius for war is the average rate of success.21

The phenomenon of making good judgments in uncertainty has since been the object of careful examination. It is about the use of intuition.
Psychologist Gary Klein has made a study of intuitive decision making. By observing experts in a given field in situations in which they made decisions, Klein realized that they did not follow the conventional “rational model” of developing and evaluating options before choosing between them. They seemed to go straight to the answer, using what appeared to nonexperts, and indeed often to themselves, to be a “sixth sense.” On analysis, the sixth sense turned out to be perfectly rational. It was based on pattern recognition. Through years of experience in their field, experts build up patterns of expectation, and notice immediately when something unusual occurs which breaks the pattern. These signals make the “right” decision obvious to them. It looks to others and feels to them to be intuitive, but the intuition is schooled, and rational. Clausewitz gives it the French name coup d’oeil, the “glance of a practiced eye.” Germans more usually refer to Fingerspitzengefühl, the “feeling in your fingertips.” In the Anglo-Saxon world things take place more viscerally – it is “gut feeling.” Whatever the language, schooled intuition is the basis of insight.22 It was this discipline which von Moltke mastered in his domain of military strategy.
Insights into the center of gravity of a business and hence innovative strategies tend to come from people of long experience who have an unusual capacity to reflect on that experience in such a way that they become aware of the patterns it shows. This awareness enables them to understand how all the elements of their experience relate to each other so that they can grasp and articulate the essentials. Because of this, what to others is a mass of confusing facts is to them a set of clear patterns making the answer to many problems obvious.

Hence they have the courage to act. Because they base their decisions on that understanding, and because that understanding is sound, they tend in the long run to get more things right than wrong and so demonstrate the above-average success rate that Clausewitz identifies as marking them out. We tend to speak of them as having “good judgment.” In their field they do. But because it is grounded in pattern recognition, the quality of their judgment is dependent on context and they do not necessarily display it in every area of human activity.23
A short story may illustrate the point.

A few years ago, I visited a manufacturer of domestic boilers. At the time, the company was number three in the market and was not only making good returns but gaining share, closing the gap with the number two player. I asked all the top executives why the company was so successful. One said it was the quality of the product – but he admitted that the differences with competitors’ products were small. One said it was the brand – but had to admit that the market leader’s brand was also very strong. So it went on: R&D, technology, production efficiency, delivery times, customer service – all had their advocates, but none in itself felt compelling.

My last interview was with the managing director. I asked him once again why the business was so successful. “Let me tell you how our business works,” he said. “Almost all of our domestic business is for replacement of existing boilers. People replace boilers when their existing ones break down. What do you do when your boiler breaks down? You call the installer,” he continued, answering his own question. “When he tells you the boiler is too old to repair because he can’t get the parts, what do you do?” He paused. “I’ll tell you. You do what he suggests. And when you ask him which new boiler to install, he tells you that too. So 90 percent of all purchasing decisions are made by the installer.” He paused to let this sink in. “Our business,” he said deliberately, “is about service to the installer. But I am the only person around here who gets that. They all think I’m an old man with a bee in his bonnet.” He looked me in the eye. “We are being successful because we offer our installers better service than any of our competitors. But we can do even better. I know that if we gear up the whole company toward optimizing service to the installer, right across the value chain, we can become market leader.”
It all seemed very simple. It made perfect sense. The company was clearly doing more to enhance service to the installer than any other player in the market. Everyone knew that it was important – but so were lots of other things. The managing director was the only one there who regarded it as essential. He knew every detail of his business, built up over 30 years of experience. He not only knew every tree in his particular wood, he could describe the state of the bark on each one. However, he was also the only one who could readily describe the shape of the wood. He had grasped the basis of competition, the center of gravity of the business, and hence the source of its competitive advantage.
This informed all his operational decisions. He wanted to increase the number of visits installers paid to the company’s site – which was already more than any of their rivals – and build a new training center. He was obsessed with the quality of its installation literature. He was ready to invest whatever it took to increase spare parts availability at the distributors so that installers did not waste time waiting for a part. He wanted the new range of boilers the company was just developing to be energy efficient, quiet, and reliable, but above all he wanted them to be easy to install. And so on. And it was working.
He wanted to run some strategy workshops to focus all his top team on optimizing service to the installer. They were already making their implicit strategy happen, but as it became explicit and the top team grew more aligned, so decision making and execution became more focused. At the time of writing the company has overtaken the number two player, and is closing the gap with the market leader.

In this example, service to the installer is the source of competitive advantage my friends are seeking to exploit. Their aim is to achieve leadership of their chosen segments. They have identified becoming the supplier of choice to the installer as an opportunity across the market, and by excelling at that they are unhinging the position of their major competitors. They already have the capabilities to do so, but they are investing further in those capabilities and creating others. They are doing what all successful strategists do, which is to build further on their existing strengths. They therefore have a coherent strategy – they have linked up all three corners of the strategy triangle.

Their capabilities took time to build and have become complex and interlocking. They have allowed the company to build a position in the market which is sustainable because they also create barriers around it, making it difficult for competitors to do the same thing as well as they do. The proposition they offer installers is a powerful one. That results in further intangible advantages such as their reputation. Their proposition has become hard to copy, and by continuing to invest in its strengths, the company is maintaining its advantage. Their strategy informs all their decisions and their operational plans. It is being pursued as a central idea under continually evolving circumstances.
Their competitors are having to play a similar game, because service to the installer is the center of gravity of the business as a whole. Other businesses admit of more than one center of gravity. In the airline business, one can compete on the basis of service, focusing on the business traveler, but in the last decade some have realized that another option is to compete on price, and the low-cost airline – offering a very different value proposition – has changed the business as a whole, based on an insight into another set of market opportunities and a different set of corresponding capabilities. Centers of gravity are not static. For example, changes in technology have altered the basis of competition in the computer business from the period of the CPU, through the distributed server, to the PC, to the laptop. Failing to shift its position fast enough, the original dominant player, IBM, lost its position, went through a crisis, and has emerged as a survivor in a very different and more diverse competitive landscape.
Identifying the competitive center of gravity is a first step in setting direction and will inform further decisions. The most fundamental strategic decisions are those determining the compass heading and/or destination. From those follow further decisions about investment, resource allocation, and actions. The direction has to be turned into a path, the route of which is always informed by the center of gravity, but which also takes account of changing circumstances. That means that making the strategy happen will require a whole series of decisions on the part of a wide range of people.
Being made in the context of strategy, those decisions will have the reciprocal relationship between ends and means that is characteristic of it. As they involve overall direction, they will tend to be cross-functional and, as von Moltke observed, they will tend to be “one-offs” because every situation is unique. If we approach them with the natural, intuitive decision-making approach described by Gary Klein, we run a serious risk of getting things wrong. Unless we are strategy specialists (as some consultants are), it is unlikely that our experience base will be appropriate and we may tend to prejudge an issue as being of a certain type. That is the main reason most of the functional executives in the boiler company could not see that service to the installer was the center of gravity. They all knew that it was important. There is an enormous difference between knowing that something is important and realizing that it is the basis of competition.
Having an inappropriate experience base is dangerous when the nature of the issue itself is at stake. We are also liable to become emotionally anchored on a certain solution or type of solution. We therefore need to put together a diverse team and run a disciplined process of going round the loop, moving from the framing of the question itself, through option generation, to option evaluation, and back to reframing the question. It is a characteristic of high-performance teams that they go round the loop more quickly and reframe more often than average ones. It is usually reframing that generates creative solutions. It is because it involves systematic, “going-round-the-loop” thinking rather than linear thinking that von Moltke can refer to strategy as a “free practical art.”
In order to provide guidance for decision making under continually evolving circumstances, strategy can be thought of as an intent.”

Excerpt from: Bungay, Stephen. “The Art of Action: Leadership that Closes the Gaps between Plans, Actions and Results.” iBooks.

Saturday, August 18, 2018

Chap 2

Machine as Metaphor
During [Sigmund] Freud’s university years (the late 1870s and early 1880s), young enthusiasts in the fuzzier disciplines, such as psychology, liked to borrow terminology from the more rigorous and established field of mechanical physics. The borrowed terms became, in fact, metaphor; and metaphor, like a shrewd servant, has a way of ruling its master. Thus, Freud wound up with the idea that libido or sexual “energy,” as he called it, is a pressure that builds up within a closed system to the point where it demands release, as in a steam engine:
Over the past twenty years . . . neurophysiologists have begun to study the actual workings of the brain and central nervous system. These investigators find no buildups of “pressure” or “energy,” sexual or otherwise, for the simple reason that the central nervous system is not analogous to an engine. They regard it as more like an electronic circuit, such as a computer or a telephone system.15
—Tom Wolfe, “The Boiler Room and the Computer”
Like psychologists, economists were taken with mechanical metaphors, particularly in the aftermath of World War II. Uppermost in their minds were mass-produced machines, such as the T-34 tanks that the Soviet Union used to turn the tide in its struggle against Nazi Germany. Postwar economists naturally thought of an economic problem consisting of allocating key resources, such as steel and oil, among alternative uses, primarily aircraft, warships, and battle tanks.
The graduate program in economics at MIT was at first heavily funded by the U.S. Department of Defense, as World War II seemed to show the importance of combining economics with engineering. That combination was particularly useful for addressing problems of constrained optimization. That is, given a fixed capacity to produce, say, steel and rubber tires, what is the optimal quantity of tanks and airplanes to manufacture?
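The kind of constrained optimization problem described above can be sketched in a few lines of code. The quantities below (resource requirements, capacities, and the relative "military value" of each good) are invented purely for illustration; because the problem is linear, the optimum sits at a corner of the feasible region, so with only two goods we can simply enumerate the corner points:

```python
# Hypothetical planner's problem: choose quantities of tanks and planes to
# maximize "military value" given fixed stocks of steel and rubber.
# All coefficients are invented for illustration.
#
#   maximize   3*tanks + 2*planes
#   subject to 10*tanks + 5*planes <= 1000   (steel capacity)
#               2*tanks + 4*planes <=  400   (rubber capacity)
#              tanks >= 0, planes >= 0

EPS = 1e-9  # tolerance for floating-point comparisons

def feasible(t, p):
    """Check that a production plan respects both resource constraints."""
    return (t >= -EPS and p >= -EPS
            and 10 * t + 5 * p <= 1000 + EPS
            and 2 * t + 4 * p <= 400 + EPS)

# Corner points of the feasible region: the origin, each constraint's axis
# intercept, and the intersection of the two constraint lines
# (10t + 5p = 1000 and 2t + 4p = 400 meet at t = p = 200/3).
corners = [(0.0, 0.0), (100.0, 0.0), (0.0, 100.0), (200 / 3, 200 / 3)]

# A linear objective attains its maximum at one of the feasible corners.
value, tanks, planes = max(
    (3 * t + 2 * p, t, p) for t, p in corners if feasible(t, p)
)
print(f"tanks={tanks:.1f}, planes={planes:.1f}, value={value:.1f}")
# → tanks=66.7, planes=66.7, value=333.3
```

Real allocation problems involved thousands of goods and constraints, which is why the simplex method and, later, solvers such as `scipy.optimize.linprog` were developed; the point here is only the shape of the problem the postwar economists had in mind.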
Constrained optimization became a trademark of postwar economic theory. MIT economists modeled both the individual consumer and the individual firm as solving constrained optimization problems. There was even a brief attempt to model all economic policy as solving a constrained optimization problem using a “social welfare function,” which would substitute society’s values for market prices. MIT and the economics profession as a whole developed confidence that with modeling, mathematics, and statistical information, they could fine-tune the economic machine.
The MIT revolution was led by Paul Samuelson, who in 1970 became the first American to win the Nobel Prize in Economics, and whose textbook dominated the market in the 1950s and 1960s and serves as the template for popular current textbooks.

Samuelson and his successors taught that the economic machine had a gas pedal that could be used to avoid economic slowdowns. That device was “aggregate demand,” which could be increased by the government’s printing money, running a budget deficit, or both. In this economic subfield, known as macroeconomics, the concept of specialization is forgotten entirely. Instead, economists employ an interpretive framework in which every worker performs the same job, toiling in one big factory that produces a homogeneous output. Macroeconomics replaces specialization with that GDP factory.
Fifty years ago, those researchers who were not seduced by Freudian metaphors were likely to be enamored of B.F. Skinner, who taught that human behavior could be interpreted as if we were machines that respond predictably to past experiences of pleasure or pain. However, today, many researchers prefer the interpretive framework of evolutionary psychology. They see the human brain as being endowed with capabilities that evolved in the era of hunting and gathering but that can be adapted to very different environments. Both individual behavior and cultural norms respond to more than just the stimuli of reward and punishment.
Although psychology has moved on, the discipline of economics has remained stuck in its mechanistic metaphors. As a result, economists engage in an ultimately futile attempt to apply mathematical methods that are analogous to those used to measure a tank’s speed, firepower, and armor. Economists have yet to incorporate metaphors that pertain to the computer or communication networks. They have not come to terms with the reality that an economic system is much more complex than a T-34 tank.
Computer networks, which today offer a powerful metaphor for thinking about the brain or human society, were not around in that key period in the history of economic thought. Until the mid-1970s, the only computers were mainframes, referred to as “big iron.” Far larger than today’s computing devices, and yet less powerful, computers were classified as heavy equipment. Owning a single mainframe required a massive investment. In the private sector, only the largest businesses could afford them, and only a handful of firms, primarily IBM, could supply them.
Managers thought of mainframe computers as akin to mechanical calculators. Rather than working on general, multipurpose software, most people who wrote computer code were programming the machine to perform a specific calculation.
The advent of the personal computer in the late 1970s and early 1980s brought software to prominence, exemplified by Microsoft. Subsequently, the emergence of the public Internet in the 1990s demonstrated the significance of decentralized networks offering specialized sources of content and connection.
I have come to see software and Internet resources as useful metaphors for the market economy. In my view, it is better to think of the economy in relation to the Internet than in relation to a T-34 tank. A tank performs only a few functions. It is deliberately designed by a small group of engineers. It can be understood and evaluated using a few simple measurements, such as speed, armor thickness, and gun capacity. In contrast, the services available on the Internet perform myriad functions. The resources on the Internet, and the patterns of specialization and trade in the market, emerge from the actions of countless individuals, not from the minds of a few designers. And the factors that affect the value of market production or Internet resources are many, complex, and not all quantifiable.
The “hardware” of the economy consists of its physical resources and physical outputs. However, as with computers, economic “software” is at least as important. Most of the world’s wealth is intangible. It consists of our individual and collective knowledge. We know how to transform apparently useless gunk, called oil, into energy. We know how to transform an apparently inert element, called silicon, into computer chips that enable us to process information and to communicate more efficiently. As consumers, we know how to work with complex equipment, such as automobiles and cell phones. As workers, we have great stores of industry-specific and task-specific know-how.
Most of the world’s economic backwardness also comes from intangible factors. Where widespread poverty exists, it can be traced to bad governance, violent conflict, counterproductive social norms, and poor education.
Samuelson and his successors created a modern economic orthodoxy that is flawed in several important ways. As we have seen, the engineering approach is best suited to tangible, quantifiable elements, such as the number of hours worked in factories or the number of machines in use. However, economic reality is more subtle and complex.
The MIT-influenced approach that dominates the economics profession treats individual markets and the economy as a whole as if they were simple machines. It embodies a view that economic behavior can be analyzed and predicted on the basis of mathematical equations. The economist plays a role analogous to that of a mechanical engineer, using models and equations to suggest ways for policymakers to make markets operate more efficiently.

However, economic models of markets are not as powerful as the engineer’s model of a machine. Economists’ mathematical modeling fails to come to terms with the complexity of economic phenomena. In the real world, too many factors have to be left out of the mathematical models.
Defenders of the modern orthodoxy will argue that the use of mathematics is a way to force economists to keep track of their assumptions. Formal models make assumptions explicit, rather than hiding them. Mathematical derivations demonstrate how the assumptions interact.
The use of mathematics helps verify the connection between assumptions and conclusions, but it does not guarantee that we are making good choices in our assumptions. On the contrary, we often make very bad assumptions, because better assumptions would be too difficult to handle mathematically. Thus, we use “two-by-two” models of international trade, when the reality consists of much more complex forms of specialization. We model “expectations” as a set of identical beliefs held by everyone in the economy, when in fact differences exist among people with regard to information and expectations. We take market imperfections as given, rather than consider how enterprises and institutions might evolve to address current problems.

One example of dubious analysis based on mathematics concerns the question of how retired people should allocate their assets between annuities and ordinary savings. For example, if you have $300,000 in savings, you could use it to obtain an annuity from an insurance company that will pay you, say, $30,000 a year for as long as you live. Instead, suppose that you gradually spend from the savings that you have. In that case, if you live exceptionally long, you face the risk of outliving your savings.
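The trade-off described above can be made concrete with a small sketch. The $300,000 and $30,000 figures come from the text; the 2 percent interest rate on remaining savings is an assumption added here for illustration.

```python
# Sketch of the retiree's choice described above, under assumed numbers:
# $300,000 in savings, an annuity paying $30,000 per year for life, versus
# spending $30,000 per year from savings earning an assumed 2% interest.

def years_until_depleted(savings, annual_spending, interest_rate):
    """Count full years of spending before the savings run out."""
    years = 0
    while savings >= annual_spending:
        savings = (savings - annual_spending) * (1 + interest_rate)
        years += 1
    return years

if __name__ == "__main__":
    years = years_until_depleted(300_000, 30_000, 0.02)
    print(f"Drawdown lasts {years} years; the annuity pays for life.")
    # → Drawdown lasts 11 years; the annuity pays for life.
```

The sketch shows the longevity risk in its starkest form: live past the depletion year and the drawdown strategy leaves nothing, whereas the annuity keeps paying. The considerations listed below explain why this simple comparison nonetheless overstates the case for annuitization.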
An extensive literature using mathematical modeling says that most or all of retiree savings should be converted to annuities. Economists have even suggested that because few people use annuities, it might be appropriate to force them to do so. For example, Davidoff, Brown, and Diamond write:

The near absence of voluntary annuitization is puzzling in the face of theoretical results that suggest large benefits to annuitization. While incomplete annuity markets may render annuitization of a large fraction of wealth suboptimal, our simulation results show that this is not the case even in a habit-based model that intentionally leads to a severe mismatch between desired consumption and the single payout trajectory provided by an incomplete annuity market. These results suggest that lack of annuity demand may arise from behavioral considerations, and that some mandatory annuitization may be welfare increasing.
Knowing of that literature, I had always inferred that annuitization was a good idea, until I observed close relatives reach retirement age. Then, when they asked me for financial advice, I noticed the following considerations:
  1. As people reach their later years, their financial needs are dominated by health issues. That means that consumption needs are not smooth. On the contrary, the elderly can face sudden increases in the cost of dealing with disabilities (moving to assisted-living facilities, or requiring a home health aide) and a steady decline in non-health-related spending (less travel and entertainment).
  2. A lot of scope for insurance exists within a family. If one spouse lives an exceptionally long time, that spouse can be supported by savings left over from the spouse who died earlier. Alternatively, a long-lived parent can be supported by children.
  3. If you wind up spending the last few years of your life in a nursing home, then you are not better off for having an annuity with nothing to spend it on.
In principle, mathematical models can be adapted to take into account real-world complications. In practice, however, economists tend to draw strong conclusions from simple models.
When it comes to the financial sector, mathematics has served economists mainly as a blindfold. Engineering models are poorly suited to articulating the role of financial intermediation in the economy:
  • In the engineering model, the essence of economic activity is turning resources into output. However, there is no tangible output from the financial sector to quantify.
  • The engineering approach divides into “partial equilibrium,” which looks at the behavior of a single market, and “general equilibrium,” which looks at the interaction among all markets. The financial sector is important for the way in which it interacts with other sectors, so that it is not well understood using a partial equilibrium approach. However, most general equilibrium models are posed as mathematical problems that can be solved without any financial sector at all.
In his recent book Foolproof, financial journalist Greg Ip says of economic policy analysts:
Philosophically, they fall into two schools of thought. One, which I call the engineers, seeks to use the maximum of our knowledge and ability to solve problems and make the world safer and more stable; the other, which I call the ecologists, regards such experts with suspicion, because given the complexity and adaptability of people and the environment, they will always have unintended consequences that may be worse than the problem we are trying to solve.
I fall on the ecologist side of this divide. Although an engineer thinks of machines as having stable, predictable properties, an ecologist thinks of an evolving, adapting system.
The engineering approach requires a presumption that someone is standing outside the economy—an economic policy adviser—who, with the aid of simple models and equations, clearly sees what it would take to achieve outcomes that are superior to those that would emerge without government intervention. The engineers argue, quite reasonably, that we should not interpret market outcomes as perfect or ideal. However, they implicitly assume, much less reasonably, that the political process, aided by economic models, will succeed in correcting the flaws in markets. Moreover, they assume that individuals and organizations acting in the context of markets will be unable to adapt to solve the problems that arise.
There are several fundamental concerns with the presumption of a wise, benevolent policy process. It treats the knowledge embedded in an economist’s simplified model as though it were complete knowledge. It ignores the ways that markets might adapt to solve problems. And it presumes that when the political process goes to work on problems, it arrives at solutions flawlessly.
Economists talk about “market imperfections” or “market failure.” However, economic models are themselves imperfect and capable of failure. If you ask different economic experts to predict the effect of a change in health insurance regulation or an increase in the corporate income tax rate, you will get different answers. The answer that appears to have the strongest support may turn out to be incorrect in practice.

As outsiders, economists see some of the conditions in a market, but they omit other factors. In that regard, economists are no different from other outsiders. To the extent that there are outsiders who see a flaw in how the market serves consumers, those outsiders have the option of starting a business to address the problem. That is what entrepreneurs do all the time, and they are the main engine of economic progress.
However, entrepreneurs are often mistaken, and new businesses often fail. By the same token, economists and policymakers are also capable of making errors. What we should be comparing is not the existing market configuration with an ideal based on a simple model but the market process of error correction with the political process of error correction.
My skepticism of mechanistic, mathematical modeling leads me to reject the implicit assumption of a nearly omniscient economic adviser. I will return to this subject in the section on policy in practice.

SPECIALIZATION AND TRADE
In 2000, Ursinus College, a small undergraduate institution in Pennsylvania, raised its tuition by more than 17 percent. Subsequently, the number of applicants and acceptances for its freshman class rose. That outcome appears to violate the law of demand, which says that demand goes down as the price goes up. Has the law of demand been found to be false?
Neither I nor any other economist would be willing to concede that the law of demand fails to hold. Instead, we would look for factors that might account for the Ursinus College application anomaly. For example, we might ask whether competing colleges raised their prices as much as or more than Ursinus. Perhaps Ursinus College had a successful basketball season, or another factor raised its profile among high school students. Perhaps a new government program increased the subsidies for college students.
In fact, the article on Ursinus College mentions that it also raised its level of student aid by close to 20 percent, and that the majority of students paid less than half of the full price. One can argue that, rather than defying the law of demand, Ursinus was using it. The college was taking advantage of what economists call price discrimination, charging a high price to those students willing to pay while luring the more price-sensitive students with generous aid.
10. There is an exception to the law of demand, known as Giffen’s Paradox. Supposedly, when the price of potatoes went up in Ireland, Irish people were so impoverished by that rise that they consumed less meat and more potatoes. In what follows, let us agree to ignore this exception. It certainly would not explain the Ursinus College anomaly.


The point is that the law of demand holds only when “all other things are equal.” However, in the real world, other things are almost never equal. In the case of Ursinus College, financial aid policies were not being held equal.
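The arithmetic behind the price-discrimination reading can be sketched with hypothetical numbers (the 17 percent and 20 percent increases are from the text; the sticker price and aid amounts below are invented for illustration). Raising the sticker price 17 percent while raising aid 20 percent loads most of the increase onto full-price students, while the more price-sensitive aided students face a smaller percentage increase in their net price.

```python
# Hypothetical numbers illustrating the price-discrimination reading of the
# Ursinus episode. Only the 17% tuition increase and ~20% aid increase come
# from the text; the dollar amounts are invented for illustration.

def pct_change(old, new):
    """Percentage change from old to new."""
    return 100 * (new - old) / old

sticker_old, sticker_new = 20_000, 20_000 * 1.17   # 17% tuition increase
aid_old, aid_new = 12_000, 12_000 * 1.20           # 20% aid increase

full_payer = pct_change(sticker_old, sticker_new)
aided = pct_change(sticker_old - aid_old, sticker_new - aid_new)

print(f"Full-price student's net price: +{full_payer:.1f}%")  # +17.0%
print(f"Aided student's net price:      +{aided:.1f}%")       # +12.5%
```

Under these assumed numbers, the students willing to pay absorb the full 17 percent, while the aided students' net price rises by much less; with a sufficiently large aid increase, their net price could even fall.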
In physics or engineering, when you leave out a factor (such as friction), you do so because you can show that in the context of your analysis that factor will not be important. In economics, we typically cannot do that, because we do not control the environment in which we undertake a study. Many important causal factors will be operating at once, and although we might hope or pretend that none of them matter, we often have no basis for ruling out their importance.
When economists seek to explain phenomena, we usually confront a long list of possibly influential factors. Unlike physicists or engineers, we cannot demonstrate that factors are unimportant in order to justify ignoring them. Instead, we are subject to what is known as confirmation bias. That is, we tend to selectively cite observations that confirm our views, ignoring other factors that might be at work. However, when observations appear to confound our views, we seek out and cite those other factors. If its applications had fallen when Ursinus College raised tuition, we would not have looked for other explanations. However, when demand increased, we were inclined to examine other factors.

Is economics a science? Some people, mostly economists, believe that it is. On the other hand, other people, mostly noneconomists, are skeptical or even scornful of what economists teach.
I think that both camps are guilty of underestimating the challenge of arriving at economic understanding. Those economists who claim the mantle of science are guilty of hubris. Noneconomists who think that their own intuition is superior to economic reasoning are dangerously misguided.
Imagine that you had a scale to measure the carefulness with which someone reasons about a subject. Let that scale run from 1 to 5, with 1 representing the most careless sort of reasoning, filled with superstition and personal biases, and 5 representing scientific reasoning, based on mathematical logic and experimental observation. Where does economics fit in?
I believe that good economics is at least a 6! That is, good reasoning in economics requires more careful thinking than good physical science—for two reasons. First, more causal factors are at work in economics than in physical science. Second, although physical relationships are relatively stable, the economy evolves rapidly, including evolution in response to government’s attempts at regulation.
A key component of the scientific method is making statements that are verifiable. A proposition can be verified only if it can be tested against a standard of truth. Putting a proposition up against a standard of truth means taking the chance that the statement can be falsified. Thus, scientific propositions must have the potential to be falsified. This philosophy of scientific inquiry is called “falsificationism.”
For the most part, statements that qualify as scientific propositions are falsifiable. They are either mathematical proofs, which can be falsified by showing a flaw in their internal logic, or else hypotheses about what we observe in the world, which can be falsified through careful observations and experiments.
According to that scheme, a belief that cannot be falsified either by logic or by evidence is nothing but dogma. Dogmatic beliefs cannot be falsified, but that is only because you hold onto your dogma regardless of any arguments that can be raised against it.
Reasonable beliefs should not be false, of course, but they should be subject to testing against logic or observation. To put the case for falsificationism another way, one would say that any proposition that cannot be falsified is by the same token a proposition that cannot be verified.
If you hold onto a belief so dogmatically that no evidence could change your mind, then that belief is not falsifiable. Nonfalsifiable dogma is the worst sort of belief. Reasonable people can settle differences of opinion regarding falsifiable statements. Not so with dogma: when a belief cannot be falsified, scientific argument becomes pointless. That is why scientists prefer to deal in propositions that are falsifiable.
However, not all scientific beliefs are falsifiable. A few key beliefs, called paradigms by Thomas Kuhn,11 and which I will call “frameworks of interpretation,” are so fundamental to how scientists view their subject that they are almost beyond question. For example, Darwin’s theory of evolution is a fundamental framework of interpretation in biology. Biologists no longer ask whether Darwinian evolution can explain phenomena. Instead, they talk about how the theory can be adapted to provide explanations.
A framework of interpretation cannot be falsified. However, many frameworks suffer from anomalies. In evolution, for example, some phenomena, such as a peacock’s large tail, would appear to reduce survivability. To address that anomaly, biologists have suggested that the large tail signals strength and attracts potential mates, thereby actually tending to increase the survivability of that characteristic.
The difference between a falsifiable proposition and an interpretive framework is that it takes only one anomaly to reject a falsifiable proposition. A single clear-cut logical flaw serves to falsify a logical proposition or mathematical proof. A single conclusive experiment serves to falsify an empirical hypothesis. However, a single anomaly does not lead someone to abandon an interpretive framework. (Keep that in mind the next time you see someone claim that “this one chart” provides definitive proof for or against a particular economic viewpoint.) An anomaly makes scientists uneasy, but they look for ways to address the anomaly without abandoning their interpretive framework.
Up to a point, scientists will stick with an interpretive framework in spite of anomalies. However, if enough anomalies accumulate that scientists become uncomfortable with a framework, and they find that an alternative framework addresses the anomalies and is compatible with existing knowledge, then they will switch to the new framework. That switch is what Kuhn calls a scientific revolution.
In general, I shy away from using the term “social science,” because I do not think that economists can aspire to the same level of falsifiability as physicists. I believe that the difference between social science and natural science boils down to this:
In natural science, there are relatively many falsifiable propositions and relatively few attractive interpretive frameworks. In the social sciences, there are relatively many attractive interpretive frameworks and relatively few falsifiable propositions.
The reason that there are relatively few falsifiable propositions in the context of social phenomena is that many causal factors exist, and decisive experiments are rarely possible. Social phenomena are characterized by high causal density, to borrow a term from James Manzi.12
As a result, economics is closer to history than to physics. If a historian wants to examine the causes of the decline of Rome, or the decline of empires in general, he or she will provide an interpretive framework. That framework cannot be falsified, but readers can compare it with other frameworks and make judgments about its plausibility.
For example, consider the phenomenon of the comparative salaries of men and women. Economists interpret salaries using the framework of human capital. That is, workers bring to the market different levels of ability, training, and experience, and those attributes determine what they are able to earn. Sociologists use a framework that emphasizes group identity, status, and power, with men the more dominant group and women the more oppressed group.
If a study were to suggest that women earn less than men, even when controlling for years of education and other indicators of human capital, then that would be an anomaly for the economists. If a study were to suggest that most of the lowest-paying occupations are occupied predominantly by men, then that would be an anomaly for the sociologists. However, such observations will not prove decisive. By invoking other factors to explain anomalous results, each side can remain unmoved. Economists will not abandon their human capital framework, nor will sociologists abandon their group-status framework.
What economists call “models” are interpretive frameworks. They are presented mathematically, with proofs that connect initial assumptions to ultimate predictions. However, the predictions are not falsifiable. The models’ predictions hold only when other things are equal, and other things are never equal.
For example, consider the very common equation Y = f(K, L), which says that output is a function of the amount of capital and the amount of labor. One obvious prediction is that more of either factor will tend to increase output.
That production function is used to interpret data in various contexts, including making comparisons of labor productivity.
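A standard concrete instance of Y = f(K, L), not taken from the text but widely used in the literature, is the Cobb-Douglas form Y = A·K^α·L^(1−α). A short sketch (with illustrative parameter values) shows the prediction mentioned above:

```python
# A common concrete instance of Y = f(K, L): the Cobb-Douglas production
# function Y = A * K**alpha * L**(1 - alpha). The parameter values here
# (A = 1.0, alpha = 0.3) are illustrative, not estimates.

def cobb_douglas(K, L, A=1.0, alpha=0.3):
    """Output as a function of capital K and labor L."""
    return A * K**alpha * L**(1 - alpha)

# The obvious prediction: more of either factor raises output.
base = cobb_douglas(K=100, L=100)
more_capital = cobb_douglas(K=110, L=100)
more_labor = cobb_douglas(K=100, L=110)
assert more_capital > base and more_labor > base
```

The trouble described in this chapter is not with the algebra, which is impeccable, but with treating K, L, and Y as if they were single, well-measured quantities when each is really a weighted average over incomparable things.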

For example, suppose that Alan’s lawn service can mow more lawns per worker than Bob’s lawn service. The first variable that an economist will look for to explain the difference is the number of lawn-mowing machines per worker at each firm. If Alan’s service does not use more lawn-mowing machines per worker than Bob’s, then the economist will look at the quality of the mowing machines at the two firms. If that does not explain the difference, then the economist will fall back on “better management” or some other factor. The less closely that the explanation can be tied to capital, the more anomalous the result will be.
Economists actually try to use the production function to explain productivity differences between entire countries or to explain the historical path of productivity within a country. However, that approach requires taking a weighted average of many different types of outputs and treating the weighted average as if it were a single type of output. Similarly, economists must construct measures of aggregate capital and aggregate labor by taking weighted averages of many different types of each. Many other factors affect aggregate productivity, including endowments of natural resources, government policies, and the diffusion of knowledge. Not surprisingly, in empirical studies, many anomalies can and do crop up, so that the issue of what causes productivity to differ across countries or to change over time remains highly controversial.

Another challenge for economics is that the economy evolves. Consider some of the factors in the relationships between aggregate output, total labor input, and total capital input. Imagine trying to compare the U.S. economy today with that of 50 years ago. We have to take into account major changes, including the following:
  • Many fewer people are in the labor force with less than a high school education, and many more people have at least some college education.
  • The share of output in agriculture and manufacturing has fallen, whereas the share of output that consists of services has risen.
  • Some outputs today, such as smartphones and heart transplant surgeries, cannot be compared with outputs of 50 years ago.
  • The share of workers directly involved in production has fallen. The share of workers who are developing organizational capacity has risen.
  • The share of computers in total capital has been rising. The cost of this particular type of capital equipment has plummeted, and its characteristics have changed radically, making it difficult to measure reliably how the value of investment in computers has changed over time.

The economy also evolves as new business models, new production processes, and new institutions emerge to solve problems. The “market failures” identified in economic models are only a small fraction of the imperfections that exist at any one time in the economy. Businesses and other organizations are constantly working on solutions to those problems.
Nobel Laureate George Akerlof famously provided an interpretive framework for the used-car market in which high-quality used cars would be kept off the market, because buyers would have to assume, in the absence of other information, that all used cars were “lemons.”13 However, that framework assumes that no market adaptation exists to address the problem. The information problem in the used-car market can be addressed in a variety of ways. For example, mechanics can inspect used cars before consumers purchase them. Sellers can offer warranties on the cars. Decades after Akerlof’s article was published, a national used-car dealer called CarMax emerged with a business model based on a reputation for selling high-quality used cars. Other services emerged to make the repair and service records of used cars transparent to buyers.
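The unraveling logic of the lemons framework can be sketched in a few lines. The setup below is a stylized version of the standard textbook treatment, not taken from this text: car quality is uniform on [0, 1], a seller parts with a car only when the price exceeds its quality, and buyers value a car at 1.5 times its quality but can only pay according to the average quality actually offered at the going price.

```python
# A stylized sketch of Akerlof's lemons logic (standard textbook setup,
# not from the text): quality is uniform on [0, 1]; at price p, only cars
# with quality <= p are offered, so average offered quality is p / 2;
# buyers then pay 1.5 * (p / 2) = 0.75 * p. Iterating, trade unravels.

def unravel(price, rounds=20):
    """Iterate the price adjustment; the market price collapses toward zero."""
    for _ in range(rounds):
        avg_quality_offered = price / 2       # only cars with quality <= price
        price = 1.5 * avg_quality_offered     # buyers pay 1.5 * expected quality
    return price

print(unravel(1.0))  # → a tiny number near zero (0.75**20 of the start)
```

The sketch also shows exactly where the adaptations mentioned above enter: an inspection or a warranty lets buyers condition the price on a car's actual quality rather than on the average, which breaks the downward spiral.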

Markets also adapt in response to our attempts to regulate them. For example, economists have pointed out that the way in which physicians are compensated in the United States, with billing based on procedures, distorts the incentives of doctors so that they tend to perform too many procedures that have high costs and low benefits. However, if that system were changed so that doctors were compensated only according to the number of patients that they see, then we would likely have the opposite problem: to bill for as many patients as possible, doctors would try to avoid doing time-consuming procedures. If doctors were compensated on the basis of patient outcomes, then they would select patients who were likely to have good outcomes, avoiding some of the most difficult patients.
Because of causal density and evolution, economists cannot be certain of the reliability of our assumptions. Thus, any interpretive framework may be inappropriate, depending on circumstances.
Economic models contain many unverifiable assumptions in a context in which plausible alternatives exist. Consequently, when we observe something contrary to the expectations derived from a model, say a decrease in the price of milk or an increase in the overall unemployment rate, we do not know which of many assumptions was mistaken or which of many alternative explanations accounts for the data.

In physics or chemistry, the number of unverifiable assumptions and alternative models is whittled down through the process of experimental verification. In economics, because controlled experiments are not feasible, such whittling down cannot take place. A particular equation or set of equations becomes popular in the modern economic literature because economists find it interesting or tractable. But it does not have anything like the experimental support that exists for equations in physics or chemistry.
Economists who employ models think of themselves as “doing science,” meaning that they are generating falsifiable propositions. However, in practice, they rarely reject their preferred models. Instead, they explain away anomalous observations. In that sense, they are really using their preferred models as interpretive frameworks.
The fact that interpretive frameworks are not falsifiable would not matter if the interpretations were never problematic. However, all interpretive frameworks suffer from anomalies, that is, from phenomena that do not easily fit into the framework. Consequently, conflicts between interpretive frameworks are very difficult to resolve. As we saw with male–female pay differentials viewed using the economist’s or the sociologist’s framework, each side can point to anomalies on the other. What is unfair is to treat the other person’s model as falsifiable, unable to survive even a single anomaly, while you privilege your preferred model by explaining away any number of anomalies. Unfortunately, that sort of asymmetry pervades arguments among economists.
In short, I believe that it is useful to think of economists as constructing interpretive frameworks. Those frameworks are fragile, in that there are almost always anomalies—observations that are difficult to interpret using the framework. Popularity of a framework is not necessarily a sign of its strength. If a few leading professors get behind a particular framework and pass it on through their graduate students, then that framework can dominate the academic journals without being demonstrably superior to other frameworks.
We need to be reasonable in acknowledging the anomalies of our preferred frameworks, and we should be restrained in rejecting others’ frameworks outright on the basis of one or two anomalies. In choosing which frameworks to endorse, we should seek truth while accepting that we may never finally find it. Avoid wallowing in confirmation bias.

Economists do not deal with a subject that offers clear-cut tests of theories. We have to use judgment in deciding which interpretive frameworks to adopt. That does not mean that you should abandon the attempt to reason carefully and rely on simple intuition. Intuition uninformed by any economic framework is at least as flawed as are the frameworks taught in economics courses.
However, you should be wary of economists who claim scientific certainty. President Harry Truman, weary of economists who say, “On the one hand . . . On the other,” reportedly pleaded for a one-handed economist. That would be asking for trouble.