
Friday, August 8, 2003

IS ECONOMICS A SCIENCE? CAN IT BE?

Thanks to a hacker attack, the initial version of this article --- published May 10, 2003 --- is wandering restlessly amid the pitch dark of a distant and dismal CPU-Purgatory, no reprieve in sight, along with all the other articles that appeared between April 18th and July 1. Revised and updated, that article now makes a re-run appearance here. Of the 120 or so articles published since the buggy prof site went into operation in late January this year, it's probably the most challenging intellectually . . . though, let us hope, not beyond the ability of university students or graduates to follow and either agree or disagree with.

Introductory Comments

As a discipline with aspirations to being a science --- or even claims that it already is --- economics encounters a horde of problems that play havoc with the claims and frustrate the aspirations. In the past. Right now. And very likely way into the future.

 

Two such problems stand out, one theoretical, the other practical:

[1] The limited ability of economists to cumulate reliable knowledge of economic behavior, with good predictive power: whether of business firms, or nation-wide economies, or the impact for good or bad of public policies, or the global economy --- including how they interact in increasingly complex ways. All this, moreover, in the face of relentless technological changes, some of a radically restructuring nature that drastically change the way we live, work, and fight wars; or of marked shifts in market dynamism from one country or region to another; or of extra-economic events ("exogenous variables" outside formal models, but influencing the interaction of economic variables) like wars, economic sanctions, and disruptions of critical supplies like oil.

[2] The discipline's limited ability, which follows directly from the first problem, to offer reliable guidelines to policymakers in the public realm beyond common sense and whatever learning occurs over time for dealing with two big sets of stubborn, repetitive challenges:

 *How to handle the ups-and-downs of the business cycle, still with us four decades after the triumphant Keynesians in the Kennedy era claimed their new statistical models of the US economy had given them the ability to extract insights from the modeling and overcome recessions and booms and inflation;

 *Simultaneously, what to do --- however tentatively, but with some capability better than a coin-toss and a prayer --- when big disruptive changes to the status quo occur, particularly as they interact across business firms, interest groups like trade unions or environmental movements, financial institutions and financial hanky-panky (always a problem in capitalism, including ballooning and then busting stock markets), huge transfers of hot money across borders now facilitated by ICT breakthroughs, globalizing trends related to all this, and of course the behavior of governments and central banks in response to these complex changes and interactions --- politicians worried at the same time, needless to say, about being re-elected unless they command a country the way the Chinese Communists do.

The argument justifying these comments divides into three parts. Each is fairly lengthy, and so to help you make sense of the overall analysis, consider the following summary-outline of what will unfold in those parts.  

 

PREVIEW OF PART ONE

 

Though fairly lengthy, this part --- which lays down a lot of key terms and tries to clarify them --- is indispensable for understanding the argument set out in the other two parts. Fortunately, it jogs along at a fast-stride pace and shouldn't be hard to follow. As an aid in that direction, the summary here will tease out more of the implications, in a careful preliminary manner, than the previews of the other two parts do. Don't be impatient. This curtain-raising task --- which entails some lengthy scenes from part one's coming attraction --- can't be side-stepped. In particular, it unfolds a fair number of comments about recent work in epistemology and the philosophy of science --- essentially, logical positivism and its subsequent critics from Quine through Kuhn --- that amount in effect to the price of admission for whisking more nimbly over later intellectual terrain in parts two and three.

 

Paradigm Conflicts

 

The major terms that part one deals with and clarifies are macroeconomics, microeconomics, and the various competing paradigms at work on both sides of the economics discipline. These competitors spawn vying theories across a big paradigm-divide: for instance Keynesianism vs. "new classical" theories, the former attracting mainly liberals and Social-Democrats, the latter conservatives and libertarians. The sharp differences across this paradigm-infused divide don't reflect just explicitly formalized scientific assumptions, plus deduced hypotheses and verification tests in textbook fashion, with the surviving hypotheses feeding back into more refined and effective explanatory generalizations that the theories within a dominant paradigm create. They may reflect such differences. Nobody is denying that. But the key point lies elsewhere: as we'll see, these theoretical disputes along the fault lines of paradigms also reflect moral values, politically charged beliefs, and ideological preferences . . . usually along the familiar left-wing / right-wing divide. True, these ideologically informed disputes no longer turn on sharp, nitty-gritty clashes between socialisms, market capitalisms, and various forms of corporatist-fascisms --- which dominated the emergence of industrial society in most of Europe and parts of Asia and Latin America in the 19th and first half of the 20th century.

 

Fascisms were destroyed in WWII, surviving in modulated, corporatist-authoritarian ways in East Asia and Latin America until the last decade or two (though their more basic core nastiness survives in clerical-fascist Iran and fascist-Mafioso Syria, and lived on until recently in the unlamented Saddam Hussein regime in Iraq and in Taliban Afghanistan). As for the socialist-capitalist debate, it faded in energy and rancor in the EU countries by the 1960s, both left and right generally settling for a compromise interventionist welfare-state, and in Japan by the 1960s as well, thanks to a corporatist, one-party-dominant system of democracy and to remarkably dynamic growth from 1950 until the late 1980s. The disputes are nonetheless still alive --- in more modulated form, including in the US --- between left and right over the nature of capitalist markets, the effectiveness of and need (or not) for governmental regulations and fine-tuning macro policies, as well as the virtues or harm of redistributive policies in matters of income.

 

Something else worth noting about these disputes: tangibly put, even though the major theoretical conflicts are between vying paradigms on the left and right --- both in macro and microeconomics as we'll see --- within each dominant paradigm, there are likely to be divergent theoretical strands that are mini-competitors with one another too.

 

On the Keynesian side --- which generally assumes that there are major market failures that prevent capitalism from achieving optimal growth and full employment and stable prices without active fiscal and monetary policies --- there can be found original Keynesianism of a fairly radical sort, then the much more moderate neo-classical synthesis worked out by Paul Samuelson in the 1950s (including a spin-off, the Phillips curve), then a revived radical Keynesianism called Post-Keynesianism, and a more updated Samuelson version known as New Keynesianism. On the new classical side, there are likewise mini-divergent theories: monetarism (Milton Friedman), rational expectations, "vulgar" supply-side economics (Arthur Laffer: cut taxes and reduce governmental intervention and immediately good things will happen), more serious long-term supply-side work (Martin Feldstein and others), and real business-cycle theory.

Ideology?

A point just referred to in a fast, top-skimming manner needs to be brought to the foreground now and spotlighted briefly.

 

Specifically, though the disputes in economics are carried out in theoretical terms and with widely shared disciplinary methods --- the only way they can be in the discipline of economics --- they still largely reflect assumptions about markets and governments: how self-adjusting and optimally efficient the former are or are not, and how much government intervention is needed to deal with them. As you'll see later on in part two's argument, these differences boil down to three sets of influences: [1] on one side, the liberal and conservative political divide and where individual economists stand there; [2] on the other, discipline-sanctioned ways of arguing these differences out --- in particular, by use of concepts like "market failures" and "government failures" --- with the resulting arguments using formal models, hypotheses, statistical testing, and game theory; [3] and, somewhere in between these two systematic sets of influences, the concerns of individual economists (like all scholars) with their scholarly reputation, prestige, and influence. The three sides --- politics and morality; discipline-sanctioned ways of doing economics; and personality and ambitions --- are hard to separate, and probably impossible here.

 

The theoretical upshot? In the minds of individual economists, the assumptions about market or government failures and how serious each happens to be aren't just infused with empirical observations (facts) and theoretical generalizations. Far from it. Mixed in inseparably in the work individual economists do --- like all scholars, whether they claim to be scientists or not --- are wider, more value-laden matters: in particular, moral and political beliefs and attitudes about such things as inequality or governments or the "best" kind of society and economy. The point can be carried a stage further. Pivotal assumptions about human nature and human motives and social and political life operate here in economic disputes --- invariably, but in unexamined, taken-for-granted ways. How so? Because, quite simply, many of these key assumptions about human nature and politics and social life loom only vaguely, largely veiled, in the background of theories. They have a powerful impact on the ways economists proceed anyway, operating in the semi- and unconscious parts of an individual economist's mind, where they're shaped by his or her personality and by early and later socializing experiences that form a world-view. Most economists would no doubt be surprised if you were to ask them about their "world-view." Oh sure, they might say that, yeah, I'm a conservative who believes in markets --- or yep, I vote for the Social Democratic Party in my country --- but hey, these political leanings have nothing to do with my professional work. Are they deceived? Well, aren't most people victims of self-deception anyway?

 

Paradigm Clarified: Think World-View

Yes, world-view.

 

In many ways, as it happens, it's not a bad stand-in term for paradigm: a comprehensive conceptual scheme in a discipline 1) that mixes explicit and implicit beliefs and assumptions about the world and the subject matter of the discipline itself, 2) that sets out what the major problems are to be studied, and 3) that determines the methods to be used for arriving at hypotheses and verifying them or, at least, in Karl Popper's terms, trying to refute them . . . all in a widely shared "language" of analysis, full of concepts that influence how the group's members see and understand the world. And the assumptions, remember, don't easily separate out into concepts, empirical or factual views, and morally or politically charged value-laden outlooks of the sort that are inescapable in the social sciences. In explaining scientific work in this manner, analytical philosophy --- no, not Johnny-come-lately postmodernist irrationalists, usually unable to set out a clear rigorous argument open to exchanges with others in a give-and-take manner (the hallmarks of analytical philosophy) --- was the pioneer here. To explain briefly: In the early 1950s, Willard Van Orman Quine refuted the logical positivist arguments about the world --- above all, the claim that observations, set in language as propositions, can be divided neatly into theory and logic on one side and facts or empirical matters on the other: a view that went back practically two centuries to Kant, with his distinction between the analytic and the synthetic. In doing so, Quine --- born in Ohio --- rejected the influence of his German-born mentor, Rudolf Carnap . . . the most famous member of the Viennese circle of logical positivists, profoundly influenced in its origins by Ludwig Wittgenstein, who himself never joined that circle (or essentially anything for long in his fascinating life in Vienna and Cambridge).

 

Specifically, according to Quine, facts and theory aren't easily separable, if at all; theoretical assumptions and premises determine the "context" of what's considered factual, and the two sets of propositions intermingle in inseparable ways. From that viewpoint, Quine's the first deconstructionist . . . though in fact, decades earlier, the great pioneers of pragmatism --- Charles Sanders Peirce, William James, and John Dewey --- had been striving to reject the traditional dualisms of philosophy going back to Plato and reinforced by Cartesian and Kantian and Hegelian contributions in the modern world: mind-body, theory-fact, subject-object in scientific work (the scientists' conceptual apparatus is separate from the subject matter of their study), and so on.

Kuhn and Scientific Revolutions

Thomas Kuhn --- a physicist who became a historian of science and was influenced by Quine at Harvard --- went much further and doubted that the natural sciences themselves ever worked according to traditional views, which the logical positivists in the Vienna Circle and their successors in the US and English-speaking world (usually called logical empiricists) had updated in a rigorous fashion in the first five decades of the 20th century. His book, The Structure of Scientific Revolutions --- published in the early 1960s --- broke entirely new ground, not least because it didn't lay down premises and first principles and then deduce a line of argument about the epistemological status of scientific work.

 

That's what logical positivists and logical empiricists had been doing for decades. Nor was that all. In the hands of logical empiricists like Ernest Nagel, Carl Hempel, Richard Braithwaite, and Rudolf Carnap --- all very influential in Anglo-American philosophy of science and epistemology after 1945, and especially among social scientists aspiring to turn their discipline into a science modeled after physics --- their philosophical work didn't simply lay down the axioms and deduced rules of inference and hypothesis-making and testing, as well as the explanatory and predictive functions of theories, that allegedly defined the cognitive status of scientific work and how it proceeds; it also became fully normative --- it spelled out the ways in which aspiring scientific disciplines ought to operate if they too were to merit the scientific tag. Kuhn had no interest in such a priori reasoning. It struck him as fruitless, armchair hobbyhorse-stuff. Instead, he looked carefully at the history of physics and the ways scientists actually go about their business --- particularly when major changes are afoot in the discipline --- and what he found was startling. Yes, startling. Directly at odds with the received positivist view. As Kuhn saw it, science didn't progress by means of incremental testing of theoretical propositions, from which hypotheses are deduced and shown to be true or not in the world and then adjusted accordingly . . . with a clear distinction between theoretical terms and propositions on the one hand and empirical matters or facts applied to the hypotheses on the other. Instead, he placed at the center of his philosophy of science the notion of paradigms in the manner we've just mentioned: a commitment to a world-view --- a set of shared beliefs and assumptions that scientists in any discipline hold --- which identifies the key problems of the discipline, the dominant theoretical terms and alleged laws that govern them, the methods of testing hypotheses about the laws that explain the problems and related subject matter, and other beliefs (usually unstated) that determine how the discipline should behave.

 

For a good clarification of Kuhn's key ideas, see the Emory University site. Once a paradigm takes root, mainly by solving the major problems of the era that it singles out --- or claiming to if enough work is done by the members of the discipline --- the discipline is then able to practice what Kuhn called normal science: it approximates the textbook version of science. Specifically, in these periods of calm normal science --- which are really only interludes, Kuhn claims, between turbulent scientific revolutions of a violent intellectual sort that overturn the previous paradigm in the discipline --- the vast majority of the scientists in that discipline are essentially methodical puzzle-solvers: thanks to their shared paradigm, they already know the "theoretical truth," and their main task is to find ways to bring "facts" closer into conformity with the paradigm's key theoretical propositions. Understood in this way, the role of scientists throws doubt on the traditional view that they are essentially objective and detached seekers of the truth. Truth?

 

 

Like the pragmatists before him, Kuhn doubted that there are any ultimate truths to be found. Rather, scientific progress consists of successive revolutionary overturns of one paradigm after another by new ones. These revolutions erupt, he argued, when the puzzle-solvers run into too many cumulative problems and other puzzling matters that don't fit the dominant theory or theories that the prevailing paradigm entails. The part of the world or universe that the paradigm and its dominant theories are supposed to explain and predict just makes less and less sense. What then? At some point, the paradigm begins to break down and is eventually replaced in a violently turbulent way, intellectually speaking, by an entirely new paradigm that --- even when the conceptual terms look the same (atoms in the 19th century Newtonian world, atoms in the 20th century world of relativity and quantum theory) --- entails different shared beliefs and assumptions and hence a different meaning for the look-alike concepts.

Kuhn's Radical Impact

Remember, on this Kuhnian view, there isn't a cumulative progression in a predictable, rational way from the older paradigm to the new one. The change, to repeat, is revolutionary. There isn't patient accumulation of experimental results and adjustment of reliable theories. There's upheaval; the new theories spawned by the new triumphant paradigm are incommensurable and at odds with earlier theories. Hence the title of Kuhn's best-known book, perhaps the most influential philosophical work of the last century: The Structure of Scientific Revolutions.

 

And remember something else.

 

Despite the popularity of the term paradigm in the social sciences --- the neo-classical paradigm, the Keynesian paradigm, the rational-choice paradigm, the social biology paradigm, the realist paradigm in international relations, the cognitive paradigm and so on --- Kuhn himself made it clear in later editions of his famous book that he doubted whether the social sciences really merited the designation of being scientific. Mainly, in these disciplines, there rage disputes over the definitions of key terms, methodological strivings, underlying beliefs about the nature of humans and society and biological and cultural differences or interactions and so on; but there's little of the cumulative knowledge that "normal science" produces in the natural sciences during its calm interludes between scientific revolutions. To put it differently, no one dominant paradigm --- world-view --- commands full allegiance. Normal science doesn't prevail in the social sciences the way it does, often for decades or centuries, in the natural sciences. From all these angles, Kuhn effected an intellectual revolution of his own. Before his book, epistemology and the philosophy of science were concerned chiefly with spelling out the rules and logic --- methods if you want --- that make science a particularly privileged approach to the world and knowledge about it. Essentially, these views of science were in large part normative. After Kuhn, that all changed. Philosophers and sociologists of science and social science theorists --- with very little contribution, by the way, from scientists themselves, busy with other things in the natural sciences --- now found themselves in an entirely new enterprise: accounting for the "how, why, and when the activities of scientists become accepted" as producing knowledge. (The terms in quotes are taken from an article on Kuhn.)

 

 

 PREVIEW OF PART TWO

 

In this stage of the main argument, a lot of comments are unfolded about real-world trends in economic life, nationally and globally, and how economists react to unanticipated and highly disruptive changes that throw monkey-wrenches into the ways national economies or the global economy are supposed to perform. Their reaction isn't of the sort that marks the natural sciences, at any rate in principle . . . in textbook terms. There isn't a steady systematic re-examination of basic premises and assumptions, or new hypotheses deduced directly from the accepted body of theoretical work, followed by testing and verification and hence adjustments in the core theoretical "laws" that produce a more updated theory with more explanatory and predictive power. Instead, a new paradigm with new assumptions and premises and theoretical generalizations eventually bursts onto the scene in less than fully rational ways, with the old paradigm and the theories it spawns in competition with the new paradigm and its emerging theories. What drives these "scientific revolutions" --- the term is Thomas Kuhn's, the most influential philosopher of the last half century --- is instead a groping after new and more illuminating premises and assumptions to handle the unresolved major puzzles and challenges.

In economics, the driving forces here are overwhelmingly pragmatic and reactive: economies, even market ones, stop behaving according to existing theories, get thrown off kilter badly by dislocating and disruptive events --- wars, droughts, embargoes (think of OPEC in the 1970s), recurring financial scandals, stock market balloons and busts, new market power of economic actors (giant corporations maybe, trade unions, huge investment fund managers) --- and perform badly in prices and employment and growth. At some point, desperate policymakers --- politicians, central bankers --- grope around for some existing theoretical handle to get a grip on things and put the economy back to something like optimal performance, and if need be, what were at best marginalized theories --- ridiculed or laughed at by most economists just a year or so before --- suddenly burst into prominence as new guides. Hence a "scientific revolution" in economics as a discipline has occurred. Worse, economics never even achieves --- certainly in macroeconomics, to an extent even in microeconomics --- the status of what Thomas Kuhn called "normal science": one dominant comprehensive paradigm prevailing at any one time, until major puzzles and anomalies arise, over decades or longer, and a "scientific revolution" occurs as the old dominant paradigm is overthrown and a new one emerges, almost always with struggle and resistance.

 

As an example of normal science, think of Newtonian physics right through the 18th and most of the 19th century; then suddenly --- as big theoretical puzzles and challenges emerged in physics that Newtonian theories couldn't handle effectively --- the emergence of a far different and dominant paradigm, still uncompleted, that Einstein created with his theoretical work in special and general relativity and Bohr and others in quantum mechanics. That occurred early in the 20th century, and despite the incomplete search for ways to synthesize general relativity with quantum physics, most physicists would claim they're doing "normal science" of the standard textbook sort. Existing theoretical problems and challenges can be dealt with by existing theoretical laws in a systematic manner: hypotheses formulated and tested, predictions made and verified, the findings of both requiring at most adjustments in the core body of theoretical laws of physics in incremental ways at the margins. Knowledge is cumulative; changes are minor. Not so economics . . . a point we'll return to momentarily.

Here our argument runs deeper, moving away from a leisurely, loose-gaited analysis of the real-world circumstances that cause economics to be riven by schisms and competing paradigms --- and the reasons "paradigm revolutions" occur --- to a more rigorous philosophical analysis. (Hence, to repeat, one reason for teasing out some philosophical implications earlier in the summary-foreshadowing of part one.) It's inspired by recent work in epistemology and the status of scientific theory that analytical philosophers have developed since Quine's pathbreaking work at Harvard in the 1950s, elaborated in very different ways and with big innovations of their own by Thomas Kuhn and by Donald Davidson and colleagues like Hilary Putnam, often in dialogue with analytical critics like Richard Rorty.

No: no postmodernist hokum.

The revolutionary developments that have overthrown traditional common-sense views of science --- and, more powerfully, the logical positivism of the early 20th century --- have come out of the analytical school of philosophy: men and women of high caliber who are careful with their arguments, set them out clearly and rigorously, and are required to engage in constant dialogue with their peers, friendly or critical.

The upshot of all this? As a discipline with some strengths --- but also far greater shortcomings than most economists are wont to admit, maybe even think --- economics will never be able to master these two sets of problems and drawbacks; never effectively emerge, it seems, as a rigorously organized science with clear and verified laws of economic behavior that aren't just time-bound and space-bound . . . liable to be overthrown in decades or even at times just a few years by relentless technological, economic, and political changes as they interact in ever more complex ways, at home and globally.

PART ONE OF THE FORMAL ARGUMENT: KEY TERMS CLARIFIED

Yes, fairly lengthy this section; no two ways about it --- but necessary, as you'll soon see. Why? Because . . . well, quite simply the argument won't make sense --- particularly some technical aspects --- if you don't have a good foot-hold for your mind at the start. (Are we still at the start? Is this a buggy illusion?) Anyway, please be patient: the buggy prof isn't hopelessly buzzed out in explaining complicated matters in clear terms; not yet anyway. Not entirely.

Macroeconomics is the side of the discipline that deals with the broad workings of the national and global economies and their interactions, marked by efforts to explain such crucial aggregate matters as unemployment, prices (inflation or deflation), recessions, long-term GDP growth, and trade flows of goods, services, and investment capital. Invented essentially by Keynes in the 1930s --- with help from pioneers in national income statistics (above all, Simon Kuznets at Harvard) and early refinements by John Hicks of Oxford and Alvin Hansen of Harvard --- macroeconomics has been beset ever since by schisms and a variety of competing paradigms and the theories they spawn. There's hard-core Keynesianism of a radical sort, which entails major reforms in capitalist economies: it has thrived since the 1970s in what's called, oddly, post-Keynesianism.

Keynesianism Updated

There's mainstream Keynesianism, created in the late 1940s and 1950s by one of the half-dozen great economists of the last century, Paul Samuelson at MIT --- also a major pioneer in modern econometrics, the statistical modeling of economic life. In what came to be called the neo-classical synthesis, Samuelson tried to reconcile, on one side, the Keynesian view of active interventionist governmental macro policies --- fiscal and monetary alike --- to keep the macro-economy near full employment and optimal growth, and, on the other, the impressive work done at the end of the 19th and into the 20th century by "neo-classical economists" like Alfred Marshall at Cambridge, Irving Fisher at Yale, John Hicks at Oxford, A.C. Pigou at Cambridge, Edward Chamberlin at Harvard, and Vilfredo Pareto in Italy, who showed how markets actually work to create a theoretically efficient output of goods and services even for national economies of hundreds of millions of people. The work of Samuelson and others in his wake --- which eventually helped spawn the major policy guide for hard-pressed politicians and monetary authorities in the 1960s, the Phillips curve: it postulated stable tradeoffs between a little more inflation and a little less unemployment --- was then challenged in the late 1960s and early 1970s by Milton Friedman and Edmund Phelps (Columbia) and overthrown . . . not so much owing to their theoretical apparatus and proof that there weren't stable tradeoffs between inflation and unemployment as because, after the sharp oil price hikes of the 1970s, the Phillips curve no longer seemed to work.

Instead, casting about for new guidance, policymakers here and in Europe and Japan found the earlier work of Friedman and Phelps convincingly closer to the reality of economic life after 1974: both rising unemployment and rising inflation, compounded by much slower economic growth in the industrial world ever since. Thus was born the NAIRU --- the non-accelerating inflation rate of unemployment, usually shortened to the natural rate of unemployment: the level of unemployment, say 6.0% in the US and 9% in the EU by 1980, which couldn't be lowered by traditional policies of fiscal stimulation or monetary expansion (lower interest rates) without setting off changes in the expectations of firms and workers and risking accelerating spirals of price rises. Keynesianism, in its various forms, seemed beaten, a special form of Depression-era economics. In its wake, a paradigm of "new classical" macroeconomic theories was born, each of them competing in turn with one another to a degree . . . just as various Keynesian theories had: the radical original sort of Keynes, carried on at Cambridge from the 1940s right down to the present; Samuelson's more moderate synthesis of Keynes and neo-classical economics; post-Keynesianism, a revival of the radical sort on the margins of the US economics profession; and "new Keynesianism," much more influential, which sought to come to terms with the new classical theories and has tried to ground the Keynesian view --- that some combination of big market failures inherent in capitalist markets works against full employment and optimal output at stable prices --- in the expectations and behavior of individuals and interest groups like trade unions, traditionally a matter of microeconomics.
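To pin down what the Friedman-Phelps critique and the NAIRU amount to formally, here is the standard expectations-augmented Phillips curve in schematic textbook form --- a gloss for readers, not the exact specification either economist used:

```latex
% Expectations-augmented Phillips curve (schematic textbook form)
\pi_t \;=\; \pi_t^{e} \;-\; \beta\,(u_t - u^{*}) \;+\; \varepsilon_t,
\qquad \beta > 0
```

Inflation in period t equals expected inflation minus a term proportional to the gap between actual unemployment and the natural rate u* (the NAIRU), plus a shock. Push unemployment below u* and inflation runs above what people expect; expectations then ratchet upward, and the once-stable tradeoff the old Phillips curve promised disappears --- which is just the point the 1970s seemed to confirm.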

New Classicism

The new varieties of classical economics are numerous:

 *monetarism (a creation largely of Milton Friedman, another of the most influential economists of the last century);

 *rational expectations (created largely by Robert Lucas, his pupil, at Chicago, along with Robert Barro and Thomas Sargent), which insisted that all macroeconomics had to be re-thought at the level of individual firms, individual workers, and groups;

 *supply-side theory, which has its serious economists like Martin Feldstein at Harvard --- Reagan's former head of the Council of Economic Advisers and very influential in the current Bush administration --- and more popularizing and probably wrongheaded types like Arthur Laffer, who see in big tax cuts immediate windfall gains for economic growth;

 *and, more recently, real business-cycle theory.

In all these varieties, new classical theorists returned to the early 20th century views that markets were generally efficient and largely self-adjusting . . . or would be if politicians and bureaucrats let them operate properly. As it happens, another very powerful school of classical revival --- public choice theory --- has been armed with new theoretical insights that have had a powerful influence on the US economy and policymaking, acting essentially in reaction to the ever larger number of regulations and fiscal fine-tuning and taxes and redistributive policies carried out by governments here and abroad. The theory's original creators, interestingly, included several liberals, above all A.C. Pigou (a contemporary of Keynes at Cambridge) in the 1930s, and later Paul Samuelson and Kenneth Arrow (Stanford and Harvard) in the 1950s . . . followed by the systematic work of more conservative economists afterwards: James Buchanan, Gordon Tullock, Mancur Olson, Thomas Schelling, and others.

Eventually, Buchanan won a Nobel Prize for this work --- in the 1980s --- which challenges the assumption in interventionist paradigms that markets are prone to systematic "market failures" that only public policy can remedy. The crux argument? Markets aren't as prone to failure as thought --- toward monopoly (trade competition is crucial here), negative externalities like pollution, positive externalities that markets operating on self-interest can't themselves create optimally (e.g., education, inoculation against diseases, beautification of rundown urban areas), and public goods like national defense and pure scientific R&D and good law and order or property protection that are subject to collective action problems like "free-riding": free-riders are non-contributors to a public good who can't be prevented from reaping the benefits of the good or service produced.

That being the case, why would anyone absorb the costs of supplying the good?

This, as many of you know, is the realm of microeconomics, disaggregated allocative concerns: in particular, given endless tastes and acquisitiveness and assumptions about individual motivations, how markets will or will not tend toward efficient allocative output --- Pareto optimality in technical terms --- thanks to the laws of supply and demand. What neo-classical economists did in the late 19th and early 20th century was develop a paradigm and theoretical offshoots for explaining these laws and their outcomes, largely set out by Adam Smith and David Ricardo decades earlier. Further and highly impressive work continued in neo-classical economics right through the 20th century, with more and more statistical work and game theory added, and --- despite what economists claim is a more general consensus-based dominant paradigm here, which permits "normal science" --- the resulting work is also strife-ridden, dividing economists roughly into two camps: those who see markets as essentially efficient and optimal but ruined in their workings by bad government policies, and those who see markets as prone to systematic problems. The latter camp, usually listened to by liberal and social-democratic governments, stresses "market failures."

The former camp was influential in the Reagan and Thatcher era and has remained influential right through the 1990s, but especially in the Bush-Jr. era.

Note Though

True, the key theoretical concepts in microeconomics are used by both camps, such as marginalism, opportunity costs (tradeoffs in scarce resources like capital and labor and time), proper economic incentives (profit-maximization, income maximization, and tradeoffs with leisure) to motivate economic actors, the nature of competitive or non-competitive markets (perfect competition, monopolistic competition, monopoly), externalities, cost-benefit analysis, and Pareto optimality and Pareto improvements to measure efficiency in allocating capital and labor and time and natural resources. Essentially, however, the "intervention camp" in economics stresses that markets --- left to themselves --- can't reach full Pareto optimality: they are prone to systematic failures. Examples of market failure include:

 *degrees of monopoly in markets (the wheat market is one of the few in the real world that approximates the textbook example of large numbers of small producers and free entry and exit --- bankruptcy --- which make it impossible for any group of producers to influence prices and output noticeably);

 *externalities --- costs or benefits incurred by third parties who didn't directly engage in the market exchange. Costs refer to negative externalities; pollution is the standard example, the consequences of which are suffered by localities often hundreds or thousands of miles away from the source, such as a chemical factory. Benefits are created by positive externalities, a good thing but something self-interested market behavior, it's claimed, can't maximize.

An example would be education and job training by firms as opposed to a fully well-trained and hence more flexible and efficient labor force nation-wide; inoculation for public health, which firms or agencies will engage in for their own personnel, but not for the entire community or nation; or beautification by some initiative-taking homeowners in a dilapidated neighborhood, which raises the prices of homes for everyone, even those who keep their own houses derelict-looking.

 *public goods --- goods that benefit large bodies of people, including non-contributors, such as national defense, or pure scientific research that no firm will undertake but that in later decades may pay off for everyone in applied work, or a well-functioning legal system and law and order and property protection. Here non-contributors can reap all the benefits if others take the initiative to pay for creating and maintaining them.

The trouble is, such "free-riding" --- a collective action problem --- can't be easily overcome: since nobody can be excluded easily from enjoying the good (itself jointly consumed and non-divisible once it's been provided: e.g., you can't defend Las Vegas casinos from terrorist attacks without defending California or New York), why would anyone ever pay on his own to create it? Hence governments have to use tax powers and create courts and police and armies and fund pure research.

 *information problems --- from problems like those in medicine that prevent non-specialists from evaluating the strengths of new medicines (hence the F.D.A. and testing) to those that made it impossible in the pre-1960 era for car buyers to know what the actual costs of manufacturing cars were. It took the government to require that Detroit and others put stickers on cars with suggested retail prices, which in turn made it possible for Consumer Reports and others to determine the true costs and pass on the information to consumers, no longer the prey of unscrupulous car dealers who used to sell the same Cadillac in the 1950s, say, for $3000 in 1950 dollars to one customer and $4200 to the next. More recent work like that of Joseph Stiglitz and other Nobel Prize winners has uncovered even more complex information problems, along with coordination ones that keep markets, it's said, from operating optimally.
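To see the free-riding arithmetic concretely, here is a minimal sketch of a stylized linear public-goods game --- the endowment, multiplier, and group size are all invented for illustration, not taken from any study cited here:

```python
# Stylized linear public-goods game; all numbers are illustrative only.
# Each dollar contributed to the common pool is multiplied and shared
# equally, so each contributor gets back only a fraction of what they put in.

def payoff(my_contribution, others_total, endowment=10.0,
           multiplier=3.0, n_players=10):
    """One player's payoff: what they keep privately plus an equal
    share of the multiplied pool of everyone's contributions."""
    pool = (my_contribution + others_total) * multiplier
    return (endowment - my_contribution) + pool / n_players

others_contribute_fully = 9 * 10.0
print(payoff(10.0, others_contribute_fully))  # contribute too: 30.0
print(payoff(0.0, others_contribute_fully))   # free-ride instead: 37.0

# Free-riding pays the individual; but if all ten players free-ride,
# each ends up with just the endowment of 10 instead of 30.
```

Each player does better by withholding, whatever the others do, yet universal withholding leaves everyone worse off --- the collective-action logic behind tax-financed defense, courts, and pure research described above.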

A Backlash

The effect of such market failures? By the 1950s and 1960s, more and more of our economy in the US --- not to mention the EU or Japan --- had come under the sway of increasing governmental regulations and other interventions, such as nationalization and industrial policies. Similarly --- more so in the EU than here, though here as well --- new legislative efforts were undertaken to redistribute income to low-wage earners or, at least, from the working population to the retired, the temporarily unemployed, the permanently unemployed (welfare), or the sick.

We were a long way from textbook market economies, the interventions justified on dual grounds: the need for the government [1] to overcome these market failures --- which can be technically defined as anything that keeps self-interested market actors (firms, workers, consumers, savers-investors) from reaching Pareto optimality and maximizing the efficient allocation of capital, labor, time, and natural resources --- and [2] to use public policies to that end. In this way, sequential Pareto improvements would be made that could move the economy from sub-optimal performance to the Pareto frontier. As guides here, economists in this liberal camp use concepts like cost-benefit analysis, Pareto improvements (anything that technically can leave at least one person in the economy better off without harming the well-being of any other), and first- and second-best criteria: e.g., choose the interventionist technique that seems to interfere least with markets --- hence use fiscal policy to stimulate full employment rather than trade tariffs or quotas. All this was justified both technically and in the public interest. And the more left-leaning governments were, the more they also concerned themselves not just with regulations to enhance efficiency, but with redistributing income for reasons of justice or equity, themselves of course contestable terms.
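Since the Pareto vocabulary does so much work in this debate, here is a minimal sketch of the criterion itself --- the well-being numbers are made up purely for illustration:

```python
# Illustrative Pareto-improvement test: a policy change qualifies only if
# it leaves no one worse off and at least one person better off.

def is_pareto_improvement(before, after):
    """before/after: well-being levels for the same people, same order."""
    nobody_worse = all(new >= old for old, new in zip(before, after))
    somebody_better = any(new > old for old, new in zip(before, after))
    return nobody_worse and somebody_better

print(is_pareto_improvement([5, 5, 5], [6, 5, 5]))   # True
print(is_pareto_improvement([5, 5, 5], [9, 9, 4]))   # False: one person loses
```

The second case is the politically interesting one: most real interventions create losers as well as winners, which is why the strict Pareto test gets supplemented in practice by cost-benefit analysis and the first- and second-best reasoning mentioned above.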

In many respects, the conservative public-choice criticisms of all this intervention were sound. They can, however, be overdone.

Consider. In the 1930s, conservatives hated FDR --- a traitor to his class. In the 1940s and 1950s they excoriated FDR's welfare policies and hated Keynesian economics: the National Review, created and edited by William Buckley, made a fetish of this. By the 1980s, that had changed --- thanks not least to the influence of neoconservatives . . . former liberals, disillusioned with the excesses of the 1960s and frightened by the antinomian hate-America radicals. As a result, in the 1980 presidential election, Ronald Reagan made it clear he was himself now in the FDR camp: we needed social security, unemployment insurance, disability welfare, and even temporary forms of welfare for distressed families. What he didn't want --- and later Democrats under Clinton agreed to a large extent --- were the welfare policies of the Great Society Lyndon Johnson era: these, everyone except the radical left agreed by the late 1990s, had led to ever greater amounts of illegitimacy, permanent welfare dependence, single-parent mother-headed families, and more and more urban crime of a violent sort among youth, especially in the inner cities. That was a far cry from the promises in the LBJ Great Society packages. Governmental intervention would uplift the hitherto downtrodden racial minorities --- whose well-being had been improving anyway, not least thanks to the migration of millions of African Americans in the 1930s, 1940s, and 1950s to the desegregated North and better educational and job opportunities --- and turn them all into good solid middle-class Americans, just like you and me and your neighbors. Not so. Not less crime, but more. Not less illegitimacy, but more. Not less welfare dependency, but more. Reagan rode to office on this wave of disillusion, which had been compounded by the odd irresolution at home and setbacks abroad of Jimmy Carter's Presidency, including his failure to tame surging inflation. Interestingly, despite pc-dominance in sociology and some new disciplines beyond the pale, even some liberal and left-wing scholars of note agreed, however reluctantly, that LBJ's Great Society had misfired and backfired. Repeatedly.

One left-wing sociologist at the University of Pennsylvania in the mid-90s kept running and rerunning his data sets, but the outcome was always the same: every 10% increase in welfare led nationally to an 11% increase in illegitimacy. Ponder the evidence. In the early 1950s the share of African-American families with two parents was virtually the same as the white share: roughly 88% of each group (the white share slightly higher). By the mid-1990s, something close to 70% of African-American children were born to an unwed mother. To repeat, the Penn scholar's regression equations that sought to sort out the independent and intervening variables causing this distressful outcome always arrived at the same conclusion: overwhelmingly, welfare subsidies for illegitimate births and child-rearing. Illegitimacy, of course, is a no-no term in pc-circles, enough to get you tongue-whipped and fists shaken at UC Berkeley or its imitators. Euphemisms are always supposed to be employed, you understand. And if there are problems with single-parent mother-headed families, it's because poor overburdened mothers aren't getting enough income at taxpayer expense. Or so we're told. We're told wrongly.
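For readers who want to picture the kind of statistical exercise being described --- regressing an outcome on a policy variable while controlling for other influences --- here is a rough outline; the data file, variable names, and specification are hypothetical placeholders, not the Penn scholar's actual model or data:

```python
# Hypothetical sketch of a regression with controls, in the spirit of the
# studies described above; the data file and column names are placeholders.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("state_year_panel.csv")  # hypothetical dataset

# The coefficient on the welfare-benefit variable is then read as its
# association with the outcome "holding the control variables constant."
model = smf.ols(
    "out_of_wedlock_birth_rate ~ welfare_benefit_level"
    " + median_household_income + unemployment_rate + C(region)",
    data=df,
).fit()
print(model.summary())
```

Whether such a specification really isolates the causal effect is, of course, exactly what the disputes between left and right social scientists turn on.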

Other sociologists --- again, even some on the left who do statistical policy analysis --- showed overwhelmingly in the 1990s that single-parent families produced children with far greater problems in life: in school, in relations with peers, in adulthood and sexual matters and family life, and even in matters of crime. They were also more prone to acting out their problems. And --- the key point --- it wasn't just a question of income either. Controlling for income, the children of, say, mother-headed families at the $200,000 income level with no father around were more inclined to these problems than the children of two-parent families at roughly the same affluent level. Seen in this light, small wonder that public choice theory --- developed initially by A.C. Pigou at Cambridge, then later by Samuelson (a liberal) and Arrow (ditto), but refined more and more by Buchanan and Tullock and Olson and others on the conservative right --- reacted against these interventionist policies. Or that the public did too. The result: in 1996, the welfare system created by the LBJ and Nixon administrations was drastically reformed . . . so far with generally good results --- thanks especially to a booming economy and low unemployment until mid-2001. A further good reason for returning to good job-growth soon.

See the buggy prof analysis and the very good article by Robin Toner of the Washington Post for December 15, 2001: Gordon-Newspost.

Public Choice Clarified

How did public choice theorists reply to the obvious existence of market failures of the sort uncovered in the middle of the 20th century and since? In essentially a double-barreled way. [1] First off, public choice theorists denied that market failures were so pervasive or serious: most could be handled, it was argued, by proper face-to-face bargaining among affected economic actors --- a theoretical insight developed at the University of Chicago by Ronald Coase, which won him a Nobel prize later. Thus if downstream households on the banks of a river or nearby are being hurt by a polluting chemical firm 50 miles upstream, the problem is that the river isn't covered by private property law; the polluter treats the river as a free good and passes the costs on. But those harmed by the negative externality (they don't like the costs) need only organize and bargain freely with the polluting firm, bribing its management to use less polluting equipment. If they won't organize --- at least some major cost-bearers among them --- and offer the polluting firm enough money to desist or reorganize its production with new, less polluting equipment, then by definition those harmed have other priorities and the harm isn't as great as it's said to be. Put your money where your mouth is: not a bad summary of some of Coase's work.
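A bare-bones numerical sketch of that Coasean logic --- every figure invented for illustration --- might look like this (the transaction-costs wrinkle is taken up in the caveat just below):

```python
# Stylized Coasean bargaining over upstream pollution; numbers are invented.
# The harmed households find it worth paying the firm to abate only if the
# damage they avoid exceeds the abatement cost plus their own costs of
# organizing, informing themselves, and monitoring the deal.

def bargain_worthwhile(total_damage, abatement_cost, transaction_costs=0.0):
    return total_damage > abatement_cost + transaction_costs

print(bargain_worthwhile(500_000, 200_000))           # True: bribe the firm
print(bargain_worthwhile(150_000, 200_000))           # False: harm too small
print(bargain_worthwhile(500_000, 200_000, 400_000))  # False once organizing is costly
```

With costless bargaining the abatement happens whenever the harm outweighs the cost; load on heavy organizing and monitoring costs and the deal can fail even when abatement is worth it.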

A Caveat 

Note that Coase himself has distanced himself from extreme uses of his insight. He warned initially in his work that there might be major "transactions costs" and "information problems" in the way of the householders downstream, first in recognizing the causes of the pollution (information problems), and even more in organizing effectively and paying the costs to bargain with the polluting firm and monitor it --- matters of transactions costs, which can be burdensome . . . maybe insuperable. And later work by Joseph Stiglitz, Michael Spence, and George Akerlof --- all three winning Nobel prizes in 2001 --- went further and found far greater problems caused by information disparities and inadequacies in markets . . . including what are called principal-agent problems and adverse selection. See Gordon-Newspost. Here, to put this bluntly, is a big reinforcing influence on the gulf in paradigms that prevails in micro-economics, one paradigm on the left side of the politically related divide and the other on the conservative-right side.

[2] Second, and more important yet, the new public-choice theorists like Buchanan and Tullock showed that even if there are market failures that keep an economy at a less than fully efficient outcome --- sub-Pareto optimality --- governments can't easily be relied on to remedy them. If anything, governments are prone to screw things up.

How so?

Because of pervasive "governmental failures" (remember, the Nobel committee in Sweden, dominated by liberal economists, awarded Buchanan a Nobel prize for illuminating the problems of welfare states overshooting taxes, welfare benefits, and regulations). To be brief, though government failures are numerous --- and will be set out and clarified in a moment --- the chief causes of these failures boil down in the end to essentially two:

In the first place, the assumption is made that people are self-interested whatever the realm: the economy or the polity. Simply because a lawyer becomes a governor or President doesn't mean he or she has been transformed in motive-force: instead of income and wealth being the measure of success, power, prestige, and re-election are. Similarly, another lawyer who becomes a Cabinet Secretary or an agency head hasn't stopped being self-interested either: the coinage now, though, is calculated in budgets, personnel increases, and mission-tasks. Again, individuals as consumers are no different from individuals as voters: they will vote for the candidate or party that promises to maximize their well-being. Even more blatantly, workers and managers in uncompetitive industries --- instead of going out of business --- will organize into powerful interest groups with lots of money and lobby Congress and Presidents to give them subsidies or protection against "unfair" foreign trade. In short, self-interest dominates in governmental policies and administration. Second, though, the outcome in the political realm is far different from that in the market economy.

In the market economy, provided competition is maintained, the outcome is optimal for economic efficiency and long-term growth --- an insight Adam Smith had two centuries or more ago, when he talked about the "invisible hand" working through supply and demand to bring about maximal economic well-being. The invisible hand here is, of course, the price mechanism, provided prices accurately reflect supply and demand and the scarce amounts of capital, labor, and natural resources. And the workings of the price mechanism throughout the market economy's supply and demand curves and responses are traced, to repeat, with the neo-classical concepts of marginalism, opportunity costs (tradeoffs), economic incentives (high taxes, to put it bluntly, or excessive inflation or regulation will backfire), cost-benefit analysis, Pareto optimality, and the like. That's the market economy.
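As a toy illustration of the price mechanism doing that coordinating work --- linear demand and supply curves with invented coefficients, nothing estimated from real data:

```python
# Toy linear market: demand Qd = a - b*P, supply Qs = c + d*P.
# The "invisible hand" story is just that price moves until Qd = Qs.

def clearing_price(a, b, c, d):
    return (a - c) / (b + d)

a, b, c, d = 100.0, 2.0, 10.0, 1.0        # invented coefficients
p = clearing_price(a, b, c, d)            # 30.0
q = a - b * p                             # 40.0 units traded
print(f"price = {p}, quantity = {q}")

# A supply disruption (an embargo, say) shifts the supply intercept down;
# the clearing price rises to ration the now-scarcer good.
print(clearing_price(a, b, c - 15.0, d))  # 35.0
```

No planner sets the new price; it emerges from the shifted curves --- which is all the "invisible hand" metaphor claims, and only so long as competition keeps prices honest.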

But wait! In the public realm, as public choice theorists note, we are dealing largely with monopoly power. A consumer who's unhappy with his or her recent Cadillac can sell it and buy a Lexus. A voter who likes Bush's rejection of the Kyoto treaty in matters of global warming but not his foreign policy in the Middle East can't tell the IRS man that he'll be happy to pay for Bush's environmental policies but would prefer not to pay the taxes that would go for defense. Nor can he decide to use Canada for foreign policy, but the US for defense policy. Public goods like this aren't divisible the way your grocery purchases are into three pounds of Rib-Eye steak, 6 bottles of Italian wine, and two heads of organic lettuce. You're stuck with the Bush policies, period --- at any rate until he loses another election or serves out a second term.

Kinds of Government Failures

The monopoly nature of public policies, combined with the self-interested nature of politicians, bureaucrats, interest groups, and voters, then creates pervasive government failures. In particular, the following:

 *Voter Ignorance and Imprecise Individual Preferences: a consumer who's about to purchase a luxury car --- a Cadillac or a Lexus or a Mercedes --- will be spending his or her own hard-earned money and will take pains to make a careful choice: consult Consumer Reports, talk to friends and existing owners, go online for safety records, and visit different dealers. He or she has a hefty incentive to act "rationally" here. By contrast, voters know that any one vote makes little difference: if you stay at home and don't vote, tens of millions will vote anyway.

Moreover, if you like Bush's environmental policies but Gore's foreign policy commitments, you can't carve your vote into parts the way you can with your grocery purchases. And not only that. You may feel more strongly about foreign policy than the environment --- leaning you toward Gore in the 2000 election --- but dislike his chameleon personality, a man for all seasons; and there's no way to easily aggregate these competing preferences and intensities about them. The result? On all these counts, you have a major disincentive to become as highly informed and make as careful a choice as a voter as you do when you're buying a $60,000 car.

*Special Interest-Group Power: only some groups can easily organize and lobby the government for favors (usually more income than would be the case in unregulated and competitive markets --- such as the steel industry getting tariffs last year before the Congressional elections). Almost always, small producer groups --- big firms, especially regionally concentrated ones, along with trade unions --- can more effectively organize and raise money to become big-clout wielders, threatening to unseat local Congressmen and to swing key states in presidential elections. Try, oppositely, organizing 280 million Americans as consumer-advocates to oppose the steel industry's efforts. Or the gun lobby's. Or, until recently --- thanks to a few insiders --- the tobacco industry's.

*The Shortsightedness Effect (as a textbook by James Gwartney puts it): Faced with voter ignorance and special interest-group power, politicians --- Presidents, Congressmen, Ministers in the EU or Japan who want to be re-elected --- have a hefty incentive to offer all sorts of benefits, including more welfare to large bodies of retirees, or to this locality as opposed to that locality (which will vote for the opposite party anyway), irrespective of how in the long run this will cumulate market inefficiencies (excessive taxes; labor market rigidities, since workers won't move from declining industries to newer, more promising ones in other regions; more and more regulations and tariffs and subsidies) and hurt long-term economic growth. Think of Japan, think of Germany. Once re-elected, of course, a President or Premier in his second term might face slower economic growth or higher inflation, but what the heck, he still has four years in which to make a mark, and if he's lucky, the problems won't be severe until his successors take over anyway.

*Monopoly Power and Lack of Competition to Improve Public Policy Performance: Little need be said here. Public policies can't be easily targeted and broken up on an individual basis. Worse, public policies have to be administered by bureaucratic agencies with permanent tenure and little incentive to become more efficient. The advancement of bureaucrats may not even be connected to performance, despite efforts to link the two in the abstract. Hence the proliferation of red tape, long lines, lack of responsiveness to complaints, and the like, even as tasks proliferate, budgets swell, and promotion goes on. What follows? Given these government failures, new regulations, badly drawn up or excessive old ones, redistributive efforts, and the like might just have effects opposite to those that advocates of interventionist policies had intended when they set out to correct market failures and achieve more equity in the distribution of income. Most government policies, it's claimed, either skew income to powerful organized groups that don't deserve it --- this is called successful rent-seeking --- or slow down economic growth and create less wealth in the future. They seem, especially in the EU, to be responsible for more and more unemployment of a permanent sort. All of which brings us immediately to the need for a . . .

Brief Clarification of Two Key Matters

First, Equity: A Loaded Concept

It needs careful analysis of its various meanings. Think of a hypothetical spectrum with two poles: pure market outcomes, before taxes, at one end; full income equality at the other; and points in between. Not even Communist regimes ever sought to implement the latter, full equality. And even 19th-century laissez-faire England had some form of welfare relief --- of which Milton Friedman's negative income tax is an updated variant that takes market incentives into account. Something else here. The US and Britain, odd as it might strike you, had by far the most equal distributions of income in the industrial world in the late 19th century and the first half of the 20th. As late as 1968, UN comparative income studies showed the UK on top in income equality (after 20 years of high taxation and redistributive policies of a limited sort) and the US second, ahead of the Scandinavians and Germans, with the Latin countries in the EU way behind. Way, way behind. Today, the US and the UK rank at the bottom in income equality after taxes and transfer payments. Is this bad? Interestingly, African-Americans --- who, in two-parent families, have essentially reached income parity with European-Americans, the remaining difference negligible once age is considered (African-Americans are much younger on average) --- have a higher real per capita income than Swedes. And the late 1990s --- specifically the last five years of the boom that ended in late 2000 --- saw low-wage US incomes rise faster than those of high-income workers, mainly because of labor shortages and the need of businesses, desperate for workers, to raise wages. For a good analysis of trends in US income, and whether inequality is bad or not and what can be done, see the New York Times article and the comments by the buggy professor in the Gordon-Newpost, December 15, 2001.

More to the point, high taxes and high welfare burdens in the EU --- combined with regulations that make it hard to lay off workers in recessions, plus trade-union logrolling with big firms and big governments that creates high minimum wages --- have produced rigid labor markets and reinforced disincentives to create new firms or hire new workers, resulting in high levels of permanent structural unemployment. It hits the young especially hard. And immigrant communities, adding most likely --- there are, alas, few good studies in the EU on this --- to the surging sense of alienation and fundamentalist Islamist sympathies in the rapidly growing and increasingly young Muslim population there.

Second: Government Policies Can at Times Work Well

Yes, even in matters of income, without hurting efficiency --- even improving it. A textbook example is the earned income tax credit, which Congress expanded in the Clinton era: low-income workers not only can have their taxes reduced; most --- with children --- are able to get rebates that increase their income. Related to this has been the welfare reform legislation, bipartisan in nature, which Congress passed in 1996: it limited welfare for families with dependent children, in both scope and duration, and the result was to reduce the welfare rolls by around 50% over the next few years, with almost all the new working parents and others finding higher income and jobs with prospects. The results of both these changes? More work, less welfare dependency, more efficiency in the economy, and --- still too early to tell, but likely to show results in a decade or two, given what we know about the drawbacks of single-parent, mother-headed families --- less youth crime and other pathologies. Again, see the buggy prof's comments and the very good article by Robin Toner in the Washington Post last year: gordon-newspost. There have also been experiments --- rare in economics, a deductive discipline that derives hypotheses which are then "tested" statistically, with experimental work or even survey data generally scorned until recently --- to see whether minimum wage increases have the bad textbook-predicted outcomes: less youth employment as small businesses, reacting to new wage costs, either cut back on employees or go bankrupt.

True? Yes --- obviously so if the wage is set too high.

Think of the scene throughout the EU, save in Britain: high legislatively ordained minimum wages, responding largely to trade-union demands and left-wing ideologies, have combined with burdensome social security taxes and regulations limiting the ability of firms to lay off workers in recessions --- hence firms have little incentive to hire new and costly workers in boom periods that will last only so long --- to create rigid labor markets on the Continent. These rigid markets, in turn, are further anchored in cultural habits that limit labor mobility: northern Germans have been reluctant to move to the booming areas of the southern regions, seen to be inhabited by semi-hillbillies, usually Catholics to boot (northern Germans are largely Protestant); and, worse, in East Germany permanent welfare and high uniform minimum wages have created conditions similar in many ways to those in the real hillbilly areas of the West Virginia mountains. The result is more and more permanent unemployment, hitting especially hard at young people hoping to get into the job market. The best explanation is a public-choice one: insiders in trade unions, whether in the public or private spheres, use their market power to extract high wages whose impact is largely felt by "outsiders" without such organizational means --- youth, immigrants, older people who'd like to return to the job market. That said, there were some localities in the US that experimented with raising minimum wages, often in consultation with labor-market specialists like Alan Krueger of Princeton. They were careful not to boost the wages too high and to consider local business demand for labor. On the whole, the outcome was largely desirable: more jobs, more income at the low end, more business profits and taxes. A conclusive set of experiments then? Hardly.

The 1990s, especially the latter half, encompassed the longest boom in US economic history, and in the localities that experimented, businesses were desperate to find new employees. In some places, as in tony Marin County north of San Francisco, entry-level service workers could get $13 an hour --- more than double the minimum wage. And we know that nationally, from 1996 until the boom ended in late 2000, low-income wages were rising faster than those of high-income workers . . . a reversal of trends almost 25 years old. See the stimulating book by David Card and Alan Krueger: Myth and Measurement: The New Economics of the Minimum Wage (Princeton, 1997). Similar results have been reported for the UK, which has freed itself of excessive regulation and welfare-state clamps. Again, nothing conclusive, but encouraging --- and with dozens of subsequent follow-up studies by others and by the Krueger team as well. For a recent Krueger view of equality and inequality --- a wider but related topic --- see "Inequality: Too Much of a Good Thing"

Note, too, the naiveté that has marked Republican confidence in financial deregulation. Some of this deregulation, initiated by Democrats, was justified (banking competition, for instance).

Much of it, though, left huge opportunities for short-term exploitation by corporations, brokerage firms, and others who learned to skirt the edge of the law, engage in creative accounting and mangled reports, and sometimes go further --- all leading to the stock-market balloon, then the burst . . . the consequences of which we are still suffering from. What ensues? Chiefly that we can't have an effective financial system --- banks, stock and bond markets, brokerage firms, insurance firms, venture capital, accounting firms --- unless we can count on the information they provide being high-quality and honest, with clear, transparent guidelines that hold management responsible. In theoretical terms, we're back to the problems of "market failure": information problems and what are called principal-agent troubles. The latter refer to a pervasive fact about modern complex economies: giant institutions --- banks, corporations, brokerage houses, and the like --- are run by "agents," managers and CEOs, who are not the principal owners (the stockholders are); yet the interests of the two groups may not only fail to coincide at times --- what with the huge financial temptations for managers to cut corners and cheat --- but even diverge. Only effective regulations --- including vigorous efforts by the SEC and others to ensure that transparency and honest accounting are maintained --- can guarantee that such temptations won't be easily succumbed to.

True, there's no way to make such institutions risk-free in their behavior, and greedy stock-buyers bear some of the responsibility for their own misfortunes during the stock-market ballooning of the late 1990s. But only part. And as we've seen the last two years, though the US economy itself hasn't performed badly in struggling with the post-balloon adjustments, the stock market --- including the high-techs on the Nasdaq --- is far from recovering its earlier vigor (despite some recent improvements). And it's worth remembering that vigorous efforts by Democrats in the late 1990s to reform the accounting industry were stymied by that industry and its huge clout in Congress and elsewhere. Here again, we're dealing with interest-group power, though the accounting industry's success at beating off vigorous regulation seems to reflect not just its ability to influence politicians, but also the ideological clout of conservative and libertarian zealots --- which then backfired on not just the zealots but the accounting industry itself. Our conclusion? It's two-fold. First, the poor performance until recently of the stock market --- now almost three years on a downslide or stagnant --- may reflect several complex causes; still, a few important ones seem clearly to bear on the public's justified suspicion that corporate and financial America is not yet fully trustworthy again. On this, see a former Federal Reserve governor who's also a very good economist, Alan Blinder of Princeton: The Gordon Newpost

A good article by the New York Times's Louis Uchitelle appeared soon afterwards (late July 2002): Gordon-Newpost. And a second conclusion? Only ideologues deny that their positions might be excessive and overlook the complex tradeoffs. It's one thing to criticize overzealous regulations; another to go whole hog and deregulate everything you can, especially financial markets and institutions: banks, stock and bond markets, brokerage firms, accounting firms, insurance firms, venture capital. Markets, to work effectively, must generate good and reliable information. Period. Relying on competition and "social norms" (be honest, guys!) won't hack it. Not now, not ever. The vulnerable point of capitalism, past, present, and future, is almost always its financial excesses: either overboldness and runaway optimism (stock markets) or timidity (bank credit in the EU and Japan as the main source of financing corporate investment and innovation).

PART TWO: WHAT CAUSES SHIFTS IN ECONOMIC PARADIGMS?

Some Examples of Major Dislocating Changes That Unhinge Economies and Economists

Consider the following disruptive events, constantly besetting our economic lives: some arise directly from within national economies, others are produced by globalizing interdependence, and others originate outside economic life per se --- social or political --- but directly or indirectly impart recurring shocks to the economy, national or global:

--- As examples of revolutionary technologies, which have come in clustered waves every 40-50 years or so ever since the industrial revolution started in the late 18th century, think of autos, electrification, and mass standardized production techniques from the 1880s on; then later, from the 1930s on, airplanes and air travel, mass tourism, mass credit facilities for purchasing houses and other durable goods (right down to your Visa card today for non-durables), the movies, radio, and TV, and the petroleum industry and synthetics; then, starting in the 1970s --- our latest wave --- computers and information and communication technologies, ICT, and their impact on education, family life, business behavior, and warfare, all in just their second generation right now . . . along with the still largely untapped promises of biotech and probably the most radical breakthrough technology since fire was discovered some 100,000 years ago, nanotechnology. The latter works with materials at the molecular level, capable of producing and shipping what used to be tons and tons of steel sheets in powder form (to be reconstituted later), or new energy or food systems, or . . . well, nobody really knows. What we do know is that the US is in the lead in all three areas, UC Santa Barbara and UCLA recently forming a $500 million consortium with private firms to encourage breakthroughs here.

--- As examples of marked and unpredictable shifts in economic dynamism across countries and from one region to another, think of the waning economic performance of the EU and especially Germany over the last two decades as new Asian competitors arose in the Far East and later in India and Mexico and Brazil, with the US pioneering the avant-garde industries in ICT and biotech. Or, even more startling, the sudden reversal in Japanese economic fortunes from wannabe global leader to has-been plunging toward the bottom of the industrial league. Or, oppositely, with far fewer regional and global implications, the soaring performance of tiny Ireland, 4 million people, which went from being the poorest of the EU countries in the early 1980s to its richest ($28,000 per capita income in purchasing power parity, compared with the EU average of around $23,000, with Germany's, France's, and Britain's per capita incomes only slightly above that average).

--- As examples of supply disruptions, think of OPEC in the 1970s and two oil embargoes that sent oil prices skyrocketing, resulting in a new wave of worrying inflation and simultaneous dislocations in employment levels and growth rates, with massive debt problems for almost all developing countries. The slowdown in economic growth was marked, a turning point in the industrial world: Japanese annual growth was cut by two-thirds from the late 1970s until the early 1990s, since which time it's been less than 1.0% a year; Germany and the EU's other big performers experienced a reduction of one-half; and the US --- the lead country in per capita income and as such a slower grower since 1945, thanks to what's called convergence catch-up growth on the part of follower countries with decent economic and financial institutions, sensible public policies, and an educated workforce --- found its growth reduced by about a third. Since then, only the US among the countries then industrialized has improved its performance, returning to its vigor of the 1950s and 1960s and then surpassing it in the late 1990s --- plus Ireland . . . the industrial league now counting Taiwan, Hong Kong, Singapore, South Korea, Mexico (the 11th industrial country in the world, despite its huge regional and ethnic disparities), Brazil, India, China, and much of Southeast Asia. With parts of East Europe soon to join them.

THE REASONS WHY THE PROBLEMS WON'T BE OVERCOME: A First-Cut View

The Two Interrelated Problems Restated

The difficulties besetting economics as a scientific discipline are, to repeat and in more rigorous fashion, an intractable duo:

[1] Economics lacks law-like generalizations about economic behavior that can either anticipate and cope effectively with new and unforeseen changes in the national and global economies, especially of a major dislocating sort --- think of the Depression of the 1930s and the trade wars then, or the 1970s OPEC impact on energy prices, inflation, and unemployment, or the financial breakdowns of the 1990s (including hot capital moving rapidly across borders and financial machinations in stock markets and accounting procedures), or radically restructuring technologies that recur every three to five decades, or big shifts in market dynamism around the globe as new competitor industrial countries emerge in the Far East, or the impact of immigration and movements of people across borders --- or, to return to the topic here, that economists can quickly assimilate such changes into their accepted theories and then explain them in cumulative scientific fashion through new hypotheses confirmed by econometric techniques (statistical tools) or empirical work. The latter --- survey techniques and first-hand observations, or economic experiments (e.g., how do managers and entrepreneurs and buyers of stocks really behave, as opposed to the rational-actor assumptions? what will happen in localities if the minimum wage is raised?) --- were, believe it or not, scarcely used in economics until recently. Even now they remain tangential to most economic theorizing, which is largely deductive: assumptions about economic behavior, formalized mathematically, that amount largely to inference and applied mathematics.

[2] Lacking such cumulative scientific knowledge of the assumed textbook sort, economics also lacks sure-fire guidelines for policymaking in the public and private sectors that can be deduced from the body of logically linked law-like generalizations on the theoretical level and lead both the Federal Reserve and politicians on one side, and on the other managers, owners, financial institutions (banks, stock markets, bond markets, insurance agents, brokerage firms, venture capital) to cope effectively with these changes and make rational and efficient decisions to keep the economy humming along at near-peak performance . . . nationally first and foremost, but also the global economy.

At times, amid major flux and dislocations, economics seems even to lack "good" guidelines, and policymakers have to use whatever knowledge they have, their intuitions, their hunches, and their courage (if they have much) to try coping with the new and hard-to-master challenges. Thus in the 1930s, classical economics --- really, to be exact, "neo-classical" economics of the sort developed from the 1880s on until then (marginalism, supply and demand curves, opportunity costs, allocative outcomes) --- was largely helpless in accounting for massive market failures that created and prolonged the Great Depression. The fascist countries overcame it by mass rearmament, for purposes of war. Roosevelt's New Deal, thanks originally to big fiscal policy outlays, helped overcome it here initially, only to see economic growth falter and unemployment rise when the administration grew more tepid about fiscal deficits and industrial reorganization in the late 1930s: in effect, WWII's massive rearmament --- fiscal policy of a radical sort --- alone overcame the Depression's powerful and persistent impact.

For several years, the huge oil-price rises of the 1970s, thanks to OPEC, had a similarly disruptive impact on the world economy and on national economies like ours. Inflation surged; so did unemployment --- something not at all anticipated by Keynesian techniques (the Phillips-curve tradeoffs were supposed to be stable and predictable). Simultaneously, economic growth drastically slowed in the industrial world, and in fact only the US has regained its previous growth rates of the 1950s and 1960s, even though lots of newly industrializing countries in Asia and more recently Mexico and Brazil have either created or regained new dynamism.
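To see why a supposedly stable tradeoff can break down, consider a minimal sketch --- my own illustration, not a model drawn from the sources cited here --- of an expectations-augmented Phillips curve with adaptive expectations and a one-time oil shock. All parameter values, variable names, and numbers are invented for the purpose:

```python
# A minimal, illustrative sketch (not the author's model): an
# expectations-augmented Phillips curve with adaptive expectations and a
# one-time oil-price supply shock. It shows why the "stable" tradeoff of
# early Keynesian models can break down: inflation and unemployment can
# rise together. All parameter values are invented for illustration.

beta = 0.5        # assumed slope of the short-run tradeoff
u_natural = 5.0   # assumed "natural" unemployment rate, in percent

def inflation_path(u_target, shocks, periods=10):
    """Inflation over time if policy holds unemployment at u_target."""
    expected = 2.0                    # starting inflation expectation, percent
    path = []
    for t in range(periods):
        shock = shocks.get(t, 0.0)    # e.g., an oil embargo hitting in period t
        actual = expected - beta * (u_target - u_natural) + shock
        path.append(round(actual, 2))
        expected = actual             # adaptive expectations: last period's inflation
    return path

# Holding unemployment a point below its natural rate: inflation ratchets up.
print(inflation_path(u_target=4.0, shocks={}))
# An oil shock in period 3 with unemployment ABOVE the natural rate:
# inflation jumps anyway --- stagflation, off any fixed Phillips curve.
print(inflation_path(u_target=6.0, shocks={3: 4.0}))
```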

To Explain Briefly

The latter points about the mid-1970s impact and its lingering influences are worth dwelling on for a few moments, nothing more, if only to illustrate the problems of economic guidelines in graphic ways. Since the mid-1970s, the Japanese and the Germans have never fully regained their dynamism --- both of them overburdened, as it turns out, with aging, risk-averse, highly cautious populations, and with overzealous regulators and high taxes and welfare loads that further brake ruptures with the status quo, all pushed of course in the name of high-sounding motives like social solidarity and social justice even as the two countries have fallen to the bottom of the industrial-country league in growth and competitive dynamism. The pushing, you understand, is also done by pandering politicians in both countries, who have shied away at the first sign of resistance from tackling these problems head-on --- which means, at bottom, all hokum aside, speaking frankly to the population in each country: telling people that their standard of living is seriously in jeopardy and that they cannot sustain ever-growing numbers of retirees dependent on state pensions with ever smaller numbers of workers over the next few decades. Nor can they deal effectively with ever higher levels of unemployment: Germany's is again 11%, about half of that long-term structural rather than cyclical, a problem that bears down heavily on young people from 16 years on who want to get jobs and careers going. Japan's, using similar unemployment calculations, is about the same.

The US's, to give you an idea for comparison, is 5.8% --- and we rightly regard this as too high. Most US unemployment by far, moreover, is cyclical, a result of the recession of 2001 and of decent growth since then that hasn't generated new jobs, businesses deciding instead to make do with fewer workers . . . which is a long-term benefit in increased productivity, with a pick-up in business activity bound to lead to rehiring laid-off workers to an extent, but with most of the laid-off and new entrants finding jobs in new, burgeoning industries. The US economy is matchless on this score. Since the pivotal turning point of the mid- and late 1970s, only the US among big industrial countries has matched and even exceeded the GDP growth rate and per capita income advances it recorded in the 1950s and 1960s. The British, deregulating and reducing their welfare state under Mrs. Thatcher in the 1980s, have managed to grow faster than the Germans, French, and Italians over the last 15 years or so, which has put them more or less at the level of those countries' per capita income after three decades of very slow growth for an EU country. The EU average in per capita income, by contrast, has fallen from about 85% of the US level at the start of the 1990s to 65% at the start of this decade, essentially where it was in the mid-1960s. Ireland, an economic superstar despite its pint size --- around 4 million people, like Singapore and Hong Kong --- also changed radically and did better than it had in the 1950s and 1960s, and by a long chalk: in 15 years it has gone from being one of the two poorest EU countries (along with Greece) to the richest! Sweden, Finland, and Holland, also small countries --- Holland around 15 million people, Finland 4 million, Sweden 9 million --- have likewise been doing better than they were in the 1980s and 1970s, but not as well as they did in the 1950s and 1960s.

A DEEPER CAUSE UNCOVERED: ECONOMICS IS EMBEDDED IN WIDER WORLDS

To put it bluntly

Economies, you see, are part of a larger social and political world, both at home within countries and globally across them, and they are always beset by changes flowing across the boundaries of these different worlds --- including, we should add, technological innovations of a radically restructuring sort like autos or electrification or airplanes or massive credit finance for durable goods or shifts in energy systems or ICT. The latter, as we know, has generated new weaponry and information-age warfare --- witness Iraq --- just as airplanes, urban bombing, and nuclear weapons radicalized warfare in WWII. For that matter, ICT --- new information and communications technologies --- is what facilitated all the hot-money movements of portfolio capital across countries in the late 1990s, their effects still not fully understood, let alone mastered. (Portfolio capital doesn't give its owners control of firms, any more than buying shares in GM gives you control over its management . . . unless you own about 10-15% of total shares. Then you can be put on the board.

Contrast that with foreign direct investment: FDI. It's what goes along with multinationals, and the companies that use it gain control over the management of established firms abroad or, alternatively, create new firms there.) What follows from these complications --- economies being part of larger social and political worlds, as well as interacting with the global economy and with other global trends like wars and movements of people? Bluntly said again, this: the macro-economy is never in a stable equilibrium, yet almost all macro-theories since the 1930s, when the sub-discipline first emerged --- pioneered by Keynes and his followers, with the new classical theories of supply-side economics, monetarism, rational expectations, and real business cycles likewise macro-theories --- generally assume that it is, even if they do so for different reasons.

Keynesians and new Keynesians still see an aggregate market failure built into modern capitalism that requires active fiscal and monetary management --- a view now reinforced by work on the micro-level by pioneering theorists like Joseph Stiglitz (a Nobel Prize winner two years ago) that stresses information problems, coordination problems, transaction costs (the costs economic actors incur in finding one another, bargaining, reaching legal deals, and monitoring their outcomes), and principal-agent problems, such as stockholders owning companies while managers decide production, wages, incomes, and how to report earnings, including to stockholders. New classical theorists postulate markets that adjust rapidly to any of the changes mentioned in the previous paragraph, downplay these latter problems, and generally find that if markets don't adjust quickly in textbook fashion, it's because of bad policymaking in the public realm. Both groups still assume that somehow the problems and their causes can be effectively found, illuminated, and dealt with if only there were the proper political will to implement their remedies. We hope to remedy this fast-paced superficiality with a more thorough argument here. It will show that economics is never likely to overcome these drawbacks --- never likely to emerge as a scientific discipline akin to physics or chemistry or biology (themselves not really behaving in textbook manner when it comes to shifts of paradigms like those caused by Galileo or Newton or Einstein or the quantum theorists in physics, as Thomas Kuhn, the most influential philosopher of science of the last half century, showed). It will also show that even micro-economics --- assumed to be a more consensus-driven form of economic theory: rational choice, marginalism, opportunity costs, the need for proper economic incentives, efficiency concerns, allocative outcomes of decisions made by millions or tens of millions of economic actors coordinated through market interaction, and public-choice views of how the public sector interacts with the private economy --- isn't much more "scientifically" grounded or consensual either.

PART THREE: THE DEEPER PHILOSOPHICAL REASONS WHY ECONOMICS ISN'T OR WON'T BE "SCIENTIFIC"

So far, so good.

We've clarified some key terms, shown how economics has never fully developed a theoretical consensus in macroeconomics from its birth some 70 years or so ago --- to a large extent too, not in microeconomics either for half a century or so --- and shown too that shifts from one major paradigm to another in both sub-fields of economics are driven largely by unforeseen and disruptive changes in the real economic world, both in and across countries and on the level of the increasingly interdependent global economy, that the "dominant" but still contested paradigm and its spawned theories can't handle. Thomas Kuhn, remember, said the same was true of the natural sciences when they underwent "scientific revolutions."

The difference with economics and all the social sciences is that they never achieve the status of "normal science": the long periods in the natural sciences when revolutionary, extra-rational shifts in paradigms are followed by a new, universal consensus --- intra-discipline disputes fade or die off, hypotheses are inferred and tested, and core laws in physics or chemistry are accordingly adjusted in light of these tests . . . always at the margins. On and on, until suddenly new major problems and puzzles emerge that the dominant paradigm in physics or chemistry can't handle, followed by theoretical turbulence and flux and the emergence eventually --- in less than fully rational ways, at odds with textbook views of scientific work --- of a new revolutionary paradigm that gains consensus once more. But note: however convincing the arguments in parts one and two happen to be, they add up --- to use lawyer-language (oh oh!) --- to circumstantial evidence, nothing conclusive. To be conclusive, we need to move on in this part and probe at a deep epistemological level the more basic reasons why economics (and any other social science) won't likely ever be a science similar to physics or chemistry.

The Start of the Argument

Baldly put, the epistemological naiveté that underlies the growing stress in economic (and political-science) theoretical work and methodologies --- especially the preoccupation with formal modeling and statistical testing --- is striking to almost anyone familiar with the work in epistemology and the philosophy of science going back to the path-breaking work of Quine in the 1950s, along with the subsequent work of Davidson, Sellars, and Kuhn (even in the latter's more moderate stage toward the end of his life) . . . a demolition of logical positivism or logical empiricism. Nine sets of comments follow, by way of clarification:

(i) The writings of the first two generations of positivists and logical empiricists --- the differences between them fairly negligible: the latter are mainly American, while the original positivists came out of the Vienna Circle and then emigrated here in the 1930s --- culminated in the works of Carl Hempel and Ernest Nagel: roughly, a switch from an early inductive approach to a hypothetico-deductive view of the structure of scientific theories, later amended to include probabilistic reasoning. Essentially, stripped to the bare bones, the positivist view holds that there is a timeless and unchangeable reality --- or, in the social world, a changing reality that can be predicted in advance --- objective understanding of which can be achieved and confirmed through the use of proper methodologies in order to prove (or falsify) various kinds of theoretical hypotheses. Use the right methodologies, have adequate evidence (data, information), and accurate, truth-telling propositions will emerge. This, at any rate, in boiled-down terms --- however simple-minded it seems to those who know the history of the logical positivist debate --- is what "science" now increasingly means to the methodology-minded obsessionists in political science.

(ii) Simplifying considerably, the positivist approach was demolished by Quine's path-breaking critique of Carnap (his professor) and other positivists in three ways:

-- 1. No strict distinction could be sustained between theory and observation (or facts): what counts as observation of the natural (or social) world depends on a prior theoretical commitment (economists ignore unconscious forces; Freudians see rationality of the economist's sort as superficial and largely a rationalization of unconscious forces people aren't aware of, including a multitude of possible defense mechanisms). This would later lead to Kuhn (Quine's pupil) stressing incompatible paradigms.

-- 2. Theories can't be conclusively rejected in the Popperian sense of falsification (quite apart from being proved by inductive techniques --- the reason for the positivist switch to a hypothetico-deductive view of scientific theories to begin with): that's because, in the Quine-Duhem view, specific theoretical hypotheses deduced from a more general theory are always tested in conjunction with auxiliary conditions, and any discrepancy between what is predicted in experimental conditions or in the real world and the actual results can be blamed on the auxiliary conditions. This would later lead to Lakatos's notion of a protected core of unchallengeable key theoretical premises.

-- 3. All theories are under-determined by their empirical base.

-- 4. On all these grounds, theories can only be accepted or rejected in holistic terms. Kuhn would carry this further to a clash of paradigms, especially when "normal" science breaks down and can't cope with the growing discrepancies and contradictions that the dominant paradigm faces.

(iii.) Donald Davidson --- Quine's colleague --- spelled out why, for many scholars, there is no such thing as a neutral language (read: theory) for investigating the world. All languages --- theories --- commit the investigator to a particular view that discards or ignores "facts" other languages (theories) stress, and to the key variables to concentrate on in constructing causal laws. Hence there is no such thing, strictly speaking, as an objective scientific approach. Kuhn would carry this view further still. (Davidson, it's important to add, himself continues to stress a much different point: despite different languages and hence conceptual schemes (paradigms), objectivity remains an important constraint on the range of disagreement in the scientific world . . . and even among those who share similar meanings in the ways we learn language.)

(iv.) Wilfrid Sellars --- before anyone ever heard of Derrida, and in rigorous analytical philosophy --- demolished the "myth of the given." Specifically, observation is itself an outcome of language: the map --- analytical framework (in the sense used in our own argument) or scientific theory --- conditions how you see and understand the landscape. Kuhn and his followers would go on to conclude that, essentially, there are only maps . . . maps all the way down, to use a certain language, and "there is no independent access to anything mapped." (Note: the words in quotation marks are from Simon Blackburn, in a long review of a book of exchanges between Richard Rorty and his critics --- Robert Brandom, ed., Rorty and His Critics --- "The Professor of Complacence," The New Republic [8-20-01]. I follow him here because Sellars is a very difficult philosopher for outsiders to understand, given the complexities of his writing.)

(v.) Whether anyone wants to go further and follow Rorty and Feyerabend and others who find there is no meaning to the term "truth" --- or even to the terms "more or less accurate about reality" --- other than the consensus reached by a group of people through discourse and reasoning is another matter. I myself don't. And the key piece in the Brandom volume that shakes the views of Richard Rorty --- probably the most famous philosopher in the world (now thoroughly disgusted, properly, with the Academic New Left and the multiculturalists and cultural-identity types whom he denounces as the "School of Resentment . . . politically useless, semi-literate, and tediously self-righteous") --- is James Conant's "Freedom, Cruelty and Truth: Rorty vs. Orwell." As Conant shows in a close reading of Orwell's 1984 using Rorty's logic, poor Winston is left at the mercy of O'Brien, Big Brother, and the Party, with no way of clinging to an idea of objective truth outside what the dominant group decides it is . . . no such idea of truth, no history of the world independent of what the Party and its Newspeak propaganda decides it is. What is clear is that all methods break down at some point (a key idea of Kuhn's), there is no neutral language, theories can't be refuted by falsification against observation-statements, and there is no such thing as an objective world apart from the way we map it. The remaining positivist views embraced by, say, political science "philosophers of science" --- really, practically oriented methodologists, with little formal training in analytical philosophy (and little knowledge of it, it appears) --- are simply naive.

(vi.) Quite apart from epistemology --- and probably a total surprise for political scientists who think economics is a more "scientific" discipline because of its greater formal modeling and statistical testing --- there has in fact never been any major theoretical debate of significance in economics settled by econometric work or refinements. This is patently the case in macro-economics. What Gregory Mankiw, a prominent neo-Keynesian macro-economist, said about the dispute between Solow-style and New Growth theory --- that it can't be settled by econometric modeling, however refined, and however much more evidence can be gathered in the future --- can be generalized to all the major debates in macroeconomic theorizing.

-- Mankiw argues that the main and insuperable problems are limited degrees of freedom and simultaneity --- understood to mean not just the need to solve several simultaneous equations at once, but shifts in causation between independent and dependent variables as the parameters of the latter change (Gregory Mankiw, "The Growth of Nations," Brookings Papers on Economic Activity (I) 1995). To these difficulties, add in other methodological problems --- how to code qualitative matters (technology), how to deal with complex dependence structures, the inevitable disputes over the proper functional forms of a specified model, what to do with omitted variables, doubts about exchangeability assumptions (once you've specified a model, pinned down the explanatory variables, and obtained outcomes, you shouldn't be able to predict or explain the outcomes any better by adapting your model, as Bayesians suggest, to experience and growing intuitive knowledge), unmeasured heterogeneity vs. fixed-effects models, etc., etc. . . . not to mention the insuperable arguments that will attend the defense of a theory against contrary findings, which take the form of arguing about the soundness of a data set (longitudinal or cross-sectional) --- and you have the limitations of econometrics that tie up with the previously mentioned epistemological arguments against a final, definitive empirical settlement of major theoretical disputes. And these problems are above and beyond those taught in all statistical courses dealing with multiple regression techniques (beginning, advanced, it doesn't matter): multicollinearity, heteroskedasticity, auto-correlation, problems of transforming non-linear equations into various functional forms (log transformations, polynomials, reciprocal models, exponential models), problems of simultaneous equations, time-series forecasting with MA, AR, and ARMA models, and the like.
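To make one of these problems concrete, here is a minimal sketch --- my own illustration, not Mankiw's --- of how limited degrees of freedom and nearly collinear regressors leave cross-country data unable to discriminate between rival "theories." The variable names, parameter values, and simulated data are all invented for the purpose:

```python
# Two rival "theories" of growth --- one crediting investment, one schooling ---
# fit the same simulated cross-country data about equally well when the two
# regressors are nearly collinear. Plain OLS via numpy; everything here is
# made up for illustration.
import numpy as np

rng = np.random.default_rng(0)
n = 60                                                    # "countries"
investment = rng.normal(size=n)
schooling = 0.9 * investment + 0.1 * rng.normal(size=n)   # nearly collinear regressor
growth = investment + rng.normal(scale=0.5, size=n)       # "true" driver: investment

def r_squared(x, y):
    X = np.column_stack([np.ones(n), x])                  # intercept + single regressor
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

print("Theory A (investment):", round(r_squared(investment, growth), 3))
print("Theory B (schooling): ", round(r_squared(schooling, growth), 3))
# The two R-squared values come out nearly identical, so the data alone cannot
# tell the theories apart --- underdetermination in miniature.
```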

A Key Point Again 

To repeat: all theories, even in the natural sciences, are underdetermined by their empirical observations (facts), and none can be proved or falsified by selecting out this or that claim and testing it individually or in sets, just as any contrary empirical findings can be attributed to the attending auxiliary conditions rather than to the hard-core theoretical claims themselves. And at some point ALL methods (as Kuhn noted crisply) will break down in the face of unmanageable theoretical problems grappling with a complex (and possibly changing) world. So much for debates in macroeconomics. Is the situation much better in microeconomics? No, not really. Contrary to what outsiders think, similar problems have arisen whenever key theoretical components of the micro-economic "consensus" are actually tested. Believe it or not, the standard key empirical implications drawn from neo-classical demand theory --- the symmetry of the compensated Slutsky matrix, the homogeneity of degree zero of individual (or group) demand functions, and Walras's law of adding-up or even Engel aggregation --- have, over 70 years of testing, repeatedly failed to stand up.
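For readers who haven't met these restrictions, here is a minimal sketch --- my own illustration of the standard textbook definitions, not a reproduction of the empirical literature --- that checks Slutsky symmetry and homogeneity of degree zero numerically for an idealized Cobb-Douglas demand system. The point is simply what the restrictions say; real household data, unlike this tidy system, have repeatedly violated them:

```python
# Textbook Cobb-Douglas demands x1 = a*m/p1, x2 = (1-a)*m/p2, with the Slutsky
# matrix S_ij = dx_i/dp_j + x_j * dx_i/dm computed by finite differences.
# For this idealized system S is symmetric and demands are homogeneous of
# degree zero in (prices, income); empirical demand systems routinely are not.
import numpy as np

a = 0.3   # illustrative preference parameter

def demand(p1, p2, m):
    return np.array([a * m / p1, (1 - a) * m / p2])

def slutsky(p1, p2, m, h=1e-6):
    x = demand(p1, p2, m)
    dx_dp = np.column_stack([(demand(p1 + h, p2, m) - x) / h,   # column j: d/dp_j
                             (demand(p1, p2 + h, m) - x) / h])
    dx_dm = (demand(p1, p2, m + h) - x) / h
    return dx_dp + np.outer(dx_dm, x)      # S_ij = dx_i/dp_j + x_j * dx_i/dm

S = slutsky(2.0, 3.0, 100.0)
print("Slutsky matrix (should be symmetric):\n", S)
print("Homogeneity of degree zero (same demands when prices and income double):",
      demand(2.0, 3.0, 100.0), demand(4.0, 6.0, 200.0))
```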

More recently, the efforts of George Akerlof, Michael Spence, and Joseph Stiglitz --- awarded the Nobel Prize in Economics in 2001 for work emphasizing the problems of information, principal-agent relations, and incomplete markets --- as well as the earlier work of institutionalists like Douglass North and Oliver Williamson and even Ronald Coase, have produced a challenge to the neo-classical assumptions that underpinned the dominant consensus about the competitive paradigm, or the neo-classical or Walrasian model, which reached a foundational climax in the work of Arrow and Debreu in 1954 (and later in 1959). These assumptions, to simplify somewhat, postulate that large numbers of profit-maximizing firms, each interacting with rational, utility-maximizing consumers in markets --- "for all goods, in all periods, in all states of nature (for all risks), at all locations" --- will bring about a maximization of economic welfare, leading any economy to a Pareto frontier (including through the use of an initial lump-sum redistribution of income and wealth). The work of the economists just mentioned challenges these assumptions at their roots . . . and more recently it has also been extended into macroeconomic growth theories. And increasingly, as the Nobel awards for 2001 indicated, these challenges to the dominant neo-classical paradigm have moved over the decades from the margins of economic work to its center . . . or somewhere near it anyway.

(vii.) Something else to ponder follows automatically here . . . the key point about all these theoretical challenges, micro or macro. Originally kept to the margins of the discipline, often for decades irrespective of their logical and empirical merits, none of these challenges has moved toward the center of the economics profession and come to rival or topple prevailing orthodoxies in growth theory, trade theory, or microeconomic general-equilibrium theory through steady, cumulative empirical work that put the defenders of orthodoxy on the defensive and then won them over by force of logic, rigorous testing, and persuasion. Had that been the case, the outcomes would have been driven by standard disciplinary procedures taught in graduate schools and adhered to in the better journals: by ever better statistical modeling, or ever richer data banks, or ever bolder empirical testing, or ever richer and fuller explanations of past events and solider predictions of the future. Or some combination of these, needless to add. Little of this has marked the debates here. More specifically, there hasn't been steady intellectual progress thanks to standard econometric modeling and testing --- cumulatively developed, objectively assessed --- that then has led to a new consensus in the discipline on the value of the challengers' theoretical work, adding up to new paradigms in trade theory, growth theory, or microeconomic equilibrium theory. Instead, the combat between prevailing theoretical orthodoxies and challengers --- between the old-guard and the vanguard paradigms, if you will --- has unfolded in line with what Kuhn calls a radical paradigm shift, which occurs largely for extra-rational, extra-scientific reasons.

 Meaning?

Essentially this. Whenever paradigm shifts occur in a scientific discipline, they don't occur because there has been steady intellectual and scientific progress of a strictly rational sort, as the discipline in question (physics, chemistry, economics, and the like) proclaims . . . based on cumulative, discipline-wide agreement that the vanguard scholars have been doing superior research, whether theoretical or empirical: offering, fundamentally, superior explanations of current and past developments and better predictions than the mainstream "normal science" spelled out by the dominant paradigm(s). This is a key point. Even when, in time, the explanations and predictions that follow from mainstream paradigm work run into increasing problems --- "anomalies" in Kuhn's terminology --- their adherents, it turns out, don't throw in the towel in the face of the vanguard's superior ability to handle such anomalies: they don't concede, in progressively growing numbers, that they've been outdone and bested on scientific grounds and convert to the new paradigm. Instead, much more likely --- as the anomalies in their own deductions and empirical work pile up --- the adherents of the status quo keep adapting or refining their models and theories; keep adding more and more auxiliary propositions to protect the core postulates and assumptions of their theories; keep blaming the anomalies they can't handle --- the breakdowns in explanation or prediction --- on disturbing conditions, which is precisely what the Quine-Duhem postulate predicts will happen. The models, and the theories in the dominant paradigm(s) they're based on, are said to be as sound as ever. What's needed, so the argument goes, are more adjustments and more updating . . . along with better modeling, more research money, more and better empirical data (always expensive to gather) to be run through ever more complex regression analysis, or even more computer power.

An exaggeration? Not really. Scientific work seems to unfold this way. Economics is no different.

An Important Example 

Take growth theory, a subject of major interest for this study. Not long ago, in 1997, Robert Solow --- the Nobel Prize-winning theorist of standard neo-classical growth theory that we've referred to already, dating back to his path-breaking work of the mid-1950s, but increasingly challenged by New Growth (endogenous growth) theory since the mid-1980s --- flatly said the New Growth challenge had failed: that, for all its claims, it hadn't proved it could deliver the explanatory and predictive goods it had promised. Failed? Needless to say, that judgment is one that scarcely any New Growth theorists, increasingly numerous in the economics profession, have bought . . . or, for that matter, policy analysts working for the World Bank.

No surprise here on this score. Scientific work is, among other things, the work of humans in social networks, and the humans in these networks have all the emotional and other psychological needs that men and women have in all social groups. Reputations are almost always at stake when new paradigms emerge . . . maybe even those of Nobel Prize winners. Access to research grants is also at stake; access to prestigious journals comes fairly quickly into play; access to good jobs in the discipline's job market might hang in the balance too, especially for new Ph.D.s. There are other status- and power-oriented ploys that also materialize, scientists being only human after all. Take the conflict between Old and New Growth theory again. "Of course, in economics," Paul Romer (the major pioneering formalizer of New Growth theory) observed in 1999, "your ancestors are still around, occupying positions of power in the profession, and they are not always happy when someone comes along and tries to take a fresh look at things." There are, fortunately, limits on these extra-scientific influences. Personal struggles in economics don't unfold like politics in the Middle East, after all --- with the side that ultimately has the bigger guns and bombs winning out. Sooner or later, the theoretical and empirical work of the vanguard, which might have been around for decades and was considered offbeat, marginal, or exaggerated in its claims by the old-line adherents --- more or less the case of the overlapping work of Stiglitz, Akerlof, and Spence, for instance, in their challenge to existing micro theorizing --- comes eventually, especially to younger economists with less at stake personally in the intellectual status quo, to offer a better map for identifying new problems or solving old ones.

Whether or not the young convert to the new paradigm for strictly scientific reasons is itself another matter. Essentially, as one of the most gifted of recent philosophers of science notes, theories aren't usually rejected because they run into anomalies, and similarly they aren't entirely embraced because they appear to be empirically confirmed. And for that matter, scientists can adopt numerous cognitive positions toward theories above and beyond simply accepting or rejecting them . . . the motivations here hardly confined to a disinterested quest after "truth."

(viii) If the diverse strands of the argument in the last few pages are drawn together, what conclusions emerge? Essentially this. In economics --- and in political science (to stay with these disciplines only, which are the ones drawn on in this study) --- the dominant epistemological underpinnings taught in methodology classes, which are supposed to ground mainstream scientific work, seem at odds in most respects with recent philosophical developments in epistemology . . . despite ongoing differences among philosophers about the full implications of these developments. In particular, these developments:

1. Reject the positivist distinction between theoretical and observational sentences --- theory and facts, if you like. If anything, whereas the logical positivists privileged observation over theory, Thomas Kuhn, Imre Lakatos, Larry Laudan, Paul Feyerabend, Richard Rorty, and other recent critics of positivism privilege theory over observation.

2. Reject the claim that the transition from one scientific theory to another is cumulative, a result of growing knowledge. When a new theory (paradigm) emerges, it does not preserve intact the logical and empirical content that has been confirmed in the previous theoretical work. In technical language, Kuhn and the others deny that there is "meaning invariance of the observational sentences" (facts) across the boundaries of theoretical change.

3. Contest the claim that theories are matters that can be logically evaluated by strictly empirical analysis (observational consequences), whether by means of confirmation, verification, or falsification. That's because there are no clear, objective rules or laws of non-relative knowledge and authority. Note immediately that Kuhn, Lakatos, Rorty, Laudan, and like-minded thinkers aren't irrationalists. They recognize that theories have to be evaluated, but they claim it's a complex and multifaceted matter that goes way beyond some idealized principles or logic of justification (read: empirical work). More specifically, besides the empirical work, the acceptance of a theory is a many-sided result of careers, prestige, power (who decides what's right or wrong, and the approaches to it), and changing social and historical contexts.

4. Oppose the effort made by Popper and latter-day positivists (Popper himself was never fully a member of the Vienna Circle) to distinguish between a) the discovery of a theoretical proposition and b) the justification or validation of it. As Popper saw things, it doesn't matter how theories arise, even if out of religious fervor: what matters is whether they can be falsified or not, and whether they withstand such tests or not. The distinction doesn't hold up. In particular, validation or falsification --- either one --- can't be done proposition by proposition. Quine showed 50 years ago that this was impossible: there is no way to refute a theory by throwing doubt on it proposition by proposition (e.g., no way to disconfirm Realism, say, by showing it can't explain a specific case such as American efforts to create the League of Nations or the UN or, it's said, the end of the cold war). That's because, in the Quine-Duhem approach, all tests --- including predictions --- involve other auxiliary propositions, and the failure to withstand them can be blamed on the auxiliary propositions or other disturbing circumstances.

5. Create skepticism about what Rorty calls "foundationalism." Simplified: since Descartes --- no, since Plato --- there's been an assumption that we can distinguish clearly between illusion and false beliefs on the one hand and clear knowledge and true beliefs on the other. This is supposed to be achieved by disinterested, reflective, methodologically sound approaches that will put us in bracing contact with "reality" of an objective sort. Essentially, positivists and others hold that all true knowledge can be reduced to privileged epistemic or ontological items, through proper logic and methods of empirical investigation. For some these were "logical atoms" (Russell, the early Wittgenstein), "protocol sentences" (the logical positivists), "sense data" (the early Husserl, and some of the positivists who took a psychological approach --- e.g., A.J. Ayer, whom I worked with), or "increasing empirical content" (the claims made by self-anointed arbiters like Gary King about disputes in statistics, among others).

What's At the Core Here? 

Essentially, all these approaches --- reflected in the efforts in economics and political science to count as "true" scientific work only what is quantitatively validated, through statistics or the logical refinement of formal theory (read: game theory), with everything else dismissed as journalism --- rest on a belief that there is ONE TRUE THEORY that can effectively mirror the ULTIMATE STRUCTURE of a MIND-INDEPENDENT world. Or at least approach it ever more closely in probability terms.

Kuhn and others, not just the heady postmodernists (who usually can't set out clear arguments of any sort and resort to incredible obscurity), see all this as misguided and as a reflection of the use of power and group efforts to delimit the scope of a discipline. Most philosophers, by contrast, are locked in debate as to how much of any paradigm shift is brought about by rational debate and how much by extra-scientific concerns of this social and psychological nature.

6. Deny that science is essentially an undertaking that pursues disinterested knowledge, focusing on invariable, context-less phenomena like electrons, genes, and quarks . . . never mind the very hard-to-define things that the social sciences deal with ("mind," cognitive stuff, interests, groups, institutions, states, power, influence, the "information" so beloved by formal modelers, moral judgments, intentions, structural constraints, and the like). Kuhn, pragmatists, neo-pragmatists like Rorty, and others, to put it differently, do not find that science gives anyone --- physicists, biologists, economists, political scientists, and the like --- unique, privileged epistemic access to some unassailable and objective knowledge, to be arrived at by some neutral language and proper methodologies (read: statistical, game-theoretical, etc., as the preferred power-imposed methods of methodologists and other self-anointed types). If descriptions of the physical world can't be arrived at in this neutral, objective, disinterested way, that is doubly the case with the value-laden issues studied by economists and political scientists.

By contrast, there are philosophers who do not buy the positivist program, who reject simple correspondence theories of truth and have doubts about full objectivity, but who nonetheless believe that scientific work in the natural sciences has a certain rational and objective nature about it, even when paradigm shifts occur.

(ix.) So where are we at the end of this exposition? In a nutshell, at largely pragmatic considerations for judging the utility or not of scholarly work, the present study included. To insist that there is some timeless and unchanging reality we can objectively contact, and "prove" such contact by means of ever greater methodological refinements, seems wrongheaded even in the physical sciences --- despite what is called "new realism," associated with the work of Roy Bhaskar and others, which is essentially considered little more than special pleading by most analytical philosophers these days, ever since Quine and Davidson and Sellars and others demolished the logical positivist "paradigm." In the social sciences, where change is pervasive for all sorts of reasons --- structural and unintended, intentional, shifts in knowledge (including social science) that in turn alter the behavior and understanding of human actors (reflexivity) --- it seems simply misconceived. But note: to deny that there are true theories that can be distinguished from "inaccurate" theories --- right or wrong, to use different terms --- does NOT mean that all theoretical analysis is on the same footing.

That kind of extreme relativism makes no sense.

What we can do is distinguish between GOOD and BAD theories.

The former have more depth and insight; they anticipate criticisms in advance or adjust for them as the criticisms come forth; they have pragmatic consequences, guiding predictions of a better sort or guiding policymaking more effectively . . . along with meeting the normal criteria of rigor, conciseness, clarity, appropriate breadth or focus, and the like. In the end --- to repeat what was said earlier --- the proof's in the pudding, not in some cookbook recipe or fettered adherence to it by any one chef.