The Flawed Assumptions of Neoclassical Economic Idealisation


Among the flawed assumptions of neoclassical economics is its reliance on methodological individualism as its ‘official’ methodology. This is a research stratagem imported from Greek atomism via Descartes—with his atomistic system and mechanical rules—and then through the individualistic political theorising of Hobbes and Locke. In those stories, explanation was to be located in the actions of individual actors interacting in a mechanical fashion rather than in the complex organic interplay of social institutions, groups and individuals. As we saw in Chapter 5, the emerging dominance of theoretical reasoning in political and moral theorising through the Enlightenment marked a sharp discontinuity with the practical approach to political and moral reasoning that had been derived from Aristotle and which characterised the medieval world. One important element in this transformation was the abandonment of forms of explanation based on organic metaphors and the emergence of a mechanical Newtonian metaphor as the dominant form of explanation. Atomism is an essential feature of this form of explanation, in which causal relationships are seen as being analogous to the forces operating in the movement of the planets or in classical mechanics, with individuals taking the place of the planets or of billiard balls and interacting in a mechanical fashion. As we saw in Chapter 5, however, the Newtonian mechanistic world-view has been undermined. Newtonian physics has been completely discredited as an answer to any fundamental question about the nature of the world.

Physics has come to understand reality not in terms of atomism—of discrete particles that can be described independently of all others—but as an interconnected network, the most basic elements of which are not entities or substances, but relationships. The properties of things are no longer seen as being fixed absolutely with respect to some unchanging background; rather, they arise from interactions and relationships.[40] This abandonment of Newtonianism within its parent discipline should cause economists to pause and wonder whether the Newtonian metaphor provides an adequate master narrative for economics. Having stressed the fundamental importance of our social relationships and our socially constructed moral codes in Chapters 2 and 7, I don’t believe methodological individualism can deal adequately with these continuing social relationships.

In any event, Kincaid warns us that individualism is a fuzzy doctrine: ‘Sometimes it makes ontological claims, for example, that social entities do not act independently of their parts. Other individualists put the issue in terms of knowledge: we can capture all social explanations in individualistic terms or no social explanation is complete or confirmed without individualist mechanisms.’[41]

Kincaid argues that the debate about holism and individualism is primarily an empirical issue about how to explain society. The upshot for Kincaid is that individualism is seriously misguided: ‘When individualism is interesting, it is implausible; when it is plausible, it is uninteresting.’[42] It should already be clear from Chapter 2 that the claim that methodological individualism provides the exclusive proper explanatory strategy in the social disciplines is deeply flawed. It mistakes the biological entity for the complete human. To use a modern metaphor, it mistakes a discrete piece of hardware for the whole system, forgetting that the ‘software’ is an open social construct and that together they form part of a large network. In the spirit of narrative pluralism, this does not mean that methodological individualism might not be useful in some instances. It is up to the analyst using that assumption to demonstrate its usefulness and the ‘validity’ of the results. The Enlightenment tradition from Descartes and Locke onwards to contemporary mainstream economics has just assumed this question away. In economics, this strategy assumes that all individual choices are self-serving and promote individual welfare. Not only does it fail to acknowledge the social constraints on choice, it fails to confront the possibility of mistaken choices and the normative consequences of that possibility. In those cases, one could always respond that people should bear the consequences of their mistaken choices. This, however, is a normative judgement that is open to question and is something economists claim not to be making. It is also a judgement with which the rest of us might disagree—though not necessarily all the time. There is a dynamic element in choices as people learn over time what is important to them in the changing circumstances of their lives. Mistakes are an important part of that learning process.

Individual preferences again

The empty concept of revealed preferences is—as we have already seen, and as Sen confirms—simply ‘a robust piece of evasion’[43] to avoid a serious examination of the formation and nature of ‘preferences’. This is to preserve the ideological usefulness, analytical structure and mathematical tractability of mainstream analysis. Sen calls this theory and the associated rational-choice theory a remarkably mute theory. This is because it explains behaviour in terms of preferences, which in turn are defined only by behaviour. This circular reasoning has no explanatory power. It does, however, require consistency in choice—but that is something that is not observed in practice. Furthermore, there is much evidence, including in economics, to show that in practice people’s choices are often not selfish. For most of us, this would seem to undermine the whole idea. Sen goes on to argue that, while choices based on sympathy for others could perhaps be accommodated in mainstream models, choices that are made on the basis of moral commitments are counter-preferential and cannot be so accommodated. Of course, this is something that most of us knew already, even if we did not know the jargon in which the argument is expressed. At most, only some choices are made on the basis of their contribution to personal welfare. As Sen points out, this conclusion is particularly important in respect of the provision of public goods and in work motivation. In the latter case, it would be impossible to run any organisation entirely on the basis of personal incentives—organisations have necessarily to rely on moral commitments and social cohesion in order to operate at all.

Rational choices and optimisation

The above critique undermines the normative instrumental view of rationality used in mainstream economics.[44] This was a view of rationality that was rejected firmly in Chapter 5. Human judgement cannot be reduced to static optimisation. People do often act in a self-interested fashion, but it can be rational to do things that are not in one’s personal interests. Indeed, it is normal to do so. It can be rational to make choices in accordance with moral values and it is normal to do so. It can also be rational to disregard the consequences in making choices. Some things are just not done and some things have to be done regardless. Some economists might respond to these arguments by making a distinction between short-term and long-term self-interest, and then try to accommodate our moral commitments to those long-term interests. Really, this is just further consequentialism and, as we saw in Chapter 7, it is an attempt to legislate a particular moral theory that is part of the same tradition of moral reasoning as mainstream economics. In that regard, it is now obvious that our moral principles cannot be reduced to a single conceptual system and that the moral rules that regulate life in contemporary Western society derive from several incompatible historical sources augmented constantly by contemporary cultural influences. These in turn are different from those operating in other societies. Consequently, the appeal to long-term interests is simply further evasion. One could play that game forever; but the average punter should just decline to play.

Nevertheless, mainstream economists continue to claim that economic agents in their choices optimise the benefits to be derived and that it is only rational to do so. Behavioural economists have, however, demonstrated successfully that everyday human economic behaviour is not consistent with this claim. This demonstration undermines much of the associated analysis. In particular, real human beings simply lack the cognitive abilities to maximise the benefits from their choices. Furthermore, the contexts within which we make decisions are such that optimisation—either ex ante or ex post—is simply not possible. This brings us back to the realisation that human choices involve a dynamic process relying on practical wisdom based on experience, learning about opportunities and tastes and balancing different attainable goals—rather than a crude optimisation process.

Among economists, this realisation has led to a long discussion of ‘bounded rationality’ and of ‘satisficing’—concepts associated with Kahneman and Simon, two more dissident Nobel Prize winners. For example, Tversky and Kahneman make a distinction between intuitive judgements and deliberative decisions, demonstrating that even statistical experts make systematic errors in their intuitive probability-based judgements.[45] Even significant research decisions are guided by flawed intuitions. Tversky and Kahneman have undermined the proposition that choices involving risk are made on the basis of a rational analysis of the risks involved. Their prospect theory describes how such choices are made in practice involving a two-stage process: editing and evaluation. In editing, possible outcomes are ordered following some heuristic, choosing a reference point against which to evaluate the possible outcomes. In the evaluation phase, people choose an outcome with the highest utility based on the potential outcomes and their respective probabilities. Importantly, the way people frame an outcome subjectively in their mind affects the utility they expect or receive. What is more, it has been demonstrated empirically that people not only consider the value they receive, but the value received by others.
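The two-stage process just described can be made concrete. The sketch below uses the functional forms and the published median parameter estimates from Tversky and Kahneman's later, cumulative version of prospect theory; the particular prospects evaluated are invented for illustration only.

```python
# Illustrative prospect-theory evaluation. The functional forms and the
# parameter values (median estimates from Tversky & Kahneman, 1992) are
# standard; the prospects below are purely illustrative.
ALPHA = 0.88   # curvature of the value function
LAMBDA = 2.25  # loss-aversion coefficient: losses loom larger than gains
GAMMA = 0.61   # curvature of the probability-weighting function

def value(x, reference=0.0):
    """Subjective value of an outcome, coded as a gain or a loss relative
    to the reference point chosen in the editing phase."""
    g = x - reference
    if g >= 0:
        return g ** ALPHA
    return -LAMBDA * (-g) ** ALPHA

def weight(p):
    """Decision weight: small probabilities are over-weighted and
    moderate-to-large probabilities under-weighted."""
    return p ** GAMMA / ((p ** GAMMA + (1 - p) ** GAMMA) ** (1 / GAMMA))

def evaluate(prospect, reference=0.0):
    """Evaluation phase: sum of decision-weighted subjective values
    over (outcome, probability) pairs."""
    return sum(weight(p) * value(x, reference) for x, p in prospect)

# Risk aversion in gains: a sure $50 beats a 50:50 chance of $100.
sure = evaluate([(50, 1.0)])
gamble = evaluate([(100, 0.5), (0, 0.5)])
```

The framing point in the text corresponds to the `reference` parameter: shifting the reference point re-codes the same objective outcome as a gain or a loss, and loss aversion (`LAMBDA`) then changes its subjective value.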

Similarly, Simon pointed out during a long career beginning with his first book in 1947 that real people have only limited abilities to formulate and solve complex problems. In particular, we have only limited abilities to acquire, process, retrieve and transmit information. In addition, we often have conflicting aims and frequently our goals and the means to achieve them are interrelated and cannot be separated. Furthermore, it is simply impossible to make a logical search through the myriad options open to us, and their consequences. Consequently, we use heuristics or rules of thumb and our emotions as well as logical analysis in our decision making. Any attempt to optimise in practice just leads to confusion. Therefore, we make decisions that are satisfactory rather than optimal. The ‘normal science’ of economics has tried to maintain its framework by introducing the concepts of search, deliberation and time costs into the decision-making process and claiming as a result that satisficing is effectively the same as optimising. In doing so, mainstream economists have trivialised Simon’s devastating theoretical insights in the interests of their research program. To my mind, these are very different worlds and the above attempt to maintain the neoclassical framework is just another form of evasion.
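The contrast between the two worlds can be stated in a few lines of code. The option values and the aspiration level below are invented; the point is only that a satisficer stops at the first ‘good enough’ option, while an optimiser must inspect the whole space—which is exactly what Simon argued real people cannot do.

```python
# A minimal contrast between optimising and Simon-style satisficing.
# Payoffs and the aspiration level are invented for illustration.
def optimise(options):
    """Exhaustive search: examine every option and take the best."""
    return max(options)

def satisfice(options, aspiration):
    """Stop at the first option that meets the aspiration level."""
    for option in options:
        if option >= aspiration:
            return option
    return None  # nothing is good enough; revise the aspiration level

payoffs = [3, 7, 5, 9]
best = optimise(payoffs)             # 9, but only after inspecting everything
good_enough = satisfice(payoffs, 6)  # 7, found without exhausting the list
```

The mainstream rejoinder discussed above amounts to folding the cost of continued search into `optimise`; Simon's point was that the search space and our cognitive limits make the exhaustive calculation unavailable in the first place.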

Pareto-optimality and welfare

Neoclassical economics goes on to claim that, subject to a broad range of assumptions, a competitive market will allocate resources between competing uses in an optimal fashion. This is the contemporary version of Smith’s belief in the ‘invisible hand of the market’. Smith believed that God had so arranged creation and human affairs that self-interest balanced by sympathy for our fellow humans would produce the best of all possible worlds. Now, in the absence of God, mainstream economists would have us believe that the rational choices of the individual, exercised in perfectly competitive markets with perfect information and mobility of resources, will have the same effect. This idea of the theoretical primacy of competition and markets is developed in first-year microeconomic courses as an extension of the ‘laws of supply and demand’ and the properties of market equilibrium.

This scheme draws on utilitarianism’s search for the ‘greatest happiness of the greatest number’ to propose that the consumer is motivated to purchase goods by the ‘utility’ she or he derives from them—a reflection of her or his preferences. Then it is claimed that with competition in demand, the benefit or utility received by the individual consuming the last unit of goods—for example, an apple—equals the price she or he is willing to pay. Similarly, with competition in supply, it is claimed the resource cost to produce that last apple equals the price the producer receives. Voluntary exchange between producers and consumers in a market-clearing auction will then yield a set of prices that equates the marginal benefit of each commodity with its marginal cost. This has the practical effect of assuming out of existence the problems for the achievement of the greatest happiness that result from the existing socio-economic order and its power relationships. To the limited extent that these problems are recognised, they are seen as constraints on preferences that disappear into the background. At one stage, it was hoped that utility might be measured and thus provide an objective measure of the benefit derived. This quickly proved an illusion, leading ultimately to the development of the empty concept of revealed preferences. As we saw earlier, this means that people buy only what they prefer and they prefer what they buy; and, as it turns out, they are not logically consistent in those purchases.[46]
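The single-commodity story can be reduced to a few lines. The linear demand and supply schedules below are hypothetical; the point is that the textbook ‘equilibrium’ is nothing more than the quantity at which marginal benefit (the demand price) equals marginal cost (the supply price).

```python
# Textbook market clearing in miniature, with invented linear schedules.
def marginal_benefit(q):
    """Demand side: willingness to pay for the q-th unit (e.g. apples)."""
    return 10.0 - 1.0 * q

def marginal_cost(q):
    """Supply side: resource cost of producing the q-th unit."""
    return 2.0 + 1.0 * q

def clearing_price(lo=0.0, hi=10.0, tol=1e-9):
    """Bisect on quantity until marginal benefit equals marginal cost --
    the role the model assigns to the market-clearing auction."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if marginal_benefit(mid) > marginal_cost(mid):
            lo = mid  # buyers still value the next unit above its cost
        else:
            hi = mid
    q = 0.5 * (lo + hi)
    return q, marginal_benefit(q)

q_star, p_star = clearing_price()  # 4 units at a price of 6 for these schedules
```

Everything the critique in the text targets is visible here: the schedules are assumed to be known, continuous and well behaved, and the ‘auction’ is a computation that no real market performs.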

There are even further technical problems with this most basic of models in the marginalist movement.[47] The model assumes that supply and demand curves are continuous and well behaved, but that is not necessarily the case. Prices can also be resistant to change. In any event, can this model be operationalised? The most damaging criticism of these models is that they impose impossible computational demands on individuals and firms. Real people cannot be making production and purchasing decisions on the basis of such computations. Consequently, in the real world prices cannot be established in this manner except in the crudest possible sense. The result is that it seems likely that the marginalism implicit in the model is an artefact of the analytical system rather than an accurate description of real behaviour. The whole model appears to exaggerate the influence of pricing signals on economic decisions, reducing all other influences to ‘costs’, however difficult it might be to attribute a monetary value to those costs. Nevertheless, these concerns are generally ignored.

The first fundamental law of welfare economics is then derived by generalising from such single-commodity models to general equilibrium of the economy, abstracting from much detail. This involves the unrealistic assumptions of optimisation in all other markets and independence from them. We are then told that under very restrictive and unrealistic assumptions, a competitive market equilibrium is ‘Pareto-efficient’ or ‘Pareto-optimal’ or ‘socially optimal’—where Pareto-optimality is defined as that state in which it is impossible to improve the welfare of some members of society without reducing the welfare of others. In practice, those restrictive assumptions are quickly forgotten.
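The definition just given can be stated as a simple comparison between allocations. The utility vectors below (one entry per person) are invented for illustration; note that the criterion is silent between a grossly unequal allocation and an equal one, since neither Pareto-improves on the other.

```python
# The Pareto criterion as a comparison between allocations of utility,
# listed person by person. The numbers are invented for illustration.
def pareto_improves(a, b):
    """True if moving from allocation a to allocation b makes at least one
    person better off and makes nobody worse off."""
    return (all(y >= x for x, y in zip(a, b))
            and any(y > x for x, y in zip(a, b)))

unequal = [99, 1]
equal = [50, 50]
# Neither move is a Pareto improvement: the criterion cannot choose
# between them, however unequal the starting point.
```

This is why Pareto-optimality is always relative to an initial distribution: an allocation giving one person 99 and another 1 can be ‘optimal’ in this sense, since any equalising transfer makes someone worse off.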

Although superficially attractive as a definition of maximum welfare, Pareto-optimality is more than deeply flawed; it is simply not true. As Blaug writes: ‘Pareto welfare economics…achieves a stringent and positivist definition of the social optimum in as much as Pareto-optimality is defined with respect to an initial distribution of income. The practical relevance of this achievement for policy is nil.’[48]

Bromley is among the many other economists who have attacked the scientific objectivity of Pareto-optimality as a decision rule in policy analysis, seeing it as being inconsistent and incoherent with no special claim to legitimacy.[49] The claim that economic efficiency is an objective measure employed by objective scientists is simply wrong. Warren Samuels, for his part, has described in some detail the large number of normative assumptions underpinning the definition, showing that the concept of Pareto-optimality necessarily involves moral judgements about the existing distribution of wealth and power and the legal system, which enforces ownership rights.[50] Its imposition as a decision rule in economic policy making—the requirement that ‘economic efficiency’ ought to be the decision rule for collective decision making—is also a normative choice.[51] In short, it is nothing but a pseudo-scientific defence of the economic and social status quo.

This approach also prohibits interpersonal utility comparisons on the grounds that there is no ‘scientific’ method for making them, that such comparisons involve normative judgements and that they offend the principle of consumer sovereignty. The fact that the principle of consumer sovereignty is itself a normative judgement and that we make such interpersonal comparisons every day seems to have passed the economics profession by. In doing so, it contravenes a fundamental insight of the marginalist movement in economics: the everyday experience of declining marginal utility—that is, the experience that the benefit derived by any consumer from one unit of consumption declines as the total amount of that consumer’s consumption increases.[52] It is possible, therefore, in practice, to improve welfare by taking from the rich and giving to the poor—regardless of what Pareto’s disciples might claim.
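The redistribution argument can be illustrated with a toy calculation. The square-root utility function and the wealth figures are invented; any concave (declining-marginal-utility) function gives the same qualitative result. Note that summing utilities across persons is precisely the interpersonal comparison the Pareto framework disallows—which is the point at issue.

```python
import math

# Toy illustration: with declining marginal utility, a transfer from a
# richer to a poorer person raises the simple sum of utilities.
# sqrt is used purely as an example of a concave utility function.
def utility(wealth):
    return math.sqrt(wealth)

def total_utility(wealths):
    """An explicitly interpersonal sum -- exactly what the Pareto
    framework refuses to compute."""
    return sum(utility(w) for w in wealths)

before = [100.0, 4.0]  # rich person, poor person
after = [90.0, 14.0]   # transfer 10 from rich to poor
```

The transfer costs the rich person less utility than it gains the poor person, because the marginal unit of wealth is worth less the more one already has.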

Furthermore, this measure of welfare depends on the subjective judgements of individual consumers and producers; however, we all know from painful experience that not all subjective choices are welfare enhancing—and those exceptions are very important. Furthermore, as is readily conceded by most economists, the price system does not operate as an adequate signalling system for public goods or for goods where there are either positive or negative spill-overs. Those prices never reflect their real social and economic worth and cost. Importantly, while this is a well-recognised phenomenon within mainstream theorising, we give almost no policy attention to the negative externalities associated with advertising. The subjective willingness of individuals to pay for many advertised goods does not necessarily reflect their contribution to welfare.

Finally, in practice, beyond a minimum level, real people do not judge their well-being in absolute terms. Rather, we judge our subjective welfare by comparing ourselves with each other—particularly those in our immediate social circle. The net result is that in a world in which there are differences in individual welfare, it is always possible to improve aggregate subjective welfare by redistributing income. Nevertheless, the search for efficiency is usually—and unreasonably—considered by mainstream economists to be the best available decision rule in the circumstances. In contrast—in concert with Blaug and Bromley—I argue that these fundamental flaws mean that Pareto-optimality is not a legitimate decision rule in public policy. Its continued use reflects an improper unwillingness on the part of policy advisers to undertake the messy, non-algorithmic task of judging the likely real welfare consequences of possible actions. This does not mean that welfare improvement is unimportant, only that this is not the way to judge it.

Of course, these criticisms are well known to mainstream economists, but they tend to pass over them in embarrassed silence as they press on with their normal science. The consequences for their normative prescriptions are simply ignored. A cynic might conclude that the whole idea of Pareto-optimality was invented to deflect a strong conclusion from marginalist theory in favour of redistribution to the poor.

General equilibrium

The strongest version of the contemporary economist’s faith in competitive markets is the concept of general equilibrium—the core concept of neoclassical economics, that best of all possible worlds. The idea of a social equilibrium dates back to Smith’s moral theorising, but the direct application of the idea to neoclassical economics originates with Walras. In its modern form it dates from the mathematical modelling of Kenneth Arrow and Gérard Debreu in the 1950s. One cannot but wonder whether this particular development resulted from a need to find a ‘scientific’ justification for the claimed superiority of the capitalist system in the face of the ideological challenge posed by communism at that time. In any event, it follows from Kurt Gödel’s work on mathematical logic that no such formal logical system can be self-contained and consequently such systems cannot contain within themselves the rules for their application.[53] Nevertheless, the basic idea in its present form is that the prices, consumption, production and distribution of all goods and services in an economy are interrelated, with a change in the price of one product affecting all others. Another way of saying the same thing is that the economy is composed of a set of interrelated markets. Of course, this simply ignores the large part of the economy that is not in the market sector and the social determinants of the operation of the market sector.

In a ‘perfectly competitive economy’, it is claimed, the economy operates so that at a unique set of prices there exists equilibrium of production and consumption that is Pareto-optimal and in which there are no under-utilised resources. That assumption of perfect competition involves the idea that no economic agent has sufficient market power to affect the price paid for goods or services—that is, they are too small to affect the price. While it is central to the whole framework, this is clearly not true. Huge multinational corporations that have very considerable and enduring market power dominate modern economies. They also enjoy massive increasing returns to scale and scope, rather than the diminishing returns assumed in this theory.

In any event, leading British economist Joan Robinson (1903–83) argued in ‘The Impossibility of Competition’ that there was a logical contradiction in the basic conception of competition as a state of equilibrium.[54] The tendencies for competition to make markets imperfect through product differentiation, towards oligopoly in the presence of economies of scale and for excess capacity to lead to collusion are all rooted deeply in the very nature of the competitive system. As a result, she strongly doubted that it was proper to treat competition as a normal equilibrium state. Further, she was a very strong critic of the value of this formalism and its ‘thicket of algebra’.[55]

Nothing has really changed to undermine this early critique. Rather, it has been repeated regularly since that time. There has been a recent tendency for economists to claim that the potential for firms to enter a market guards us against the exploitation of market power. As entry into markets dominated by one or more large companies can be very costly, however, this claim simply lacks credibility. Similarly, Nicholas Georgescu-Roegen criticised the economic value of such general equilibrium modelling and even its value as a mathematical exercise: ‘There are endeavours that now pass for the most desirable kind of economic contributions although they are just plain mathematical exercises, not only without any economic substance but also without any mathematical value.’[56]

In his more recent critique, Ormerod calls the model a travesty of reality, singling out the assumption of a ‘continuum of traders’, meaning an uncountable infinity of traders—an absurd mathematical assumption necessary for the solution of the equations in the model.[57] In any event, this formal system drastically over-simplifies the complexity encountered in real economies and abstracts from their differing institutional frameworks. In addition, firms in all their complexity do not appear in this model. Furthermore, it is simply assumed that the price system exists and that economic agents are simply price takers without any real freedom of choice. To get over this difficulty, the fantasy was invented of an auctioneer who would set prices but would disallow trade until equilibrium was achieved. In practice, however, people do buy and sell at prices that would not clear the market. Any such trade could undermine the possibility of any convergence towards equilibrium. Furthermore, any auctioneer—and economic agents more generally—faces a task that is a computational impossibility. The whole idea is just more nonsense.

Importantly, it is far from clear whether the model will result in a single stable unique equilibrium.[58] Rather, multiple equilibria could exist and could be path dependent. This possibility alone undermines the generality of the policy conclusions of the mainstream model and opens the possibility of successful coordination by governments.[59] Furthermore, it is not clear how these prices and resource allocations are arrived at or whether, in the event of a shock, the economy will converge back to the same equilibrium. As we have seen, the theory involves the standard assumptions that economic agents are rational, that there are no externalities, that information is perfect and that there is a complete set of markets. The theory also requires consumer preferences to be subject to diminishing marginal utility and that there be no economies of scale. As we have seen, these assumptions are just not credible.

Much of the normal science in neoclassical economics therefore involves trying to relax these assumptions while retaining the fundamental conclusion of the Pareto-optimality of markets and the proposition that they will clear. None of this can deal successfully with the consequences of introducing uncertainty into the model. David Newbery and Stiglitz have shown that, in an uncertain world, a competitive equilibrium is in general not a Pareto-optimum. The real world is an uncertain world. This undermines the whole point of the model, except in extremely restrictive conditions.[60]

The attempt to relax the assumptions of the neoclassical framework looks like medieval scholasticism and is a waste of time. In this respect, Stiglitz sees ‘the pervasiveness and persistence of unemployment’ as the ‘critical experiment which should lead to the rejection of the basic equilibrium model which [depending on how you view it] either predicts or assumes full employment’.[61]

The general theory of the second-best

Further, a powerful neoclassical qualification to the use of neoclassical economics as a policy tool—the ‘General Theory of the Second-Best’, which questions whether any incremental move towards the market idealisation will, in practice, produce an improvement in social welfare (still defined in positivist terms)—is ignored as being incapable of application in practice. Lipsey and Kelvin Lancaster have, however, demonstrated convincingly that all violations of the assumptions of the general equilibrium model would have to be removed in order to be confident that any move towards the market idealisation would lead to increased welfare.[62] Indeed, Lipsey confirms that Pareto and Paul Samuelson had a similar insight.[63] It follows that one cannot assume the contrary in practice. As we saw above, market failures are pervasive in the real economy—or, as Lipsey says, the proportion of economic space affected by distortions is close to 100 per cent. Furthermore, there are no general policy rules that can be applied to piecemeal improvements. This undermines the policy usefulness of the whole idea of market failures.

Information economics

Information economists have unpicked the information assumptions underlying the fundamental theorem of welfare economics—undermining the long-standing presumption that markets are necessarily efficient. They have shown—using conventional analysis—that where information is costly, which it almost always is, appropriate government intervention could make everyone better off.[64] This alone undermines the standard presumption in contemporary policy discourse against government action. This has the effect of confirming the view that market failures are pervasive in real economies—further undermining the policy relevance of the whole framework. Nowhere are the insights of information economics more important than in respect of financial markets—a central mechanism for the allocation of resources in capitalist economies. Stiglitz argues from a mainstream and information theoretical perspective that since financial markets are concerned essentially with the production, use and processing of information, they are somewhat different from other markets: ‘Market failures are likely to be more pervasive in [financial] markets; and…there exist forms of government intervention that will not only make these markets function better but will also improve the performance of the economy.’[65]

For example, given the casino-like atmosphere in stock markets, the prevalence of information cascades, misinformation, insider trading, booms and busts and downright fraud, how anyone can think that capital raising in these markets will maximise welfare is beyond me. If, however, financial markets are not efficient in the neoclassical sense, there is little prospect that the rest of the economy could ever be efficient either. Of course, some might claim that regardless of their short-term deficiencies, financial markets are efficient in the long run, but that is more evasion, as there can be little doubt that the short run does matter in the allocation of resources by these markets.

The separation of efficiency from distribution

One of the consequences of the neoclassical framework is that economists usually claim that issues to do with economic efficiency can be separated from distributional issues. This separation is used to justify their focus on efficiency issues and on the importance of economic growth in policy discourse—leaving it to governments to deal with distributional issues in a somewhat ad hoc and illegitimate manner. Stiglitz has, however, also undermined the practical usefulness of that distinction: ‘Government cannot and does not rely on lump-sum taxes as a basis of redistribution…One of the central consequences of the second fundamental welfare theorem was the ability to separate efficiency issues from distribution issues. In the absence of lump-sum taxes, this separation is not possible.’[66]

The practical consequence is that measures that are claimed to promote efficiency will inevitably have distributional impacts, while measures to promote equity will have efficiency impacts. This lack of separation complicates greatly the task of economic management. This complication has, however, usually been overlooked in the recent priority given to efficiency in the optimistic hope that the benefits of greater efficiency will somehow trickle down to the underprivileged. In any event, this inability to separate efficiency and distribution effects further undermines Pareto-optimality as a minimal measure of welfare.

It should be noted that because of its idealisation of the market and its absurd assumptions, neoclassical economics has almost nothing useful to say about the achievement of efficiency—defined in any realistic fashion—in real economies.

Game theory

In an attempt to bolster its shabby claims to credibility, mainstream economics has taken to playing games based on simple optimisation assumptions comparable with those of neoclassical economics but with uncertainty of outcomes thrown in—claiming that this is a scientific procedure akin to the natural sciences and that it throws light on how real humans behave. As Frank Stilwell confirms, game theory has shown that the focus in mainstream economics on rational individuals is internally inconsistent and descriptively inaccurate. What it really demonstrates is that real people—even in highly artificial games—tend to behave in an altruistic way, assuming that other people are guided by social norms. It has, therefore, strengthened the critique of neoclassical economics.[67]