
Agenda - Volume 19, Number 2, 2012

For a Charter of Modelling Honesty

Henry Ergas1

The theory of modelling, and fitness for purpose

In a classic discussion of mathematical models in the social sciences, the philosopher Max Black describes models as metaphors, raising the fundamental, and long-debated, question of in what sense (if any) a metaphor can be ‘true’ or ‘false’ (Black 1962). Perhaps the most sensible answer to that question is offered by Clarke and Primo (2012), who view models in the social sciences as similar to maps — abstractions that describe relationships between entities in a defined space. As with maps, models are to be evaluated not by their inherent resemblance (or lack of it) to ‘the original field of thought’, but by their fitness for purpose: whether they help us get where we want to go (Clarke and Primo 2012: 53).

That purpose differs from model to model but, generally, economic models fall into two broad categories: the theoretical and the empirical. To classify a model as theoretical is not to suggest it is entirely inward-looking, in the sense of being oriented solely to the working out of theory: after all, theoretical models can be predictive, as are many economic models that seek to identify the ultimate consequences for one variable (say, wages) of a change in another (say, the company tax rate). Gibbard and Varian (1978), for example, famously described standard models, such as that of rent controls, as caricatures that explain consequences by identifying and exaggerating, much as political cartoons do, the salient features of a person or situation.

Rather, the difference between theoretical and empirical models is that the latter describe the relationships within a data set, instead of simply those between a set of analytical constructs. In turn, those relationships within the data set may serve a range of purposes, from the exploratory (does class size in primary school affect students’ lifetime income?), to the explanatory (by what mechanisms does class size in primary school affect students’ lifetime income?), the confirmatory (are the mechanisms by which class size in primary school affects students’ lifetime income consistent with conventional theoretical models of human capital?) and the evaluative (by how much, if at all, would halving class sizes in primary schools increase students’ lifetime income, and would that justify the costs involved?).

The proper interpretation and assessment of answers to questions of this kind relies on searching scrutiny of the model from which they are derived. Scholarly journals promote this through the peer-review process, and by requiring authors to make models and data available to third parties for purposes of replication and testing.

The rhetoric of modelling, and the market for excuses

There is, however, a tension between the professional standards essential to progress in economic modelling and the growth of a new and increasingly crucial purpose of models — the justificatory role. Used in that role, models serve primarily as a means of giving credibility to claims about policies, rather than as a framework for structuring the testing of those claims.

These models are primarily a form of rhetoric — a speech act, to use J. L. Austin’s term — which (again, in Austin’s terms) is inherently perlocutionary: that is, a form of action aimed at securing a particular persuasive effect (Austin 1962: 109). Thus, just as yelling ‘fire’ is intended to induce people to clear the building, so showing that imposing a carbon tax will not undermine economic growth is intended to justify such a tax. And what counts with such speech acts — not least in the minds of those engaged in them — is not what they say, but what they are likely to do.

The production of these speech acts has become part of a market for excuses which deserves more economic analysis than it has received. In it, economic modellers produce ‘results’ which are typically commissioned and distributed by policy proponents (or their opponents) to decision-makers and publics. The policy being advocated (or resisted) is not determined by the ‘results’; rather, the causality all too often runs the other way. To make matters worse, there can be a form of Gresham’s law at work in which bad modelling drives out good, with each poor-quality study conferring a negative reputational externality on the market as a whole. As perceptions of the quality of the average study decline, the gains to producing a high-quality study can decline too, since consumers are inclined to place less weight on models as such, creating an unravelling effect on the market as a whole. Having a model, any model, becomes simply a ‘tick the box’ feature of policy advocacy, quite regardless of the model’s substantive merit.

The greater the difficulty consumers have in distinguishing the quality of modelling, the greater is the risk of ‘junk modelling’ dominating. Increasing the risk is the fact that poor-quality studies can masquerade as good-quality models by replicating their ‘look and feel’: for instance, through sheer size and complexity (as in the reports on the NBN discussed in this forum by Kevin Morgan), technical sophistication (as in the climate-change modelling discussed by Ergas and Robson), opacity and impenetrability (as in the modelling of royalties discussed by Pincus, of company tax forecasts discussed by Davidson, and of the stimulus discussed by Humphreys) or even simply by generating seemingly very large and highly newsworthy numbers (as in the modelling of the costs of congestion discussed by Harrison) — and, most often, by a combination of all of these. The fact that key interpretative economic concepts are often misunderstood and misapplied (as in the modelling of ATM regulation discussed by Green) then allows questionable results to be parlayed into effective advocacy.

A ‘Charter of Modelling Honesty’

Ideally, low-quality modelling would be screened out through the process in which (in Milton’s glorious phrase) ‘[truth] and falsehood grapple’, for ‘who ever knew truth put to the worse in a free and open encounter?’ (Milton 1644: 45). But that requires at the very least a contest that is indeed ‘free and open’ and hence maximises the chances of overcoming proponents’ efforts to insulate their models from untoward encounters with the evidence. Yet as several of the articles attest — including Morgan on the NBN, Ergas and Robson on the carbon tax, and Pincus on royalties — it has become common practice for the Commonwealth government to refuse to disclose to third parties the information needed for models to be adequately tested.

Standard freedom-of-information processes are ineffective in overcoming those refusals. They are costly, requiring the party seeking the information to make a substantial investment of time and financial resources; they are slow, as departments can take many months to respond, by which time the issue is no longer salient; and they are readily avoided, as they cannot deal with situations where some aspects of a model have not been fully documented. Moreover, these processes are subject to substantial incentive problems, as the benefits of disclosure are available to all (since the models being disclosed are public goods), while the costs fall entirely on the party seeking it. As a result, they lead to significant underinvestment in securing disclosure, relative to the levels that would be socially desirable.

But even were freedom-of-information processes more effective than they are, they should be unnecessary. Rather, the default position ought to be that the quality of public policy can only gain from full disclosure of the models that inform it. By allowing those models to be properly tested (and be seen to be tested), full disclosure would not only improve the ultimate quality of those models, but also strengthen public confidence in economic modelling. And last but not least, those models are public goods (in the sense of being non-rivalrous in consumption, so that everyone can consume more of a particular model without anyone consuming less), so it is inefficient to force third parties to devote resources to reproducing them.

Some parts of government, most notably the Productivity Commission, already achieve high standards of transparency in their modelling work. But that is far from being the case for the Commonwealth government as a whole. It would therefore be desirable for a reform-oriented government to adopt a ‘Charter of Modelling Honesty’.

That charter would set a ‘best practice’ standard for disclosure, including how models and associated data were documented and communicated. It would set out timelines and processes for access, including procedures for dealing with confidential or proprietary information. Under the terms of the charter, government would be required to report annually on its implementation, as part of wider freedom-of-information reporting. And the charter would apply not only to models generated internally but also to those on which third parties rely in pressing their case upon Ministers and departments.

The articles in this forum are a chilling warning. They point to a degeneration that threatens the legitimacy and credibility of economic techniques in the Australian public sector. It is time for all economists, and indeed for all those concerned with the quality of public policy in this country, to act to protect public confidence in tools and methods that deserve a better fate than that which this forum paints.


Austin, J. L. 1962, How To Do Things With Words, Oxford: Clarendon Press.

Black, M. 1962, Models and Metaphors, Ithaca: Cornell University Press.

Clarke, K. A. and Primo, D. M. 2012, A Model Discipline, Oxford: Oxford University Press.

Gibbard, A. and Varian, H. 1978, ‘Economic Models’, Journal of Philosophy 75(11): 664–77.

Milton, J. 1644, Areopagitica: A Speech for the Liberty of Unlicensed Printing.

1 University of Wollongong and Deloitte Access Economics. I am grateful to William Coleman for his comments and suggestions.
