Learning from agri-environment schemes in Australia
- A range of metrics are used to evaluate and prioritise projects within agri-environment schemes.
- The way the metric is calculated, and the choice of variables included, are important decisions in the evaluation process.
- When funds are scarce, the quality of the metric is important.
- Errors in metric design are readily avoidable.
- It is more important to ensure that high-quality decision metrics are used than to invest in improving the quality of information about projects.
Good decision-making in agri-environment schemes is information-intensive. Environmental managers usually put a lot of effort into collecting and weighing up information on landscape characteristics, ecological responses, human behaviour, and project risk to feed into their decision-making. Often, however, they take a rough-and-ready approach to combining that information into a form that is useful for decision-making. For example, for investments made under the National Action Plan, Pannell and Roberts (2010) commented: ‘The processes used by Catchment Management Organisations generally did not involve comprehensive systematic analysis of investment options or project design options.’
Source: Photo by David Freudenberger.
Does this matter? Does it make a difference to environmental outcomes to use a theoretically sound decision metric, compared with a weak decision metric? That was the question we set out to answer by comparing environmental outcomes generated by these two approaches.
What we found, in short, was that it does matter which decision metric you use. Indeed, it can make an enormous difference. As a consequence, many decision metrics used by environmental managers result in us missing out on very large environmental benefits.
What’s in a metric?
What is a decision metric, and why is it so important? Around the world, billions of dollars’ worth of public funds are allocated to environmental projects each year. These funds are scarce relative to the amount needed to support all possible environmental projects, so prioritisation is essential. This means some projects are judged to be more valuable than others and receive funding, whereas less valuable projects miss out.
A common approach used by environmental managers to score the projects they have to choose between is to define a set of variables believed to correlate with projects’ benefits and costs, and combine them into a formula or metric so that projects can be compared. Numerical values or scores are assigned to each potential project and these scores are used to rank the projects. For example, the Conservation Reserve Program in the United States combined measures of wildlife benefit, water quality benefit, erosion risk, enduring benefit, air quality benefit, priority area, and cost to evaluate program investments (Hajkowicz et al. 2009).
Of course, there are many different ways the various benefits and costs of a project could be combined, and there are thousands of different decision metrics in practice around the world. Unfortunately, many if not most of these decision metrics have problems in the way they determine the value of the project. Indeed, our analysis showed that the performance of many of these metrics is not much better than choosing projects at random. If that’s the case, there’s little point in wasting your time on using these metrics — which take time and money to generate — because you may as well simply draw projects out of a hat. Commonly used decision metrics have a range of weaknesses, including adding variables that should be multiplied, omitting important variables related to environmental benefits, omitting project costs, or subtracting costs rather than dividing by them (see Box 17.1).
But what do these weaknesses add up to in terms of lost value? Surprisingly, few studies have attempted to quantify this (though see Joseph et al. 2009). We estimated the environmental losses resulting from each of these weaknesses.
The first principle of creating a strong metric is that it should reflect a measure of project benefits divided by a measure of project costs. Economists call this a benefit–cost ratio (BCR).
There are plenty of project-ranking metrics in actual use that don’t do this. Some subtract costs instead of dividing by them, and some (remarkably) ignore costs entirely. These mistakes are costly to the environment.
To illustrate this, consider three hypothetical projects with the following benefits (B) and costs (C), with figures chosen purely for illustration:

Project 1: B = 10, C = 2 (BCR = 5.0)
Project 2: B = 20, C = 10 (BCR = 2.0)
Project 3: B = 25, C = 20 (BCR = 1.25)

Because the budget is limited, the first project we should choose is the one with the highest benefits per unit cost (the highest BCR), which is Project 1. But if we rank according to B − C (benefit minus cost), the top-ranked project appears to be Project 2 (with B − C = 10), while ranking according to B alone (ignoring costs altogether) tells us that Project 3 is best.
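The divergence between the three ranking rules can be sketched in a few lines of code. The benefit and cost figures below are invented for illustration, chosen so that each rule picks a different winner:

```python
# Three hypothetical projects (benefit B, cost C), with figures chosen
# purely for illustration so that each ranking rule picks a different winner.
projects = {"Project 1": (10, 2), "Project 2": (20, 10), "Project 3": (25, 20)}

def best(rule):
    """Return the project name that scores highest under `rule`."""
    return max(projects, key=lambda name: rule(*projects[name]))

print(best(lambda b, c: b / c))  # BCR: Project 1 (10 / 2 = 5.0)
print(best(lambda b, c: b - c))  # B - C: Project 2 (20 - 10 = 10)
print(best(lambda b, c: b))      # B alone: Project 3 (B = 25)
```

Under a tight budget, only the first rule reliably picks the project delivering the most benefit per dollar.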
The loss of environmental values from using the wrong metric (i.e. ranking according to B-C or B) depends on how tight the budget is. Assuming that the budget is enough to fund 10 per cent of projects, the loss of environmental benefits is 12 per cent for B-C, and 19 per cent for B (based on simulating 1,000 funding rounds with 100 potential projects in each).
In other words, fixing up the formula is like increasing the program budget by 14 per cent or 23 per cent. It’s much easier to fix the formula than to increase the budget.
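The kind of simulation behind these figures can be sketched as follows. The benefit and cost distributions here are arbitrary lognormals, not the distributions used in the study, so the exact percentages will differ, but the ordering of the ranking rules comes out the same:

```python
import random

def simulate(rule, rounds=200, n=100, budget_share=0.10):
    """Average benefit captured per funding round when projects are ranked
    by `rule` and funded greedily until the budget runs out."""
    random.seed(1)  # same project draws for every rule, for a fair comparison
    total = 0.0
    for _ in range(rounds):
        # Hypothetical projects: independent lognormal benefit and cost.
        projects = [(random.lognormvariate(0, 1),
                     random.lognormvariate(0, 1)) for _ in range(n)]
        budget = budget_share * sum(c for _, c in projects)
        spent = benefit = 0.0
        for b, c in sorted(projects, key=lambda p: rule(*p), reverse=True):
            if spent + c <= budget:
                spent += c
                benefit += b
        total += benefit
    return total / rounds

bcr = simulate(lambda b, c: b / c)   # rank by benefit-cost ratio
bmc = simulate(lambda b, c: b - c)   # rank by benefit minus cost
b_only = simulate(lambda b, c: b)    # rank by benefit, ignoring cost
print(f"loss from B-C ranking: {1 - bmc / bcr:.0%}")
print(f"loss from B ranking:   {1 - b_only / bcr:.0%}")
```

The BCR ranking captures the most benefit for the fixed budget; the two flawed rules each leave a share of the achievable benefit on the table.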
The attributes of a robust metric
Pannell (2013) described the requirements for a theoretically sound and practical decision metric for ranking environmental projects. He recommends:

BCR = [V × A × W × (1 − R)] / [(1 + r)^L × C]
where BCR stands for benefit–cost ratio, and benefits depend on the value (V) of the environmental assets; the likely adoption of new practices or behaviours (A); the effectiveness of the new practices at increasing environmental values (W); the risk of project failure (R); the time lag until benefits occur (L); and the discount rate (r). Benefits are divided by costs (C) to derive the BCR, with higher BCRs demonstrating a more cost-efficient project. All of the benefit-related variables are multiplied, not weighted and added, for reasons explained by Pannell (2013). We obtained distributions for each of these variables from a database of 129 projects that have been evaluated using INFFER (the Investment Framework for Environmental Resources — see Chapter 18).
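As a sketch, this metric translates directly into code. The input values below are invented for illustration, not drawn from the INFFER database:

```python
def bcr(V, A, W, R, L, r, C):
    """Benefit-cost ratio in the multiplicative form described in the text:
    asset value x adoption x effectiveness x probability of success,
    discounted over the time lag L at rate r, divided by cost C."""
    return V * A * W * (1 - R) / ((1 + r) ** L * C)

# Hypothetical project: asset value 100, 80% adoption, 50% effectiveness,
# 20% failure risk, 5-year lag, 5% discount rate, cost 30.
print(round(bcr(V=100, A=0.8, W=0.5, R=0.2, L=5, r=0.05, C=30), 2))  # 0.84
```

Because the benefit-related variables are multiplied, a near-zero value for any one of them (for example, negligible adoption) correctly drives the whole score towards zero.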
Essentially, our analysis involved evaluating and ranking projects using Pannell’s metric (given above) and an alternative metric with one or more weaknesses included — for example, V × A × W × (1 − R) / (1 + r)^L, which omits costs (C). By comparing the two results, we estimated the overall loss of environmental values from selecting relatively weak projects using the alternative metric. We tested the metrics at different program budget levels, from 2.5 per cent to 40 per cent of the budget required to fund all the projects. Altogether, the analysis simulated 27 million projects being considered in 270,000 project-prioritisation decisions.
Using weak metrics makes an enormous difference: the wrong projects get funded, resulting in big losses of environmental values. Where funding is tight (as it almost always is) we found that poor metrics resulted in environmental losses of up to 80 per cent — not much better than completely random, uninformed project selection.
The most costly errors were found to be omitting information about environmental values, project costs, or the effectiveness of management actions. Using a weighted-additive decision metric for variables that should be multiplied is another costly error commonly made in real-world decision metrics (e.g. adding cost in the Conservation Reserve Program’s environmental benefits index). We found that each of these errors can reduce potential environmental benefits by 30 to 50 per cent. Think about how hard it would be to double your budget (to achieve a bigger slice of the funding pie); a gain of that order could often be achieved, in effect, simply by strengthening the decision metric being used.
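A small example shows why additive weighting misleads. The weights and project scores below are invented: Project X protects a valuable asset but proposes completely ineffective works (W = 0), while Project Y is modest on every dimension.

```python
def multiplicative(V, W, A):
    """Multiplicative scoring: any zero factor zeroes the benefit."""
    return V * W * A

def weighted_additive(V, W, A, weights=(0.4, 0.3, 0.3)):
    """Weighted-additive scoring with hypothetical weights."""
    wv, ww, wa = weights
    return wv * V + ww * W + wa * A

x = dict(V=1.0, W=0.0, A=1.0)  # valuable asset, zero effectiveness
y = dict(V=0.5, W=0.5, A=0.5)  # modest on every dimension

print(multiplicative(**x), multiplicative(**y))      # 0.0 vs 0.125
print(weighted_additive(**x), weighted_additive(**y))
```

The multiplicative score correctly gives the ineffective project zero benefit, while the weighted-additive score ranks it above the genuinely useful one, because a high asset value compensates arithmetically for works that achieve nothing.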
What about the quality of the information?
Of course, it’s not just the structure of the metric calculation that can weaken the prioritisation. The quality of the information going into the calculation is also a factor (see Anderson et al. 1977 for the standard theoretical framework for calculating the value of information, applied to agriculture). We looked at the environmental losses resulting from the use of poor-quality information, such as inaccurate cost data, in the decision metric. We compared results from prioritising projects based on perfect information and on uncertain information.
Naturally, poorer quality information about projects results in some relatively weak projects being selected for funding. Surprisingly, however, we found that the quality of the decision metric makes a much bigger difference to environmental outcomes than the quality of the information used within it.
If a very poor metric is used, then the benefits of improving data quality from high uncertainty to perfect information are remarkably low: 3 to 6 per cent. Improving information quality (e.g. by collecting more or different types of data) only produces benefits greater than 10 per cent if a reasonably good decision metric is used, and even then only if the available budget is tight.
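A comparison of this kind can be sketched as below. This is not the study’s model: the project distributions and noise levels are arbitrary lognormals, and ‘poor metric’ here simply means ranking by benefit while ignoring cost. The qualitative pattern is the point: better information raises outcomes only modestly when the metric itself is weak.

```python
import random

def run(metric, noise, rounds=300, n=50, budget_share=0.10):
    """Average benefit captured per funding round when projects are ranked
    on noisy estimates. `metric` is 'good' (rank by estimated B/C) or
    'poor' (rank by estimated B, ignoring cost)."""
    random.seed(7)  # same project draws across settings, for paired comparison
    captured = 0.0
    for _ in range(rounds):
        projects = [(random.lognormvariate(0, 1),   # true benefit
                     random.lognormvariate(0, 1))   # true cost
                    for _ in range(n)]
        budget = budget_share * sum(c for _, c in projects)

        def score(b, c):
            # Multiplicative estimation error; noise=0 means perfect information.
            eb = b * random.lognormvariate(0, noise)
            ec = c * random.lognormvariate(0, noise)
            return eb / ec if metric == "good" else eb

        spent = benefit = 0.0
        for b, c in sorted(projects, key=lambda p: score(*p), reverse=True):
            if spent + c <= budget:
                spent += c
                benefit += b
        captured += benefit
    return captured / rounds

for metric in ("good", "poor"):
    gain = run(metric, noise=0.0) - run(metric, noise=0.8)
    print(f"{metric} metric: gain from perfect information = {gain:.2f}")
```

In runs of this sketch, the gain from perfect information is worth far less under the poor metric, because even perfectly measured inputs cannot rescue a ranking rule that ignores cost.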
That’s an amazing finding, which suggests that environmental managers (and policymakers) should, in the first instance, be more concerned about how they calculate a decision metric than about funding the acquisition of higher-quality (and inevitably much more expensive) information to feed into that metric (see Chapters 20 and 21).
Does it really matter?
Our results show that relatively simple improvements to metrics used for environmental decision-making can make a big difference to the environmental benefits generated by funded projects. Environmental budgets are usually small, relative to the problems faced, so good decision metrics are crucial.
It does really matter which decision metric you use. Another way of thinking about this is by considering how much effort people put into increasing environmental budgets. Of course, getting a bigger slice of the budget pie will help in achieving environmental outcomes. However, this analysis suggests that efforts to improve environmental decision processes may be even more beneficial than equivalent efforts devoted to increasing the total environmental budget. With less funding available for agri-environment schemes, the design of high-quality project selection metrics is critical.
Anderson, J.R., J.L. Dillon and J.B. Hardaker (1977) Agricultural Decision Analysis, Iowa State University Press, Ames.
Hajkowicz, S., K. Collins and A. Cattaneo (2009) ‘Review of agri-environment indexes and stewardship payments’, Environmental Management 43: 221–36.
Joseph, L.N., R.F. Maloney and H.P. Possingham (2009) ‘Optimal allocation of resources among threatened species: A project prioritization protocol’, Conservation Biology 23: 328–38.
Pannell, D.J. (2013) ‘Ranking environmental projects’, Working Paper 1312, School of Agricultural and Resource Economics, University of Western Australia.
Pannell, D.J. and F.L. Gibson (2014) ‘Testing metrics to prioritise environmental projects’, Working Paper 1401, School of Agricultural and Resource Economics, University of Western Australia.
Pannell, D. and A.M. Roberts (2010) ‘Australia’s National Action Plan for Salinity and Water Quality: a retrospective assessment’, Australian Journal of Agricultural and Resource Economics 54: 437–56.
Pannell, D., A.M. Roberts, G. Park and J. Alexander (2013) ‘Designing a practical and rigorous framework for comprehensive evaluation and prioritisation of environmental projects’, Wildlife Research 40: 126–33. Available at: dx.doi.org/10.1071/WR12072.