Best value performance indicators

The proposals

In the summer of 1999 a consultation process is planned on a proposed system of so-called ‘best value’ performance indicators (PIs) for all statutory Local Authority services in England and Wales, including social services, education, housing and the environment. Preliminary discussions with a number of interest groups have already been held, and a summary document of these discussions has been produced for the Department of the Environment, Transport and the Regions (DETR) by the Office for Public Management (April 1999). The views expressed in this summary appear to reflect current Government thinking, and this note is intended to widen the debate from what I see as the somewhat narrow perspectives currently in operation.

Positive aspects

There are positive aspects to some of the points made which should not be overlooked, since they could form starting points for improving any final proposals. Thus the summary report regards it as essential that PIs point to ‘questions that need to be asked, rather than providing absolute answers about performance’. Taken literally, this would seem to imply that they would not be used judgementally and would not be presented as definitive statements about the real performances of Local Authorities. Unfortunately, the report also talks about the PIs as the basis for setting statutory ‘targets’, and this contradicts the use of PIs as ‘screening devices’ that merely raise questions which then require further investigation. The report also talks about the need for authorities to ‘be grouped into comparable families’, thus recognising implicitly the need to make adjustments for background factors, such as poverty measures, over which authorities have little direct control. There is no further discussion of how this might be done or what factors are relevant, but this does provide another point of departure for a wider debate than may currently be envisaged by the government.

Problematic issues

A dominant feature of this report is its almost total avoidance of any attempt to define its purposes, the nature of PIs or the evidence that already exists about the effects of introducing PIs, most notably in Education. The report gives no hint of the controversy that has surrounded the use of PIs in education and health and the increasing evidence about the undesirable side effects that can occur when official ‘targets’ are based upon such indicators.

Thus, a great deal of work has been done demonstrating how indicators can be misleading measures of ‘true’ performance unless very careful adjustments are made for contextual factors, such as prior achievement in the case of schools. The report does not seem to regard the collection of appropriate data as really problematic, yet data allowing adequate contextual adjustments are likely to be very difficult indeed to obtain. The report is also silent about the problem of reliability. From educational research we know that the sampling errors attached to indicators for schools are so large that precise comparisons among institutions are infeasible and most cannot be distinguished from the average (see Goldstein and Spiegelhalter, 1996, for a technical discussion of these issues). We do not know whether the same problem applies to comparisons of Local Authority services, but it is certainly an issue that needs to be addressed.

Fairness and Equity

Neither of the above words appears in the report. Yet some of the key objections that can be made about the use of PIs revolve around just these two issues. Equity is to do with whether, in an agreed objective sense, identifiable groups are discriminated against by the use of particular PIs. Thus, in education, where the published league tables are based upon ‘raw’ unadjusted test scores and exam results, it is easy to show that socially disadvantaged groups suffer (indirect) discrimination because the schools they attend will more often be among those designated as having low performance or to be ‘failing’. The fact that government legislation promotes the publication of league tables does not diminish their equity consequences.

The issue of fairness has more to do with perceptions of blame and responsibility. Thus, comparisons based upon raw results are often seen to be unfair because schools view their positions in the league tables as largely a function of the social composition of their intakes, over which they have little control. In the case of a Local Authority, basing a judgement of social service provision on delivery indicators may be seen as unfair if government policies, such as those concerned with resource allocation, are viewed as creating difficulties over which the authority likewise has little control. Similarly, equity may be an issue if performance indicators are strongly influenced by poverty or population changes.

The report expresses concern that some Authorities may ‘cheat’. By this the report presumably means that Authorities will try to maximise their performance by ‘manipulating figures’. Indeed, it is very likely that such manipulation will occur. It is commonly assumed that schools attempt to maximise their examination performance by putting resources into areas where the largest ‘payoff’ is likely. In almost any high stakes public accountability system individual actors will attempt to maximise their positions, and it would be very surprising if this did not happen. The question that has to be asked is whether such ‘playing the system’ is a legitimate activity – and I am not here concerned with strict legality but rather with moral legitimacy. In particular, one has to ask whether, within a system that is perceived to be inequitable and perhaps also unfair, individuals and institutions have a moral right, even obligation, to disobey the rules. Thus, for example, a school or Local Authority that believes a PI system operates against it because of factors over which it has no control may consider that it has a duty to its pupils or citizens to defend its position against a biased external system by whatever reasonable means it can. Many might feel that such a moral concern may override doubts about the breaking of externally imposed rules.

In conclusion

So far there are no firm proposals, merely strong indications of government preferences. Of course there is a strong general case for accountability mechanisms for both public and private bodies. The debate is about how to do this in ways that are seen to be reasonable and fair, and that can pass tests of equity. My argument is that the kind of ‘joined up naivety’ exemplified in this report does little to advance that debate and obscures the key issues that should be at its heart.

We have a government that claims to be concerned to base its policies upon ‘evidence’. In this area there is already a great deal of evidence that argues for caution and for sensitivity to harmful side effects. It is just possible that the government can be persuaded to listen seriously to this evidence if it is presented coherently and publicly, and if the argument is made that in the long run it is in everyone’s interest to proceed with care. It may also be possible to persuade the government that it should, at the very least, set up proper evaluations of any new system, piloting aspects of it before introducing it universally. Local Authorities will need to provide a coherent, well argued response to the consultations on these proposals if there is to be any chance of injecting rational decisions into any new policies.

Reference

Goldstein, H. and Spiegelhalter, D. J. (1996). League tables and their limitations: statistical issues in comparisons of institutional performance. Journal of the Royal Statistical Society, Series A, 159, 385–443.

Harvey Goldstein. 22 May 1999
