Lessons from political opinion polls: using surveys to value non-market goods such as human life

Putting a monetary value on human life helps decision makers balance the cost of a safety measure against the reduction in harm it delivers. New research suggests that the value the UK Government assigns to a human life – the Value of a Prevented Fatality (VPF) – is too low.

About the research

The Government’s adoption of the VPF has led to its wide use in judging how much ought to be spent on health and safety measures, from road and rail transport, through nuclear reactor protection systems, to the National Health Service. The VPF is set at £1.83M (2016 £s) per life saved (or fatality prevented) and is based on a small-scale survey conducted in 1997. New research compares the survey used to establish the Government’s VPF with those used routinely by polling organisations measuring political opinion, and throws doubt on the validity of the 1997 survey. The sample size used in the VPF survey was far lower than the number needed for a reliable figure: a minimum of between 2,000 and 3,000 people should have been consulted, yet the views of only 167 people were sought. The £1.83M figure undervalues the lives of UK citizens, as it is less than a quarter of what it should be. This has negative implications for the Government’s policies on safety and health.

Research findings

Lessons from opinion polls

Political polls often claim to achieve a “3% margin of error”. This means that if a survey finds a party has 45% support, the pollsters will be 95% confident that between 42% and 48% of the voters intend to back that political party or candidate. Surveys with just two choices are the easiest to analyse, such as those conducted before the 2016 EU Referendum (remain in the European Union or leave). Opinion polls with multiple options may be analysed in a similar way by framing the question as: “Do you intend to vote for this party or for one of the others?” A straightforward mathematical process can then be used to determine the smallest number of people that must be consulted to give reasonable accuracy.
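The arithmetic behind a poll’s margin of error is standard. As a minimal sketch in Python (the function name and defaults are illustrative, not taken from the research), the worst-case formula for a proportion, n = z²p(1−p)/m², gives just over 1,000 respondents for a 3% margin at 95% confidence:

```python
from math import ceil

def poll_sample_size(margin: float, z: float = 1.96, p: float = 0.5) -> int:
    """Smallest sample giving the stated margin of error for a proportion,
    via the standard large-sample formula n = z^2 * p * (1 - p) / margin^2.
    p = 0.5 is the worst case, giving the widest confidence interval."""
    return ceil(z**2 * p * (1 - p) / margin**2)

# A 3% margin of error at 95% confidence needs just over 1,000 people.
print(poll_sample_size(0.03))  # -> 1068
```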

More than 1,000 people need to be consulted to achieve a 3% margin of error in a political opinion poll, but a larger sample may be needed when surveys ask about continuous variables. For example, the Office for National Statistics (ONS) recently questioned 18,000 people to estimate the spread of wealth across UK citizens in the years 2014–2016.

New research shows how to find the minimum sample size for surveys seeking to determine a general, continuously variable quantity, such as the Government’s VPF, to the same 3% margin of error used by political opinion polls. The sample needed to measure the VPF was found to lie between 2,000 and 3,000 people, against the 167 actually consulted in the 1997 VPF survey. This calls into question the VPF currently used by UK Government departments and agencies to decide how much to spend on protecting human life.
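The published analysis derives its 2,000–3,000 figure from the properties of wealth-dependent survey responses; the sketch below instead uses the generic large-sample formula for estimating a mean to a relative margin of error, n = (z·CV/m)², with the coefficient of variation CV treated as an assumed input, purely to illustrate why a continuously variable quantity demands a larger sample than a binary poll:

```python
from math import ceil

def continuous_sample_size(cv: float, rel_margin: float = 0.03,
                           z: float = 1.96) -> int:
    """Sample size for estimating the mean of a continuous quantity to a
    given relative margin of error: n = (z * cv / rel_margin)^2, where
    cv is the coefficient of variation (standard deviation / mean)."""
    return ceil((z * cv / rel_margin) ** 2)

# Illustrative values only: a spread of views with cv around 0.7-0.8
# already pushes the required sample into the 2,000-3,000 range.
for cv in (0.7, 0.75, 0.8):
    print(cv, continuous_sample_size(cv))  # 2092, 2401, 2732
```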

When should opinion surveys be used?

Opinion surveys are not the most reliable way to measure the value of a good. There is a generally accepted hierarchy of methods for measuring the value of any good:

1. Market value – if a free market exists, this is the most reliable method.

2. Revealed preferences – measuring the value of the good by observing consumers’ behaviour, on the assumption that consumers have considered a set of alternatives before making a decision.

3. Stated preferences – valuing the good from the declared inclinations of consumers, typically gathered through opinion surveys.

Revealed preference methods, which conform to John Locke’s dictum, “I have always thought the actions of men the best interpreters of their thoughts”, will generally give more reliable results than opinion surveys, as people are not always accurate in the statements they make. But if a stated preference method is to be used, perhaps as a last resort, policymakers then need to engage critically with the opinion surveys involved, so that they can make the most of the research where appropriate while ensuring that the most rigorous standards have been maintained in the gathering of evidence.

Opinion surveys consolidate different people’s judgements of a value into a single figure, used to represent the whole population. But there are pitfalls:

1. Selection of the sample and its size. The new research shows that the sample required for survey measurement of the VPF needs to be significantly larger than the roughly 1,000 needed for a political opinion poll with a 3% margin of error. The sample must also be chosen randomly from the target population as a whole. Special measures may be necessary: for example, the ONS, in its wealth survey, sampled addresses likely to house wealthier families at a higher rate.

2. Consolidation process. When the results of the survey are analysed, it is essential that any statistical method used gives equal weight to the views of each person in the sample (use of the median, for example, effectively censors or trims the views of all respondents except one or, at most, two). Structural View Independence (SVI) is the key criterion here, requiring that the consolidation process be free of in-built, structural biases that would render the views of some people more important than those of others. Using the geometric mean to consolidate the views in the survey will produce a predictably low answer, implying that the opinions of people assigning a high value are systematically accorded less worth than those of people assigning a low one, while the root mean square will always give a high answer. Only the arithmetic mean (found by adding up all the views and dividing by their number) has been found to satisfy the SVI criterion, as the numerical sketch below illustrates.
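As a quick numerical illustration (the response values below are invented, not survey data), the three statistics fall in a predictable order on any positive sample: the geometric mean at or below the arithmetic mean, which in turn sits at or below the root mean square, while the median reflects at most two respondents’ views:

```python
import statistics
from math import prod, sqrt

# Hypothetical valuations (in £M): skewed, as valuation data often are.
views = [0.5, 0.8, 1.0, 1.2, 1.5, 2.0, 3.0, 5.0, 8.0, 12.0]

arithmetic = statistics.mean(views)                # weights every view equally
geometric = prod(views) ** (1 / len(views))        # predictably low
rms = sqrt(statistics.mean(v * v for v in views))  # predictably high
median = statistics.median(views)                  # ignores all but two views

print(f"geometric {geometric:.2f} <= arithmetic {arithmetic:.2f} <= RMS {rms:.2f}")
print(f"median {median:.2f}")  # 1.75, well below the arithmetic mean of 3.50
```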

Policy Report 54: December 2019


Contact the researchers

Professor Philip Thomas

Professor of Risk Management


Case Study: LIBOR

Opinion censoring played a key role in one of the millennium’s biggest banking scandals. LIBOR is a global benchmark interest rate used to set a range of financial deals worth billions of pounds.

In 2007, banks were not lending to each other, so there was no active market. In these conditions, a survey was used to establish LIBOR.

Each day, a group of leading banks submits the interest rates at which they believe others would lend to them. The rate at which a bank says others would lend to it can be seen as a de facto measure of trust, reflecting the confidence other banks have in its financial health.

The utility of this survey method was undermined by three factors (the first two are illustrated in the sketch after this list):

1. The simple arithmetic average was rejected in favour of a censored average

2. Censoring of both high and low values led to half of the sample being discarded

3. Each respondent had an incentive to falsify its view, since a bank would look more creditworthy if it stated a low rate.
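A minimal sketch of the censoring at issue (the submission figures are invented, and the real procedure had further detail, but the trimming logic is the same): the top and bottom quarters of the panel’s submissions are discarded before averaging, so half the views never count:

```python
import statistics

def libor_style_average(submissions: list[float]) -> float:
    """Censored average in the spirit of the LIBOR method (simplified):
    discard the top and bottom quarters, then average the middle half."""
    ordered = sorted(submissions)
    k = len(ordered) // 4
    return statistics.mean(ordered[k:len(ordered) - k])

# Eight hypothetical daily submissions (% rates); the two highest and the
# two lowest are thrown away before averaging.
rates = [2.05, 2.15, 2.20, 2.25, 2.30, 2.35, 2.45, 2.85]
print(libor_style_average(rates))  # 2.275, mean of the middle four only
print(statistics.mean(rates))      # 2.325, the simple arithmetic average
```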

The censoring inherent in the LIBOR method made false reporting inevitable. Between 2012 and 2015, Barclays, JP Morgan, the Swiss bank UBS, Royal Bank of Scotland and Deutsche Bank were all fined sums running from hundreds of millions to billions of pounds for manipulating LIBOR.

The Financial Conduct Authority now wishes to end LIBOR by 2021 because of its over-reliance on expert judgment in the absence of an active market.

Policy recommendations

Economic indicators such as the VPF are a key factor in policy decision-making on everything from banking to nuclear risk management. Close analysis frequently reveals a lack of rigour in their measurement by survey, with serious consequences.

Questions for Policymakers to ask about surveys:

1. Are you sure that a survey is the best method to use to quantify this value? Could this be better sourced through observation of a market or through revealed preferences?

2. Has each opinion in the sample been accorded equal weighting, or have statistical methods used in the analysis censored or trimmed responses? Has the simple arithmetic average been used to consolidate responses?

3. Is the sample size large enough to give an accurate answer for use in policy making? Has it been chosen randomly from across the whole population the policy will affect? Has the sampling avoided the pitfall of being selected from a small and unrepresentative pocket?

If the answer to any of these questions is no, contact the researchers to discuss the usefulness and limitations of the research for your policy.

References

Thomas, P., 2020, “Minimum sample size for the survey measurement of a wealth-dependent parameter with the UK VPF as exemplar”, Measurement, Vol. 150.

Thomas, P. J., 2018, “Pitfalls in survey measurements of economic parameters”, XXII World Congress of the International Measurement Confederation (IMEKO), Belfast, 3–6 September, J. Phys.: Conf. Ser., Vol. 1065, 072009.

Thomas, P. J., 2014, “Structural view independence: A criterion for judging the objectivity of economic parameters measured by opinion survey”, Measurement, Vol. 47, pp. 161–177, January.

Author

Professor Philip Thomas (University of Bristol)
