Professor Willy Aspinall
1.1 Old Council Chamber in Wills Memorial Building
This tutorial will be delivered by Professor Willy Aspinall.
Most models and numerical simulators that imitate real-world processes or estimate risks involve some parameters for which adequate or suitable data are lacking to characterize their uncertainties with conventional statistical distributions; such limitations can also impinge on meaningful model sensitivity analysis. In particular, misconstruing the lower or upper bounds of the uncertainty ranges of imperfectly known parameters can lead to misleading results and poor decisions. Where data are sparse (e.g. volcanic eruptions, or induced seismicity) or novel, non-existent or emergent (e.g. carbon capture and storage, or lethal zoonoses such as SARS or Ebola), the only recourse for informative guidance on quantitative inputs to models may be expert judgement.
The challenge then is how to conduct an elicitation with a group of (often strongly opinionated) experts, and how to derive objective results by combining their usually diverse, even divergent, views. Simple averaging of the experts' distributions is an obvious (and widely used) approach, but it can yield extremely wide uncertainty intervals; other methods, based on mathematical scoring rules, can be superior. Cooke developed a formal pooling approach, called the Classical Model, in which experts are weighted differentially according to their performance on a set of empirical 'seed' quantities, whose values are or become known but are not known to the experts at the time of elicitation. In a range of recent studies, comparison with simple equal-weights averaging shows important performance benefits from using the Classical Model for rational synthesis of expert uncertainty knowledge. There are, however, challenges in conducting such an elicitation objectively, fairly, reliably and efficiently.
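To give a flavour of the performance-weighting idea, here is a much-simplified, standard-library-only Python sketch; it is illustrative only, not Cooke's published algorithm. It assumes each expert supplies 5%, 50% and 95% quantiles for the seed questions, replaces the Classical Model's chi-square calibration score with a likelihood-style proxy, and omits the information-score component entirely. All function names and data are hypothetical.

```python
import math

# Theoretical probabilities that a seed realization falls below the 5% quantile,
# between the 5% and 50%, between the 50% and 95%, or above the 95% quantile.
P = [0.05, 0.45, 0.45, 0.05]

def bin_counts(quantiles, realizations):
    """Count how many seed realizations fall in each inter-quantile bin.
    `quantiles` is a list of (q05, q50, q95) triples, one per seed question."""
    counts = [0, 0, 0, 0]
    for (q05, q50, q95), x in zip(quantiles, realizations):
        if x < q05:
            counts[0] += 1
        elif x < q50:
            counts[1] += 1
        elif x < q95:
            counts[2] += 1
        else:
            counts[3] += 1
    return counts

def calibration_proxy(quantiles, realizations):
    """Likelihood-style calibration proxy: exp(-N * KL(empirical || theoretical)).
    Cooke's actual score is a chi-square tail probability of essentially the same
    divergence statistic; this simplified proxy preserves the ranking idea."""
    n = len(realizations)
    counts = bin_counts(quantiles, realizations)
    kl = 0.0
    for c, p in zip(counts, P):
        s = c / n
        if s > 0:  # empty bins contribute zero to the divergence
            kl += s * math.log(s / p)
    return math.exp(-n * kl)

def performance_weights(experts):
    """Normalize each expert's calibration proxy into a pooling weight."""
    raw = {name: calibration_proxy(q, r) for name, (q, r) in experts.items()}
    total = sum(raw.values())
    return {name: v / total for name, v in raw.items()}

# Hypothetical illustration: expert A brackets the seed values sensibly,
# while expert B is grossly overconfident (every realization falls outside
# B's 5-95% range), so A receives nearly all the pooling weight.
seeds = [1.0, 2.0, 3.0, 4.0]
experts = {
    "A": ([(0.5, 1.1, 2.0), (1.0, 2.1, 3.5), (2.0, 2.9, 4.5), (3.0, 3.9, 5.5)], seeds),
    "B": ([(9.0, 9.5, 9.9)] * 4, seeds),
}
weights = performance_weights(experts)
```

In a real application the weights would then multiply each expert's assessed distribution on the target (non-seed) questions before pooling; the genuine Classical Model also multiplies in an information score and applies an optimized calibration cutoff, both omitted here for brevity.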
This tutorial will introduce the algorithmic basis of the Classical Model, precepts for conducting an elicitation, and some case histories. The highlight of the session will be an illustrative elicitation exercise in which workshop participants act as the experts. Mention will also be made of a complementary elicitation procedure, called paired comparison with probabilistic inversion (also due to Cooke and colleagues), by which experts' qualitative preference choices between options or alternatives can be converted into objective ranking scores with measures of uncertainty; this approach has proved useful for determining research priorities and for ranking science-based policy options, for instance.
A one-hour presentation on the ins and outs of eliciting expert judgements for quantifying uncertainties, followed by a 1.5-hour practical elicitation with workshop participants, and a short closing discussion. Lunch will be provided.
Just a modicum of knowledge of some science and the scientific method! The elicitation practical requires each participant to be armed with a pencil and eraser (seriously!).
Numbers are limited, so booking via Eventbrite is essential.
1) Cooke, R.M. (1991). Experts in Uncertainty. Oxford University Press, 321pp.
2) Cooke, R.M. and Goossens, L.H.J. (2008). TU Delft expert judgment data base. Reliability Engineering and System Safety, 93, 657–674.
3) Bamber, J. and Aspinall, W.P. (2013). An expert judgement assessment of future sea level rise from the ice sheets. Nature Climate Change, 3, 424–427. doi:10.1038/nclimate1778
4) Aspinall, W.P. and Cooke, R.M. (2013). Expert Elicitation and Judgement. In "Risk and Uncertainty Assessment in Natural Hazards", Rougier, J.C., Sparks, R.S.J., Hill, L. (eds). Cambridge University Press, Chapter 4, 64–99.
Please contact email@example.com for more information about the tutorials.
Download the Uncertainty workshops poster (Office document, 328kB).
Tutorial resources can be found here.