Lecture presented by the 2013 Royal Society of Victoria Research Medalist, Professor Mark Burgman FAA, School of Botany at the University of Melbourne
Because they lie at the interface of science and public policy, environmental decisions often depend on the preferences of vocal stakeholders and opinion makers. In highly charged political and social debates, scientists provide the technical understanding, data, expert judgements and predictions of outcomes for alternative scenarios. Inevitably, differences of opinion arise among experts when they estimate facts or make predictions.
However, expert judgements are essential when time and resources are stretched or when we face novel dilemmas requiring fast solutions. Although conservation biology is well served by models for assessing the viability of populations and for designing optimal reserves, expert judgement remains pervasive across the field.
Typically, experts are defined by their qualifications, track record and experience. The social expectation hypothesis argues that more highly regarded, better-credentialed and more experienced experts give better advice. We know from cognitive psychology, however, that expert judgements are coloured by personal idiosyncrasies, perceptual illusions and context. Values and personal ambitions lead to motivational biases in expert judgement, even when the experts are unaware of them.
To explore these issues experimentally in a range of conservation biology and biosecurity contexts, we addressed several specific questions, including: Can frailties in judgement be counteracted with better question design? Can experts predict how they will perform, and how their peers will perform, on specified sets of relevant questions?
The insights gained from these experiments helped us to design and test a set of protocols for eliciting expert judgements about facts. An opportunity arose to validate the protocols in a comparative test of their performance against alternative tools including formal statistical models, prediction markets and other recently developed prediction and estimation tools. The validations were devised and managed by the US Office for National Security in a ‘tournament’ known as ‘the intelligence game’.
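Structured elicitation protocols of this kind typically pool the probability estimates of several experts rather than relying on any individual. As an illustration only (the specific protocol and numbers below are not from the lecture), a minimal sketch of an equally weighted linear opinion pool:

```python
# Minimal sketch: combining several experts' probability estimates for a
# binary event with a linear opinion pool (a weighted average).
# The experts and values are hypothetical.

def linear_opinion_pool(estimates, weights=None):
    """Return the weighted average of expert probability estimates."""
    if weights is None:
        # Default to equal weights across experts.
        weights = [1.0 / len(estimates)] * len(estimates)
    return sum(w * p for w, p in zip(weights, estimates))

# Three hypothetical experts judge the probability of an event.
group_estimate = linear_opinion_pool([0.2, 0.35, 0.5])
print(round(group_estimate, 2))  # 0.35
```

More elaborate schemes weight experts by their track record on test questions, but the simple average is a common and surprisingly robust baseline.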
This presentation outlines the evolution of thinking about expert judgement from its use in species conservation to its wider use in biosecurity. It shows the results of experiments and tests that led to the proposition that we should treat expert judgement with the same reverence as we treat data. That is, we should deploy tools to collect expert judgements in repeatable and transparent ways, we should validate the estimates with data, and we should invest in tools that improve the accuracy and calibration of expert judgements, improving their performance over time. Reducing the current divergence in expert scientific judgements will improve the certainty of advice we offer to politically and economically constrained policy makers while increasing public confidence in our capacity to make a difference.
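Validating judgements with data, as proposed above, requires a scoring rule. One standard choice for probability forecasts of binary events is the Brier score; the sketch below (with illustrative numbers, not data from the tournament) shows how an expert's calibration can be quantified:

```python
# Minimal sketch: scoring probability forecasts against observed binary
# outcomes with the Brier score (mean squared error of the forecasts).
# 0 is a perfect score; lower is better. Data are hypothetical.

def brier_score(forecasts, outcomes):
    """Mean squared difference between forecast probabilities (0..1)
    and observed binary outcomes (0 or 1)."""
    n = len(forecasts)
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / n

# A hypothetical expert's forecasts and what actually happened.
forecasts = [0.9, 0.7, 0.2, 0.6]
outcomes  = [1,   1,   0,   0]
print(round(brier_score(forecasts, outcomes), 3))  # 0.125
```

Scoring rules like this make it possible to track whether an expert's accuracy and calibration actually improve over time, which is the kind of feedback the proposed tools would provide.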