
Fifteen Eighty Four

Academic perspectives from Cambridge University Press

19 May 2020

Public Health Decisions when the Science is Uncertain

Liam Kofi Bright, Richard Bradley

Governments across the world have responded to the Covid-19 pandemic with measures that are unprecedented in peacetime in the degree to which they seek to reshape the behaviour of individuals and organisations. We now face difficult decisions about when and how to relax social distancing policies. Policy-makers have been drawing heavily on scientific expertise and advice to fashion the right policy, a fact which has also played no small role in the way that the policies have been justified to the public. This advice is in turn supported by epidemiological models of the spread of the virus in populations. Such modelling is, however, fraught with difficulty. While the basic causal relationships are well understood, the sorts of details required for accurate prediction are not. What implications does this have for the way that policy choices should be made?

Let’s start by looking at an idealised, but widespread, view about how policy decisions should draw on science. We want to choose the policy option with the best outcome. But we usually don’t know for sure what the outcome of any policy will be. So we should choose the policy with the greatest expected benefit. To determine which this is, we need to know two things: the benefit value of each of the different possible outcomes of a policy choice, and the probability of each outcome given the implementation of each policy. 
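
To make the rule concrete, here is a minimal sketch in Python. The policies, outcomes, benefit values and probabilities are all invented for illustration; none of them come from any real model or dataset.

```python
# A minimal sketch of the expected-benefit rule, with made-up numbers.
# Each policy has possible outcomes, each given a benefit value and the
# probability of that outcome occurring if the policy is implemented.
policies = {
    "strict_distancing": [
        # (benefit value, probability of this outcome under the policy)
        (-20, 0.7),   # epidemic suppressed, but at a large economic cost
        (-60, 0.3),   # economic cost plus a significant outbreak anyway
    ],
    "minimal_restrictions": [
        (-10, 0.2),   # mild outbreak, little economic disruption
        (-90, 0.8),   # severe outbreak with many lives lost
    ],
}

def expected_benefit(outcomes):
    """Sum of benefit times probability over a policy's possible outcomes."""
    return sum(benefit * prob for benefit, prob in outcomes)

for name, outcomes in policies.items():
    print(name, expected_benefit(outcomes))

# The idealised rule: choose the policy with the greatest expected benefit.
best = max(policies, key=lambda name: expected_benefit(policies[name]))
print("choose:", best)
```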

Determining these two factors for the purpose of assessing pandemic responses is very difficult, however. We are primarily interested here in the consequences of a policy for people’s lives and livelihoods. Feasibility constraints require the two to be traded off to some degree: policies that save more lives by suppressing the rate of infection cause significant damage to many people’s livelihoods, while those avoiding economic disruption do so at the cost of more lives lost. Economists have tools for making this trade-off, which in practice involve attaching a monetary value to lives saved (of a certain duration and quality), derived from individuals’ preferences for trade-offs between their safety and their wealth. There is much that could be critiqued in these methods, but we will focus on the second factor – the probabilities of outcomes.

It is here that epidemiological models of the pandemic play a crucial role, by supplying predictions about how many people will be infected, how many will be hospitalised and how many will die under various policy scenarios. In the UK, the government’s adoption of social distancing measures was strongly influenced by a model of the pandemic produced by Imperial College’s Covid-19 team. But many other models have been developed, based on different hypotheses about the pertinent causal variables and the relationships between them (e.g. between infection rate and sociability), or using different estimates of crucial parameters (such as the fatality rate amongst the infected) or of the state of the population (such as how many people are already infected). These different models give quite different predictions about the outcomes of interest. (Here’s a simulator that allows you to see what difference choices of parameter values make).
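
As a rough illustration of why parameter choices matter so much, here is a toy discrete-time SIR-style simulation in Python. It is not any of the models mentioned above, and its inputs (a reproduction number, an infection-fatality ratio, a recovery time, a population size) are purely hypothetical; the point is only that modest changes in such inputs produce very different predicted death tolls.

```python
# A toy discrete-time SIR-style simulation, purely to illustrate how
# parameter choices drive predictions; it is not any published model.
def simulate_deaths(population, initially_infected, r0, infection_fatality_ratio,
                    recovery_days=10, days=365):
    susceptible = population - initially_infected
    infected = initially_infected
    deaths = 0.0
    beta = r0 / recovery_days      # daily transmission rate per infected person
    gamma = 1.0 / recovery_days    # daily rate at which infections resolve
    for _ in range(days):
        new_infections = beta * infected * susceptible / population
        resolved = gamma * infected
        susceptible -= new_infections
        infected += new_infections - resolved
        deaths += resolved * infection_fatality_ratio
    return deaths

# Two parameter choices that might both look plausible early in an epidemic.
for r0, ifr in [(2.5, 0.005), (3.0, 0.01)]:
    d = simulate_deaths(population=60_000_000, initially_infected=1_000,
                        r0=r0, infection_fatality_ratio=ifr)
    print(f"R0={r0}, IFR={ifr}: roughly {d:,.0f} deaths")
```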

Underlying these differences is the simple fact that there is a good deal of uncertainty about all of these elements. Take estimates of the infection-fatality ratio. These have varied a lot between countries, with a recorded ratio of over 10% in Italy and just over 1% in Germany, for instance. This probably reflects differences in the amount of testing for infection being conducted as much as anything else. Estimates of the percentage of the population infected also vary enormously. A different kind of uncertainty surrounds the question of which factors to model. The initial Imperial model did not include the effect of the swamping of health systems on fatalities due to causes other than Covid-19, nor the endogenous effect of the virus’s spread on social distancing (e.g. due to people’s fear). Other models incorporate one of these; few incorporate both.

With time, estimates will improve, as will the modelling that draws on them. Indeed, it is important that governments take steps to facilitate data gathering and allow for a more informed scientific understanding of the situation. In the meantime, it is critical that the amount of uncertainty contained in the models’ predictions is adequately captured, so that policy-makers know what they are dealing with. Uncertainty about inputs to the models (e.g. estimates of the numbers currently infected) can be captured by making probabilistic predictions of outcomes. But we still need to account for the other uncertainties, especially those regarding the models themselves.

One way of doing so is to specify not just a single probability distribution over the outcomes of interest, but a family of them. If we think of each member of the family as the distribution we get from a particular choice of parameter values and modelling assumptions, then the size of the family gives a measure of how much uncertainty we face about the consequences of our policy choice. By looking at the range of associated estimates of the expected benefits of a policy, one gets a measure of how robust an assessment of a policy’s usefulness is to scientific uncertainty.
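
A sketch of what this looks like in practice: each member of the hypothetical family below stands for one combination of parameter values and modelling assumptions, and instead of a single expected-benefit figure we report the range that the family generates. All numbers are illustrative.

```python
# Reporting a range of expected benefits rather than a single number.
# Each distribution in the family corresponds to one set of assumptions.
family = [
    {"bad": 0.2, "good": 0.8},
    {"bad": 0.4, "good": 0.6},
    {"bad": 0.6, "good": 0.4},
]
benefits = {"bad": -80, "good": -10}   # illustrative benefit values

estimates = [sum(benefits[o] * dist[o] for o in dist) for dist in family]
print("expected benefit lies between", min(estimates), "and", max(estimates))
```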

It might seem that little is achieved, other than added complexity, by presenting policy-makers with ranges of estimates rather than a single precise one. But this new way of thinking about our uncertainty allows policy-makers to choose actions that can be expected to yield benefits robustly, or to choose very cautiously by favouring actions that have acceptable expected consequences under all estimates. This does not answer the question of just how much robustness to seek or how cautious we should be. But it forces this question into the open, to be answered in a way that is appropriate to the nature of what is at stake, i.e. by debate amongst those affected by the policies and not just implicitly by modellers.
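
Continuing the same toy illustration, one cautious rule of the kind described here compares policies by their worst-case expected benefit across the whole family of distributions, rather than by any single estimate. Again, the policies and numbers are invented purely for illustration.

```python
# A cautious (worst-case) comparison of two hypothetical policies, each
# with its own family of distributions over a "bad" and a "good" outcome.
families = {
    "policy_A": [{"bad": 0.2, "good": 0.8}, {"bad": 0.5, "good": 0.5}],
    "policy_B": [{"bad": 0.1, "good": 0.9}, {"bad": 0.8, "good": 0.2}],
}
benefits = {"bad": -80, "good": -10}

def expected(dist):
    return sum(benefits[o] * dist[o] for o in dist)

def worst_case(policy):
    return min(expected(dist) for dist in families[policy])

# policy_A's worst case is -45, policy_B's is -66, so the cautious rule
# prefers policy_A even though some estimates favour policy_B.
print("cautious choice:", max(families, key=worst_case))
```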

Decision Theory with a Human Face by Richard Bradley

About The Authors

Liam Kofi Bright

Liam Kofi Bright is Assistant Professor of Philosophy at the London School of Economics and Political Science. ...


Richard Bradley

Richard Bradley is Professor of Philosophy at the London School of Economics and Political Science, and the author of Decision Theory with a Human Face (Cambridge University Press,...

