The post Public Health Decisions when the Science is Uncertain first appeared on FifteenEightyFour | Cambridge University Press.

Let’s start by looking at an idealised, but widespread, view about how policy decisions should draw on science. We want to choose the policy option with the best outcome. But we usually don’t know for sure what the outcome of any policy will be. So we should choose the policy with the greatest expected benefit. To determine which this is, we need to know two things: the benefit value of each of the different possible outcomes of a policy choice, and the probability of each outcome given the implementation of each policy.
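The idealised decision rule described above can be sketched in a few lines of code. All policies, outcomes and numbers here are hypothetical placeholders, not real estimates:

```python
# Sketch of the expected-benefit rule: for each policy, weight the benefit
# value of each possible outcome by its probability, then pick the policy
# with the highest expectation. Numbers are purely illustrative.

def expected_benefit(outcome_probs, outcome_values):
    """Expected benefit = sum over outcomes of probability * benefit value."""
    return sum(p * v for p, v in zip(outcome_probs, outcome_values))

# Two made-up policies, each with three possible outcomes
# (probabilities first, benefit values second).
policies = {
    "suppress": ([0.6, 0.3, 0.1], [100, 40, -50]),
    "mitigate": ([0.2, 0.5, 0.3], [120, 30, -80]),
}

best = max(policies, key=lambda name: expected_benefit(*policies[name]))
```

On these invented numbers, "suppress" has expected benefit 67 against 15 for "mitigate", so the rule selects it; everything turns on where the probabilities and values come from, which is the question the rest of the post takes up.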

Determining these two factors for the purpose of assessing pandemic responses is very difficult, however. We are primarily interested here in a policy’s consequences for people’s lives and livelihoods. Feasibility constraints require that these be traded off to some degree: policies that save more lives by suppressing the rate of infection cause significant damage to many people’s livelihoods, while those that avoid economic disruption do so at the cost of more lives lost. Economists have tools for making this trade-off, which in practice involve attaching a monetary value to lives saved (of a certain duration and quality), derived from individuals’ preferences for trade-offs between their safety and their wealth. There is much that could be critiqued in these methods, but we will focus on the second factor – the probabilities of outcomes.

It is here that epidemiological models of the pandemic play a crucial role, supplying predictions about how many people will be infected, how many will be hospitalised and how many will die under various policy scenarios. In the UK, the government’s adoption of social distancing measures was strongly influenced by a model of the pandemic produced by Imperial College’s Covid-19 team. But many other models have been developed, based on different hypotheses about the pertinent causal variables and the relationships between them (e.g. between infection rate and sociability), on different estimates of crucial parameters (such as the fatality rate amongst the infected), or on different assessments of the state of the population (such as how many people are already infected). These different models give quite different predictions about the outcomes of interest. (Here’s a simulator that allows you to see what difference choices of parameter values make.)
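To see how sensitive such predictions are to parameter choices, here is a minimal SIR-type simulation (a standard textbook epidemic model, not the Imperial College model itself); the transmission rate `beta` and all other values are illustrative assumptions:

```python
# Minimal discrete-time SIR model: small changes in the assumed
# transmission rate beta produce large differences in the predicted
# total number of people ever infected. All parameters illustrative.

def sir_final_infected(beta, gamma=0.1, pop=1_000_000, i0=100, days=365):
    """Run a discrete-time SIR epidemic; return cumulative infections."""
    s, i, r = pop - i0, i0, 0
    for _ in range(days):
        new_infections = beta * s * i / pop  # contacts between S and I
        new_recoveries = gamma * i           # I recover at rate gamma
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
    return pop - s  # everyone who was ever infected

# Modest disagreement about beta translates into very different outcomes.
for beta in (0.15, 0.25, 0.35):
    print(beta, round(sir_final_infected(beta)))
```

The point is not the specific numbers but the shape of the problem: disagreement about a single parameter, well within the range of plausible estimates, changes the predicted epidemic size dramatically.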

Underlying these differences is the simple fact that there is a good deal of uncertainty about all of these elements. Take estimates of the infection-fatality ratio. These have varied a lot between countries, with a recorded ratio of over 10% in Italy and just over 1% in Germany, for instance. This probably reflects differences in the amount of testing for infection being conducted as much as anything else. Estimates of the percentage of the population infected also vary enormously. A different kind of uncertainty surrounds the question of what factors to model. The initial Imperial model included neither the effect on fatalities from other causes of health systems being swamped, nor the endogenous effect of the disease’s spread on social distancing (e.g. due to people’s fear). Other models incorporate one of these; few incorporate both.
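The dependence of recorded fatality ratios on testing coverage is easy to illustrate with invented numbers. Assuming, hypothetically, a true ratio of 1%, the recorded figure is inflated whenever only a fraction of infections are detected:

```python
# Illustrative sketch: the *recorded* fatality ratio divides deaths by
# confirmed cases, not by true infections, so it depends heavily on how
# much testing is done. All numbers are made up for illustration.

def recorded_ratio(true_infections, deaths, detection_rate):
    """Deaths divided by confirmed (i.e. detected) cases."""
    confirmed = true_infections * detection_rate
    return deaths / confirmed

true_infections = 1_000_000
deaths = 10_000  # a true infection-fatality ratio of 1%

low_testing = recorded_ratio(true_infections, deaths, 0.10)   # roughly 0.10
high_testing = recorded_ratio(true_infections, deaths, 0.90)  # roughly 0.011
```

On these made-up figures, detecting only 10% of infections makes the recorded ratio ten times the true one, which is consistent with the Italy–Germany gap reflecting testing differences rather than (only) differences in the disease itself.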

With time, estimates will improve, as will the modelling that draws on them. Indeed, it is important that governments take steps to facilitate data-gathering and allow for a more informed scientific understanding of the situation. In the meantime, it is critical that the amount of uncertainty contained in the predictions that models make is adequately captured, so that policy-makers know what they are dealing with. Uncertainty about inputs to the models (e.g. estimates of the numbers currently infected) can be captured by making probabilistic predictions of outcomes. But we still need to account for the other uncertainties, especially those regarding the models themselves.

One way of doing so is to specify not just a single probability distribution over the outcomes of interest, but a family of them. If we think of each member of the family as the distribution we get from a particular choice of parameter values and modelling assumptions, then the size of the family gives a measure of how much uncertainty we face about the consequences of our policy choice. By looking at the range of associated estimates of a policy’s expected benefits, one gets a measure of how robust an assessment of the policy’s usefulness is to scientific uncertainty.
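The family-of-distributions idea can be sketched concretely. Each distribution below stands in for one choice of parameter values and modelling assumptions; the spread of the resulting expected benefits is the robustness measure. All numbers are illustrative:

```python
# Represent model uncertainty as a *family* of probability distributions
# over outcomes, one per (parameter, model) choice, and compute the
# range of expected benefits it induces. Illustrative numbers only.

outcome_values = [100, 40, -50]  # benefit value of each outcome

# Three candidate distributions over the same outcomes, each arising
# from a different set of modelling assumptions.
family = [
    [0.6, 0.3, 0.1],
    [0.5, 0.3, 0.2],
    [0.4, 0.4, 0.2],
]

expectations = [sum(p * v for p, v in zip(dist, outcome_values))
                for dist in family]

lo, hi = min(expectations), max(expectations)
spread = hi - lo  # how sensitive the assessment is to model choice
```

A small spread means the policy assessment is robust to the scientific uncertainty; a large spread means the verdict depends heavily on which model one trusts.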

It might seem that little is achieved, other than added complexity, by presenting policy-makers with ranges of estimates rather than a single precise one. But this way of representing our uncertainty allows policy-makers to choose actions that can be expected to yield benefits robustly, or to choose very cautiously by favouring actions that have acceptable expected consequences under all estimates. This does not answer the question of just how much robustness to seek or how cautious to be. But it forces the question into the open, to be answered in a way appropriate to what is at stake, i.e. by debate amongst those affected by the policies, and not just implicitly by the modellers.
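The cautious rule just described can be sketched as well: evaluate each policy under every distribution in its family and prefer the policy whose worst-case expected benefit is highest (a rule known in the imprecise-probability literature as Gamma-maximin). The policies and numbers here are again purely illustrative:

```python
# Cautious choice under a family of distributions: rank each policy by
# its *worst-case* expected benefit across the family, then pick the
# policy with the best worst case. Illustrative numbers only.

def expectation(dist, values):
    return sum(p * v for p, v in zip(dist, values))

values = [100, 40, -50]

# For each hypothetical policy, a family of candidate distributions.
policies = {
    "suppress": [[0.6, 0.3, 0.1], [0.5, 0.3, 0.2]],
    "mitigate": [[0.8, 0.1, 0.1], [0.2, 0.3, 0.5]],
}

def worst_case(name):
    return min(expectation(d, values) for d in policies[name])

cautious_choice = max(policies, key=worst_case)
```

Here "mitigate" looks better under its most optimistic distribution, but its family also contains a much worse expectation, so the cautious rule prefers "suppress". How much caution is appropriate is exactly the question the post argues should be settled by public debate rather than by modellers alone.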
