A lot of decisions about Coronavirus over the past few weeks have been made on the basis of models predicting potential outcomes. In fact, it is probably more accurate to say that decisions have been guided by the models than that they have been guided by “the science” in some unmediated way, as though there were a pure science with only one possible interpretation.
This has led some people to ask significant questions about how trustworthy the models are and whether we have been right to follow the resulting guidance with very strict lockdown rules. One particular model has come in for quite a lot of stick: Neil Ferguson and Imperial College’s model. This was the one that famously predicted up to 500,000 deaths without intervention and 250,000 if we attempted to manage rather than suppress the virus.
UnHerd and the Spectator have been particularly critical of Professor Ferguson’s work, with this article in particular calling his results into question. The issue seems to be that Ferguson has produced a number of models for potential disease outbreaks over the years, with worst-case predictions not coming true. This is seen as undermining his credibility.
Well, there are two problems with that argument. The first is that Imperial College will be making all sorts of predictions about outcomes over time. We cannot conclude anything about their accuracy from the ones they get wrong without knowing anything about the ones they have got right.
Secondly, that phrase “worst-case scenario” gives us a clue. A worst case implies the opposite of a best case. So, for example, the Spectator article states:
“In 2002, Ferguson predicted that between 50 and 50,000 people would likely die from exposure to BSE (mad cow disease) in beef. He also predicted that number could rise to 150,000 if there was a sheep epidemic as well. In the UK, there have only been 177 deaths from BSE.” 
The author asks:
“Does Ferguson believe that his ‘worst-case scenario’ in this case was too high? If so, what lessons has he learnt when it comes to his modelling since?” 
It does not take much brainpower to realise how intellectually lazy and nonsensical the question is. The figure was in fact within the range, because a lower bound was given too: more than 50 and fewer than 50,000 did die. Would the Spectator be asking whether the best case was too optimistic and too low if 30,000 had died?
It would perhaps be helpful to think about how modelling works. I can’t say exactly how the Imperial model functions, but I have experience of statistical models and programmes being used to help in other areas of life.
What a model does is take the hard data and process it against a number of assumptions. This enables the modeller to ask “what if…” certain variables are in place, which gives you a range of potential outcomes: best, possible and worst. I always encourage those who are making predictions to give all three numbers.
The model can be re-run several times to allow for different assumptions. The most unpredictable factor of all, of course, is human behaviour. This gives you the full range of possible outcomes. A good model does not give just one figure but a range of figures.
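The idea can be sketched in a few lines of Python. This is a toy illustration only, not the Imperial model: the growth rule, the reproduction numbers and the fatality rates below are all hypothetical assumptions chosen purely to show how one calculation run under different assumptions yields a best, possible and worst figure.

```python
# Toy "what if" scenario model: hard data (current infections) processed
# against uncertain assumptions (reproduction number R, fatality rate).
# All numbers are hypothetical illustrations, not real epidemiology.

def project_deaths(current_infections, r_number, fatality_rate, generations=10):
    """Project total deaths after a number of infection generations."""
    total_infections = 0
    infections = current_infections
    for _ in range(generations):
        total_infections += infections
        infections *= r_number  # each case infects R others on average
    return round(total_infections * fatality_rate)

# Three sets of assumptions give three outcomes from the same hard data.
scenarios = {
    "best":     {"r_number": 0.9, "fatality_rate": 0.002},
    "possible": {"r_number": 1.3, "fatality_rate": 0.005},
    "worst":    {"r_number": 1.8, "fatality_rate": 0.01},
}

for name, assumptions in scenarios.items():
    print(name, project_deaths(10_000, **assumptions))
```

The point is that the model itself is just one calculation; the range comes from running it under different assumptions.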
So it is possible to have a model give a forecast such as the Imperial College one, which said the UK could have between 20,000 and 250,000 deaths. However, it is not just the number range that you need to look at. You will also be looking at confidence intervals.
So, I may think that 20,000 deaths will be the best outcome and 250,000 the worst, with several other figures possible in between. However, I will have a different level of confidence for each outcome, a bit like betting odds on horses or the Premier League title.
I may be 60% confident of a 20,000 result and 75% confident of 250,000, but I may be 95% confident that we will see 60,000 deaths, and that gives me my “possible” result.
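One common way to attach confidence levels to a range is to run the calculation many times with the uncertain assumptions drawn at random, then read the interval off the resulting distribution. The sketch below is hypothetical throughout, reusing the toy projection from above rather than anything Imperial College actually did:

```python
# Hypothetical Monte Carlo sketch: run a toy projection many times with
# randomly drawn assumptions, then read off percentiles as an interval.
import random

random.seed(42)  # fixed seed so the illustration is reproducible

def one_run():
    r = random.uniform(0.9, 1.8)            # uncertain reproduction number
    fatality = random.uniform(0.002, 0.01)  # uncertain fatality rate
    infections, total = 10_000, 0
    for _ in range(10):
        total += infections
        infections *= r
    return total * fatality

outcomes = sorted(one_run() for _ in range(10_000))

def percentile(data, p):
    """Nearest-rank percentile of an already-sorted list."""
    return data[int(p / 100 * (len(data) - 1))]

low, mid, high = (percentile(outcomes, p) for p in (5, 50, 95))
print(f"90% of runs fall between {low:,.0f} and {high:,.0f} deaths; median {mid:,.0f}")
```

The 5th-to-95th percentile band here plays the role of the “best to worst” range, and the share of runs falling near a figure plays the role of the confidence attached to it.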
Now, human behaviour is unpredictable. We did not know, for example, how compliant people would be with a lockdown. Events can be unpredictable too, such as whether there will be a second wave or whether the virus will burn itself out over time. So, what do you do? The answer is that you run the model again later on, as you get more information about the unknowns and the variables you were less sure of. A good example of this is the YouGov MRP model for predicting the election result, which was run a second time as we got closer to the election.
Such re-runs with up-to-date data enable the modellers to narrow the range and to increase confidence levels. We know now that 20,000 deaths is impossible; we have passed that mark. However, we also know that 250,000 deaths is much less likely, due to the way people have adapted and to what the graphs are currently projecting. So you might refine the model: confidence of 250,000 deaths is reduced to 40% and of 20,000 to 0%, with a probable range somewhere between 40,000 and 100,000 deaths. This can be re-run again as we gather more data.
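The narrowing effect of a re-run can be shown with the same toy setup: once observed behaviour rules out the higher values of an uncertain assumption, feeding the tighter assumption back in shrinks the outcome range. Again, every number here is a hypothetical illustration:

```python
# Hypothetical sketch of a re-run: new data narrows the plausible range for
# the reproduction number R, so the projected outcome range narrows too.
import random

def simulate(r_low, r_high, runs=10_000, seed=1):
    """Return an approximate (5th, 95th) percentile band of projected deaths."""
    rng = random.Random(seed)
    outcomes = []
    for _ in range(runs):
        r = rng.uniform(r_low, r_high)          # assumption under test
        fatality = rng.uniform(0.002, 0.01)     # still uncertain
        infections, total = 10_000, 0
        for _ in range(10):
            total += infections
            infections *= r
        outcomes.append(total * fatality)
    outcomes.sort()
    return outcomes[runs // 20], outcomes[-(runs // 20)]

before = simulate(0.9, 1.8)  # first run: wide uncertainty about R
after = simulate(0.9, 1.2)   # re-run: observed compliance rules out high R
print("before:", before, "after:", after)
```

Same model, same code; only the assumptions have been updated, and the interval tightens accordingly.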
So don’t rely slavishly on models. They are fallible. However, do see how much we can make use of them to plan for future dangers.
The figures here are hypothetical only.