
expert comment on how R and growth rates are calculated and why different models’ results may look different

A few journalists have asked why different data sets seem to give different estimates of the R number and growth rates, so here is a comment from Prof Graham Medley which helps to explain why.

Prof Graham Medley, Professor of Infectious Disease Modelling, London School of Hygiene & Tropical Medicine, said:

“Different data streams have different properties. The main properties are completeness and timeliness. Completeness refers to the proportion of cases that are found. The most complete data are those relating to deaths. Death registration is virtually complete in the UK, and death certificates name COVID as a cause where it is clinically important. However, death data are the least timely, in that there are long delays before they are complete. By the time all the reporting delays have wound through, the death data might only be complete for deaths that happened 6 weeks ago.

“Symptomatic testing (in pillar 2) provides perhaps the most timely epidemiological data – the data are all there after only 3 or 4 days – but it is also the least complete. The figures depend on who decides to get tested and on the operational sensitivity of the tests, so they are more of a “signal” of the trends than a census of what has happened.

“Data such as hospital admissions are somewhere in between – they are delayed with respect to infection, and fairly complete, but can depend on the ability of hospitals to admit patients.

“The difficulty is that decisions have to be made about the situation now, not about what it was 6 weeks ago. We have a very good idea from the data of what happened 6 weeks ago, but we rely on models to tell us what is happening today.

“Different models use different data streams and combine them in different ways. This accounts for the variability in model outputs. None of them is “wrong” – they are just different views of the same process. Similarly, none of them is “right”. And we cannot tell, just by looking at the models individually, which is right and which is wrong.

“This is why people often combine model results. Just as with the “wisdom of crowds”, looking at a broad spectrum of models and combining the results is less likely to be seriously wrong, and it gives a robust understanding of the state of the epidemic. This is the approach taken by SPI-M (the modelling sub-group of SAGE), and its estimates have been a robust measure throughout the epidemic. With the more recent development of data dashboards and other data analytic tools, the weekly publication of an official R value has become less critical in recent months. However, model combination remains a “gold standard” for the use of model results.”
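As a purely illustrative aside on the completeness-versus-timeliness trade-off Prof Medley describes: the sketch below shows a naive “nowcast” that scales up the most recent counts by the fraction expected to have been reported so far. The delay fractions and counts are hypothetical, and this is not how UK surveillance data are actually adjusted.

```python
# A minimal, hypothetical sketch of the completeness/timeliness trade-off:
# the most recent counts are incomplete because of reporting delays, so a
# naive "nowcast" scales them up by the fraction expected to have been
# reported so far. All numbers here are invented for illustration.

import numpy as np

# Assumed fraction of events from `days_ago` days ago that have already
# been reported by today (index 0 = today). Hypothetical values.
frac_reported = np.array([0.05, 0.20, 0.45, 0.65, 0.80, 0.90, 0.95])

def naive_nowcast(reported_counts):
    """Adjust recent counts for reporting delay.

    reported_counts is ordered oldest-first, so reported_counts[-1]
    is today's (most incomplete) count.
    """
    adjusted = np.asarray(reported_counts, dtype=float).copy()
    for days_ago, frac in enumerate(frac_reported):
        if days_ago < len(adjusted):
            adjusted[-(days_ago + 1)] /= frac
    return adjusted

# Illustrative reported counts for the last 10 days: the apparent drop at
# the end reflects incomplete reporting, not a real decline.
reported = [40, 42, 41, 38, 36, 32, 26, 18, 8, 2]
print(np.round(naive_nowcast(reported)))  # roughly flat once adjusted
```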
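And a sketch of the model-combination idea: pooling equally weighted samples from each model’s uncertainty range and reporting the combined median and interval. The three model estimates (mean and standard deviation of R) are invented for illustration; SPI-M’s actual combination method is more sophisticated.

```python
# A minimal sketch of combining R estimates from several models, in the
# spirit of the "wisdom of crowds" approach described above. The model
# outputs are hypothetical (mean, sd) pairs, not real SPI-M estimates.

import numpy as np

rng = np.random.default_rng(0)

# Each model reports an R estimate as (mean, sd); values are illustrative.
model_estimates = {
    "model_A": (0.9, 0.10),   # e.g. fitted mainly to death data
    "model_B": (1.1, 0.15),   # e.g. fitted mainly to pillar-2 testing
    "model_C": (1.0, 0.08),   # e.g. fitted to hospital admissions
}

# Pool equally weighted samples from every model's uncertainty interval,
# then report the combined median and a 90% range.
samples = np.concatenate([
    rng.normal(mean, sd, size=10_000)
    for mean, sd in model_estimates.values()
])

low, mid, high = np.percentile(samples, [5, 50, 95])
print(f"Combined R estimate: {mid:.2f} (90% range {low:.2f}-{high:.2f})")
```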

All our previous output on this subject can be seen at this weblink:

www.sciencemediacentre.org/tag/covid-19

Declared interests

Prof Graham Medley is Chair of SPI-M.

None others received.
