
expert reaction to a systematic review of prediction models for diagnosis and prognosis of COVID-19 infection

A systematic review, published in The BMJ, reports on prediction models for diagnosis and prognosis of COVID-19 infection.

 

Comments from the Risk and Information Group, Queen Mary University of London (QMUL), collated by Dr Scott McLachlan, Postdoctoral Research Assistant, Risk and Information Group, Queen Mary University of London:

“Looking at the dates on the arXiv papers and the date this was submitted to The BMJ, the authors had less than a week (based on the last arXiv submission date and the BMJ submission date) to critically review all of the models and write this paper.

“The potential for bias in a rapid review such as this is already quite high and this is made higher by the fact that almost all (24/27) of the reviewed papers on COVID-19 were preprints that have not as yet been peer reviewed. There is so much contradictory material between the non-peer reviewed papers that the truth might still be out there. The 27 papers in their review contained 31 different proposed models, and there was no consideration of causal explanations in their review of the models.

“None of the prediction models looked at in their rapid review have undergone thorough testing and external validation – many have not even seen real patient data (i.e. they are built from computer scientists’ understanding of the disease, gained from reading other non-peer-reviewed papers). The authors of this paper acknowledge they cannot recommend a single one of these models for clinical use.

“There is no description of why over 2,000 papers were excluded from the study – was it because they did not contain a diagnostic model? The authors have left the reader to guess why these were rejected.”

 

Dr Ray Sheridan, Consultant Physician, University of Exeter Medical School, said:

“It needs to be made clear that the prediction modelling studies examined in this systematic review use a patient’s data to predict the risk of needing hospital admission, needing ITU, or death as an outcome. This is not the modelling that influenced the lockdown decision, which models disease spread at the population level.

“We already use prediction modelling to create scores for other diseases, e.g. the CURB-65 score is commonly used to assess pneumonia severity on admission to hospital and to guide antibiotic choices. A higher score means the patient is more ill. When you use a scorecard based on prediction modelling, you need to understand its positive predictive value, sensitivity and specificity. CURB-65 is accepted as useful, but it is not perfect. It is still important to use the clinician’s common sense and experience, so if a scorecard gives a low score but the patient looks sicker, the clinician should go with their judgement. This will also be the case for COVID-19.
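To make the measures mentioned above concrete, here is a minimal illustrative sketch of how sensitivity, specificity and positive predictive value are derived from a 2×2 table comparing a scorecard's prediction against the true outcome. The patient counts used are invented for illustration only and do not come from the review or from CURB-65 validation data.

```python
def classification_metrics(tp, fp, fn, tn):
    """Headline accuracy measures for a diagnostic scorecard.

    tp/fp/fn/tn are counts from a 2x2 table: the scorecard's call
    (e.g. "high risk" vs "low risk") against what actually happened.
    """
    sensitivity = tp / (tp + fn)  # share of truly ill patients the score flags
    specificity = tn / (tn + fp)  # share of well patients the score clears
    ppv = tp / (tp + fp)          # chance a flagged patient is truly ill
    return sensitivity, specificity, ppv

# Hypothetical cohort of 1,000 patients, numbers chosen only to illustrate:
sens, spec, ppv = classification_metrics(tp=80, fp=30, fn=20, tn=870)
# sens = 0.8, spec ~ 0.97, ppv ~ 0.73
```

Note that PPV, unlike sensitivity and specificity, depends on how common the outcome is in the population, which is one reason a scorecard derived in one population must be re-validated before use in another.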

“The systematic review itself looks well done; it is the individual papers they review that each have potential limitations, as all real-life studies do. The main limitation is that they mostly derive from China, with only one from outside China. Ethnic groups and healthcare systems may differ, so any scorecard derived outside a UK population would need testing on a UK population for validity before being implemented. The systematic review is a helpful benchmark, though, for others trying to generate scorecards for use in their own populations.

“The UK has a study (https://www.nihr.ac.uk/urgent-public-health-research-studies-for-covid-19/the-priest-study/24544) which has just opened, to develop an Emergency Department triage scorecard for COVID-19. That will be really detailed and useful, but only in the environment from which the data are generated.

“We need good follow-up of patients to develop accurate prediction models: a patient who goes to ITU might be there for 2–3 weeks; you need to allow time for patients not initially admitted to be readmitted (typically around day 10); and anything using mortality as an outcome needs at least a month of follow-up, if not longer.

“UK doctors aren’t basing treatment decisions on current scorecards, mainly because any potential drug treatments are really only available within clinical trials, e.g. RECOVERY (https://www.nihr.ac.uk/urgent-public-health-research-studies-for-covid-19/randomised-evaluation-of-covid-19-therapy-recovery/24513).”

 

Prof Derek Hill, Professor of Medical Imaging, University College London (UCL), said:

“Predictive computer models can be really helpful in both managing patients and developing new treatments. They can help identify patients who are likely to quickly develop serious symptoms and need rapid treatment, and others who will progress slowly, for whom ‘watch and wait’ is appropriate. For example, disease models are used by drug developers to help them design clinical trials to test new drugs. And disease models can be used to identify individuals with early symptoms of diseases such as Alzheimer’s, to predict which patients will progress fast or slowly and help personalise treatment. Modern data science methodologies and artificial intelligence are widely used tools to build such disease models.

“The COVID-19 pandemic has led to some very rapid development of disease models. These could potentially help focus scarce healthcare resources on patients most likely to benefit from particular treatments – and personalise treatment based on the symptoms with which people present at hospital.

“But any disease model used to make critical decisions does need to be carefully tested. To state the obvious, if the disease model is flawed, it could end up directing seriously ill patients at the wrong treatment, rather than the right treatment.

“This paper reviews around 30 disease models that have been developed for COVID-19. The overall conclusions are that these models on the whole are not trustworthy, and could be dangerous to use.

“Care must be taken to ensure that doctors don’t use disease models to make decisions without knowing that these models have been properly tested. If a disease model is used to select treatment for a patient, then from a regulatory point of view, it is a medical device. And you wouldn’t want to use an untested medical device on a critically ill patient any more than use an untested drug.”

 

‘Prediction models for diagnosis and prognosis of Covid-19 infection: systematic review and critical appraisal’ by Wynants et al was published in The BMJ on Tuesday 7th April.

 

All our previous output on this subject can be seen at this weblink: www.sciencemediacentre.org/tag/covid-19

 

Declared interests

None received. 
