
expert reaction to study looking at possibility of using real-time tracking of self-reported symptoms to predict potential COVID-19 without testing

Research, published in Nature Medicine, reports on the possibility of using real-time tracking of self-reported symptoms to predict potential COVID-19 without testing.


Dr Simon Gubbins, Head of the Transmission Biology Group, The Pirbright Institute, said:

“This study looked at the symptoms reported via a smartphone-based app for participants who tested positive for SARS-CoV-2 and for those who tested negative for the virus.  This showed that loss of smell/taste is a symptom indicative of COVID-19, in addition to more established ones, such as a high fever and a new, persistent cough.  It also allowed the authors to develop a model to predict whether or not someone has COVID-19 based on the symptoms they have.  The accuracy of their model is around 80%.  Although too low to be a replacement for testing, the model would potentially still be useful to rapidly alert someone who reports that they have symptoms indicative of COVID-19 (for example, via an app) to self-isolate and follow local guidelines on what to do if someone thinks they have COVID-19.”


Prof Kevin McConway, Emeritus Professor of Applied Statistics, The Open University, said:

“This is a useful piece of research, and one that uses appropriate statistical methods that are pretty standard for dealing with this kind of data.  The researchers used data on various symptoms, collected from UK users of their smartphone app, to develop a statistical model that predicts the probability that a person will test positive for COVID-19.  Sometimes such a model will track too closely the characteristics of the people who provided the data for it, and therefore not work well in different groups of people – but the researchers checked this by seeing how good the predictions were for app users in the USA.  The model passed this validation test pretty well.

“However, care is always needed in interpreting the results from prediction modelling of this sort.  The research report itself does take appropriate care, but the press release isn’t so careful.  The release says the model predicted whether an individual is likely to have COVID-19 “with nearly 80% accuracy”.  That sounds pretty impressive – but the research report does not say this at all.  It doesn’t even mention the word accuracy.  The trouble with talking about the ‘accuracy’ of a prediction of this sort is that there are two ways the prediction can be wrong.  It can predict that someone would test positive, when in fact they tested negative – that’s called a false positive result.  But it can also predict that someone would test negative, when in fact they tested positive – that’s a false negative.  Both are important.  Scientists and statisticians use various different measures that relate to the chances of false positives and false negatives, including the ‘sensitivity’ and ‘specificity’.  But those quantities are about the performance of the model on people whose test results are known.

“It’s more useful, I’d say, to look at two other quantities.  The positive predictive value (PPV) gives the proportion of the people, who were predicted by the model to have a positive COVID-19 test, that actually did have a positive test.  For the US validation data, the PPV was 0.58 – that means that 58% of the people who were predicted to have a positive test, really did have a positive test – and so the other 42%, that were predicted to have a positive test, really had a negative test.  That’s not wonderful performance really.  (It was a bit better in the UK test data, but still only 69%.)  The negative predictive value (NPV) gives the proportion of the people, who were predicted by the model to have a negative test result, that really did have a negative result.  This was a bit higher – 87% in the US data and 75% in the UK data.

“But my main point is that it really isn’t useful, and can be misleading, to try to put together these performance measures into a single measure of ‘accuracy’.  The researchers didn’t do that in their report, so I’m disappointed that it was done in the press release.

“Another unfortunate aspect of the press release is that it says, right in the top line and the first paragraph, that the new diagnostic is ‘AI’ or ‘artificial intelligence’.  It’s not, and the research paper does not claim it is.  The paper doesn’t even mention ‘AI’ or ‘artificial intelligence’ at all.  The method used is a standard statistical technique called logistic regression.  The basic ideas go back to the first half of the nineteenth century, and the method was properly developed in the middle of the last century.  It’s no more ‘artificial’ or ‘intelligent’ than a huge range of other standard statistical techniques.  That doesn’t make the choice of this method wrong, or outdated – it just means that it isn’t AI.

“What’s good about the research is that it emphasises a possible important role for loss of sense of smell and taste in screening for COVID-19.  I think that general finding is more valuable than the specific numerical details of the prediction model that was produced.  But there are limitations to the research too, which are generally made clear by the researchers themselves.  All the data came from users of the smartphone app, and they don’t necessarily represent the UK or US populations generally.  And, more problematic, the model had to be produced using data from people who had actually had a standard test for COVID-19.  Not everyone can get such a test – it would be more likely to happen if someone is hospitalised, or is known to have had contact with confirmed COVID-19 cases, for example.  Thus all the predictive model is predicting is the test results of people tested under those circumstances, and we can’t tell how well it would perform more generally.”
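The predictive values Prof McConway describes can be checked with a short sketch.  The confusion-matrix counts below are hypothetical (chosen so the PPV and NPV match the US validation figures of 0.58 and 0.87 he quotes; the study's actual counts are not reproduced here):

```python
from fractions import Fraction

# Hypothetical confusion matrix for a symptom-based predictor
# (illustrative counts only, not the study's actual data).
tp, fp = 58, 42   # of those predicted positive: 58 truly positive, 42 not
fn, tn = 13, 87   # of those predicted negative: 13 truly positive, 87 not

ppv = Fraction(tp, tp + fp)          # share of predicted positives that are real positives
npv = Fraction(tn, tn + fn)          # share of predicted negatives that are real negatives
sensitivity = Fraction(tp, tp + fn)  # share of real positives the model catches
specificity = Fraction(tn, tn + fp)  # share of real negatives the model catches

print(f"PPV={float(ppv):.2f}  NPV={float(npv):.2f}")  # PPV=0.58  NPV=0.87
```

The sketch also shows why a single "accuracy" figure is uninformative: the same PPV can arise from very different mixes of false positives and false negatives.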


Dr Peter Bannister, Biomedical Engineer & Executive Chair, Institution of Engineering and Technology (IET), said:

“This is an interesting example of where statistical prediction could be used in combination with simple patient-reported symptoms (loss of taste and smell) to help manage epidemics such as COVID-19.  However, of the approximately 2.5m app users analysed in the study, fewer than 1% had received a SARS-CoV-2 test.  This underlines the current lack of robust data to track the spread of coronavirus and develop new approaches to containing it.  The authors also acknowledge that the algorithm is more likely to overestimate the number of users who are infected, as physical testing is only carried out where there are clearer symptoms or where the individual has had a high risk of exposure.  A more comprehensive programme of testing would need to be in place to validate such an algorithm for widespread use and to handle the inevitable false alerts raised by the algorithm given its reported accuracy of 80%.”


Prof Linda Bauld, Professor of Public Health, University of Edinburgh, said:

“It is imperative that we collect data on COVID-19 in ‘real time’ and in community settings, as most studies to date have focused on hospital patients.  We now know that most people who come into contact with the SARS-CoV-2 virus won’t require treatment in hospital, so in the absence of mass community testing we need to harness digital technology to encourage people to record symptoms and report outcomes.  This has limitations – people who download the app won’t be representative of the whole population, recall and reporting might not always be correct, and estimates of accuracy stated by the authors may not be sufficient.

“Those problems aside, this study provides a new means to estimate how many people might develop COVID-19.  This is based on reporting of symptoms that are similar to others who we know had the disease on the basis of a positive test result.  It also suggests that loss of taste and smell is a common feature.  These findings could help health authorities and governments to improve guidance on COVID-19 symptomatology and alert the public to be more aware of this symptom.  It also highlights the value of smartphone apps to help us learn more about COVID-19, and how these apps can complement other essential parts of the public health response including testing, tracking and contact tracing.”


Dr Dennis Wang, Senior Lecturer in Genomic Medicine and Bioinformatics, Sheffield Institute of Translational Neuroscience and Dept. of Computer Science, University of Sheffield, said:

“The performance of their diagnostic model is still quite poor compared to genetic tests that achieve >95% accuracy; however, the symptoms highlighted in their model can be used by the government to prioritise individuals for COVID-19 genetic testing, especially when testing supplies are limited.

“Also, it’s important to note that the study only uses a simple statistical regression analysis, not the more advanced machine learning algorithms considered to be AI.  Sheffield has recently completed a study using machine learning algorithms on a smaller Brazilian dataset that achieved slightly better performance.”
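The logistic regression both Prof McConway and Dr Wang refer to can be sketched in a few lines.  The coefficients and the symptom set below are made up for illustration only; the published model's actual weights (which also include age and sex) differ:

```python
import math

def predict_covid_probability(loss_of_smell, fever, persistent_cough, fatigue):
    """Logistic-regression-style score from binary symptom indicators (0 or 1).

    The intercept and weights here are illustrative placeholders,
    not the coefficients reported in the Nature Medicine paper.
    """
    intercept = -1.32
    score = (intercept
             + 1.75 * loss_of_smell
             + 0.31 * fever
             + 0.46 * persistent_cough
             + 0.39 * fatigue)
    # Logistic (sigmoid) link maps the linear score to a probability in (0, 1).
    return 1.0 / (1.0 + math.exp(-score))

# A user reporting loss of smell and a persistent cough, but no fever or fatigue:
p = predict_covid_probability(1, 0, 1, 0)
print(f"predicted probability of a positive test: {p:.2f}")
```

The model simply weights each reported symptom and passes the sum through a sigmoid – a standard statistical technique, which is why the experts push back on describing it as AI.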


‘Real-time tracking of self-reported symptoms to predict potential COVID-19’ by Cristina Menni et al. was published in Nature Medicine on Monday 11 May 2020.

DOI: 10.1038/s41591-020-0916-2





Declared interests

Dr Simon Gubbins: “No interests to declare.”

Prof Kevin McConway: “Prof McConway is a member of the SMC Advisory Committee, but his quote above is in his capacity as a professional statistician.”

None received.
