The Royal Statistical Society has published a report looking at the statistical evidence needed to assure the performance of future diagnostic tests.
Dr Thomas House, Reader in Mathematical Statistics, University of Manchester; Dr Elizabeth Fearon, Assistant Professor in Epidemiology, London School of Hygiene & Tropical Medicine; and Martyn Fyles (University of Manchester, Alan Turing Institute); said:
“This is a useful and well-argued report, though while the ‘Royal Statistical Society’ branding is prominent in the report and press release, the working group that authored it consists of only six members. The report combines a large number of uncontroversial recommendations with some topics that are still subject to substantial and ongoing scientific debate. Chief among these is how the evaluation criteria differ between what the report calls diagnostic testing and tests whose purpose is to reduce transmission. Many of the examples given of testing for SARS-CoV-2, and particularly of lateral flow tests, come from interventions intended to control transmission, but the report does not discuss in detail study designs for assessing effectiveness against this outcome. The report is right to stress the need to consider harms as well as benefits of testing interventions, but a pandemic of the severity of the one we are currently experiencing means that we must do this in the context of the consequences of reduced epidemic control, or of the need for more restrictive lockdown interventions. While in usual times not rolling out a testing programme has relatively predictable consequences, in the current pandemic we know that uncontrolled transmission can lead to thousands of deaths per day, and mitigation methods can include extremely harmful mass quarantine.”
Dr Joshua Moon, Research Fellow in Sustainability Methods at the Science Policy Research Unit (SPRU), University of Sussex Business School, said:
Is this a robust, evidence-based report?
“The report highlights something very important about in-vitro diagnostics: use-cases. For many IVDs, the way in which they are used (e.g. at home versus by a healthcare professional) is incredibly important, as is how the result is interpreted and used (e.g. a test with high sensitivity, but low specificity could be used to screen for cases but should not be used to rule cases out).
“This extends to the report’s point about low prevalence settings.
“The probability that a person actually has the virus if they test positive depends both on the accuracy of the test itself (its sensitivity and specificity) and on how likely they were to have the virus in the first place (the prevalence of the virus).
“As the prevalence falls, the chance that a person who tests positive is not in fact infected (a false positive) increases markedly.
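The point above follows directly from Bayes’ theorem. A minimal sketch, using illustrative sensitivity and specificity figures that are assumptions for the example and not taken from the report:

```python
# Positive predictive value (PPV): the probability a person who tests
# positive is truly infected, from Bayes' theorem:
#   PPV = (sens * prev) / (sens * prev + (1 - spec) * (1 - prev))
def ppv(sensitivity: float, specificity: float, prevalence: float) -> float:
    true_positives = sensitivity * prevalence
    false_positives = (1 - specificity) * (1 - prevalence)
    return true_positives / (true_positives + false_positives)

# Illustrative (assumed) test characteristics, not from the report:
SENS, SPEC = 0.80, 0.999

# As prevalence falls, an ever larger share of positive results are false.
for prev in (0.05, 0.01, 0.001):
    print(f"prevalence {prev:.3f}: PPV = {ppv(SENS, SPEC, prev):.2f}")
```

With these assumed figures, the PPV drops from roughly 0.98 at 5% prevalence to under 0.5 at 0.1% prevalence, even though the test itself has not changed, which is why low-prevalence settings matter so much for how a result should be interpreted.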
Do you agree with what it calls for?
“Overall, the report’s recommendations make sense, but personally I don’t think they go far enough.
“The recommendations don’t sufficiently capture the complexity in communicating the nuance around these tests.
“Importantly, while ‘good communication’ is necessary, what that looks like varies across community, age, ethnicity, language, and more. Communication strategies needed to fully address this are not just about ‘explaining’ how tests work and what to use them for, but to build trust within the UK’s various communities.
“Further to this, the problems inherent in what people do with a false positive and false negative come back to the structures and incentives around these results. For false positives, reducing the social, financial, and psychological burden of self-isolation is key. For false negatives, continuing to follow the guidance on social distancing and maintaining existing precautions are important and individuals should be encouraged to test at repeated intervals (support in the case of a false positive is also necessary here).
“Testing isn’t just the test and the result but what people do with it.
What are in vitro tests, and which tests does this apply to?
“In-vitro diagnostic tests (IVDs) are simply diagnostic tests performed outside the living body. The term therefore covers all types of Covid testing, from PCR (RNA-based and the most common) to antigen tests (which detect viral ‘antigens’ such as the spike protein that enables the virus to enter cells) to antibody tests (which detect antibodies against SARS-CoV-2).
How does this report fit in with other evidence on how useful and accurate testing is?
“At its core, this report is a reiteration of what has been said before on Covid-19 testing systems (including by our project’s pre-print), but provides a novel recommendation in asking that discussions around the impact of false positives and false negatives be included in evaluations of a test’s utility.
“The open question is whether the Covid-19 testing regulation mechanism proposed by the government (https://www.gov.uk/government/consultations/private-coronavirus-covid-19-testing-validation/private-covid-19-testing-validation) is up to the task.”
Dr Stuart Hogarth, Lecturer in Sociology of Science and Technology, Department of Sociology, University of Cambridge, said:
“Given the context of the current pandemic, this report rightly focuses on evaluation of tests for infectious diseases, but the issues it raises have far broader relevance. In particular, this report highlights two very important points: outcomes matter – it is not enough to evaluate the diagnostic accuracy of a test, we need to understand the harms and benefits that result from using a test; transparency is vital – the data generated in clinical research on diagnostic tests should be available for everyone to scrutinise.
“The current pandemic has shone new light on an old problem: the innovation process for the development of new diagnostic tests is poorly organised, under-resourced and lacking in scientific rigour. Over the last two decades a succession of policy reports, often focused on genomics and personalised medicine, have highlighted a series of problems in diagnostic research, including studies that are statistically underpowered and/or prone to various types of bias, insufficient research on clinical outcomes, over-fitting of data in retrospective analyses and a lack of prospective controlled studies.
“With the introduction of a stricter regulatory framework for diagnostics in the European Union, we have the opportunity to raise the bar, and ensure greater public confidence in diagnostic tests, but we must ensure that the UK remains committed to this path.”
‘Royal Statistical Society Diagnostic Tests Working Group Report’ will be published on Wednesday 9 June 2021.
Dr Stuart Hogarth: “I am on the MHRA’s IVD expert advisory group but I am speaking in a personal capacity.”
No others received.