
expert reaction to study evaluating the accuracy of the AbC-19™ Rapid Test for SARS-CoV-2 antibodies

A study published in the BMJ evaluates the accuracy of the AbC-19™ Rapid Test for detection of previous SARS-CoV-2 infection in key workers.

 

Prof Sylvia Richardson, Royal Statistical Society (RSS) president-elect and co-chair of the RSS Covid-19 Task Force, said:

“Evaluation of tests starts by defining who is considered a true positive and who a true negative.  These choices will influence the reported sensitivity and specificity.  The choices made by the newly published PHE study more closely resemble a public health context than the choices made by Robertson et al.  The PHE study suggests that the sensitivity and specificity of the Abingdon rapid Covid-19 test are notably lower than those reported in the earlier Ulster University study, likely rendering it less effective for the diagnostic purposes that have been proposed.

“At a time when reliable, effective testing is vital, and the Government is proposing to spend millions on novel tests, it is hugely important that independent assessment is available prior to purchasing.  More transparency would help the Government implement a cost-effective testing strategy.”

 

Prof Kevin McConway, Emeritus Professor of Applied Statistics, The Open University, said:

“This new piece of research is statistically sound, and it draws attention to some of the awkward and non-intuitive issues about testing.  These have come up repeatedly during the current pandemic, and all that discussion has probably helped the general understanding of what goes on with diagnostic and similar tests – but misunderstandings persist.

“The first point is that this new research relates only to a particular type of antibody test – that is, a test to investigate whether a person was previously infected with SARS-CoV-2, not a test of whether they are currently infected.  I won’t go into the question of how long antibodies might persist in such a person’s blood serum, which is likely to be related to whether they are immune from further infections.  That’s important, but this new research doesn’t really throw light on it.

“Any such test can provide an incorrect result in two different ways.  First consider people who do actually have the disease or condition in question – in this case, people who really do have the relevant antibodies in their blood.  Many of them – we’d hope the great majority – would test positive for antibodies, and they are the ‘true positives’.  But no test is perfect, so there will be some people who have antibodies in their blood but test negative for antibodies.  They are ‘false negatives’ – negatives because their test was negative, false because that negative result is in fact wrong.  Now consider people who don’t have antibodies.  Again, we’d hope that the majority would test negative, and they would be the ‘true negatives’, but again it will very likely happen that some of them would test positive, and they are ‘false positives’.  So there are two different kinds of wrong test result – false positives and false negatives – and they both matter.

“The terms used in relation to these possible errors are much better known than before the pandemic, I think.  There are two heavily used ways of reporting on the chance of errors from a test, because there are two kinds of error.  These are the ‘sensitivity’ and the ‘specificity’.  The sensitivity is the percentage of true positives, out of all the people who really have antibodies (that is, of the true positives together with the false negatives).  The specificity is the percentage of true negatives, out of all the people who really don’t have antibodies (that is, of the true negatives together with the false positives).  Ideally both of these should be high.  There’s a tendency to think of the sensitivity and the specificity as pretty well fixed quantities for a given test – but that’s not actually the case.  For one thing, they need to be estimated by carrying out the test in people who are known not to have the condition, and people who are known to have the condition – here, people known not to have antibodies, and people known to have antibodies.
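In symbols (an editorial illustration of the standard definitions, not part of the quoted comment), writing TP, FN, TN and FP for the numbers of true positives, false negatives, true negatives and false positives:

\[ \text{sensitivity} = \frac{TP}{TP + FN}, \qquad \text{specificity} = \frac{TN}{TN + FP} \]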

“This study shows clearly that, for the test that was being examined (the “AbC-19™ Rapid Test”), the estimated sensitivity and specificity do differ depending on which groups of people (or blood serum specimens) are used for the estimation.  A previous study, not yet peer reviewed, had estimated the sensitivity as 97.7% and the specificity as 100%.  The 100% specificity would mean that everyone who really did not have antibodies would test negative – that is, they would all be true negatives and there would be no false positives.  (It’s uncommon, though not entirely unknown, for any test to produce no false positives at all.)  The researchers on the new study re-estimated the sensitivity and specificity in what seem to me to be potentially more accurate ways, and they got rather lower results for both sensitivity and specificity.  They actually estimated sensitivity by two different methods, getting an estimate of 92.5% on one and 84.7% on the other.  (In fact, in this particular context, a high level of specificity is probably more important than a high level of sensitivity.)  They estimated specificity primarily by using samples taken and stored long before the pandemic, back in 1995.  The virus that causes Covid-19 did not exist back then, so these samples are known to be negative for antibodies.  On that basis, the estimate of the specificity was 97.9%.  (All these estimates are subject to some statistical uncertainty, which is reported in the research paper, but that won’t affect my discussion.)

“One reason why the researchers in the new study got lower estimates of sensitivity and specificity than those in the previous study is what they refer to as ‘spectrum bias’, also known as the ‘spectrum effect’.  That has been known about for decades.  In this particular context, it seems to have arisen because the researchers on the previous study perhaps didn’t test enough borderline cases – that is, people in whom the level of the substance measured in the test is close to the cut-off value that distinguishes positive test results from negative ones.  That is, there weren’t enough people whose serum measurements were particularly difficult to classify as positive or negative.  If that really did happen, it would make the test appear to perform better than it actually would in practice.

“But however the differences arose, the sensitivity and specificity estimates in the new research still sound very high – 84.7% or even 92.5% sensitivity, and what looks like a very high specificity of 97.9% – that’s less than the 100% figure from the previous research, but it still looks very high.  So what’s the problem?  It’s that these are all percentages of people for whom it’s known whether they have antibodies or not.  But in a real-life testing situation you don’t know whether someone has antibodies or not.  The whole reason for doing the test is to try to find out their antibody status.  The specificity, for example, tells you what percentage, out of the people who truly have no antibodies, are true negatives, and what percentage are false positives.  But those two groups have different test results (negative and positive), and what you know if someone has been tested is that they are negative, or that they are positive, so a percentage that includes both positives and negatives isn’t going to help.  What you want to know, for people who test positive, is how many of them are true positive and how many are false positives.  And the specificity can’t tell you that, because it relates to false positives and true negatives, not to false positives and true positives.  To work out what you want to know, you need to know three things – the specificity, yes, but the sensitivity too, and also the percentage of people, in the group being tested, that do actually have antibodies.  (That’s called the ‘prevalence’, in the jargon.)
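The quantity being described here is what statisticians call the positive predictive value (PPV).  As a worked equation (an editorial illustration using the standard formula, not taken from the paper), with p standing for the prevalence:

\[ \text{PPV} = \frac{\text{sensitivity} \times p}{\text{sensitivity} \times p \;+\; (1 - \text{specificity}) \times (1 - p)} \]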

“The research results give you estimates of the sensitivity and specificity, but they don’t give you estimates of the prevalence.  However, you can get an idea from population surveys of antibody levels, particularly if they are done with a test that’s considered very accurate.  For England, the ONS Infection Survey (which includes some antibody tests) reported that 5.6% of the community population (aged 16 and over) had antibodies, in September.  The main figures given in the new research are for a prevalence a bit higher than that, 10%, which might be an appropriate figure for a group of people who were at higher risk of Covid or are known to have suffered more infections.  So I’ll use that figure.  The researchers report that, with 10% prevalence of previous infection, about 80% of people who test positive for antibodies would really have antibodies.  (Their exact figures are 83% and 82%, depending on which of their two estimates of sensitivity is used, but these figures are subject to some statistical variability, so ‘about 80%’ is a reasonable summary.)  That means that about four-fifths of people who test positive for antibodies, using the AbC-19™ Rapid Test, will really have antibodies in their blood, and the other fifth will not.  The implications could be serious, if someone is going to consider people who tested positive as being, temporarily at least, immune from further infections.
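A minimal Python sketch of that calculation (an editorial illustration; the function name is made up here, and the numbers plugged in are simply the estimates quoted above, not code from the study):

```python
def positive_predictive_value(sensitivity, specificity, prevalence):
    """Proportion of positive test results that are truly positive."""
    true_positives = sensitivity * prevalence
    false_positives = (1 - specificity) * (1 - prevalence)
    return true_positives / (true_positives + false_positives)

# Specificity 97.9% from the pre-pandemic panel, assumed prevalence 10%,
# and the study's two sensitivity estimates:
for sens in (0.925, 0.847):
    ppv = positive_predictive_value(sens, 0.979, 0.10)
    print(f"sensitivity {sens:.1%}: PPV ≈ {ppv:.0%}")  # prints ≈ 83% and 82%
```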

“How can it arise that the implication of a positive test result, in terms of the chance that the person really has antibodies, is so much lower than the sensitivity and the specificity?  Intuitively, it’s because of the following.  In the people being tested, quite a large majority don’t have antibodies – in fact with these assumptions 90% of them don’t.  So the people who test positive consist of a large percentage (the sensitivity) of the relatively small number who do have antibodies, who are the true positives, and a small percentage (defined by the specificity) of the relatively much larger number who do not have antibodies, who are the false positives.  It’s not instantly clear how the large percentage of a small number and the small percentage of a large number will compare – you have to work out the numbers – and in this case it turns out that there are more true positives than false positives, but there are still quite a lot of false positives.

“If the prevalence of people with antibodies were lower, say 5%, which could possibly be close to the real position for antibody testing of the whole English adult population, the results would look worse.  With the same sensitivity and specificity, only changing the prevalence, only about 70% of those who tested positive would really have antibodies.  And if the test were being used in a group of people where the prevalence were higher, say 20% or more of them really had antibodies, then over 90% of people who tested positive would really have antibodies, though that still does leave an appreciable number of false positives.
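Plugging those other prevalence figures into the same formula (again an illustrative sketch, using the 92.5% sensitivity and 97.9% specificity estimates quoted above) reproduces the pattern described:

```python
for prevalence in (0.05, 0.10, 0.20):
    ppv = 0.925 * prevalence / (0.925 * prevalence + (1 - 0.979) * (1 - prevalence))
    print(f"prevalence {prevalence:.0%}: PPV ≈ {ppv:.0%}")  # ≈ 70%, 83%, 92%
```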

“None of this means the test is useless, by any means.  After all, if all we know about a person is that they come from a group where 10% have antibodies, then we’d assess their chance of having antibodies as 10%.  If they are then tested with this test, and test positive, we’d know that their chance of having antibodies is about 80%, so we’ve learned a lot about them.  (If they tested negative, their chance of not having antibodies would be about 99%, whereas before the test it was considerably less, 90%, so the test result is still quite informative.)  All that has to be remembered is that the test result, like pretty well all test results, is not perfect, and the chance that a person who tests positive is a false positive does need to be taken clearly into account.”
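The ‘about 99%’ figure for a negative result follows the same logic – it is the negative predictive value.  A short illustrative sketch (again not code from the study, and using the same assumed inputs):

```python
def negative_predictive_value(sensitivity, specificity, prevalence):
    """Proportion of negative test results that are truly negative."""
    true_negatives = specificity * (1 - prevalence)
    false_negatives = (1 - sensitivity) * prevalence
    return true_negatives / (true_negatives + false_negatives)

print(f"NPV ≈ {negative_predictive_value(0.925, 0.979, 0.10):.0%}")  # ≈ 99%
```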

 

Prof Eleanor Riley, Professor of Immunology and Infectious Disease, University of Edinburgh, said:

“Rapid tests for Covid-19 antibodies have two potential uses: to inform individuals of their prior exposure to SARS-CoV-2 (the “have I had it?” test) and/or for population surveillance to determine what proportion of people have previously been infected, to inform public health decision making.  Since the beginning of the pandemic, immunologists have been warning against the use of antibody tests at the level of the individual whilst recognising their potential value for surveillance (https://www.theguardian.com/commentisfree/2020/apr/07/antibody-tests-uk-lockdown-exit-strategy).  This latest study simply reinforces that message.

“The AbC-19 rapid antibody test was initially assessed to pick up 97.7% of past infections (sensitivity) and to give no false positives (100% specific).  However, these tests were performed using serum samples specifically selected to be either pre-pandemic (known negative) or post-infection (known positive).  This does not reflect the real world of someone with an unknown infection history.  This latest study attempts to assess the real world reliability of these tests using samples collected from key workers during the pandemic (some of whom had had a positive swab test for Covid-19, some of whom had symptoms that might have been Covid-19 and many who had no evidence of infection) and from a pre-pandemic blood donor pool (all assumed to be uninfected).  Using this sample set, the assay is estimated to have a sensitivity of 92.5% and specificity of 97.5%.

“Whilst this still sounds quite good, when many hundreds of thousands of people are tested, the number of inaccurate results can start to stack up quite quickly and any individual result has a significant chance of being wrong.  Lack of specificity is the most worrying – if the test tells you that you have antibodies when in fact you don’t, there is a risk that you relax your guard, assuming you are immune when you aren’t.  Lack of sensitivity – the test tells you that you don’t have antibodies when in fact you do – is less of an issue in terms of individual risk.

“However, these new data are very useful at a public health level.  If we know how many cases the test is missing, and how many it is wrongly calling positive, we can adjust our population estimates of prior infection accordingly.  As a public health tool, these low-tech, easily deployed and rapid tests could have an important role to play in mapping the pandemic globally as well as in the UK.”

 

Prof Daniel Altmann, Professor of Immunology, Imperial College London, said:

“This BMJ paper reports assessment of the new AbC-19 rapid finger-prick test for antibodies to SARS-CoV-2.  This is one of the rapid, easy to use, pregnancy-test-style assays to determine if you had the virus and now have antibodies.  Findings from antibody tests could have big implications for understanding the state of play both at the individual level (‘did I really have it and am I immune?’) and at a population level, in terms of our case prevalence and policies for issues such as lockdowns and healthcare planning – I understand this particular test has been bought for use just in population surveillance.

“The study reports that the test is less sensitive and less accurate than had been thought.  The researchers estimated antibody levels in nearly 3,000 key workers during June 2020.  In most cases, this might typically have been a few months after infection.  The study reports that the test overestimated the presence of antibodies by about 20%.  Also, many testers found reading the presence or absence of the band subjective, which would clearly be a further problem.

“Does this mean it’s a ‘bad’ test?  Not really – this bears on the fact that the immune response is complex and highly variable.  The hi-tech tests conducted in research labs to evaluate the details of antibody binding to the virus spike antigen don’t translate readily into a one-size-fits-all point of care test that can be easily interpreted.  Past lab studies have painted a complex picture – not all who have been PCR-positive make an antibody response, and in many of those who have made antibody, the ‘half-life’ is about two months, such that levels are disappearing as fast as you try to measure them.  This is all the more so in people who had a milder infection, perhaps never confirmed by PCR.

“Is there still a place then in the armoury for a point of care antibody test such as this?  My answer would be a cautious ‘yes’, but unless the tests can be much improved, any results will need to be interpreted and acted upon with the utmost care.  It’s understandable that we’re desperate for simple, accessible fixes to give us clear, actionable answers, but the immune system is complex and doesn’t always play ball.”

 

Dr Alexander Edwards, Associate Professor in Biomedical Technology, Reading School of Pharmacy, University of Reading, said:

“This is a thorough study and the scale of testing indicates the amount of hard work and thought that has gone into this important research.  The large number of samples tested makes the key findings and data presented statistically valuable.  The strength of this study is that it formally studies different ways to measure the accuracy of antibody tests.

“Overall, the accuracy of a rapid antibody test – the AbC-19 test developed by the UK Rapid Test Consortium – is very carefully measured and reported to be broadly in line with what is expected for this type of test.  No test is perfect, but if you know how accurate your test is, you can still make great use of it.  The main focus of this study, however, isn’t so much the accuracy of this one particular test; instead it shows how you can measure test accuracy in different ways and see different performance.  Whilst this is well understood by makers and users of diagnostics, some people are surprised by this.  Tests often work less well in ‘real world use’ than in initial laboratory evaluation.

“If you take a small set of ‘known positive’ samples selected carefully using swab testing, and compare these with negative pre-pandemic stored samples, you can see excellent accuracy.  However, if you take a large batch of unknown samples, and compare the rapid test with two laboratory antibody tests, you find lower accuracy.  This isn’t at all surprising, but it’s extremely helpful to see the data from comparing these different methods systematically.  Neither method is ideal, but both are dealing with the hard reality that (surprisingly) we just don’t know exactly who has been infected, and who has never encountered the virus.  This has always been a difficult problem for diagnostics – COVID-19 is no different – but the urgency and pressure have inevitably meant some early tests have been taken up with incomplete accuracy data.

“One interesting data set presented is the comparison of antibody detected against two viral components (S and N).  Measuring antibody against both of these remains important yet laborious, so the more data we have the better, to see if any differences in protection or disease severity emerge.

“This study was never intended to show if having antibody protects from infection.  But without this type of study to analyse antibody test accuracy, that vital question will always be hard to answer.

“Antibody tests remain the best way to assess population levels of infection, but we do know levels of antibody can change with time.  Again, this type of data measuring antibody test accuracy is essential to guide use in population surveys.  Rapid tests still offer a very convenient format for testing, but may not pick up as many positive samples as laboratory tests.

“Antibody tests should not be used to try to check individual infection status or provide any kind of ‘immunity passport’ – this study also reinforces this point.  This is for two reasons: first, test accuracy is still not high enough and there still appear to be too many false positives; and second, we don’t know how well, or for how long, antibody responses protect from reinfection.

“As with any carefully designed study, there are some limitations, clearly highlighted by the authors, which are inevitable given their specific aim.  The rapid tests were performed in a regulated diagnostic laboratory by trained expert users.  This means the accuracy of the test shown by this study does not tell us if the test will perform the same if used in the field by less experienced users.  Certainly we don’t know if the test would be as accurate if used at home by members of the public.

“The study does show that test results from a few samples are not very clear.  These were mostly samples with lower levels of antibody.  Anyone who has used a home pregnancy test will be aware that very faint lines can be hard to interpret.  For nearly 1 in 20 tests, two people checking the lines came to different conclusions.  This is what is expected for this ‘lateral flow’ test technology: it is very portable and inexpensive to mass-produce, but is not expected to detect the lowest antibody levels.  We might expect even more errors in reading results if the test were used by less expert users.  However, ‘test readers’ are available to record and standardise the test results, avoiding this problem – often making use of smartphone cameras.

“With so much talk of vaccines, it’s also worth mentioning that it remains vital to measure antibody levels before and after vaccination, so that we can tell how immunity arising from infection differs from immunity driven by vaccination.  Again, this vital research can only be done if we know how accurate our antibody tests are.”

 

 

‘Accuracy of UK Rapid Test Consortium (UK-RTC) “AbC-19 Rapid Test” for detection of previous SARS-CoV-2 infection in key workers: test accuracy study’ by Ranya Mulchandani et al. was published in the BMJ at 00:01 UK time on Thursday 12 November 2020.

DOI: 10.1136/bmj.m4262

 

 

Declared interests

Prof Sylvia Richardson: “Nothing to declare.”

Prof Eleanor Riley: “Eleanor Riley is a member of the UKRI Covid-19 research taskforce and the UK Vaccines Network.”

Prof Daniel Altmann: “No conflicts.”

Dr Alexander Edwards: “I am a shareholder and co-founder of a diagnostic technology company (Capillary Film Technology Ltd), developing antibody testing technology.  We don’t yet make or sell in vitro diagnostic products and none of our technology or products is linked to this study.  We are developing experimental COVID-19 tests to explore if our technology could be useful for this infection.”
