A study, published in BMJ Evidence-Based Medicine, reports the use of ‘spin’ (defined as the use of reporting strategies to suggest a treatment is beneficial despite a statistically non-significant primary outcome, or to distract the reader from non-significant results) in 49% of 116 Randomised Controlled Trial (RCT) abstracts published in psychiatry and psychology journals.
Prof Michael Sharpe, Professor of Psychological Medicine, University of Oxford, said:
“This interesting and well conducted study of the reporting of clinical trials in a selection of specialist psychiatry and psychology journals found that in more than half of the papers with a non-statistically significant primary end point, the summary was misleading with a tendency to inaccurately conclude that the treatment being studied works.
“Experience strongly suggests that this problem is widespread and not specific to the journals, or indeed to the areas of research, studied here, and that it may be even more pronounced in press releases than in the papers themselves.
“These findings raise a major concern, especially as readers may draw conclusions on the basis of the abstract alone, without critically appraising the full paper. Not actually reading the paper is a major and increasing problem, even in academic circles, as everyone is deluged with information. This study tells us that trying to cope with that deluge by just reading the abstract may be a mistake.
“Authors, peer reviewers and journal editors all need to pay more attention to the accuracy of titles and abstracts as well as the main report. It should also lead us to consider if we simply have too much information and need fewer, but better research studies and indeed scientific journals.”
Prof Kevin McConway, Emeritus Professor of Applied Statistics, The Open University, said:
“I found it pretty depressing to read this piece of research, and what’s even more depressing is that the findings aren’t at all surprising to me. Similar studies of ‘spin’ in medical research papers have been carried out in a range of specialist areas, including cancers, rheumatology, and various types of surgery, as well as more generally covering a range of medical specialties in a single study. Generally, like this new study, they found that ‘spin’ was present in alarmingly large proportions of the papers that were investigated. This new study is carefully done, and carefully reports that it can’t tell us anything about the general position on clinical trials in psychiatry and psychology (because it considered only papers in six top journals) or about research that did not use randomised clinical trials. But all the other research on ‘spin’ indicates to me that there’s nothing special about psychiatry or psychology in terms of the distortions that were found. A systematic review of ‘spin’ in the biomedical research literature, published in 2017*, found considerable levels of the same kind of spin as in this new study – misleading reporting, in the paper abstract, of primary outcomes that were not statistically significant. It arose in, on average, about 60% of the papers covered by that review, very similar to the level found in the new research.
“How much does this matter? Do misleading statements in the abstracts of research papers actually affect what clinicians do? We’ve got to be a bit careful on that. This research looked only at whether the abstract of a paper was misleading about the findings of the research. If clinicians read more than just the abstract, that might not matter. But the researchers refer to two different previous studies indicating that in many cases, clinicians do read only the abstract. Again that might not matter, if clinicians are reading the abstracts only to decide what is relevant to their practice, and if they read more than just the abstract of research papers that are really relevant to them. After all, a huge amount of new medical research appears every week and clinicians need to be selective to keep up with what’s going on. Here, there is less evidence on what happens. The researchers on this new study do quote two pieces of relevant evidence (and I don’t know of any more). In one study, doctors were given two abstracts to read, one with and one without ‘spin’, and they were more likely to rate the treatment on the ‘spun’ abstract as beneficial and want to read the full paper. That could be distorting, if the ‘spun’ paper looked more attractive simply because of the spin. In the other study, though, no such effect was found. So it’s unclear to what extent ‘spin’ in the abstract does actually affect clinical practice.
“But a warning from the 2017 systematic review is important. That looked at more types of ‘spin’ than in the new study, including ‘spin’ in the main text of the research papers. And it found that, broadly speaking, ‘spin’ was just as prevalent in the main text as it was in the abstract. So, assuming ‘spin’ might affect clinical practice, it’s no good just advising clinicians to read the whole paper, for things relevant to them. We need to be sure that clinicians can spot ‘spin’ when they see it.
“Of course, it would be best if the ‘spin’ just wasn’t there at all. But that’s going to be difficult to achieve, given the pressures on researchers to publish research, ideally in leading journals. Consciously or unconsciously making the results seem rather ‘better’ than they actually were is an obvious reaction to that pressure. The researchers on this new study exhort journal editors and peer reviewers to be vigilant for spin. But given the huge numbers of manuscripts submitted to the leading journals, and the competition between journals to get articles published quickly, it’s not always easy for editors and peer reviewers to dedicate enough time and energy to get these decisions right in every case. Quite often the decision to reject a paper, for a particular journal, is taken on the basis of the abstract alone. Researchers know that, and that may increase the pressure to make the abstract impressive enough to get beyond that stage. This problem of ‘spin’ is not going to be easy to fix.
“When it comes to informing the public, and not just doctors, about research results, even more pressures come in. There’s good research** on how different types of ‘spin’ can appear in press releases, intended to inform journalists, and through them the public, about research findings. The level of ‘spin’ there was considerable, though it didn’t appear so often as in this new research – but the researchers looking at press releases were concerned only with exaggerations in the press release that went beyond what was in the research paper, which, it appears from this new research and similar studies, could have been exaggerated already. We’ve all got to be careful out there!”
* Chiu K, Grundy Q, Bero L (2017) “Spin” in published biomedical literature: A methodological systematic review, PLOS Biology, https://doi.org/10.1371/journal.pbio.2002173
** For example, Sumner P. et al. (2014) The association between exaggeration in health related science news and academic press releases: retrospective observational study. BMJ 2014;349:g7015.
Prof David Curtis, Honorary Professor, UCL Genetics Institute, said:
“This study of papers published in top psychiatry journals shows something which is perhaps not particularly surprising – that authors tend to impart a positive spin to their papers. The abstract of the paper is freely available for everybody to see and indeed most people will only read the abstract without looking at the detailed results and discussion, which may be more objective. I doubt that this applies only to psychiatry journals. My bet would be that they would have obtained similar findings if they had studied papers published in general medical journals. Although they are right to say that reviewers should try to make sure that abstracts more accurately reflect the content of the paper, I’m not sure that this is really such a serious problem. Although doctors may look at the abstracts of papers, I’m not really sure that they use these to make important clinical decisions. When a doctor is trying to decide the right thing to do, or if a committee is drawing up guidelines for clinical practice, they look much more carefully at the detailed contents of the paper, not just the abstract. The other thing to bear in mind here is that the study looks at a very specific set of papers – the ones which are published in top-ranking journals but which have negative results. Rightly or wrongly, most papers with negative results will not be published in these journals, so one is looking at an unrepresentative sample. Yes, it would be nice if the abstract reflected the content of the paper in an unbiased way, but we’ve all learned to take abstracts with a large pinch of salt, so overall I do not find this report to be especially worrying.”
‘Evaluation of spin in abstracts of papers in psychiatry and psychology journals’ by Samuel Jellison et al. was published in BMJ Evidence-Based Medicine at 23:30 UK time on Monday 5th August.
Prof Michael Sharpe: I have published a number of clinical trials. I have no other conflicts.
Prof Kevin McConway: Prof McConway is a member of the SMC Advisory Committee, but his quote above is in his capacity as a professional statistician.
Prof David Curtis: No conflicts of interest.