This is a guest post by Fiona Lethbridge, Senior Press Officer at the Science Media Centre.
‘E-cigs are just as bad for your heart as smoking fags as they damage key blood vessels, say experts’.
‘Mediterranean diet better for the heart than taking statins, major study suggests’.
‘Red Bull gives you better mental health? Additive in Red Bull ‘EASES’ the symptoms of psychosis’.
‘How a glass of red wine every night could cut risk of diabetes’.
‘Another reason to ditch Diet Coke! Low-calorie sweeteners ‘could trigger deadly diabetes’.
‘Giving babies Calpol could raise risk of teenage asthma’.
These are just some recent headlines from the national news media. Some even made the front page. They are also all about very, very preliminary research that hasn’t yet been through the normal checks and balances of science – neither peer reviewed nor published.
They are stories from scientific conferences – international meetings where scientists get together to discuss their work, seek feedback from their peers, and forge collaborations. Early-career scientists are often encouraged to attend conferences as part of their training and to present their work to senior colleagues from across the world. They are undoubtedly important and useful events for scientists and for science.
But are they fit for public consumption? Lots of conferences employ press officers to do PR and secure media coverage for the conference. And journalists are given that rare opportunity to escape their desks for a few days to go and find stories (and, having been allowed to go, must get some). But the research being presented at conferences has not been through peer review – it hasn’t been checked for errors, nor had its stats scrutinised, nor been given the once-over by an experienced scientist in the field. (There are some rare exceptions where research is presented at a conference and simultaneously published in a journal, so it has been through full peer review – those exceptions are not what I’m talking about here.)
We know peer review isn’t perfect, of course – bad science does make it through to journal publication, and retractions do still have to be made. A quick scan through a handful of SMC Roundups will show there are plenty of limitations and caveats, and sometimes serious flaws, in peer-reviewed papers. But as the saying goes, peer review, like democracy, is the least bad system we have. It exists for good reason and represents the self-correcting nature of science. Critically, peer review often has the effect of toning down the authors’ claims. Conference talks are science that hasn’t yet been corrected – like unmarked homework, something you probably wouldn’t submit for your exam. And – quite rightly, given what conferences are for – the research often comes from students and junior researchers. I gave a talk at a conference as a PhD student – I quickly finished making the graphs on the plane over, and I never published the work. It’s widely believed that most research presented at conferences never makes it to publication.
The science and health journalists in the UK are good and responsible, and will often ask us to gather comments from senior scientists about new research being presented at conferences – which we are of course very happy to do. But those scientists often say how difficult it is to provide a decent analysis of the work because there is so little information to go on. Some get exasperated. Unlike with journal publications – where the full paper (methods, statistical analyses, figures, tables etc.) is available, often along with supplementary material giving more detail about how the work was carried out – the conference research will consist of a poster presentation or a short abstract, very light on detail. Scientists are often reluctant to comment for this reason – we have to persuade them that without their comments journalists will have only the short abstract or poster to go on, and no option other than to write it up uncritically. What journalists want is a third-party scientist to say whether the research is any good or not and what its implications are – a very fair ask, but one scientists are simply unable to deliver on.
Here are just some examples of what scientists have said in SMC Roundup quotes about press-released conference abstracts:
“This research is unpublished, so has not been through peer review, and only a small amount of information is available; for example we don’t know about the quality of the studies included… it’s difficult to draw any conclusions at this stage”;
“It really is a mug’s game trying to make a careful assessment of a study like this, where no full research paper is yet available, and the study has not yet gone through full peer review by other experts in the field”;
“There is not enough information presented here to be at all sure that the reported findings are conclusive. We can’t tell whether this research is of a high quality or not, or whether the data is solid”.
This last one, about red wine and fertility, was covered prominently in five national news outlets.
Prof Sir David Spiegelhalter blogged about one example he was particularly upset about – a conference poster linking pyrethroid pesticides to autism. In the end this one didn’t make the news – it was widely press released, third-party experts were highly critical of it (of both the details that were given and those that were missing), and UK journalists did not write it up. A good outcome, and more evidence that journalists are responsible and spend a lot of time sorting the wheat from the chaff. But are conference proceedings even ripe enough to be picked?
Most conferences have a committee of scientists who review abstracts before the conference programme is put together – but this often involves picking abstracts for novelty or quirkiness, and does not represent a review of the quality of the work (which again would be impossible to do given the scant information presented). Allan Pacey, Professor of Andrology at the University of Sheffield, said, “The role of an abstract review committee for a conference is not the same as the role of an editor of a journal. In deciding which abstracts should or should not be presented at a conference, or which should be given a poster presentation rather than a podium slot, the rule base is very different. The abstract review committee is trying to find a blend of abstracts that might fit the conference theme, or be able to be grouped into sessions to make for an interesting conference that people will want to attend. It may need to give an early career researcher the opportunity to present their early work and get feedback, or there may be a prize session of the best abstract from a particular group (e.g. nurses or PhD students) to encourage them to participate in research. Abstracts may be submitted from researchers working in emerging fields who need the feedback from the conference delegates to know where to take the work next or get some rapid informal peer review. All of this has to be balanced with the abstracts that sound the most promising given more prominence in the programme than the ones which sound more routine. It’s in no-one’s interest to reject abstracts and inhibit people coming to the conference as the collaborations and connections which can be made can lead to something which is much better. None of this is done with media in mind.”
Research presented at conferences is often partially formed and asks more questions than it can answer.
So, what’s the solution? How can we help the public judge how much confidence to place in such early, unfinished work? In my ideal world, work at such a preliminary stage wouldn’t be press released at all, and we would all wait until journal publication. If these abstracts and posters are good, they will go on to be published in journals and will at that point get the media coverage they deserve. But I realise this system has existed for years, so unfortunately that seems unlikely to change.
The next best thing is to encourage press officers and journalists to report conference research responsibly – and to remind scientists and students that their very early work could end up in the news. At the very least, conferences shouldn’t be churning out simple ‘A causes B’ stories based on data which might never see the light of day in a peer-reviewed journal. It would be great if it weren’t just sexy, quirky stories that got picked, but rather the more involved research that is of importance to the field and reflects the direction of travel of research in the area. Good conference press officers (and there are plenty of those) will help ensure their press releases are cautious, knowing they are publicising work which may not stand the test of science. But we still get plenty of ‘A causes B’ stories from conferences, and that risks misinforming the public, who may make behavioural or lifestyle changes based on what they read.
And good science journalists (and there are plenty of those too) will recognise that posters and abstracts presented at conferences are merely talking points and discussions of work in progress, rather than anything the public can rely on to inform their lifestyle choices. My ideal world needn’t mean an end to media coverage of conferences – if journalists could spend two or three days with scientists at the conference and get to the bottom of a detailed story or new field of research, that would be fascinating. But the pressures on news journalists make that near impossible, and more often than not abstracts are turned into stories without much further investigation – in many cases without the journalist attending the conference at all.
Recent evidence shows scientists are still held in high regard by the public, with no sign that they’ve had enough of experts – but to maintain these high levels of trust, scientists need to think about what the public might do with findings that reach the news headlines. We already know this matters: there is evidence, for example, that following negative media stories about statins, fewer prescriptions than normal were picked up. I would like scientists to think twice about whether their work is ready to hit the news, and to consider waiting a few months until their findings are published.
The conference problem is one of the reasons we developed the labelling system for press releases – with the ‘not peer-reviewed’ label designed specifically for press releases on conference proceedings. We developed this system after being asked to in a report by the Academy of Medical Sciences, whose survey found that only 37% of the public said they trusted evidence from medical science. We don’t know why it’s so low, but could a contributing factor be confusion around the claim and counter-claim they read in the news (e.g. statins safe then not safe, several butter U-turns, the safety and the danger of HRT…)? This noise is not just the fault of conferences, of course – even published science is iterative and can be contradictory. A newly published tiny observational study one day might have the opposite finding to a newly published huge meta-analysis of RCTs the following week, and those could be covered with equal prominence. But at least those studies have been finished, peer-reviewed and accepted for publication by a quality journal.
Some journalists may refuse to cover work that hasn’t been peer reviewed. But if journalists do cover these stories, ideally it would be clearly signposted to the public that the science is much more preliminary than they might think. In reality I’m not sure how much information “presented at a conference” gives to readers – better still would be to state what that means: that the work is unchecked and unpublished. In the same way that many scientific organisations are agreeing that preprints should not be press released because they are not yet fit for public consumption, similar considerations should apply to conference proceedings, which are at an even earlier stage.
Conferences are vital for the scientific community and could be a great place for journalists to get new leads, to delve deep into a particular field, and to make contacts. But we should all want science coming out of conferences to be treated more cautiously than peer-reviewed, published research. If not, we’re doing a disservice to the public struggling to navigate what to eat, drink, and take. A conference poster can never give that sort of advice.