This is a joint post by Fiona Fox, Chief Executive, and Fiona Lethbridge, Senior Press Officer at the Science Media Centre.
At a recent talk about the SMC an audience member asked me which one thing I would change about the media’s coverage of science. Now buy me a glass of wine and I could probably list 10–20 things that would look very different in my ideal world. Forced to choose just one, however, my answer focused on media coverage of new studies: I would change it so that the public could better understand which studies are significant and which are less so. Too often science which is at a very early stage – like a conference abstract or preclinical trials in mice – is reported with the same prominence and excitement as large studies conducted in humans and published in top journals. And all too often we see a single observational study, which can never show that X causes Y, reported as if a link is proven and awarded the same front page prominence as a gold standard RCT or meta-analysis.
Pundits wearily remind us that the public are sick of the contradictory claims that litter the headlines. Statins, HRT and screening are curing us one week and killing us the next. That reflects how hard it is for the scientifically untrained public to judge which new findings can be relied on when making important decisions about health. Not entirely surprising, then, that when the Academy of Medical Sciences (AMS) decided to tackle the issue of how the public can assess medical evidence, their survey revealed that only 37% of the public said they trusted evidence from medical research. There may be plenty of good reasons for that, but the figure stands as a stark message that medical science needs to up its game.
With that in mind the AMS report recommended a new labelling system for medical research press releases that will encourage the research community to communicate new findings in an accurate and measured way. The Science Media Centre was invited to develop the labelling system and we accepted the challenge with enthusiasm. As with all 30-plus recommendations in this important report – ranging from involving patients in research and improving patient information leaflets for medicines to better training for researchers in statistics and research methods – this new system alone will not fix anything. But there is a chance it will act as a kind of anti-hype device, with scientists, press officers and journalists all ensuring that the claims made in the press release match the labels at the top and are proportionate to the stage of the science. Of course, if a conference abstract describes something quirky or surprising it’s still likely to pique the interest of news editors; but given that most abstracts never make it to publication, responsible scientists, press officers and journalists should all want that science to be covered more cautiously than published findings.
The labelling system will enable institutions press-releasing their findings to show journalists at a glance what kind of experiment took place, what stage the research is at and whether it has been peer reviewed. The hope is that the system encourages the good practice we already see in many press releases and helps science reporters to see instantly information which will influence how they report new findings – and perhaps even helps them stop excitable editors from splashing on preliminary or limited studies. It could also act as a tool for press officers if they need to push back against eager scientists who may be over-claiming for their research.
How does the system work? There will usually be three (and sometimes just two) labels at the top of press releases on health and medical science – either just above or just below the press release headline, and clearly visible so journalists can see them quickly without having to search for them in the body of the press release.
The first label will be either ‘peer-reviewed’ or ‘not peer-reviewed’. ‘Peer-reviewed’ means the whole study has been through independent, external review by experts as part of a journal publication process. The ‘not peer-reviewed’ label applies to anything else – including posters presented at scientific conferences, and opinion pieces.
The second label gives the type of study – if it’s a systematic review, randomised controlled trial, observational study, literature review, case study or opinion piece, for example.
The third label shows whether the study was in people, animals, human embryos or cells. If the second label on a press release shows the study was a literature review or opinion piece (i.e. not presenting any new data), it doesn’t get a third label.
That’s it! The rest of the press release will be unchanged, and the labels won’t be obstructive and won’t change the email subject line. The three (or two) labels together should give journalists at a glance an idea of the state and stage of the research, whether the findings indicate correlation or causation, its relevance (or otherwise) to humans, and whether it’s been through the normal checks and balances of the scientific process.
The labelling system isn’t a kitemark. The existence of labels on a press release is not a marker of a good-quality study or press release – that is beyond the scope of the project. Nor is the labelling system a value judgment on the robustness or quality of the paper in question. It is an objective and factual description of the stage of the science, pure and simple. The labels aren’t intended to be a hierarchy – we all know there will be excellent observational studies and poor RCTs. But we should all expect a non-peer-reviewed conference poster about an observational study to be treated differently from a peer-reviewed meta-analysis of RCTs; if it’s not, we’re doing a disservice to the public trying to navigate where the best evidence lies.
Many will point out that the best scientists, press officers and journalists are already acting responsibly in the way they convey new findings to the wider public and that is happily very true. But we believe the new labelling system encourages that good practice where it exists, helps implement it where it doesn’t, and sends the message that responsible and measured reporting of medical research findings is something that medical science values.
The SMC has now run a pilot of the labelling system with several research press teams and discussed it with many of the UK’s leading science journalists, and we are delighted to say that the system is popular. Press officers report that it is easy to use and a helpful nudge towards making these aspects of a press release very clear, and journalists say that it will help them to find information more quickly – crucial in time-poor newsrooms. Given this positive feedback, and the fact that even the less enthusiastic press officers say they cannot see any harm from the system, the SMC is delighted to announce that it will begin to be rolled out in science press offices from the beginning of July this year.
What some are saying about the labelling system:
Dr Richard Horton, Editor of The Lancet, said: “It’s a no brainer.”
Prof Sir David Spiegelhalter, Winton Professor for the Public Understanding of Risk at the University of Cambridge, said: “Science follows a steady path from early exploratory work, say on mice, right through to systematic evidence reviews that can directly inform policy. I hope that these labels can clarify the stage of a study, and so help prevent the sort of exaggerated claims that might make people suspicious of science stories.”
Vicky Allen, Science Correspondent at the Daily Mail, said: “The new press release labelling seems like a great common-sense idea.”
Martin Bagot, Health and Science Correspondent at the Mirror, said: “Can only help – anything that makes things clearer sounds good.”
Josh Gabbatiss, Science correspondent at the Independent, said: “It looks great to me, the labels are clear and I’m all for a system which keeps press releases and news articles proportionate to the science.”
Tim Mayo, Press Officer at the University of Reading, said: “The labelling system has been a useful way to highlight some of the key features of scientific studies, both to journalists when used in press releases, and in our own reporting of research on our website. Our academics have been happy to adopt the labelling system and it has helped us to have discussions with them about how we handle research communications with integrity. It has also been helpful for discussing research communications more widely, such as how we plan for non-published research findings. We have found that the labelling has been particularly useful when working with general news reporters without a science specialism, who might not be as familiar with scientific research papers or journals.”