
Unknown unknowns: Is there a selection bias against null results?

11 Sep, 2014


As a large funder of biomedical research, the Wellcome Trust is keen to ensure that the findings of that research are widely and openly shared. There is a body of evidence that indicates a bias against writing up and publishing certain types of result. Jonathon Kram and Adam Dinsmore, from the Wellcome Trust evaluation team, discuss why this could create a barrier to scientific progress…

There is a lot of pressure on academic researchers. Rightly or wrongly, it is widely held that researchers must publish prodigiously and secure funding consistently to advance their careers and that their research outputs must achieve high ‘impact’ to be of value.

An unintended consequence of this ‘impact culture’ may be a publication bias against research findings whose implications are not immediately obvious, or which may be deemed less novel, interesting, or eye-catching. Some research suggests that this bias may extend to research that yields negative and/or null findings.

There are a few definitions of ‘negative finding’ currently in use, but the idea of statistical significance is central to the concept. A negative result fails to reject the null hypothesis – the proposition that no significant effect is present – within the bounds of an experiment and to a given level of statistical rigour. This can occur when a treatment is found to have no greater efficacy than a placebo, or when no difference is found between two distinct groups of people (e.g. conservatives and liberals) on some variable of interest.
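To make the idea concrete, here is a minimal sketch (not from the article) of how a null result arises in practice. It uses a simple two-sample permutation test, with both groups drawn from the same distribution, so the "treatment" has no real effect and the test should typically fail to reach significance. All names and parameters here are illustrative assumptions.

```python
# Illustrative sketch: a negative/null result fails to reject the null
# hypothesis. We compare two groups drawn from the SAME distribution with a
# two-sample permutation test, so a large p-value is expected.
import random
import statistics

def permutation_test(a, b, n_perm=5000, seed=0):
    """Return a two-sided p-value for the observed difference in group means."""
    rng = random.Random(seed)
    observed = abs(statistics.mean(a) - statistics.mean(b))
    pooled = list(a) + list(b)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        perm_a, perm_b = pooled[:len(a)], pooled[len(a):]
        # Count permutations at least as extreme as the observed difference.
        if abs(statistics.mean(perm_a) - statistics.mean(perm_b)) >= observed:
            hits += 1
    return hits / n_perm

rng = random.Random(42)
# Both groups sampled from the same normal distribution: the "treatment"
# has no genuine effect, mirroring a trial that finds no benefit over placebo.
placebo = [rng.gauss(0.0, 1.0) for _ in range(30)]
treatment = [rng.gauss(0.0, 1.0) for _ in range(30)]

p = permutation_test(placebo, treatment)
print(f"p-value = {p:.3f}")  # usually well above 0.05 here: a null result
```

A study reporting this outcome has still produced information – the absence of an effect – which is exactly the kind of finding the publication bias discussed below tends to filter out.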

That said, the term is also used to refer to findings of slightly different but related kinds, including:

  • No correlation found

  • Incremental improvements: minor refinements to previously reported hypotheses (e.g. where a previous result can only be replicated in certain conditions).

  • Refutations: results that defy expectation, whether that’s the previous scientific consensus or a researcher’s original intent.

A bias against sharing statistically insignificant results (alongside incremental refinements and refutations) could have detrimental effects on the progression of science and our knowledge. Researchers could waste time pursuing hypotheses which have already been disconfirmed. Clinicians could make treatment decisions based on incomplete evidence. The time and energy of human participants could be wasted in research studies which are never subsequently reported.

Worryingly, there is some evidence that this is already happening.

Hopewell et al (2007) found that research papers which reported statistically significant results were approximately four times as likely to be published as their negative counterparts. The work of Daniele Fanelli (2012) suggests that the proportion of papers publishing negative findings may be decreasing over time: in 2007 just 14% of ISI-indexed papers that claimed to have tested a hypothesis reported a negative finding, down from 30% in 1990. More recently, Franco et al (2014) found that negative findings are less likely to be written up for publication than more positive results.

Several attempts have been made to encourage the publication of negative findings. BioMed Central’s Journal of Negative Results in BioMedicine has, since 2002, published articles which either report negative findings or discuss their place in science, while the Open Access journals PLOS ONE and PeerJ encourage the submission of any sound scientific results, including negative findings.

Away from conventional publication, the online datahub FigShare actively encourages the upload of datasets containing null and negative findings, without requiring that the researchers responsible for them prepare a full manuscript for submission.

Credit: Wellcome Library

As a research funder, we are keen that all research findings be accessible and presented to the world, including those that describe a breakthrough, disprove a theory or demonstrate a null result. Everyone in research has an opinion on what makes “good science”, from the researchers through to the readers of a paper. Researchers, journal editors and peer-reviewers all make value judgments about what’s worth telling the rest of the world about. However, if there are processes in play that inadvertently (or advertently) discriminate against the availability of a certain type of result and its data, we risk needless repetition of research and, potentially worse, the proliferation of apparent knowledge that may be wrong.

Through the open access movement and the potential of digital technologies, we have the opportunity to make all the findings of research accessible and to reduce research waste. If the studies cited above accurately reflect a bias against negative findings in the biomedical literature, then it is the duty of all stakeholders in the delivery of science and knowledge to engage with the issue and explore ways of correcting it.

There is no benchmark against which to measure the progress of science, but there are hints that our attitude towards negative findings and their unloved siblings may be causing a lot of wasted effort. It is only by building a stronger evidence base around negative findings, and exploring the factors that influence publication bias, that we can gauge the severity of the problem and take steps towards improving the health of science.

The Wellcome Trust policy position on research involving human participants states that negative results and full disclosure are expected. We are committed to maximising the availability and value of research data.
