Reproducibility is a well-known problem in science: academics and research firms alike persistently struggle to replicate promising pre-clinical data, wasting resources and eroding the credibility of medical research.
Journals, scientists, institutions and pharmaceutical research sponsors could all benefit from tackling reproducibility issues so that science can advance faster and with fewer false starts.
Last year, scientists Dr Gordon Lithgow, Professor Monica Driscoll and Patrick Phillips explained in a paper for the journal Nature how replicating some of their medical research took four years. Their study was the first to clearly show that a drug-like molecule could extend the lifespan of animals; yet, although they confirmed their results across multiple experiments using roundworms, no other lab could replicate the study, and to this day they do not know why.
The mystery around reproducing medical research can have major consequences, with resources wasted on doomed development projects and the integrity of basic research brought into question. So, what can be done to tackle medical research result replication issues?
Research flaws: the human element
Reproducing medical research has never been easy, with numerous contributing factors that can impact results. These factors include technical difficulties, problematic statistical analysis, poor controls, selective reporting, and researchers not providing the necessary detail in the materials and methods sections of papers that would allow replication in other labs.
Sometimes results are initially found to be repeatable, but then fail to be validated in additional models, which are required to progress to the next stage of drug development.
Perhaps the biggest problem caused by medical test replication issues is the waste of time and money that could otherwise be spent furthering medical development.
“One of the main concerns regarding the issue of reproducibility in science is the large amount of wasted resources that it can cause,” says GlobalData medical device analyst and former medical researcher Alison Casey. “As funding and time are limited for scientists in both academia and industry, studies following up on erroneous information will invariably consume resources that are needed to drive forward other, more promising projects.”
Result replication issues also breed fear in the scientific research industry: no one wants their paper or work to be discredited, so scientists have little incentive to highlight that their work may be difficult to reproduce.
Unfortunately, this can result in poor communication and more time-wasting. These replication fears are exacerbated by the ‘publish or perish’ incentive structure of scientific research, which creates a bias for scientists to say they have found positive results and sometimes present sensationalised findings.
What needs to be done to improve validation of scientific results?
Results from a survey published by the journal Nature in 2016 indicated that improvements in experimental design, statistical methods and mentorship could help tackle the reproducibility problem.
Challenging the ‘publish or perish’ structure in scientific research could also allow scientists to be more open and transparent with their methods and results, while reducing the pressure to complete studies as quickly as possible could help them obtain more accurate findings. Journals tend to favour papers containing positive or novel results; if this preference became more neutral, researchers would have greater motivation to publish null and replication results alongside successful new studies.
“Changing the way in which scientists are judged, as well as the factors allowing them to progress in their careers, could have a strong and beneficial impact on the trustworthiness of science,” Casey says.
Many journals, including Nature, are already trying to improve transparency and encourage peer review by taking steps such as requiring that all large data sets be made publicly available as supplementary tables or through repository sites. The requirements for the methods sections of academic papers are also becoming more standardised, with clarity on specific details such as reagents, animal strains and antibodies more frequently requested.
Is the problem being sensationalised?
While research suggests that more than two-thirds of scientists have tried and failed to reproduce the experiments of fellow researchers, it’s also true that the issue may be prone to sensationalised reporting.
“It should be noted that the shocking statistics frequently cited regarding the reproducibility crisis often focus on exciting, landmark studies describing completely new approaches,” says Casey. “These studies do not have the backing of numerous complementary studies, which tend to support more progressive advances, and thus may be less likely to hold up to further scrutiny.
“Some scientists have also stated that ironically the narrative of a ‘replication crisis’ is itself sensationalised and not necessarily supported or justified. It has been pointed out that two of the biggest publications indicative of a replication crisis display a worrisome lack of transparency, whereby the authors refused to reveal which high-profile studies they failed to replicate.”
Perhaps the reproducibility issue has been inflated due to a lack of evidence. But, even if that is the case, the reported failings of research into the ‘replication crisis’ may unwittingly reinforce the wider point. Pressure to publish, rushed work and a lack of transparency from studies and journals all contribute to flaws in basic research.
Academic peer review is a fundamental practice for scrutinising research that may underlie multi-million dollar investments in the life sciences sector, and to some extent the flags raised around reproducibility – as well as the concerns raised about that criticism – are evidence that this safeguard is doing its job.
Nevertheless, the broader goal of holding pre-clinical research to account in a transparent system still needs to be reiterated, and journals need to pay attention to failed studies as well as potential breakthroughs to avoid encouraging biases among scientists. After all, when costly drug development programmes are on the line, bad research is almost certainly more damaging than no research at all.