It is increasingly recognised that many published results in the scientific literature cannot be reproduced. This problem, often referred to as the reproducibility or replication crisis, is a pressing issue that needs to be addressed.

Published replication efforts indicate that when scientists deliberately try to replicate academic findings, only 11%–39% succeed, although some of these failures are likely due to technical difficulties or previously unappreciated facets of what are often complex scientific problems.

Furthermore, models utilised in pre-clinical research are far from perfect and often cannot fully capture the inherent heterogeneity of any given disease. As such, variations across studies using different pre-clinical models are to be expected and sometimes provide useful information regarding the underlying biology of a disease, which can be capitalised upon in future work.

Now that the reproducibility problem has been highlighted, scientists are working together and debating methods to improve the reliability and integrity of scientific research. What has yet to be established is the impact this issue has had on patient care.

One obvious concern regarding the reproducibility problem is that it wastes large amounts of resources. Exciting pre-clinical findings, especially those published in high-impact journals, are often cited regardless of whether the results have been validated. This can lead to the initiation of new projects and follow-up studies that will ultimately not be successful but still consume much-needed time and resources.

Another significant issue is the disillusionment that some academic and industry scientists are developing regarding the scientific literature. Many are frustrated and remain sceptical about new discoveries unless they can successfully test the findings in their own systems.

There are also high-profile reports from companies such as Amgen and Bayer HealthCare stating that, in the vast majority of cases, published data could not be confirmed by their own in-house validation efforts. Instead, inconsistencies often arise that either prolong the validation process or result in the termination of projects. It has also been suggested that the poor validity of candidate drug targets contributes to the low success rate of Phase II clinical trials, especially in areas such as oncology.

Many have stated that open-access research and a loosening of the traditional 'publish or perish' incentive structure represent logical first steps towards alleviating the reproducibility problem. It is generally acknowledged that the constant pressure to publish in high-impact journals can cause scientists to sensationalise their findings in an attempt to make them sound more interesting.

We live in a digital world where the ways in which we communicate are constantly changing and information can be shared ever more readily. It will be interesting to see how science adapts to these changes and whether new technologies can help improve the review, validation, and prioritisation of scientific data.