Much of the narrative on pharmaceutical drug development is dominated by discussions about the complexity and stringency of the regulatory submissions and approvals process, especially in regions like Europe and the US. As such, it's tempting to assume that results of clinical trials are so closely monitored that the available data on approved therapeutics is implicitly trustworthy.
However, as high-profile recalls and health scandals over the years have proved, even the most demanding regulatory system offers no guarantees. Alongside traditional regulation, systematic reviews and meta-analyses represent a key pillar of evidence-based medicine. In essence, these reviews carry out independent critical appraisal of an exhaustive range of published clinical trial data to provide definitive, evidence-based judgements of a treatment's benefits and potential risks.
But as fundamental as systematic reviews are to the integrity of clinical data, and by extension public health, they can only be effective to the extent that clinical data is made available to them. While practices such as trial registration and data sharing have been made mandatory in the US and other countries, a clutch of new studies published in the British Medical Journal (BMJ) at the beginning of 2012 provided a potent reminder that unpublished clinical trial data remains an issue that plagues the drug development world, and a clear obstacle to the systematic review process.
Unreported clinical trial data: a persistent issue
In summing up the findings of the various studies in the January 2012 BMJ review on missing data, a BMJ editorial by Dr Elizabeth Loder and Dr Richard Lehman effectively highlighted the problem that unpublished trials can cause. "What is clear from the linked studies is that past failures to ensure proper regulation and registration of clinical trials, and a current culture of haphazard publication and incomplete data disclosure, make the proper analysis of the harms and benefits of common interventions almost impossible for systematic reviewers," stated the editorial.
"Our patients will have to live with the consequences of these failures for many years to come."
This is a problem that has been encountered first hand by the Cochrane Collaboration, a non-profit systematic review organisation which numbers more than 28,000 members in 100 countries and compiles evidence on a range of therapeutics. These reviews are stored and made available through The Cochrane Library, a massive network of databases.
Dr David Tovey, The Cochrane Library's editor-in-chief, understands that missing data can render systematic reviews redundant or even actively misleading through the absence of pertinent evidence. "It clearly completely undermines [the review] if either there's missing data because it's never been published, or because the trial it was based on was published but the reports didn't include relevant data that could have been included but weren't," he says.
"So when we say, 'we've looked at all the evidence, and the evidence tells us that this is effective or this isn't harmful or this is a trade-off', we can be presenting a very skewed picture if we're unaware of evidence that is somehow missing."
Considering its potential to corrupt medical information passed on to doctors and patients, missing data has proved to be a surprisingly persistent problem in clinical research. Indeed, one of the Cochrane Collaboration's founders, Dr Iain Chalmers, published a report strikingly similar to the current crop of BMJ studies way back in 1990. "Substantial numbers of clinical trials are never reported in print, and among those that are, many are not reported in sufficient detail to enable judgments to be made about the validity of their results," Chalmers wrote at the time.
Why does clinical trial data go unreported?
Some of the reasons for missing trial data are obvious - publication bias (the tendency to publish positive trial results and ignore negative ones) might be the best-known cause of data discrepancies, whether it's drug companies making sure their drug is presented in the best possible light, or academic journals choosing not to publish failed trials because they are unlikely to provoke any significant interest (although Tovey believes that the best journals have now generally stamped out that mindset).
But in a complex world where ethics, data management and commercial considerations collide, explanations will never be cut and dried. Simple inefficiency can be as much to blame as avarice or ambition.
"It isn't just drug company trials that aren't published," states Tovey. "If we think about the largest clinical trial ever conducted, the DEVTA trial, which looked at deworming and vitamin A in children in India - a million children randomised - it's never been published. It finished in 2006 or so. And that wasn't because a drug company was trying to withhold data because it was publicly funded. Researchers move on, they die, somebody else tries to pick it up and can't find all the data - maybe the storage hasn't been good enough."
The costs of unpublished clinical trial data
To the casual observer, the concept of unreported clinical trial data and thwarted systematic reviews might seem wrapped up in the somewhat abstract world of academic debate, but previous experience has shown that unpublished data can lead to huge financial waste, or more importantly, risks to human health.
The Cochrane Collaboration has been heavily involved in the debate over the efficacy of neuraminidase inhibitors, most prominently marketed by Roche as Tamiflu. Although the initial Cochrane review of the drug agreed that it was effective at reducing the risk of serious influenza complications, in 2009 Cochrane's team of review authors received correspondence from a Japanese reader, who pointed out that eight of the ten trials in a key meta-analysis of neuraminidase inhibitors had never been published - a serious gap in the evidence.
"So they started to look much more rigorously and sceptically into this, and what they found was a whole heap of trials that had been conducted but had never been published," explains Tovey.
"I was part of that project, and I would meet the team and they'd say 'we've found ten, 20 unpublished trials,' and then it would become 30 or 40. We ended up with a huge number of trials, most of which had never seen the light of day. Of course, if you only publish the tip of the iceberg, you're getting a much skewed picture of what's going on.
"In the case of neuraminidase inhibitors, billions were spent stockpiling [the] darned things, and we still don't really know if they're effective at doing the most important thing, which is reducing serious complications. Maybe that money could have been spent on something that was actually more effective. I think there are a lot of examples where sub-optimal care was given, and if that's given to enough people there will be fatal consequences; people will be seriously harmed."
Indeed, Tovey mentions other high-profile cases, such as selective serotonin reuptake inhibitors (SSRIs), where possible publication bias exaggerated the drugs' efficacy and played down an alleged heightened risk of juvenile suicide.
"If you think about the Vioxx example [a now-withdrawn anti-inflammatory drug] I think there was clear evidence that the risk of myocardial infarction was seen in the initial data but that was suppressed," Tovey continues.
"Vioxx was fantastically widely prescribed, and therefore probably tens of thousands of people had the consequence of having a heart attack that they probably wouldn't otherwise have had."
Transparency and integrity: solving the problem
Most research experts and systematic reviewers agree that complete transparency of clinical trial data - whether a trial succeeds or fails - is the cornerstone of an optimal, evidence-based method of monitoring drug efficacy. It's a goal that relies on more effective governance of research on a national or even international level, as well as more meaningful enforcement of data sharing regulations. After all, one of the recent BMJ studies revealed that despite the publication of US trial results summaries being made mandatory in 2007, only 22% of trials subject to the requirement had actually reported their results.
Loder and Lehman laid out their recommendations for ensuring data transparency in their BMJ editorial. "This may require the global organisation of a suitable shared database for all raw data from human trials - an obvious next step for the World Health Organization after its excellent work on the International Clinical Trials Registry Platform Search Portal," they wrote.
"Concealment of data should be regarded as the serious ethical breach that it is, and clinical researchers who fail to disclose data should be subject to disciplinary action by professional organisations."
Tovey, meanwhile, offers a powerful argument that failed trials should be published just as consistently as those that are successful. In the case of the Northwick Park Hospital drug trials in the UK in 2006, during which six healthy trial volunteers were left critically ill, some with multiple organ failure, responsible reporting of data at an earlier stage could have prevented a near-fatal disaster.
"That drug had been given to a person before and it had had the same consequence," says Tovey. "Clearly the researcher thought, 'this drug is doomed, I don't need to write this up because it's a very damaging drug.'
"But of course the failure to pass on that knowledge was disastrous down the line. So there should be a compulsion that whatever happens in a trial, that information will be shared appropriately."
Clinical trial transparency is a simple premise which has proved extremely difficult to implement in reality. If the risk of costly and potentially tragic pharmaceutical errors is to be minimised in the future, the consensus seems to be that regulators and international bodies must work together to develop a unified, compulsory system for all trial data to be stored and disseminated for in-depth review.
When data, or the lack of it, has the power over life and death, the reporting of that data to the public becomes a moral imperative. It's only fair that the world is provided with a system that has been built to reflect just that.