At the Pharma Meets AI conference in Barcelona, Spain, in April 2026, discussions highlighted that while AI adoption in drug development is accelerating, gaps in trust and governance are becoming critical barriers to scale. Despite advances in predictive modelling and automation, concerns about quality, bias, and model reliability continue to limit widespread deployment.
Dr. Debarshi Dey, head of data science at Galapagos, emphasised that AI must move beyond experimental use cases and be embedded within real decision-making frameworks. He outlined three key areas where AI is driving impact: prediction, personalisation, and productivity, spanning early response forecasting and adverse-event prediction, biomarker-driven patient selection, and workflow automation. However, in high-stakes environments such as drug discovery and clinical development, even minor inaccuracies can have significant downstream consequences, making trust in AI outputs essential.
A key challenge lies in ensuring that AI models are trained on high-quality, representative datasets. Biases in clinical, genomic, or real-world data can lead to misleading predictions, ultimately compromising decision-making across the pipeline. As a result, there is increasing focus on establishing robust validation processes, a clearly defined context of use, and continuous monitoring of model performance.
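Continuous monitoring of this kind is often operationalised with a drift statistic comparing live inputs or scores against the training distribution. As an illustrative sketch only (the article does not prescribe a method), the population stability index (PSI) is one widely used such statistic; the function name and thresholds below are conventional, not taken from the source:

```python
import math

def population_stability_index(expected, actual, bins=10):
    """Compare two score distributions bucket by bucket.

    Rule-of-thumb reading (illustrative): PSI < 0.1 is stable,
    PSI > 0.25 signals significant drift worth investigating.
    """
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0  # guard against a zero-width range

    def frac(values, i):
        left = lo + i * width
        right = left + width
        # Count values in bucket i; the last bucket is closed on the right.
        n = sum(left <= v < right or (i == bins - 1 and v == hi)
                for v in values)
        return max(n / len(values), 1e-6)  # avoid log(0) for empty buckets

    return sum(
        (frac(actual, i) - frac(expected, i))
        * math.log(frac(actual, i) / frac(expected, i))
        for i in range(bins)
    )

# Identical distributions yield PSI of exactly zero (no drift).
baseline = [i / 100 for i in range(100)]
print(population_stability_index(baseline, baseline))  # prints 0.0
```

In a governed deployment, a check like this would run on a schedule against incoming data, with breaches logged and escalated, which is one concrete form the "continuously governed system" described below can take.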
Regulatory bodies are also evolving their approach, shifting from passive oversight to more active enablement of AI, with an emphasis on auditability, transparency, and reproducibility. This reflects a broader industry shift towards treating AI not as a one-off deployment, but as a continuously governed system.
As AI adoption matures, the ability to build trust through strong governance will be critical in enabling AI's transition from an experimental tool to a core component of decision-making in drug development.


