
AI is increasingly being adopted by sponsors in clinical trials for applications including optimising trial design, identifying suitable patients, and analysing data.
While both the US Food and Drug Administration (FDA) and the European Medicines Agency (EMA) have released guidance on the use of AI in clinical trials, the technology is still leaps and bounds ahead of regulators.
In January 2025, the FDA released a draft guidance titled ‘Considerations for the Use of AI to Support Regulatory Decision-Making for Drug and Biological Products’, which provides recommendations on the use of AI to produce information or data intended to support regulatory decision-making regarding the safety, effectiveness, or quality of drugs.
Meanwhile, the EMA has released a reflection paper on the same topic, entitled ‘The Use of AI in the Medicinal Product Lifecycle’, which discusses how AI and machine learning (ML) systems used in clinical trials should comply with Good Clinical Practice (GCP) guidance from the International Council for Harmonisation (ICH).
The paper also emphasises that if the use of AI/ML carries a high regulatory impact or poses significant patient risk, and the method has not been previously qualified by the EMA for the specific context of use, the system will likely undergo a comprehensive assessment. In such cases, the EMA will require detailed information about its use to be included in the study protocol.
According to the GlobalData report ‘The State of the Biopharmaceutical Industry – 2025’, AI has the potential to significantly reduce pharmaceutical R&D costs by streamlining drug discovery, optimising clinical trials, and minimising costly failures through data-driven predictions and effectiveness assessments.

While the report highlights that AI is more widely adopted in the preclinical setting, 10% of industry experts surveyed believe AI will become a key driver in developing new treatments in both preclinical and clinical trials this year.
GlobalData is the parent company of Pharmaceutical Technology.
Where is AI advancing fastest?
The FDA’s January 2025 guidance is a “great start”, according to Monica Chmielewski, senior counsel and healthcare lawyer with Foley & Lardner LLP. She sees it as a clear sign that regulators are aware of the growing use of the technology in clinical trials. However, she notes that the space is evolving rapidly and regulators will likely always remain slightly behind the pace of AI development.
George El-Helou, pharma analyst for GlobalData Strategic Intelligence, agrees with Chmielewski but has some concerns: “I’d say that this guidance isn’t comprehensive yet. The good thing about it is that it addresses things like data transparency, data integrity and algorithm validation.
“However, there remains a lack of clear, enforceable frameworks governing the use of AI across various aspects of clinical trials, particularly in areas such as trial design and patient recruitment, which are both critical components. Overall, it appears that industry innovation is currently outpacing regulatory developments. While the gap between innovation and regulation has been gradually narrowing, regulatory approaches still tend to be reactive rather than proactive at this stage.”

Orr Inbar, CEO of QuantHealth, an AI company that provides a platform to simulate clinical trials, notes the importance of clarity around generative AI (genAI) applications, which are becoming more heavily adopted.
One area where genAI is being adopted is regulatory compliance, but it is also being used to optimise trial design and interpret data. These are the areas where more guidance is most needed, as they carry significant weight when it comes to seeking approval. Both Inbar and the wider industry are therefore hoping for clarity to ensure compliance in these areas.
El-Helou emphasises the importance of considering data protection/privacy laws: “Companies need to make sure that it is secure and not possible for data to be leaked or sent anywhere it shouldn’t be.”
Foley & Lardner LLP partner Kyle Faget adds that the FDA is trying to address AI usage, but due to the speed at which it is evolving, it is difficult for the agency to keep up.
Faget identifies data bias as a key issue that poses challenges in the absence of regulation. He also emphasises the need for improved management of privacy and data security within the software, describing it as an area of growing concern within the industry.
Another application could be in patient recruitment to ensure the trial population is representative of the larger target population, El-Helou adds.
The analyst comments: “That is one aspect that they need to get right to reduce any bias. It will help to make sure sponsors understand all the adverse events that the drug may have and how effective it is for that target population.”
Faget agrees that patient recruitment is a good application for AI, but one that requires some form of regulation, not only around data collection from participating patients but also where AI supports predictive modelling and real-world evidence (RWE) trials.
Inbar hopes that the FDA will lead the charge on this, working with industry and leading AI consumers and developers to develop the right frameworks in this space. While he hopes it will not come to a misuse of AI in clinical research, Inbar believes that if it does happen, it will accelerate more in-depth regulation of the technology’s application in the research space.
Possible deregulation could put laws into state hands
At the start of his second term, US President Donald Trump signed an Executive Order on “removing barriers to American leadership in AI”. Faget and Chmielewski both agree that this could lead to deregulation in this space, with laws around AI subsequently being left to individual states.

Chmielewski believes the most likely area where states will start regulating will be privacy and security, but says this could create barriers for sponsors down the line.
“It will be a challenge,” Chmielewski admits. “Companies are already dealing with HIPAA on a national level, and there are some individual states like California with their privacy regulations too. While it will be difficult, sponsors are already accustomed to having to address various state laws in the conduct of trials, especially in decentralised clinical trials, where sponsors have to be aware of and comply with individual state laws addressing the use of digital health technologies and telemedicine.”
While companies will be trying to manage compliance on a state-by-state basis, regulations could also diverge on a global scale, creating barriers in later-stage research.
Battling with differing regulations, however, is something the pharma sector is well practised in, especially big pharma companies that run late-stage global studies. That experience could make it easier for sponsors to interpret differing regulations and overcome this barrier, Inbar adds.
“This is a muscle that big pharma has already developed, but now with AI, they will have to reel in the technology set to develop that muscle and work alongside the clinical ones,” says Inbar.
“Digital teams are gaining more prominence because of these tools and the impact AI is having on clinical development.”