May 13, 2026

The use of AI in the life sciences sector and the implications of GDPR

Artificial intelligence (AI) systems are increasingly becoming an effective and valuable tool in the life sciences sector. Their application ranges from the more efficient use of clinical and research data, to accelerating drug discovery and development, supporting innovation in medical devices, and assisting with diagnostics, patient monitoring, and personalised medicine. Used appropriately, AI has the potential to improve outcomes for patients while reducing cost and inefficiency across healthcare and research.

However, the adoption of AI in life sciences is heavily dependent on the availability of large volumes of data. In some cases this data will constitute personal data and, in many cases, special category personal data under the UK and EU GDPR, such as health data and genetic information. This raises significant data protection considerations, including those set out below, that organisations must address at an early stage.

Data Protection considerations under GDPR

Lawful processing

A particular challenge in the life sciences sector is that a significant proportion of the data being processed will fall within the special categories of personal data listed in Article 9 of the UK GDPR. Special category personal data may only be processed where a specific condition under Article 9(2) applies, in addition to a lawful basis under Article 6.

In practice, personal data will often have been collected for a primary purpose, such as the provision of healthcare or participation in a clinical trial, and use in an AI system is likely to be a secondary purpose that was not considered when the personal data was originally collected. Assuming the data remains personal data, it will be necessary to ensure that both a lawful basis and a special category condition apply to the secondary purpose; it may not, for example, be sufficient to rely on explicit consent given for processing connected with the primary purpose.

The revised scientific research condition at Article 9(2)(j) of the UK GDPR may be helpful. However, use is subject to further steps being taken, which are likely to include:

  • Conducting a Data Protection Impact Assessment (DPIA); and
  • Ensuring that appropriate safeguards set out in Article 84C of the UK GDPR are put in place. These safeguards may include measures such as data minimisation, pseudonymisation, strict access controls, and clear governance arrangements.

Other UK GDPR principles

In addition to ensuring that personal data is processed lawfully, the other overarching principles set out in the UK GDPR, as updated by the Data (Use and Access) Act 2025 (DUAA), are highly relevant to the use of AI in the life sciences sector. For example, organisations may need to consider potential risks of output bias to ensure that processing is fair, and to apply stringent security measures, given the elevated information security risks associated with AI systems, in order to comply with the integrity and confidentiality principle. Conducting a DPIA is likely to be essential to ensure that risks are minimised, but ongoing monitoring of AI system use, to ensure that the UK GDPR principles continue to be respected, will also be key.

Automated decision-making 

A further key issue arises where AI is used to support or inform decision-making. In the life sciences context, AI outputs may influence decisions that have a direct and significant impact on individuals, such as medical diagnoses, treatment recommendations, or eligibility for clinical trials. In these scenarios, the regulatory risk is heightened.

Article 22B of the UK GDPR provides that individuals must not be subject to a significant decision based solely on automated processing of their special category health data. The implication for the life sciences sector is that, even in areas where AI can offer the greatest efficiencies and clinical value, outputs will typically need to be reviewed, overseen, and validated by a human decision‑maker exercising independent judgment. This requirement for meaningful human involvement may limit the extent to which AI-driven processes can be fully automated, particularly when compared to other sectors where automated decision-making can be implemented with fewer regulatory constraints.

EU AI Act

Alongside GDPR, the EU AI Act introduces an additional regulatory layer for AI systems. While the EU AI Act will not apply directly in the UK, it has significant extraterritorial reach. UK businesses developing or deploying AI systems may still fall within scope where those systems are placed on the EU market, or where their outputs are used within the EU. As a result, UK life sciences businesses innovating in the AI space will often need to consider compliance with the EU AI Act in parallel with UK regulatory requirements.

The AI Act categorises AI systems by risk, including:

  • prohibited AI practices
  • high-risk AI systems
  • limited-risk AI systems
  • general-purpose AI models

Many AI systems used in the life sciences sector, particularly those used for medical devices, diagnostics, or clinical decision support, are likely to be classified as high-risk, although there are certain exemptions for AI tools for scientific research. High-risk AI systems will be subject to enhanced requirements, including data governance standards, risk management processes, human oversight, record-keeping, and transparency obligations. While the AI Act does not replace GDPR, there is significant overlap in practice, and organisations will need to ensure compliance with both regimes in parallel.

Going forward

At the end of 2025 the UK government introduced its new AI for Science Strategy, recognising that AI is shifting from being an advanced research tool to becoming an active participant in scientific discovery and innovation. It is clear that AI will continue to have a significant impact on sectors such as life sciences, and that government policy will encourage this.

Life sciences organisations may welcome developments in AI and implement these systems to benefit from the efficiencies and innovation they can offer. However, it is crucial that businesses are aware of the relevant regulatory frameworks and take appropriate steps to ensure compliance. In particular, undertaking DPIAs will be vital, both to identify and mitigate risks associated with the use of personal and special category data, and to demonstrate accountability under GDPR when deploying AI systems.
