The use of artificial intelligence in life sciences: recent developments and guiding principles

Software using artificial intelligence (AI) appears to be playing an integral role in medical device and drug development by significantly reducing the time, cost and resources required to bring products to market. In fact, AI and machine learning (ML) may well transform health and social care generally, including in areas such as patient monitoring and doctor assistance.

There is a plethora of discussion and debate in this area on both the risks and benefits of AI. We have recently published a note looking at some of the points all businesses should consider when using AI systems.

In this note, we share some recent developments that seem particularly relevant to the life sciences and biotech industries and provide links to some useful resources.

A quick reminder – what is AI?

We have provided a general definition of AI in a previous note. In summary, AI is usually used as an umbrella term for a range of technologies which typically include some form of ML: the use of computational techniques to create statistical models from (often very) large quantities of data (sometimes called “big data”).

As data-driven systems, AI models produce learnt outputs rather than being pre-programmed to return specific outputs. Continuous learning AI models can, as the name implies, keep learning from new data and make judgments that challenge previous assumptions. In this way, AI can uncover trends that might otherwise remain hidden within the data, making it particularly useful for drug and medical device development, which has historically relied on costly trial and error.

The sheer quantity of data that AI systems can absorb and process allows models to be trained and refined at a scale that has the potential to transform medical device and drug development.

Recent industry developments

It is anticipated that AI will make a significant contribution to the biotech sector – by some estimates, up to USD50bn over the next 10 years.

It was recently announced that biotech company Insilico Medicine has begun mid-stage human trials of a drug designed by AI. Insilico Medicine commented that the drug, INS018_055, was the first entirely “AI-discovered and AI-designed” drug to begin a phase two clinical trial and represented an important milestone for the industry.

Regulatory change and guidance

The AI Act

In an effort to address AI technology generally, the EU has issued a proposal for harmonised rules on AI. In summary, the proposal includes specific legal and regulatory obligations for those manufacturing, importing or distributing AI systems on the EU market. The proposal also sets out certain "blacklisted" practices and identifies "high-risk" systems; the latter is likely to have particular relevance for the life sciences and biotech industries.

High-risk AI systems

High-risk AI systems include systems where:

  • The AI system is intended to be used to dispatch, or to establish priority in the dispatching of emergency first response services, including by firefighters and medical aid, and
     
  • The AI system is intended to be used as a safety component of a product, or is itself a product, covered by particular harmonisation legislation or the product is required to undergo a third party safety assessment covered by particular harmonisation legislation.

The harmonisation legislation referred to in the proposal includes Regulation (EU) 2017/745 on medical devices, Regulation (EU) 2017/746 on in vitro diagnostic medical devices and Regulation (EU) 2016/425 on personal protective equipment. In its current form, the proposal is likely to mean that providers of products incorporating AI systems that are intended to be placed on the EU market will be required to undertake substantial conformity assessments and meet other strict compliance obligations.

ICO guidance

The Information Commissioner’s Office (ICO), the UK’s data protection regulator, has recently published some general guidance on using AI systems. Broadly, the guidance sets out general principles that businesses using AI should follow including ensuring proper accountability structures are in place and ensuring meaningful information is provided to individuals whose personal data is processed. These requirements are underpinned by data protection laws including the UK GDPR and EU GDPR. We have written a previous note on this ICO guidance which may be useful for life science businesses.

Medical devices – guiding principles

To lay the foundation for the evolving AI and ML medical device field, organisations in the US, Canada and the UK have jointly set out 10 principles to guide the development of Good Machine Learning Practice for medical device development.

Broadly, these principles cover integrating AI/ML technologies into the clinical workflow, applying fundamental software engineering and security practices, using datasets that are demographically representative of the intended patient population, and tailoring AI/ML models to the available data so that they provide contextually relevant information. Training and test datasets are to be kept separate to ensure their independence, and focus is placed on the performance of the combined human and AI team rather than the performance of the AI model in isolation. The principles also envisage that deployed models will be capable of being monitored to maintain safety and performance.

The principles are set out in full here.

Contact our experts for further advice
