ICO issues guidance on artificial intelligence: life sciences businesses, take note

We have previously written on the evolving UK AI regulatory landscape for life sciences businesses in our article "UK AI Regulation Update: A New Rulebook?". The Information Commissioner's Office (ICO), the UK body responsible for regulating data protection, has recently published updated "Guidance on AI and Data Protection" (the "guidance") in response to industry pressure to clarify its stance on the use of artificial intelligence (AI). Several of the updates are highly relevant to the life sciences sector, and we set out some examples below.

What is AI and how is it being used in the life sciences sector?

AI is described by the ICO as an umbrella term for a range of technologies that typically include some form of "machine learning": the use of computational techniques to create statistical models from (typically large) quantities of data, sometimes called "big data". AI has many applications in the life sciences sector, including the identification and diagnosis of diseases, accelerating the rate of medical breakthroughs, and improving productivity. Specifically, AI is being used to discover new molecules in drug discovery, to automate aspects of clinical trials, to improve operational efficiency and to predict supply chain demand. AI is also being used directly with patients, for example to customise patient messaging and to analyse patient feedback and complaints.
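
For readers less familiar with the underlying technology, the following is a minimal Python sketch of what "machine learning" means in practice: a statistical model whose parameters are estimated from data. The dataset and model are purely illustrative and are not drawn from the ICO guidance.

```python
# A minimal illustration of "machine learning": fitting a statistical
# model to data so it can make predictions about new records.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for "big data": 1,000 records with 10 features each.
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)

model = LogisticRegression(max_iter=1000)
model.fit(X, y)  # the model's parameters are estimated from the data

# The fitted model can now make predictions about records it has not seen.
print(model.predict(X[:5]))
```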

Accountability and governance 

Organisations, including life sciences businesses, are accountable for the proper functioning of their AI systems and for ensuring that appropriate governance is in place. Data protection laws make clear that, in most cases, a data protection impact assessment (DPIA) should be undertaken before any personal data is processed using an AI system. The ICO's updated guidance is more specific about the information that should be included in a DPIA in the context of AI. When considering the impact of an AI system, organisations should consider both (i) "allocative harms", which result from a decision to allocate goods and opportunities amongst a group; and (ii) "representational harms", which occur when systems reinforce the subordination of groups along identity lines. For example, an AI system for diagnosing malignant tumours may be ineffective for patients from ethnic groups that were under-represented in the data it was trained on. Whilst not solely a data protection issue, the ICO appears to be taking a broader view of fairness, looking at the AI system as a whole. This means that life sciences businesses using AI systems must scrutinise their data sets and models thoroughly to mitigate potential biases, implement appropriate risk mitigation measures and properly document their decisions.
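
To make the tumour-diagnosis example concrete, the sketch below shows one check a DPIA might record: comparing a model's accuracy across demographic groups to surface the effect of under-representation in the training data. The data, group labels and model are hypothetical.

```python
# A hedged sketch: measuring per-group accuracy to surface potential bias
# where one group is under-represented in (or absent from) training data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 5))                           # synthetic patient features
group = rng.choice(["A", "B"], size=600, p=[0.9, 0.1])  # imbalanced groups
# The relationship between features and diagnosis differs by group.
y = np.where(group == "A", X[:, 0] > 0, X[:, 1] > 0).astype(int)

# Train only on the majority group, mimicking skewed data capture.
model = LogisticRegression().fit(X[group == "A"], y[group == "A"])

for g in ("A", "B"):
    mask = group == g
    acc = accuracy_score(y[mask], model.predict(X[mask]))
    print(f"group {g}: accuracy {acc:.2f} (n={mask.sum()})")

# A material accuracy gap between groups is a flag to investigate the
# training data's coverage before the system is relied on.
```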

Transparency in AI

Life sciences businesses using AI must provide meaningful information to individuals about the use of AI systems. The updated guidance includes a new chapter that supplements the main guidance on "Explaining decisions made with AI". It includes high-level content on the transparency obligations owed to individuals when an organisation processes personal data using an AI system. These include explaining, before any data is processed by the AI system: the purposes of the processing, the retention periods, and who the data will be shared with. The ICO acknowledges the difficulty organisations face in explaining AI systems to individuals because of their inherent complexity. Life sciences businesses will need to work hard to make stakeholders aware of AI systems and to ensure a proper understanding of their outcomes. As AI systems continue to develop, including a privacy notice on a website without taking further steps is unlikely to be enough to evidence compliance with the transparency principle.
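
As a purely illustrative aid, the sketch below records the core transparency information the guidance refers to (purposes, retention and recipients) in a structured form that could sit behind a privacy notice. The field names and values are hypothetical, not an ICO-mandated schema.

```python
# A minimal, hypothetical record of the transparency information to be
# provided before personal data is processed by an AI system.
from dataclasses import dataclass

@dataclass
class AIProcessingNotice:
    system_name: str
    purposes: list[str]           # why the data is processed
    retention_period: str         # how long the data is kept
    recipients: list[str]         # who the data will be shared with
    plain_language_summary: str   # the "meaningful information" for individuals

notice = AIProcessingNotice(
    system_name="trial-recruitment-model",  # hypothetical system
    purposes=["matching patients to suitable clinical trials"],
    retention_period="24 months after trial close-out",
    recipients=["trial sponsor", "contract research organisation"],
    plain_language_summary=(
        "We use an automated system to suggest trials you may be eligible "
        "for; a clinician reviews every suggestion before you are contacted."
    ),
)
print(notice)
```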

Lawfulness in AI 

Data protection laws require organisations to show that any particular processing is lawful and to establish a lawful basis for processing the personal data. This is particularly important for life sciences businesses, where health data may be the subject of the processing. The guidance has been rewritten, with new sections added on: (i) AI and inferences; and (ii) special category data and assessing an appropriate lawful basis. The guidance confirms that inferences made using AI about individuals or groups will be personal data if they relate to an identified or identifiable individual. In addition, an AI system may allow an organisation to infer information about someone that would constitute Article 9 special category data (for example, information about someone's health or ethnicity); whether it does will depend on the certainty of the inference and whether the organisation is deliberately drawing that inference. What this appears to mean is that if an organisation has an AI system that can effectively "turn" personal data into special category data by drawing inferences with some degree of certainty, it will need to consider the lawfulness of processing that special category data, including identifying an appropriate lawful basis.
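
The sketch below illustrates the inference point under stated assumptions: a model that predicts a health attribute from ordinary personal data may, above some level of certainty, be generating special category data. The model, features and 0.8 threshold are hypothetical; the guidance does not set a numeric threshold.

```python
# A hedged sketch: flagging when a confident AI inference about health
# should be treated as Article 9 special category data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 4))      # e.g. purchase and activity features
y = (X[:, 1] > 0.5).astype(int)    # stand-in label for a health condition

model = LogisticRegression().fit(X, y)

CERTAINTY_THRESHOLD = 0.8  # illustrative only; chosen for this sketch

probability = model.predict_proba(X[:1])[0, 1]
if probability >= CERTAINTY_THRESHOLD:
    # Treat the output as special category data: identify an appropriate
    # lawful basis (and Article 9 condition) before relying on it.
    print(f"Confident health inference ({probability:.2f}): handle as Article 9 data")
else:
    print(f"Low-certainty inference ({probability:.2f}): still assess the risk")
```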

Fairness in AI and the AI lifecycle

Life sciences businesses using AI must be able to show that their use of AI is fair. The guidance includes new content on ensuring fairness in processing operations that form part of an AI system, and on how the rules on automated decision-making and profiling apply in an AI context. It also sets out practical considerations and technical approaches for mitigating bias in AI systems, which will be useful for life sciences businesses, with new material on (i) fairness considerations across the AI lifecycle; (ii) sources of bias that can lead to unfairness; and (iii) possible mitigation measures.
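
By way of example, one widely used technical mitigation measure of the kind the guidance contemplates is reweighting training data so that an under-represented group contributes proportionately to the model. The sketch below is illustrative only, with synthetic data and hypothetical groups.

```python
# A hedged sketch of one bias-mitigation technique: reweighting training
# samples so an under-represented group is not drowned out during fitting.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
X = rng.normal(size=(800, 5))
y = (X[:, 0] > 0).astype(int)
group = rng.choice(["A", "B"], size=800, p=[0.85, 0.15])  # imbalanced groups

# Weight each record inversely to its group's frequency.
counts = {g: (group == g).sum() for g in ("A", "B")}
weights = np.array([len(group) / (2 * counts[g]) for g in group])

model = LogisticRegression()
model.fit(X, y, sample_weight=weights)  # scikit-learn accepts per-sample weights
```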

Need assistance?

With the life sciences sector looking to embrace AI, organisations should ensure that they thoroughly understand the risks and consider data protection throughout the lifecycle of an AI system. Businesses will need to think carefully about the data privacy consequences of using personal data in an AI system, including implementing appropriate risk mitigation measures and maintaining proper records. Our multi-disciplinary life sciences team has extensive experience advising businesses in the sector on all aspects of data protection, and we would be delighted to answer any questions you may have.
