The Information Commissioner’s Office (ICO) has issued updated Guidance on Artificial Intelligence (AI) and Data Protection (the guidance) in response to industry pressure to clarify its stance on AI. An overview of some of the key changes is set out below, and the updated guidance can be found in full here. The ICO anticipates making further updates to the guidance as AI evolves.
Accountability and governance
This existing chapter has been expanded with some new additions. The updated guidance is more specific about the information that should be included in a data protection impact assessment (DPIA) in the context of AI. When considering the impact of an AI system, organisations should consider both (i) "allocative harms", which result from a decision to allocate goods and opportunities amongst groups; and (ii) "representational harms", which occur when systems reinforce the subordination of groups along identity lines.
Transparency in AI
This is a new chapter which supplements the main guidance on "Explaining decisions made with AI". It includes high-level content on the transparency obligations owed to individuals when an organisation processes personal data using an AI system. These include explaining, before any data is processed by the AI system: the purposes of processing, retention periods, and who the data will be shared with. The ICO acknowledges the difficulty organisations face in explaining AI systems to individuals, given their inherent complexity, and offers some suggestions to mitigate this issue.
Lawfulness in AI
This chapter has been rewritten, with new sections added on: (i) AI and inferences; and (ii) special category data and assessing an appropriate lawful basis. The guidance confirms that inferences made using AI about individuals or groups will be personal data if they relate to an identified or identifiable individual. In addition, an AI system may allow an organisation to guess information about someone which would constitute special category data under Article 9, but this will depend on the certainty of the inference and whether the organisation is deliberately drawing it.
Accuracy and statistical accuracy
This chapter has been rewritten and focuses on the controls organisations can implement to ensure fair processing under data protection laws. Importantly, the guidance is clear that where an output is a statistically informed guess, the organisation’s records should make clear that it is not a fact but only may be true. Organisations should also put in place procedures to monitor the accuracy of records, detect so-called "model drift", and make corrections where inaccuracies are identified.
Fairness in AI and the AI lifecycle
There is new content on ensuring fairness in processing operations carried out as part of an AI system, and on how the rules on automated decision-making and profiling apply in an AI context. The updated guidance sets out some useful considerations and technical approaches to mitigate bias in AI systems, including new information on: (i) fairness considerations across the AI lifecycle; (ii) sources of bias that can lead to unfairness; and (iii) possible mitigation measures.
New additions to the glossary have been made including definitions for "algorithmic fairness" and "post-processing bias mitigation".
We expect further ICO updates and clarifications as AI continues to evolve.