The Information Commissioner’s Office (ICO) published updated draft guidance on automated decision‑making (ADM) and profiling on 31 March 2026. The updated guidance reflects both the rapid uptake of AI‑driven systems and recent legislative reform under the Data (Use and Access) Act 2025 (DUAA). It sets out clearer regulatory expectations for organisations using automation to make decisions about individuals, particularly where those decisions may have significant legal or practical effects. The guidance is available on the ICO website: Automated decision-making, including profiling | ICO
Under the UK GDPR, ADM refers to decisions made entirely by automated means, without meaningful human involvement, which produce legal or similarly significant effects for individuals. As AI capabilities have developed, organisations have increasingly adopted AI‑driven systems, with automated decision‑making now used in ways that can affect individuals’ access to employment, credit and insurance, to name but a few examples.
One of the most important clarifications in the updated guidance concerns “meaningful human involvement”. The ICO makes clear that superficial or token human oversight will not be sufficient. To take a decision outside the scope of “solely automated” processing, a human decision‑maker must have real authority, understand the basis of the automated output, and be able to challenge or override it. Simply rubber‑stamping an algorithmic recommendation will not meet this standard.
The guidance also reflects the UK’s evolving regulatory position following the DUAA. While the previous framework imposed tight restrictions on solely automated decision‑making, the reformed regime provides organisations with greater flexibility to deploy automation, including AI systems, provided appropriate safeguards are in place. These safeguards include transparency with affected individuals, mechanisms to request human intervention, and opportunities to contest decisions.
Transparency is a consistent theme throughout the guidance. Organisations must be open about when ADM is used and provide individuals with clear, meaningful explanations of how decisions are made and how they may be affected. This remains a challenge for complex AI models, but the ICO emphasises that explainability is a legal requirement, not an optional extra.
Recruitment is identified as a particular risk area. The ICO has recently focused on the use of automation in hiring processes, acknowledging efficiency benefits while warning of risks such as bias, discrimination and lack of accountability. Employers using ADM in recruitment are expected to carry out thorough data protection impact assessments (DPIAs) and regularly monitor systems for fairness and accuracy.
Overall, the updated guidance signals the ICO’s intention to support responsible innovation while reinforcing that automation must be deployed in a way that protects individuals’ rights. Organisations using ADM should take this opportunity to review governance arrangements, human oversight processes and transparency materials to ensure continued compliance.