The ICO’s new “tech futures: Agentic AI report” marks an important moment in the regulatory conversation around advanced AI systems. The ICO describes agentic AI as systems that go beyond generative AI by combining language capabilities with tools, memory and adaptive decision-making, so that they can plan and act to complete open-ended tasks. This contrasts with everyday large language models, which generate text in response to prompts rather than acting independently. The ICO notes that these agentic capabilities could support a range of uses across commerce, government services, cybersecurity and areas of daily life.
The report also makes clear that these opportunities must be matched by strong governance, transparency and clear organisational responsibility for how agentic systems process personal data. The ICO highlights several data protection risks linked to agentic systems: more complex questions about controller and processor responsibilities where multiple providers contribute to an agentic AI supply chain; increased automation that may lead to automated decisions with legal or similarly significant effects; the potential for purpose creep where systems are designed to complete open-ended tasks; and the risk that agentic systems could access or process more personal information than is necessary for the task. The ICO also notes that the complexity of these systems may make transparency and the exercise of information rights more difficult.
For organisations considering the use of agentic AI, the ICO’s message remains positive. The regulator recognises the potential for innovation but stresses that organisations will need to address the data protection risks identified in the report: setting clear purposes to avoid purpose creep; applying data minimisation so that only the information needed for a task is processed; and ensuring appropriate accountability arrangements, as AI agency does not remove human or organisational responsibility for data processing. By addressing these areas, organisations can support both effective use of agentic AI and responsible data protection practice.
Organisations will also need to monitor developments in domestic and EU legislation and guidance in this rapidly evolving area. While agentic AI is not expressly addressed in the Data (Use and Access) Act 2025 or in the EU AI Act and the related EU AI Omnibus reforms, both regimes contain themes relevant to the issues highlighted by the ICO, including automated decision-making and strengthened data protection duties. As these frameworks continue to develop, businesses should keep a close eye on future updates. Stevens & Bolton has recently examined the key reforms introduced by the Data (Use and Access) Act 2025, which may be of interest to organisations considering how emerging UK legislative changes interact with evolving regulatory expectations around AI.