The EU AI Act: Key considerations

In this briefing we address the applicability of the EU AI Act (AI Act), along with other considerations for contracts involving the supply or use of AI, from an English law perspective.

The AI Act and its scope

The AI Act is an EU Regulation, directly effective in the EU, which came into force on 1 August 2024, with its rules becoming applicable in stages thereafter. It applies to the following:

  • Organisations or individuals that supply AI systems on the EU market (regardless of where they are established or located).
  • Organisations or individuals that use AI systems (for business purposes) and are located or established in the EU.
  • Organisations or individuals that are located or established outside of the EU, for example in the UK, if the output of the AI system is used (or intended to be used) within the EU.

Roles and responsibilities

The AI Act has the potential to impact a range of different businesses depending on their role in the AI lifecycle. It imposes obligations on six categories of entity: providers, deployers, importers, distributors, product manufacturers and authorised representatives, collectively defined as "operators". Different obligations attach to each operator role, with providers subject to the strictest requirements.

A provider is an individual or organisation that develops or commissions the development of an AI system and places it on the EU market or puts it into service in the EU under its own name or trademark, whether for payment or free.

A deployer (or user) is defined as an individual or organisation using an AI system for a professional activity.

It should be noted that the role of provider can be transferred to another operator. A deployer may become a provider, for example if it acquires a third-party AI system and alters it to change its purpose, or adds its own trademark (even if it does nothing else), known as "white labelling". It is important to identify the provider and deployer roles correctly, as they determine which obligations apply under the AI Act, such as the transparency obligations discussed below.

What is an AI system?

An AI system, as defined under the AI Act is:

“a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments”.

In determining whether an AI solution is an AI system under the AI Act, it may be useful to consider the following:

  1. Is the system capable of adapting after deployment?
  2. Does the system operate with some degree of autonomy, i.e. with some independence from human involvement?
  3. Is the system capable of inferring, from the input it receives, how to generate an output?

The European Commission (EC) has published detailed guidelines on the definition of an AI system, which should be considered.

What does the AI Act require?

A risk-based approach

Obligations under the AI Act are determined by the risk category into which an AI system falls: unacceptable risk (prohibited practices), high risk, limited risk (subject to transparency obligations) and minimal risk. Appropriate consideration and categorisation should therefore be undertaken.

On 2 February 2025, the first provisions of the AI Act started to apply. These include:

  • Prohibited practices (discussed above).
  • A duty to ensure AI literacy within the organisation, so that staff have sufficient knowledge and understanding of AI; this typically involves implementing AI governance policies and training programmes.

Transparency obligations

From August 2025, the AI Act imposes specific transparency obligations on both providers and deployers of AI systems to ensure ethical and responsible use. The exact obligations vary depending on the type and risk profile of the AI system in scope.

  • Providers must, for example: inform users when they are interacting with an AI system (unless this is obvious to a reasonably well-informed person); and ensure outputs are marked and detectable as artificially generated.
  • Deployers have obligations to inform individuals when they are exposed to AI systems used for emotion recognition or biometric categorisation.

These (and other) requirements should be considered alongside data protection legislation and the transparency obligations thereunder.
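
By way of illustration only, the following minimal Python sketch shows one way a provider might attach a machine-readable marker to generated output so that it is detectable as artificially generated. The field names and structure below are assumptions for illustration; the AI Act does not prescribe this format or any particular technical standard.

import json
from datetime import datetime, timezone

def mark_as_ai_generated(output_text: str, model_name: str) -> dict:
    # Illustrative only: wrap generated text in a provenance record so
    # the output carries an explicit, machine-readable "AI-generated" flag.
    # These field names are assumptions, not a mandated standard.
    return {
        "content": output_text,
        "provenance": {
            "ai_generated": True,
            "generator": model_name,
            "generated_at": datetime.now(timezone.utc).isoformat(),
        },
    }

record = mark_as_ai_generated("Example model output.", "example-model-v1")
print(json.dumps(record, indent=2))

In practice, providers are more likely to rely on emerging provenance and watermarking standards; the point is simply that the marking must be machine-readable and detectable.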

Other considerations for AI

In addition to the AI Act, there are other considerations in contracts for AI:

Data protection

An AI system usually requires vast amounts of data to operate. If personal data is to be processed during the lifecycle of the AI system, consideration should be given to data protection requirements such as: the legal grounds for processing personal data; a legitimate interests assessment; a data protection impact assessment; and transparency requirements.

Contractual protection on use of AI

Deployers and providers will look to obtain some level of comfort in areas such as IP infringement, use of outputs, accuracy of input and output data, and compliance with laws, with liability apportioned accordingly in the contract.

Ownership of output

The AI Act does not stipulate ownership of output data, and the position on copyright in AI-generated outputs is yet to be settled as a matter of English law. It is generally agreed that AI systems themselves are incapable of owning IP rights. A range of opinions exists on who should own the legal rights in output data: ownership could potentially be attributed to the owner of the AI system, the developer of the AI system, or the end user or operator of the system.

Infringement risks of AI

When using AI technology to generate content, another important consideration is whether there is a risk of infringing third-party IP rights. AI systems are generally trained on vast amounts of publicly available data; where that data is protected by copyright or other IP rights, using it to train an AI system may infringe those third-party rights if the permission of the rights owners has not been obtained.

Regulatory compliance

With legislation, policy and guidance at national level moving at such a fast pace, giving any sort of contractual commitment to comply with all applicable laws and/or guidance is onerous. Consider which laws, guidance and policy documents can be complied with in a contractual context and if compliance warranties are appropriate.

Looking ahead

Following the February 2025 ban on certain "unacceptable risk" AI systems, the next waves of obligations will take effect over the next two years, with full compliance for high-risk AI systems expected by 2027. Non-compliance could lead to significant penalties, with fines of up to EUR 35 million or 7% of global annual turnover (whichever is higher), underscoring the importance of early preparation.

Contact our experts for further advice
