The European Artificial Intelligence Act (The AI Act)

On 13 March 2024, the European Parliament approved the “AI Act”, following the provisional agreement reached on 8 December 2023. This contrasts with the UK’s "light touch" approach, under which existing regulators will be upskilled to govern the use of AI, rather than a centralised body and AI-specific regulation being imposed.

UK businesses that place AI systems on the EU market, or otherwise make them available there, should be aware of the AI Act and adhere to the new regulations once they come into force. UK companies may also wish to embrace the provisions of the AI Act proactively, as the UK may develop a similar legislative framework of its own in the coming years.

Once the European Council has formally endorsed the final text, the AI Act is expected to come into force in around May 2024. Businesses will then have between six and 36 months to comply with its provisions, depending on the type of AI system and its risk categorisation. As may be familiar from other recent high-profile EU-derived legislation (e.g. the GDPR), a failure to comply with the AI Act (where it applies) risks significant fines.

Who will the AI Act apply to?

  1. Providers that put AI systems into service in the EU or place them on the EU market, irrespective of whether those providers are based within the EU or in a third country;
  2. Users of AI systems located within the EU; and
  3. Providers and users of AI systems that are located in a third country, where the output produced by the system is used in the EU.

A “provider” is essentially the entity that develops an AI system or places it on the EU market.

Risk-based approach

The AI Act assigns the following risk categories to AI applications and systems and provides for each to be treated accordingly:

  • Unacceptable risk – these prohibited AI systems include those which use subliminal techniques to manipulate behaviour, AI designed to exploit the vulnerabilities of a specific group, and the untargeted scraping of facial images (for example, from the internet) to create image databases;
  • High risk – use of these AI systems could have significant consequences for individuals, such as determining access to essential services and benefits. They are permitted, but businesses are subject to certain requirements, including ensuring:
    • risk management systems are in place;
    • proper data governance for those AI systems that involve training of models;
    • adequate technical documentation;
    • automatic logging and record-keeping capabilities;
    • transparency and the provision of information to users;
    • human oversight interface tools; and
    • an appropriate level of accuracy, robustness and cybersecurity.
  • Limited risk – the AI Act emphasises the importance of transparency when using these systems, such as chatbots, which interact with people. This is to ensure that individuals know that they are interacting with AI; in particular, visual or audio output that resembles existing persons, objects, places or other entities – so-called “deep fakes” – must be disclosed; and
  • Low and minimal risk – these systems are not subject to specific obligations under the AI Act.

The revised text of the AI Act (unlike the original) recognises the concept of “foundation models”, or general-purpose AI systems – the systems that give generative AI models the ability to create new material. These are now brought into scope, designated either as lower-risk systems (for example, systems like ChatGPT) or as “high impact” general-purpose models, which are subject to a regime similar to that for high-risk AI systems. Specific sectors and categories, such as systems for military use, fall outside the scope of the regulation.

The AI Act also imposes specific requirements on importers, distributors, own-labellers and users of AI systems to ensure that use of the AI conforms with the Act.

The AI Act represents a milestone in AI regulation, acknowledging the transformative power of AI while seeking to mitigate potential risks. By establishing clear guidelines, promoting transparency, and emphasising ethical considerations, the EU strives to create a safer and more trustworthy AI ecosystem. 

For more information, please contact Beverley Flynn, Charles Maurice or any member of the commercial and technology team.

The information contained in this guide is intended to be a general introductory summary of the subject matters covered only. It does not purport to be exhaustive, or to provide legal advice, and should not be used as a substitute for such advice.
