Draft EU AI regulation and the impact on UK businesses

The European Commission has published its proposal for a regulation laying down harmonised rules on artificial intelligence (AI). As currently drafted, it imposes considerable obligations on AI providers and users. This is separate from the recent European think tank study on the impact of the General Data Protection Regulation (GDPR) on AI.

The aim of the proposal is to create a legal framework which promotes innovation and investment in AI, whilst protecting fundamental rights and ensuring AI applications are used safely. The hope is that the regulation will instil trust and legal certainty, both of which are currently lacking. A consultation in 2020 requesting opinions on the White Paper on Artificial Intelligence (White Paper) confirmed the need for regulation, with a large majority of stakeholders agreeing that there are legislative gaps or that entirely new legislation is needed.

Whilst the draft regulation will not have direct effect in the UK, its extra-territorial scope (which we have seen before, for example, in the GDPR) means any UK business offering AI systems into the EU will need to comply with the extensive provisions to avoid fines. As currently drafted, the regulation anticipates the new requirements applying two years after adoption and publication of the final regulation. That means it could apply as soon as 2024.

The definition of "artificial intelligence system" within the regulation is broad: it means “software that is developed with one or more of the techniques and approaches listed in Annex I (of the regulation) and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with”. This broad definition appears to overlook stakeholder responses to the White Paper consultation, which called for a narrow, clear and precise definition.

The scope of the definition is not the only thing which is broad - the regulation applies to both providers and users of AI. Irrespective of where the provider is based, if its systems are used in the EU, or if they would impact natural persons in the EU, it is caught within the scope of the regulation. In addition, individuals or entities using the systems in the EU (unless using them for personal, non-business activity) are also caught by the scope.

Taking a risk-based approach and considering the potential negative impact on individuals and society, the regulation categorises types of AI and even goes as far as to ban some practices. For example, applications that may manipulate behaviour in ways that could cause physical or psychological harm pose an unacceptable risk in the Commission’s opinion and are prohibited. A list of prohibited practices can be found in Article 5 of the regulation.

Where an application is deemed to be high risk, pre-market assessments must be carried out. The bulk of the regulation revolves around high-risk AI and the various requirements that must be met before such a product or service can be placed on the market. “High risk AI” is not a defined term within the regulation; however, Articles 6 and 7 provide various criteria for determining whether a system could be considered high risk.

For limited risk applications, there are transparency obligations, such as informing people they are interacting with an AI system. Where there is minimal risk, there are no prescribed obligations; however, the Commission encourages AI providers to voluntarily apply the requirements for high-risk AI and adhere to codes of conduct.

Enforcement

With maximum fines of up to EUR30m or 6% of global annual turnover (whichever is higher), it is important that businesses get up to speed with the new regulation to avoid these eye-watering fines. Whilst the maximum fines are generally reserved for the most serious infringements, such as placing a prohibited AI application on the market, lesser infringements such as supplying incorrect information to authorities still attract fines of up to EUR10m or 2% of global annual turnover.

The European Artificial Intelligence Board will assist the Commission in relation to the regulation, and Member States will be required to designate national competent authorities and a national supervisory authority. The regulation does not provide a direct enforcement mechanism for individuals, nor does it provide a complaints system.

Next steps

Due to the extra-territorial effect of the regulation, businesses will need to assess their use of AI and ensure they comply with the new obligations. The regulation could also set the standard globally. Since businesses incorporated outside of the EU but providing services within the EU will come under the remit of the regulation, businesses may take a more holistic approach to its application. Instead of applying the rules only to the parts of their business that deal with Europe, it is expected that many will standardise their practices, effectively applying the EU regulation globally.

The proposal will now go to the European Parliament and the Council of the European Union for further consideration and debate. Once adopted, the regulation will come into force 20 days after its publication in the Official Journal. The regulation is expected to apply 24 months after that date, but some provisions may apply sooner.

Contact our experts for further advice