AI: data round up - A pro-innovation approach to AI regulation

Our next instalment of "AI: a round up – what’s the story?" covers the UK government’s white paper "A pro-innovation approach to AI regulation" and compares it to the EU’s approach.

If, how and to what extent governments and legislative bodies regulate the development and use of artificial intelligence and machine learning (AI) is a complex and fast-evolving topic. Law and policy makers need to balance the potential risks posed by AI against the well-documented opportunities and rewards.  

Although a Private Members' Bill has been introduced in the UK (see below), the UK government is currently taking a principles-based approach to the regulation of AI rather than, at least for now, passing specific laws to regulate its development and use. This aligns with the government's stated aim of allowing existing regulators to interpret and apply the principles to AI within their own remits, and contrasts with the EU's approach, which favours specific AI legislation.

Below, we take a high-level look at each of the UK's and EU's current approaches to AI regulation.

A “pro-innovation” stance for the UK

In its white paper, "A pro-innovation approach to AI regulation", the UK government's broadly stated aim is to create a pro-innovation regulatory framework whilst protecting citizens' rights and increasing public trust in the use and application of AI.

This is underpinned by five principles, as follows:

  • Principle 1 - Safety, security and robustness. The aim is to achieve this principle via a risk-based approach. The white paper states that AI systems should function in a robust, secure and safe way throughout the AI life cycle, and risks should be continually identified, assessed and managed. That said, it acknowledges that regulators may need to introduce specific measures to deal with the use of AI, for example particular technical measures to ensure data security.
     
  • Principle 2 - Appropriate transparency and explainability. In essence, this principle provides that an AI system should be transparent and be able to be explained. The rationale for this approach is to foster public trust and to enable regulators to properly regulate the use of AI. Practically achieving compliance with this principle may prove challenging due to the complexity of AI systems and how they function.
     
  • Principle 3 - Fairness. Broadly, this principle states that AI systems should not undermine the legal rights of individuals or organisations. The white paper states that this principle can be achieved by regulators ensuring that AI systems are designed and deployed with fairness in mind, based on published descriptions of fairness. It includes ensuring that AI systems comply with relevant existing UK laws in this area, including (i) the Equality Act 2010, (ii) the Human Rights Act 1998, (iii) the UK GDPR and the Data Protection Act 2018, (iv) consumer and competition laws, for example the Consumer Rights Act 2015, and (v) specific sector requirements, for example the Financial Conduct Authority Handbook. The white paper appears clear that the government's current view is that these existing laws remain, at least for now, generally appropriate to govern AI.
     
  • Principle 4 - Accountability and governance. This principle provides that creators and users of AI systems should have in place appropriate governance and accountability compliance frameworks, which will include documenting decisions and undertaking appropriate risk assessments. The white paper notes that compliance with this principle can be facilitated by regulatory guidance. In practice, compliance might include establishing bespoke governance roles and committees, updating existing policies and procedures, and creating new AI-specific ones, including risk assessments.
     
  • Principle 5 - Contestability and redress. Finally, this principle provides that users impacted by an AI system should be able to contest the decision or outcome that is harmful, or which has created a material risk of harm, with an expectation on existing regulators to ensure that there are appropriate routes to redress for users. The white paper confirms that the UK’s non-statutory approach will not create new rights or new routes to redress at this stage.

The UK – out of step with the EU?

The UK's approach contrasts with the EU's, with the latter choosing to directly regulate the development and use of AI. It was announced in December 2023 that the European Parliament and Council had reached political agreement on the EU Artificial Intelligence Act (EU AI Act).

The EU AI Act is a specific piece of EU regulation that will regulate how AI systems are developed and used in the European Union.

A copy of the EU AI Act briefing note can be found here, but a key part of the EU AI Act is the classification of AI systems depending on their use, as follows: (i) unacceptable risk, (ii) high risk, or (iii) low risk.

The EU AI Act prohibits systems posing unacceptable risks, for example a system that exploits any vulnerabilities of a specific group due to a physical disability.

Where a system is classified as "high risk", for example an AI system falling into one of the eight categories identified, including the management and operation of critical infrastructure, the EU AI Act imposes certain requirements, including that providers have in place a risk management system and implement data governance and management practices.

Finally, "low risk" AI systems must comply with certain transparency requirements to enable users to decide whether or not they wish to interact with them.

UK Artificial Intelligence (Regulation) Bill introduced

It is worth noting that a Private Members' Bill entitled the "Artificial Intelligence (Regulation) Bill" (AI Bill) was introduced to the UK Parliament by Lord Holmes of Richmond to make provision for the regulation of AI in the UK. The AI Bill had its first reading in the House of Lords on 22 November 2023. Private Members' Bills can be introduced by MPs and Peers who are not government ministers. The AI Bill is relatively short, with its main purpose being to establish a centralised AI authority in the UK. The success rate for these types of bills is not high, however, and it remains to be seen whether the government will support the AI Bill.

Looking forward

Post-Brexit, the UK has shown an intention to move away from alignment with EU rules in a number of areas, and the regulation of AI appears to be another example of this, with the UK government pushing a pro-innovation agenda compared to the EU's seemingly more risk-averse approach.

The UK is looking to position itself as a global leader in AI, but it remains to be seen whether its current approach will remain appropriate, and the UK may yet develop a legislative framework similar to the EU's in the coming years. UK companies may also wish to proactively embrace the provisions of the EU AI Act, much as the General Data Protection Regulation has become the global standard for data protection. It may also be that the AI Bill pushes the government into taking legislative action.
