UK AI regulation update: a new rulebook?

Earlier this summer, the UK set out its proposals for the regulation of artificial intelligence (AI) within the UK. The proposals aim to strike a balance between protecting the public and promoting innovation, and are another step in the UK’s National AI Strategy.

AI regulation in the UK is not currently dealt with in one place; rather, it is contained within various legal instruments, such as data protection law and the Equality Act 2010. The proposals therefore play a part in developing a UK framework, creating a more coherent approach across sectors and greater clarity for businesses and the public alike.

We have summarised the key elements of the published proposals below and expect a White Paper with fuller details later this year.

What will fall into scope?

The proposals do not define AI and instead set out what are considered to be its core characteristics. This aims to retain flexibility whilst still providing an element of coherence, with the intention being that individual regulators will adopt more specific definitions in their sectors where required, informed by the core characteristics.

The two core characteristics of AI according to the proposals are:

  • "Adaptiveness" of the technology – i.e. can the intent or logic be explained
  • "Autonomy" of the technology – i.e. it doesn’t require instruction or oversight from a user

What are the six core principles?

Where technology falls within scope, its developers and users will need to have regard to the six core principles described in the proposals.

The six core principles contained within the proposals require developers and users of AI to:

  1. Ensure AI is used safely

This is particularly important in certain sectors, for example healthcare.

  2. Ensure AI is technically secure, and functions as designed

AI systems performing as intended is likely to instil public confidence, allowing for the continued commercialisation of AI.

  3. Ensure AI is appropriately transparent and explainable

This principle may vary in practice between sectors. Some regulators may seek to prohibit AI decision-making which cannot be explained. The proposals also suggest some example transparency obligations, such as requirements to proactively or retrospectively provide information relating to the data being used, including training data.

  4. Consider fairness

Again, this may be applied differently depending on the sector, and all regulators will need to consider what fairness looks like in their sector.

  5. Identify a legal person to be responsible for AI

This may be a corporate or natural person.

  6. Clarify routes to redress or contestability

Using AI should not remove the right to contest an outcome.

How will the proposals be regulated?

The proposals take a sector-specific approach. That is, sector regulators will be responsible for applying the principles to their relevant sectors. Regulators including Ofcom, the Competition and Markets Authority (CMA), the Information Commissioner’s Office (ICO), the Financial Conduct Authority (FCA) and the Medicines and Healthcare products Regulatory Agency (MHRA) will apply the six principles in overseeing AI. However, it is worth noting that the principles will initially be non-statutory so they can be monitored and updated if required.

Next steps?

Whilst the proposals are a welcome step within the UK, it needs to be acknowledged that they do not change the legislative framework around AI and instead are non-statutory “guidelines” intended to help increase public trust and innovation. The usefulness of the proposals is yet to be seen; the White Paper expected later this year will shed more light on their impact and is an opportunity to amend the core characteristics of AI or the six principles.

It also cannot be overlooked that UK businesses operating within the EU will need to consider the EU’s regulation of AI. Please see our article regarding EU regulation here. The UK and EU frameworks take fundamentally different approaches, with the EU focusing on the risks posed by AI systems generally and the UK taking a sector-specific approach.

If you would like our help navigating your use of AI within the UK, or to discuss the proposals, please do get in touch.
