The AI Safety Summit

Technology and Digital Bitesize Update

Once home to the Government Code and Cypher School, the forerunner of GCHQ, Bletchley Park recently played host to the world's first global AI Safety Summit. Attendees included global leaders and industry professionals, and the purpose of the summit was to consider the risks associated with AI and how such risks can be mitigated through internationally coordinated action.

The summit had five objectives:

  • a shared understanding of the risks posed by frontier AI and the need for action

  • a forward process for international collaboration on frontier AI safety, including how best to support national and international frameworks

  • appropriate measures which individual organisations should take to increase frontier AI safety

  • areas for potential collaboration on AI safety research, including evaluating model capabilities and the development of new standards to support governance

  • a showcase of how ensuring the safe development of AI will enable AI to be used for good globally.

Key outcomes from the summit include:

The Bletchley Declaration (Declaration):

The Declaration recognises the benefits presented by AI but also acknowledges the risks it poses, with the potential for it to cause serious harm. The Declaration calls for nations to work towards the human-centric, safe and trustworthy development of AI in order to mitigate risks.

Further AI safety summits:

Further meetings have been scheduled over the course of 2024, with South Korea set to host a virtual summit in May. Additionally, France will organise the next in-person summit, to take place in late 2024.

The AI Safety Institute:

The UK launched the first state-backed organisation focused on advanced AI safety for the public interest. The AI Safety Institute will be tasked with testing the safety of emerging types of AI. While the AI Safety Institute lacks regulatory authority, its research will inform UK and potentially international policymaking, providing insights that will help shape governance and regulation.

Soon after the UK’s announcement, the US announced that it was launching its own AI safety institute, and on 1 April 2024 the UK and the US announced a new partnership on the science of AI safety. The partnership will see the respective institutes perform at least one joint testing exercise on a publicly accessible model, in addition to further collaboration. Beyond its partnership with the US, the UK has also agreed a partnership with the Government of Singapore to collaborate on AI safety testing.

What’s next?

The AI Safety Summit, the signing of the Bletchley Declaration and the establishment of the AI Safety Institute are clear signs that the UK and the international community are taking the potential dangers of AI more seriously. It remains to be seen how these activities will shape future policy. What we know so far is that the UK is taking a principles-based approach to the regulation of AI, requiring regulators in key sectors to use stated principles to devise relevant controls, whereas other countries and blocs are choosing to pass specific laws regulating AI across the board. The European Union formally adopted the EU AI Act on 13 March 2024. In the US, President Biden issued the "Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence", which, among other things, sets new standards for AI safety and security by government agencies. Under Japan's presidency, the G7 also launched the Hiroshima AI Process.

Contact our experts for further advice