
Last week’s AI Safety Summit at Bletchley Park was hailed as a landmark moment in the world’s efforts to regulate AI. It was the first time political and industry leaders from across the globe came together to establish a consensus on the benefits and pitfalls of the emerging technology.

The key outcome of the Summit was the Bletchley Declaration, in which 28 major nations – including China, which many observers had expected not to attend the conference – agreed on the areas of AI that pose the most urgent and dangerous risks to our privacy, democracy and safety. That said, it did not result in any firm actions on how to combat these risks, just a commitment to manage them.

It was unrealistic to expect the summit to generate a specific roadmap to mitigate the risks posed by AI. It is a complex issue that is still in its relative infancy. Regulating a new industry and technology requires careful balancing between the interests of consumers, commercial players and the general public interest and each country will have its own priorities. As a result, it is going to take more than just one meeting to establish an effective and comprehensive coordinated international framework. However, it is obvious that world leaders recognise the urgency of the situation and the need for international cooperation and through the Bletchley Declaration we saw a desire to do more.

Usefully, the Bletchley Declaration sets out in some detail the main areas of concern, placing so-called “frontier AI”, “general purpose AI models” and “foundation models” at the heart of the safety agenda. It underlined the need for urgent attention to issues including misinformation, bias mitigation, privacy and data protection, risks in the areas of cybersecurity and biotechnology, and concerns associated with fairness, accountability, ethics and appropriate human oversight.

The question is – what does this mean in practice? And how will it shape the tech landscape in the months and years ahead, if at all?

Throwing caution to the wind

It’s a fine line to tread between managing the risks of AI and allowing positive AI development to continue. Measures to reduce risk are almost certainly going to slow the adoption of the technology across different markets, but this must be weighed up against the potential consequences of releasing unregulated new technologies into general circulation.

Notably, the Bletchley Declaration makes no mention of pausing or slowing down AI development. There is no specific call for regulatory measures to prevent high-risk AI models from being released into the market before appropriate regulations are developed and implemented in legislation.

This was most likely a deliberate omission, reflecting the concern that whilst some countries may move quickly to curb developers – in the private sector, in academia and in government – within their own territories, others may be reluctant to put the brakes on developments in their domestic markets.

Ultimately, the Bletchley Declaration is non-binding. It relies on the participating states (and those that did not participate) continuing the effort to develop regulations in their own countries and maintaining an agenda of international cooperation. There is always going to be an element of self-interest at play, but the hope is that there will be some international alignment, at least at a basic level.

From a legal perspective, the lack of international consensus means firms are still in the dark about how they might need to prepare. Ultimately it may mean those operating on a global scale have to adhere to different regulations across different markets, which will likely bring complications and challenges. The more serious concern is that, in the absence of international regulatory alignment, national regulators will struggle to control the spread of harmful tools and applications and to protect their populations from bad actors and from poor quality and unsafe products.

Collaboration is key

The summit underlined UK Prime Minister Rishi Sunak’s desire for the UK to be at the forefront of AI safety and the development of AI more widely. There were suggestions of jostling between the UK and the US to assert their position as the global leader in the field as they look to reap the benefits of the opportunities AI development can bring. Sunak spoke openly with Elon Musk about his desire to create a Silicon Valley-style approach to AI in the UK.

Ultimately, it doesn’t matter whether the UK, the US or any other nation considers themselves the leader – successfully managing the risks of AI is going to require a joined-up global effort, with everyone pulling together in the same direction. Indeed, the Bletchley Declaration highlights the need for “building respective risk-based policies across our countries to ensure safety in light of such risks, collaborating as appropriate”. This is undoubtedly going to pose challenges given the geopolitical tensions between some of the nations involved, specifically China and the US.

The summit has shown that the international community is – at least in principle – on the same page when it comes to AI regulation, and it has identified the key areas of concern. It has provided a springboard; now we need to see concrete action from governments and industry leaders, and a continued programme of international cooperation in this area.