Our work related to the UK AI Safety Summit

The UK government has committed to holding an international summit on AI safety at Bletchley Park this November.

Over the past couple of months, I've been having conversations inside and outside the UK government to better understand what the summit might aim to achieve, to form my own views on what it should aim to achieve, and to identify ways we and others can help make the summit a success.

A lot of the details are still in flux, but I wanted to start communicating more publicly about my thinking on the summit and the work we at CLTR are doing around it.

Moving AI governance discussions towards enforcement

I’m particularly excited about the idea that the UK AI Safety Summit could start to move AI governance discussions and commitments towards greater enforcement and accountability: that is, beyond companies making broad, voluntary commitments to safety and ethics, and towards a world where governments can hold those companies meaningfully accountable for safe and responsible AI development.

This is a broad aim. Some more concrete examples of what it could look like in terms of summit outcomes include:

  • More substantive, easily verifiable commitments from companies. Companies could agree to much more detailed and more easily verifiable versions of the White House voluntary commitments (which have been subjected to scrutiny by a range of independent experts), making it much easier to identify and call out where those commitments are or are not being followed. Commitments from companies to share information about their AI development and risk-assessment practices with third parties, including government and the public, may be particularly important for making all the other commitments easier to verify.

  • Commitments from governments to introduce reporting requirements for AI companies. Increased government visibility into AI development is necessary for any further progress in AI governance. Reporting requirements hit the ‘sweet spot’: they are achievable while still representing meaningful progress towards enforcement, and governments can begin implementing them today while other details of regulation are worked out.

  • Commitments from governments to establish enforcement, and discussions about the details of that enforcement. Governments could set out and commit to plans to put enforcement measures in place, such as mandatory reporting requirements or periodic auditing, and commit to building the state capacity that meaningful enforcement will require, such as the expertise to scrutinise AI models. While national governments may not be ready by November to commit to specific regulatory mechanisms, broader commitments to establish regulation by some date (e.g. end of 2024), and broader discussion of the pros and cons of different regulatory levers, would be very valuable at the summit. I’d be particularly excited to see the UK lead the way here by presenting clear plans and commitments for its domestic regulation of AI.

To support this move towards enforcement in AI governance, it is particularly important that independent experts and civil society play a central role in the AI summit and the preparations for it. I recently wrote a policy brief with recommendations in this direction, available here. I’m concerned that, by default, the summit will centre on discussions between the CEOs of AI companies and national leaders. But the event will have a better chance of producing substantive progress in AI governance if independent third parties, who can provide an important source of expertise and a counterweight to industry interests, are also centrally involved. I think this is as important in the lead-up to the summit as at the event itself: many of the substantive agreements and commitments will likely be reached beforehand, and independent scrutiny will be essential to getting them right.

I also think that, to establish real leadership internationally, the UK needs to be clear about how it plans to update its domestic regulatory approach.

Work we’re currently doing to support these aims

We’re speaking to a range of teams across the Department for Science, Innovation and Technology (DSIT) to understand how we can best support work related to the summit.

Some work we’re currently doing at CLTR includes:

  • Hosting workshops with RAND Europe and the Office for AI on foundation model governance, with a particular focus on gathering wider expert input on a range of policy proposals to support government policy development ahead of the summit.

  • Developing more detailed policy recommendations for reporting requirements, including what kinds of commitments or agreements might be achievable by the time of the summit, and discussing these recommendations with UK policymakers.

  • Exploring the enforcement levers currently available to governments (e.g. reporting, auditing, certification, licensing, liability), the trade-offs and costs those levers might introduce (e.g. regulatory capture, regulatory flight), and how these might be balanced.

If you’re working on or interested in anything related to the topics discussed in this post, we’d love to hear from you: you can get in touch at info@longtermresilience.org.

