Date: Nov 14, 2023

Reflections on the AI summit and fringe events

Topic/Area: Artificial Intelligence

On 1 and 2 November, the UK government held the widely anticipated AI Safety Summit. Our Head of AI Policy, Jess Whittlestone, attended day 1 of the summit, and various other team members attended a range of AI Fringe events throughout the week. In this post, we share some reflections on the week and what we hope to see next.

The summit showed real, meaningful progress in AI governance…

Overall, the summit did a good job of bringing together a wide range of countries and stakeholders to discuss important challenges and opportunities for progress in managing the risks of AI. It’s no small accomplishment to put together an event where US and Chinese government officials share a stage and talk about the shared challenges of AI. The Bletchley Declaration on AI Safety, agreed at the summit, demonstrates real, if high-level, progress in reaching an international consensus and shared understanding of the need to address a wide range of risks from AI.

We were particularly pleased to see a clear commitment to subsequent summits — the next taking place in just six months’ time in South Korea — ensuring that this event is just the start of a much longer process.

The summit helped generate a range of milestone announcements and commitments, including:

  • A 50-page UK government report on ‘Emerging processes for frontier AI safety’, which details 9 different practices that companies could (or perhaps should) adopt to ensure safe and responsible development. (We contributed to and reviewed some parts of the report, especially around model reporting and information sharing practices, and the need for companies to establish effective risk governance measures).
  • Various leading AI companies publicly sharing details of their safety policies, in response to questions from the UK government, as well as committing to provide government with early access to models for safety testing.
  • A commitment to establish AI Safety Institutes in both the UK and US, which will advance public sector understanding of risks from AI, as well as build capacity to scrutinise AI models and hold companies accountable.

These all demonstrate substantial progress towards greater oversight and accountability of companies developing the most advanced AI models. Putting pressure on companies to be transparent about their safety practices and opening those practices up to external scrutiny make it more likely that we’ll see a ‘race to the top’ where companies want to outcompete each other on safety. Building the expertise and capacity within government to understand risks and scrutinise systems will be crucial to ensuring companies can’t just ‘mark their own homework’.

… While making it clear that there is still much, much more to be done — especially on regulation

All of that said, it was clear from the day’s conversations that most participants agree on the need for more substantive government action, including regulation, to most effectively address the range of risks from AI development. There was clear consensus that we need to move beyond asking companies to answer questions about what safety looks like, towards government and third parties being able to step in with clear requirements, and the ability to evaluate whether companies are meeting those requirements.

It was clear that the ongoing debate about open-source in AI remains contentious and challenging: particularly whether there are some cases where the open-sourcing of models should be limited due to security or misuse concerns. While there were many strong opinions and important considerations pushing in different directions, there did seem to be some convergence on the idea that the debate needs to move beyond an overly binary ‘open vs. closed’ framing. Finding the right balance here will be crucial to getting any AI regulation right, and governments — especially the UK — must attempt to lay out a range of intermediate solutions and processes through deeper consultation with a variety of perspectives.

Finally, there is a lot of well-justified optimism about the progress of the UK’s Frontier AI Taskforce, now the UK AI Safety Institute, and a parallel institute being set up in the US. These institutes will be key to building the government capacity and expertise needed for governments to be more proactive in establishing requirements and regulations for AI developers. However, many people pointed out that there’s still a long way to go until anyone is able to confidently assess AI systems for the full range of risks we might be concerned about. Many of the methods for doing so that are currently popular – such as red teaming and model evaluations – are limited insofar as they really work best when you know precisely what you are looking for, and won’t catch ‘unknown unknowns’. Others pointed out the importance of recognising that risks can’t be fully understood by studying AI systems in isolation – we need more methods which consider how new AI systems will interact in a range of societal contexts.

The AI Fringe did a good job representing a wider range of topics and perspectives

Several of our team members also attended the AI Fringe throughout the week. The Fringe was a series of events across London and the UK designed to complement the Summit by convening a more diverse range of voices, and expanding the conversation beyond Frontier AI safety to Responsible AI more broadly.

Cassidy Nelson, our Head of Biosecurity, spoke on a panel about risks of AI-enabled biotechnology (link to panel discussion, and schedule of day’s activities). Cassidy and Sophie Rose, our Senior Biosecurity Policy Advisor, participated in a DSIT and Royal Society workshop in addition to other side events. Ben Robinson, our newly joined AI Policy Advisor, also attended various Fringe events throughout the week, as well as contributing to two roundtable discussions the week prior (one on international collaboration organised by Chatham House, and another on definitions of AI safety organised by Connected by Data).

The Fringe had an excellent range of topics and speakers, covering issues ranging from the role of AI in climate change, to the challenge of verifying content in the age of AI, to AI and the future of work.

Some of our takeaways from the week included:

  1. More agreement than disagreement on AI risks and policy interventions. While some saw the Fringe as organised in opposition to the Summit, convening a more diverse range of voices and topics, many of the discussions seemed complementary to the work happening in Bletchley. From better understanding the risks of AI implementation in the workplace, to how AI might impact democracy, there was a sense that both frontier risks, and risks from current or less advanced systems, were important to address. This being said, there was some confusion about what the government’s definition of frontier AI meant, as well as claims that the government’s narrow focus on the frontier was driven by tech companies wanting to control the conversation about regulation. 
  2. Importance of sociotechnical safety. A range of speakers emphasised that safety cannot be understood purely as a technical issue, where the risks from AI models are studied in isolation with limited analysis of their interaction with humans and in society. Various analogies to other fields were given, such as safety in the aviation industry, and the similarities and differences with these fields were debated. See for instance this panel on defining AI safety in practice, with a discussion of human and organisational factors leading to unsafe outcomes in the aviation industry at 4:00.
  3. Importance of engaging the public on AI. Throughout the week, there were various reasons given as to why broad public engagement is necessary for good policymaking, even in an area like AI regulation that might seem best fit just for experts. Epistemically, better policy is made when a broader range of stakeholders and viewpoints are included. Politically, policy that is created and passed with public engagement is more likely to be popular in the future once implemented. And democratically, the public ought to be engaged in the decision-making of something as potentially transformative to society as AI. In practice, it’s not possible to have broad public engagement on all policy decisions, so engagement needs to be strategic, with the ‘why’ properly considered. These points, and more, were discussed throughout the week, including in this panel on public involvement in AI governance.

Next steps

Overall, it’s impressive that the UK government managed to convene such a wide-reaching event, and agree on a number of practical commitments, in such a short amount of time. We’re pleased to see early signs that traditional governance and diplomacy processes are adapting to the challenge posed by rapid and hard-to-predict AI progress.

By the next summit in 6 months’ time, we’d love to see:

  • Governments taking more proactive steps to tell companies what ‘safe and responsible development’ should look like, rather than looking to them too much for the answers;
  • The UK in particular drawing on a wider base of civil society expertise and perspectives to scrutinise companies’ approaches, and to determine the requirements they should be subject to;
  • More substantive discussions of specific regulatory proposals and how domestic regulation can best be aligned with international governance mechanisms.

At CLTR we’re planning to do work across all of these areas in the coming months, and Jess Whittlestone has taken on a 1 day/week secondment to the Department for Science, Innovation and Technology to advise on the government’s ongoing approach to regulation. If you’re doing similar work and are interested in collaborating, please get in touch at info@longtermresilience.org.
