Date: Nov 14, 2023
Reflections on the AI summit and fringe events
Topic/Area:
Artificial Intelligence
On the 1st and 2nd of November, the UK government held the widely anticipated AI Safety Summit. Our Head of AI Policy, Jess Whittlestone, attended day 1 of the summit, and various other team members attended a range of AI Fringe events throughout the week. In this post, we share some reflections on the week and what we hope to see next.
Overall, the summit did a good job bringing together a wide range of countries and stakeholders to discuss important challenges and opportunities for progress in managing the risks of AI. It’s no small accomplishment to put together an event where US and Chinese government officials share a stage and talk about the shared challenges of AI. The Bletchley Declaration on AI Safety, agreed at the summit, demonstrates real, if high-level, progress in reaching an international consensus and shared understanding of the need to address a wide range of risks from AI.
We were particularly pleased to see a clear commitment to subsequent summits — the next one being in just six months in South Korea — ensuring that this event is just the start of a much longer process.
The summit helped generate a range of milestone announcements and commitments, including:
These all demonstrate substantial progress towards greater oversight and accountability of companies developing the most advanced AI models. Putting pressure on companies to be transparent about their safety practices and opening those practices up to external scrutiny makes it more likely we’ll see a ‘race to the top’ where companies want to outcompete each other on safety. Building the expertise and capacity within government to understand risks and scrutinise systems will be crucial to ensuring companies can’t just ‘mark their own homework’.
All of that said, it was clear from the day’s conversations that most attendees agree on the need for more substantive government action, including regulation, to most effectively address a range of risks from AI development. There was clear consensus that we need to move beyond asking companies to answer questions about what safety looks like, towards government and third parties being able to step in with clear requirements, and the ability to evaluate whether companies are meeting those requirements.
It was clear that the ongoing debate about open-source AI remains contentious and challenging: particularly whether there are some cases where the open-sourcing of models should be limited due to security or misuse concerns. While there were many strong opinions and important considerations pushing in different directions, there did seem to be some convergence on the idea that the debate needs to move beyond an overly binary ‘open vs. closed’ framing. Finding the right balance here will be crucial to getting any AI regulation right, and governments — especially the UK — must attempt to lay out a range of intermediary solutions and processes here through deeper consultation with a variety of perspectives.
Finally, there is a lot of well-justified optimism about the progress of the UK’s Frontier AI Taskforce, now the UK AI Safety Institute, and a parallel institute being set up in the US. These institutes will be key to building the government capacity and expertise needed for governments to be more proactive in establishing requirements and regulations for AI developers. However, many people pointed out that there’s still a long way to go until anyone is able to confidently assess AI systems for the full range of risks we might be concerned about. Many of the methods for doing so that are currently popular – such as red teaming and model evaluations – are limited insofar as they really work best when you know precisely what you are looking for, and won’t catch ‘unknown unknowns’. Others pointed out the importance of recognising that risks can’t be fully understood by studying AI systems in isolation – we need more methods which consider how new AI systems will interact in a range of societal contexts.
Several of our team members also attended the AI Fringe throughout the week. The Fringe was a series of events across London and the UK designed to complement the Summit by convening a more diverse range of voices, and expanding the conversation beyond Frontier AI safety to Responsible AI more broadly.
Cassidy Nelson, our Head of Biosecurity, spoke on a panel about the risks of AI-enabled biotechnology (link to panel discussion and schedule of the day’s activities). Cassidy and Sophie Rose, our Senior Biosecurity Policy Advisor, participated in a DSIT and Royal Society workshop, in addition to other side events. Ben Robinson, our newly appointed AI Policy Advisor, also attended various Fringe events throughout the week, as well as contributing to two roundtable discussions the week prior (one on international collaboration organised by Chatham House, and another on definitions of AI safety organised by Connected by Data).
The Fringe had an excellent range of topics and speakers, covering issues as diverse as the role of AI in climate change, the challenge of verifying content in the age of AI, and AI and the future of work.
Some of our takeaways from the week included:
Overall, it’s impressive that the UK government managed to convene such a wide-reaching event, and agree on a number of practical commitments, in such a short amount of time. We’re pleased to see early signs that traditional governance and diplomacy processes are adapting to the challenge posed by rapid and hard-to-predict AI progress.
By the next summit in 6 months’ time, we’d love to see:
At CLTR we’re planning to do work across all of these areas in the coming months, and Jess Whittlestone has taken on a 1 day/week secondment to the Department for Science, Innovation and Technology to advise on the government’s ongoing approach to regulation. If you’re doing similar work and are interested in collaborating, please get in touch at info@longtermresilience.org.