Date: Feb 7, 2024
The UK is heading in the right direction on AI regulation, but must move faster
Topic/Area: Artificial Intelligence

This week marks an important milestone in the UK Government’s journey to regulating AI, with the first official update on the UK’s approach to AI regulation since the White Paper published last March.
We are pleased to see this update on the Government’s thinking, especially the firm recognition that targeted, binding requirements will be needed to address risks from the most advanced AI systems.
However, as the response itself acknowledges, the Government faces a real challenge in getting the pace of any new legislation right. We are concerned that the Government is moving too slowly here, given the rapid pace of progress in AI and the faster regulatory action under way in other jurisdictions, including the US and EU.
Although there are important questions that will need to be answered in the process of developing new legislation, not yet having perfect answers should not be an impediment to starting that process.
In this post, we outline why we think the UK is broadly moving in the right direction on AI regulation and why the Government must now make fast progress in breaking through the difficult problems ahead, and we offer eight recommendations for next steps.
Getting AI regulation right is challenging: the technology is extremely fast-moving, it has impacts across all of society, and it poses very different risks and tradeoffs in different contexts. Given this, we broadly support the Government’s iterative and context-specific approach, which can be refined as the technology, its risks, and best practices for managing them become better understood.
The consultation response published this week highlights some important areas of progress over the past year. We are particularly pleased to see increased support for existing regulators in the form of a £10 million funding commitment. We agree that our existing regulators will be best placed to understand how AI should and should not be applied in most specific contexts, and ensuring these regulators have the skills and capacity they need will be essential.
We also welcome the firm recognition that new legislation will be needed to adequately address risks from highly capable AI systems. We’ve long advocated that a purely sector-specific approach to AI regulation will not address risks arising from the development and deployment of highly capable general-purpose systems, and that existing regulators will struggle to “reach” the relevant actors across the lifecycle, which could put an undue regulatory burden on smaller companies. We’re pleased to see the Government recognise these challenges in section 5.2.1 of the response, and proactively make the case for new legislation on highly capable general-purpose systems in section 5.2.3.
Though the UK is moving in the right direction on AI regulation, we believe now is the time for action, and we are concerned that the current Government approach risks creating unnecessary delays.
The Government is understandably concerned that moving too quickly could risk stymieing innovation, or could result in committing to rules which quickly become outdated. But there are also considerable risks to moving too slowly: existing harms and risks remain unaddressed while new ones will inevitably emerge; other countries and jurisdictions may increasingly set the terms of regulation; and innovation in the UK may suffer due to a lack of regulatory certainty for businesses. As the consultation response itself acknowledges in places, there is good reason to think that targeted legislation – focused on mandating and improving risk assessment, mitigation and governance of the most highly capable systems – can balance these two concerns effectively.
There will never be a perfect time to act: information about AI capabilities and risks will continue to be incomplete, meaning the Government must find ways to act with imperfect information. And although the consultation response published this week starts to outline the conditions under which the Government will move towards new legislation, these conditions remain extremely broadly defined, meaning there is no clear target for the Government to aim for.
Our overall view is that the case for additional, highly targeted regulation of highly capable general-purpose systems is at this point clear. While there are many challenging questions about the details of such regulation which need to be thoroughly considered as part of the process of designing new legislation, this uncertainty shouldn’t be an impediment to starting that process.
Since late 2022, we have been working with the UK Government in a number of different ways on AI regulation.
We have particularly focused on providing advice to senior policymakers around the possibility of introducing new legislation for the regulation of “frontier” or foundation models – both helping to make the case for such regulation, and providing expert input on many specific challenges of getting such regulation right (for example, how to define the scope of such regulation, and how to approach open source releases).
This has also included collaborative engagement with CLTR’s Biosecurity and Risk Management teams on cross-cutting topics, such as the governance of narrow but highly capable biological tools, and appropriate corporate governance requirements for AI companies. For the past few months, Dr Jess Whittlestone, our Head of AI Policy, has been on secondment one day per week as an expert advisor to the Department for Science, Innovation and Technology (DSIT), and particularly the teams there working on AI regulation.
On 24 January 2024, we also worked with DSIT to convene a group of civil society and academic experts to provide input on the UK’s updated regulatory approach to the DSIT Secretary of State – attendees included senior representatives from the Ada Lovelace Institute, the Alan Turing Institute, and leading UK universities.