
The UK is heading in the right direction on AI regulation, but must move faster

By Jess Whittlestone, Tommy Shaffer Shane, and Ben Robinson, CLTR


This week marks an important milestone in the UK Government’s journey to regulating AI, with the first official update on the UK’s approach to AI regulation since the White Paper published last March.


We are pleased to see this update on the Government’s thinking, especially the firm recognition that targeted, binding requirements will be needed to address risks from the most advanced AI systems.


However, as the response itself acknowledges, the Government faces a real challenge in getting the pace of action right with any new legislation. We are concerned that the Government is moving too slowly here, particularly when set against the pace of progress in AI and faster regulatory action in other jurisdictions, including the US and EU.


Although there are important questions that will need to be answered in the process of developing new legislation, not yet having perfect answers should not be an impediment to starting that process.


In this post, we outline why we think the UK is broadly moving in the right direction on AI regulation and why the Government must now make fast progress on the difficult problems ahead, and we offer eight recommendations for next steps.



We broadly support the UK’s iterative and targeted approach to AI regulation


Getting AI regulation right is challenging: the technology is extremely fast-moving, has impacts across all of society, and poses very different risks and tradeoffs in different contexts. Given this, we broadly support the Government’s iterative and context-specific approach, which can be refined as the technology, its risks, and best practices for managing them become better understood.


The consultation response published this week highlights some important areas of progress over the past year. We are particularly pleased to see increased support for existing regulators in the form of a £10 million funding commitment. We agree that our existing regulators will be best placed to understand how AI should and should not be applied in most specific contexts, and ensuring these regulators have the skills and capacity they need will be essential.


We also welcome the firm recognition that new legislation will be needed to adequately address risks from highly capable AI systems. We’ve long advocated that a purely sector-specific approach to AI regulation will not address risks arising from the development and deployment of highly capable general-purpose systems, and that existing regulators will struggle to “reach” the relevant actors across the lifecycle, which could put an undue regulatory burden on smaller companies. We’re pleased to see the Government recognise these challenges in section 5.2.1 of the response, and proactively make the case for new legislation on highly capable general-purpose systems in section 5.2.3.



But the Government must now move quickly on new legislation


Though we believe the UK is moving in the right direction on AI regulation, now is the time for action, and we are concerned that the current Government approach risks creating unnecessary delays.


The Government is understandably concerned that moving too quickly could risk stymieing innovation, or could result in committing to rules which quickly become outdated. But there are also considerable risks to moving too slowly: existing harms and risks remain unaddressed while new ones will inevitably emerge; other countries and jurisdictions may increasingly set the terms of regulation; and innovation in the UK may suffer due to a lack of regulatory certainty for businesses. As the consultation response itself acknowledges in places, there is good reason to think that targeted legislation – focused on mandating and improving risk assessment, mitigation and governance of the most highly capable systems – can balance these two concerns effectively.


There will never be a perfect time to act: information about AI capabilities and risks will continue to be incomplete, meaning the Government must find ways to act with imperfect information. And although the consultation response published this week starts to outline the conditions under which the Government will move towards new legislation, these conditions remain very broadly defined, meaning there is no clear target for the Government to aim for.



Recommendations for the UK Government’s next steps


We would like to see the following next steps from the UK Government:


  1. A clearer roadmap towards legislation of highly capable general-purpose systems, providing accountability for progress. This would include, for example: i) a deadline for the publication of more detailed Government thinking on legislation; ii) deadlines for industry, civil society and academia to provide input on key policy questions (Box 5 in the consultation response), along with a much clearer sense of which input is most valuable and important; iii) considerably sharpened conditions the Government would like to see met before moving forwards with legislation – including, for example, clarification of how it will be determined whether voluntary commitments are being sufficiently adhered to – and a process and timeframe for checking back on these conditions. Inspiration can be drawn from the fact sheets released by the US government on actions taken since the Executive Order was announced in late 2023, which set out the specific initiatives committed to, the department or agency responsible for each, and a timeframe for delivery.

  2. Establishing and communicating clear checks and balances, as well as accountabilities within the Government, for delivering on this roadmap. It is not currently clear (at least from the outside) what exactly the scope of new functions such as the central AI risk function (CAIRF) and the AI Safety Institute (AISI) will be, how they will feed into established policymaking processes, or how they will interact with other risk coordination and assessment functions (such as those in the Cabinet Office). We suggest that this should include articulating where various stakeholders and functions within DSIT and Government more broadly sit on a “three lines” model, clearly separating out AI risk ownership, oversight, and audit.

  3. Clearer prioritisation of the most urgent and essential questions the Government must answer before it can legislate. The questions laid out in Box 5 of the consultation response are good ones, but most will never be answered fully. What do “good enough” answers to these questions look like? Which of these questions truly need to be answered for the Government to fully commit to establishing binding measures – a process that, even if started now, will likely take several years – and which involve ironing out details that will be an inevitable part of drafting new legislation? Answering this would give confidence that the UK is not simply stalling on new legislation, and give civil society and academia clearer direction on where expert input can practically help resolve these uncertainties.

  4. An update on how the Government is thinking about the scope of new legislation, and how thresholds for inclusion will be determined. This, in our view, is one of the key issues that will need to be resolved before moving forwards with legislation – especially determining a process for adapting these thresholds as the AI landscape changes, and considering whether any “narrow” models in high-risk domains (such as biotechnology) should also be subject to greater regulatory scrutiny. Outside experts will have valuable input to provide here, but this will be more productive the sooner the Government is able to lay out its current thinking for feedback.

  5. A commitment to work towards clear, Government-led best practices for developing and deploying advanced AI systems safely. These should include not just best practices for identifying and mitigating specific risks, but also corporate governance mechanisms that establish clear accountabilities for ensuring these practices are followed by developers. There has been some good progress here as part of the ongoing AI Safety Summits, but the UK is still hesitant to take a leading role in determining what these best practices should look like, instead tending to ask companies to provide the answers themselves. By the France Summit in January 2025 at the latest, we would like to see a more authoritative Government view on what best practices look like, and a commitment to put these on a statutory footing as requirements for frontier AI companies.

  6. An update on how the Government will ensure that any new regulation does not further concentrate power in the hands of a small number of tech companies. There are legitimate concerns that regulatory requirements for frontier AI systems, especially any relating to model release mechanisms, may create barriers to entry and make it easier for large AI companies to maintain and strengthen their dominance in this space. There are genuine tradeoffs here between managing serious misuse risks and enabling competition, with no simple answer – we would like to hear more on how the Government will ensure it consults and balances a variety of views, and how DSIT is working with the Competition and Markets Authority to think through these tradeoffs and decide how to manage them.

  7. More immediate action to create binding model access and reporting requirements for ‘frontier’ AI companies, including putting AISI’s access to AI models on a statutory footing. Although there are many uncertainties surrounding any new legislation, we believe there is fairly clear expert consensus that mandatory reporting requirements – including requiring companies to report training runs above a particular threshold, provide regular updates on risk assessment processes, and provide greater external transparency and access to the models themselves – will be necessary for meaningful government oversight. As the Government acknowledges, voluntary agreements are unlikely to be commensurate with the scale of the risks here: not only do such agreements risk breaking down over time, but they also risk creating an overly deferential relationship with industry, leaving the Government without the authority to challenge companies where it is most needed.

  8. Exploring ways to take action now in areas where we already have clarity, possibly through the use of secondary legislation, without needing to resolve everything first. Building on the previous point, we believe there are some areas where legislative action is clearly needed now, and suggest that the Government should not wait until it is ready to pass a fully comprehensive AI bill before acting in these areas. We would like to see the Government explore the use of existing powers or secondary legislation, where appropriate, to enable faster action – as the US has done in using the existing Defense Production Act to mandate reporting requirements.


Our overall view is that the case for additional, highly targeted regulation of highly capable general-purpose systems is at this point clear. While there are many challenging questions about the details of such regulation which need to be thoroughly considered as part of the process of designing new legislation, this uncertainty shouldn’t be an impediment to starting that process.



Appendix: CLTR’s engagement with the UK Government on AI regulation


Since late 2022, we have been working with the UK Government in a number of different ways on AI regulation.


We have particularly focused on providing advice to senior policymakers around the possibility of introducing new legislation for the regulation of “frontier” or foundation models – both helping to make the case for such regulation, and providing expert input on many of the specific challenges of getting it right (for example, how to define its scope, and how to approach open source releases).


This has also included collaborative engagement with CLTR’s Biosecurity and Risk Management teams on cross-cutting topics, such as the governance of narrow but highly capable biological tools, and appropriate corporate governance requirements for AI companies. For the past few months, Dr Jess Whittlestone, our Head of AI Policy, has been on secondment one day per week as an expert advisor to the Department for Science, Innovation and Technology (DSIT), particularly to the teams there working on AI regulation.


On 24 January 2024, we also worked with DSIT to convene a group of civil society and academic experts to provide input on the UK’s updated regulatory approach to the DSIT Secretary of State – attendees included senior representatives from the Ada Lovelace Institute, the Alan Turing Institute, and leading UK universities.

