CLTR's Head of AI Policy, Dr Jess Whittlestone, has authored a response to the Office for AI's AI Regulation Policy Paper, setting out how the UK can best regulate AI in a way that drives innovation and growth while also protecting our fundamental values.
We agree with the policy paper that a context-driven approach to AI regulation has many advantages. These include being sensitive to the fact that the risks posed by an AI system are often heavily dependent on context; drawing on the strengths of the UK’s existing regulatory ecosystem; and minimising barriers to beneficial innovation.
However, we also believe the proposed approach faces several important challenges, which need to be carefully addressed:
1. Promoting coherence and reducing inefficiencies across the regulatory regime. Although regulatory responses are likely to differ by sector, regulators will also face many similar challenges, where consistency and information sharing will be important.
2. Ensuring existing regulators have sufficient expertise and capacity. Understanding the regulatory implications of AI in even a single sector is not straightforward, and regulators will need training, the ability to share best practice, and access to external expertise to do this well.
3. Ensuring that regulatory gaps can be identified and addressed. There will inevitably be important areas of harm from AI that do not fit neatly within the remit of existing regulatory bodies, such as possible applications of AI to dual-use scientific research.
4. Being sufficiently adaptive to advances in AI capabilities. We are seeing rapid progress in AI capabilities, and a trend towards increasingly general-purpose systems. The regulatory regime must be able to keep pace with these advances and their regulatory implications.
Cross-sector principles will be a useful starting point for guiding the regulatory regime, and especially for promoting coherence, but much more will be needed. To navigate the above challenges successfully, we believe it is crucial that the regulatory regime is supported by a broader governance ecosystem which can effectively identify and address inefficiencies and gaps. In practice, this means identifying actors with a clear mandate and the capacity to address challenges 1-4 outlined above. We do not necessarily think this needs to be a single body: different actors may be well-positioned to fill different gaps, and an ecosystem of bodies with different responsibilities and powers may be most effective in practice.
In our view, one of the biggest challenges for this regulation will be keeping pace with the speed of AI progress. To address this challenge, we recommend that:
● A central government body such as the CDEI or Office for AI should invest in infrastructure to monitor AI inputs and progress.
● The Office for AI should explore how foresight and anticipatory governance methods can support the regulatory approach.
● The white paper should consider the specific challenges that ongoing progress in foundation models may pose for a context-specific approach to regulation.
Read the full response below.