
Paper launch: 'Strengthening Resilience to AI Risk: A guide for UK policymakers'

This Briefing Paper from CETaS and CLTR aims to provide a clear framework to inform the UK Government’s approach to understanding and responding to the risks posed by Artificial Intelligence (AI). The Government has shown increasing ambition to take a globally leading role in mitigating AI risks, but the UK is currently inadequately resilient to those risks. Now is the time to act decisively on the policy interventions required to address them. Any further delay risks one of two undesirable outcomes: a scenario where AI risks transition into widespread harms, directly impacting individuals and groups in society; or the converse scenario where widespread fear of AI risk results in a lack of adoption, meaning the UK forfeits the many societal benefits these technologies present.

This paper addresses that challenge by presenting an evidence-based, structured framework for identifying AI risks and associated policy responses. For the UK to foster a trustworthy AI ecosystem, policymakers must demonstrate both an understanding of the AI lifecycle and the capacity to intervene across it. This entails addressing risk pathways at their source in the design and training stages, mitigating deployment risks through the implementation of clear safeguards, and redressing harmful impacts over the longer-term diffusion of AI systems across society.

The UK is not alone in wanting to mitigate risks from AI while harnessing its wide-ranging societal benefits, in sectors from health and transport to manufacturing and national security. There will be areas of intense geopolitical competition, particularly in research and development capability. But there will also be areas where global cooperation is imperative: the UK cannot safeguard its population from AI risks in isolation, because the harms caused by AI systems do not respect borders. Notwithstanding the critical role of private and third sector stakeholders in shaping the future AI policy landscape, governments must be at the forefront of a global approach which is inclusive, transparent, adaptable, and interdisciplinary in nature.

Future policy must recognise the mutually reinforcing relationship between domestic and global policy interventions: by being proactive in implementing domestic AI policy measures and evaluating their success, the UK will be in a better position to advocate for the adoption of those policies on the global stage, which in turn will generate further support and investment for the UK’s domestic AI ecosystem. Achieving this virtuous cycle requires moving from ambition to action. The following recommendations are designed to support UK policymakers to this end.

• Policy interventions must build resilience to risks throughout each stage of the AI lifecycle, to mitigate known harms from AI and to anticipate and prevent future risks. Many measures will need to focus on discrete risks which arise from the application of AI in specific sectors such as healthcare or national security. However, interventions are also required to reduce the likelihood of harm irrespective of the deployment context. If the capabilities of general-purpose AI systems continue to progress rapidly, it may be impossible to predict and ultimately mitigate the full spectrum of risks that could arise from the deployment of AI in different sectors. This suggests that additional governance measures focused on earlier stages of the AI lifecycle, to manage the way that certain AI models are developed and initially deployed, will be needed to mitigate the full range of potential harms.

• To understand and mitigate the full spectrum of potential AI risks, a diverse and global range of experts from academia, civil society, and the private sector must be engaged, as well as members of communities already being negatively impacted by increased automation and, increasingly, by AI-driven technologies. The upcoming Global Summit on AI Safety presents an opportunity for the UK to convene this range of perspectives, and to ensure any plans for national or international AI governance are evidence-led, authoritative, and inclusive. Policymakers must work proactively to learn from individuals and communities who have been directly harmed by emerging uses of AI, as well as from those who have worked for years to document and anticipate the impacts of AI on society.

We suggest a framework for understanding how risks can arise at three stages of the lifecycle of AI systems and their potential impacts: (1) the design, testing and training stage; (2) the immediate deployment and usage stage; and (3) the longer-term deployment and diffusion stage. Policy recommendations are clearly linked to these stages to ensure risks are targeted and redressed as close to their source as possible. We propose three main goals for policy interventions: creating visibility and understanding; promoting best practices; and establishing incentives and enforcement. Below, we summarise our key recommendations under each of these goals.

1. To establish better visibility and understanding around the development of AI systems and their immediate and longer-term impacts, policymakers should:

a) Promote the adoption of privacy-preserving model training techniques, such as federated learning, to address concerns about data privacy during the model training process (see the first illustrative sketch below).

b) Co-develop pre-deployment impact assessments and post-deployment monitoring requirements for AI systems, particularly frontier AI systems and AI applications in sensitive domains which carry a higher risk of accidents or misuse. These should be created through collaboration with both industry and civil society.

c) Drive coordination of efforts to watermark AI-generated content (particularly visual content) and to develop AI-enabled authorship detection, to protect the public’s ability to produce, distribute, acquire and access reliable information.

3. To establish powerful incentives and enforce effective regulation, policymakers should:

a) Capitalise on the UK’s strengths in AI assurance by investing in infrastructure which allows developers to communicate the trustworthiness of their systems and attain credibility for adhering to best practices.

b) Articulate clear ‘red lines’ in the context of critical infrastructure, where autonomous agents (systems which generate a sequence of tasks to complete until a goal is reached; see the second sketch below) should not be used, explaining the necessity of keeping humans in control of functions like power supply and the nuclear deterrent.

c) Explore how different regulatory tools, including licensing, registration and liability, can be used to hold developers accountable and responsible for mitigating the risks of increasingly capable AI systems.

By demonstrating competence and commitment as well as ambition across these policy areas, the UK can establish its status as a leading voice in global discussions on AI risk and governance. Achieving this status will allow the UK to push for multilateral mechanisms which prioritise transparency and collective action, to coordinate global standards in high-risk areas of development and deployment, and to hold individual governments and private actors accountable for harmful applications of AI.
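The first recommendation above names federated learning as a privacy-preserving training technique. As a purely illustrative aid (not part of the paper), the sketch below shows federated averaging, one common federated learning scheme, on a toy linear model with synthetic data: each participant trains on its own private data and shares only model updates, so the raw data never leaves the client. All names and values here are placeholders.

```python
# Minimal sketch of federated averaging (FedAvg), one common federated
# learning scheme: each client computes a model update on its own data
# and shares only the update, never the raw data itself.
# The linear model, synthetic data, and all names are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, X, y, lr=0.1, steps=10):
    """One client's training: gradient steps on private data.
    Only the resulting weights leave the client, not X or y."""
    w = weights.copy()
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # squared-error gradient
        w -= lr * grad
    return w

# Each client holds a private shard of data that is never centralised.
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    clients.append((X, y))

global_w = np.zeros(2)
for round_num in range(5):
    # Clients train locally; the server only ever sees their updates.
    updates = [local_update(global_w, X, y) for X, y in clients]
    # The server averages the updates (weighted equally here, since the
    # shards are the same size) to form the next global model.
    global_w = np.mean(updates, axis=0)
    print(f"round {round_num}: weights = {global_w.round(3)}")
```

Even in this toy version, the privacy property is structural: the server's only inputs are the averaged weight vectors, never the clients' data arrays.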
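Recommendation 3b refers to autonomous agents that generate their own sequence of tasks until a goal is reached. The second sketch is a minimal, hypothetical illustration of that loop, with a human-approval gate standing in for the kind of ‘human in control’ safeguard the paper argues must apply to critical infrastructure. The planner, tasks, and approval function are placeholders, not any real system’s API.

```python
# Minimal sketch of the autonomous-agent pattern described above: the
# agent proposes its own next task until it judges the goal reached.
# The planner, tasks, and approval gate are illustrative placeholders.

def propose_next_task(goal, completed):
    """Stand-in planner: in a real agent, an AI model would generate
    the next task from the goal and the history of completed tasks."""
    plan = ["survey current state", "draft an action", "apply the action"]
    return plan[len(completed)] if len(completed) < len(plan) else None

def human_approves(task):
    """The 'red line' safeguard: a person must sign off before the
    agent acts on anything critical (automatic here, for the demo)."""
    print(f"approval requested for: {task}")
    return True

goal = "illustrative goal"
completed = []
while (task := propose_next_task(goal, completed)) is not None:
    if not human_approves(task):   # the human stays in control
        break
    completed.append(task)         # a real agent would execute the task
print("tasks carried out:", completed)
```

The point of the gate is that the loop cannot act autonomously end to end; removing it is exactly the situation the paper’s ‘red lines’ are meant to rule out for functions like power supply.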

Download the full report: CETaS-CLTR AI Risk Briefing Paper (PDF, 16.7 MB)
