
Written evidence submitted to DSIT: 'Pro-innovation approach to AI regulation' consultation

We were pleased to see the government’s white paper on 'A pro-innovation approach to AI regulation' published in March 2023. In many places, this white paper shows real promise for the UK to take a world-leading approach to AI regulation and effectively balance innovation and risk mitigation. We are particularly pleased to see the commitment to AI lifecycle accountability and the recognition that foundation models may need to be regulated differently from sector-specific applications.

As the government well recognises, a lot has happened in the world of AI since March. Major developments include heightened concern about societal-scale risks from AI, the UK government's announced commitment to global leadership in AI safety, and redoubled private-sector investment in accelerating AI R&D (including the merger of Google Brain and DeepMind).

The pace of change is rapid, and some parts of the white paper risk quickly becoming outdated. (We flagged this risk in our previous submission and engagement, but things have changed even more quickly than we expected [CLTR, 2022].)

In the wake of these new developments, our submission provides a risk-mapping framework that we think is helpful for defining the role of regulation and designing policies at specific intervention points. We then discuss some specific recommendations for the UK’s approach to AI regulation based on this framework.

The framework examines how a range of risks from AI originate and can be mitigated by intervening at different points in the AI lifecycle. We highlight four main categories of risk:

  1. Risks from Model Development: Dangerous capabilities could emerge during development, allowing models to adopt goals different from those specified by the user.

  2. Risks from Proliferation: If a model is widely accessible and no protections are in place to ensure its capabilities are used safely, there is potential for misuse by malicious actors.

  3. Risks from Deployment: Irresponsible or unsafe deployment in high-stakes domains could result in catastrophic accidents and damage to critical infrastructure.

  4. Risks from Wider Societal Impacts: As AI systems are increasingly adopted, their longer-term consequences could include economic displacement, geopolitical tensions, and erosion of democratic systems.

We suggest that a successful regulatory approach should address three broad aims:

  1. Increasing Visibility: The government needs visibility into how AI systems are developed and deployed, where sources of risk and gaps in risk management exist, and how any harms are felt by the public.

  2. Defining Best Practices: Policymakers will need to translate their understanding of risks and harm into best practice guidelines for developers, users, and companies.

  3. Incentivising and Enforcing Best Practices: Regulators can create incentives and penalties that encourage behavioural change by developers, companies, and users.

We use this framework to offer a high-level discussion of strengths and opportunities for improvement in the white paper’s approach. We provide three detailed policy recommendations:

  1. Regulating Foundation Models: Establishing technical standards and best practice guidelines required of all foundation model developers, deployers, and users is necessary now to begin preventing and mitigating risks. Additionally, laying the groundwork for frontier model regulation would begin reducing risks from dangerous capabilities and broader forms of misuse.

  2. Implementing an Information-Sharing Pilot: Setting up a voluntary information-sharing pilot program with leading AI labs, centred on model capability evaluations and compute usage, would provide the government with the visibility necessary for designing well-informed regulation and risk functions.

  3. Best Practices for a Central Risk Function: We endorse the white paper’s proposed creation of a central function for AI monitoring and evaluation, and we recommend that the government apply risk management best practices from the 'three lines of defence' model to the design and operations of this function.


Download the full submission: OAI Consultation Response - CLTR (PDF, 704KB)
