
AI incident reporting: Addressing a gap in the UK’s regulation of AI

AI has a history of failing in unanticipated ways, with more than 10,000 safety incidents in deployed AI systems recorded by news outlets since 2014.

Author(s): Tommy Shaffer Shane

Date: June 26th 2024

Executive summary

AI has a history of failing in unanticipated ways, with more than 10,000 safety incidents in deployed AI systems recorded by news outlets since 2014. As AI becomes more deeply integrated into society, such incidents are likely to increase in both number and scale of impact.

In other safety-critical industries, such as aviation and medicine, incidents like these are collected and investigated by authorities in a process known as ‘incident reporting’.

We – along with a broad consensus of experts, the US and Chinese governments, and the EU – believe that a well-functioning incident reporting regime is critical for the regulation of AI, as it provides fast insights into how AI is going wrong.

However, incident reporting is a concerning gap in the UK’s regulatory plans.

This report sets out our case and provides practical steps that the Department for Science, Innovation & Technology (DSIT) can take to address this gap.

The need for incident reporting

Incident reporting is a proven safety mechanism, and will support the UK Government’s ‘context-based approach’ to AI regulation by enabling it to:

  1. Monitor how AI is causing safety risks in real-world contexts, providing a feedback loop that can allow course correction in how AI is regulated and deployed;
  2. Coordinate responses to major incidents where speed is critical, followed by investigations into root causes to generate cross-sectoral learnings;
  3. Identify early warnings of larger-scale harms that could arise in future, for use by the AI Safety Institute and Central AI Risk Function in risk assessments.

A critical gap

However, the UK’s regulation of AI currently lacks an effective incident reporting framework. Unless this gap is addressed, DSIT will lack visibility of a range of incidents, including:

  • Incidents in highly capable foundation models, such as bias and discrimination or misaligned agents, which could cause widespread harm to individuals and societal functions;
  • Incidents from the UK Government’s own use of AI in public services, where failures in AI systems could directly harm the UK public, such as through improperly revoking access to benefits, creating miscarriages of justice, or incorrectly assessing students’ exams;
  • Incidents of misuse of AI systems, e.g. detected use in disinformation campaigns or biological weapon development, which may need urgent response to protect UK citizens;
  • Incidents of harm from AI companions, tutors and therapists, where deep levels of trust combined with extensive personal data could lead to abuse, manipulation, radicalisation, or dangerous advice, such as when an AI system encouraged a Belgian man to end his own life in 2023.

DSIT lacks a central, up-to-date picture of these types of incidents as they emerge. Though some regulators will collect some incident reports, we find that this coverage is unlikely to capture the novel harms posed by frontier AI.

DSIT should prioritise ensuring that the UK Government finds out about such novel harms not through the news, but through proven processes of incident reporting.

Recommended next steps for UK Government

This is a gap that DSIT should urgently address. We recommend three immediate next steps:

  1. Create a system for the UK Government to report incidents in its own use of AI in public services. This is low-hanging fruit that can help the government responsibly improve public services, and could involve simple steps such as expanding the Algorithmic Transparency Recording Standard (ATRS) to include a framework for reporting public sector AI incidents. These incidents could be fed directly to a government body, and possibly shared with the public for transparency and accountability.
  2. Commission UK regulators and consult experts to confirm where there are the most concerning gaps. This is essential to ensure effective coverage of priority incidents, and for understanding the stakeholders and incentives required to establish a functional regime.
  3. Build capacity within DSIT to monitor, investigate and respond to incidents, possibly including the creation of a pilot AI incident database. This could comprise part of DSIT’s ‘central function’, and begin the development of the policy and technical infrastructure for collecting and responding to AI incident reports. This should focus initially on the most urgent gap identified by stakeholders, but could eventually collect all reports from UK regulators.

If you’re interested in discussing this work further, please reach out to the author, Tommy Shaffer Shane, at tommy@longtermresilience.org.
