
Why frontier AI safety frameworks need to include risk governance


Authors: Ben Robinson, Malcolm Murray, James Ginns, Marta Krzeminska

Date: February 5th 2025

Summary

Frontier AI safety frameworks have emerged as an important upstream tool for AI companies to manage extreme risks from AI. These frameworks aim to set risk thresholds for powerful AI models and to specify mitigations if those thresholds are reached (e.g. re-training or delaying deployment).

Currently, few of these frameworks include components of effective risk governance, understood as the structure, policies and processes for holistically managing risk across an organisation. This reduces the overall effectiveness of these frameworks, potentially creating silos, blurred accountabilities and unclear decision-making processes when risk thresholds are approached or reached, thereby increasing the chance of harmful models being released.

This report outlines why risk governance should be enhanced in future versions of safety frameworks, along with recommendations for AI companies and Governments in the near and longer term. Key contributions include: 

  • Analysis of safety frameworks with and without risk governance (Table 1, p. 5)
  • Overview of risk governance components (Table 2, p. 6) and whether currently published frameworks have evidence of these components (p. 9 and Appendix 1)
  • Recommendations for AI companies and Governments (p. 11 and below), drawing from best practice in other industries (Appendix 2).

Recommendations 

For AI Companies: 

  1. Implement a ‘Minimum Viable Product’ (MVP) of best practice risk governance. This should include: a) risk ownership, b) a sub-committee of the Board, c) challenging and advising the first line, d) external audit, e) a central risk function, f) an executive risk officer, and g) risk tracking and monitoring.
  2. Self-assess the current safety framework and risk management practices against the risk governance components provided (Table 2, p. 6), identifying gaps and areas for improvement. Use the recommendation matrix (Figure 2, p. 11) to complete the risk governance framework over time.


Figure 1 below plots these recommendations for AI companies on a matrix of value to risk management against difficulty to implement. See the recommendations section of the report for more detail on this matrix, including the MVP components.

For Governments:

  1. Encourage and incentivise AI companies to commit to implementing risk governance at the “MVP” level, for example by requesting that companies share their risk governance processes publicly and by including risk governance in evaluations of safety frameworks.
  2. Require AI companies to implement two key components that are essential for external clarity on the efficacy of the risk management process and for transparency about the risks that arise: an annual resilience statement and robust whistleblowing channels.

Figure 1: Risk governance recommendation matrix for AI companies (difficulty to implement vs. value to risk management)

For the full report, please click the ‘Download’ button below. If you have any queries about this report, please get in touch at ben@longtermresilience.org.


Suggested citation: Ben Robinson, Malcolm Murray, James Ginns and Marta Krzeminska (2025), ‘Why frontier AI safety frameworks need to include risk governance’, The Centre for Long-Term Resilience
