Summary
Frontier safety frameworks have emerged as an important upstream tool for AI companies to manage extreme risks from AI. These frameworks aim to set risk thresholds for powerful AI models and specify mitigations if those thresholds are reached (e.g. re-training or delaying deployment).
Currently, few of these frameworks include the components of effective risk governance, understood as the structure, policies and processes for holistically managing risk across an organisation. This reduces the frameworks' overall effectiveness, potentially creating silos, blurred accountabilities and unclear decision-making processes as risk thresholds are approached or reached, thereby increasing the chance that harmful models are released.
This report outlines why risk governance should be enhanced in future versions of safety frameworks, along with recommendations for AI companies and Governments in the near and longer term. Key contributions include:
- Analysis of safety frameworks with and without risk governance (Table 1, p. 5)
- Overview of risk governance components (Table 2, p. 6) and whether currently published frameworks have evidence of these components (p. 9 and Appendix 1)
- Recommendations for AI companies and Governments (p. 11 and below), drawing from best practice in other industries (Appendix 2).
Recommendations
For AI Companies:
- Implement a ‘Minimum Viable Product’ (MVP) of best-practice risk governance. This should include: a) risk ownership, b) a sub-committee of the Board, c) challenging and advising the first line, d) external audit, e) a central risk function, f) an executive risk officer, and g) risk tracking and monitoring.
- Self-assess the current safety framework and risk management practices against the risk governance components provided (Table 2, p. 6), identifying gaps and areas for improvement. Use the recommendation matrix (Figure 2, p. 11) to complete the risk governance framework over time.
Figure 1 below plots these recommendations for AI companies on a matrix of value to risk management versus difficulty of implementation. See the recommendations section of the full report for a fuller explanation of this matrix, including the MVP components.
For Governments:
- Encourage and incentivise AI companies to commit to implementing risk governance at the “MVP” level, for example by requesting that companies share their risk governance processes publicly and by including risk governance in evaluations of safety frameworks.
- Require AI companies to implement two key components that are essential for external clarity on the efficacy of the risk management process and for transparency about the risks that arise: an annual resilience statement and robust whistleblowing channels.
Figure 1: Risk governance recommendation matrix for AI companies
For the full report, please click the ‘Download’ button below. If you have any queries about this report, please get in touch with ben@longtermresilience.org.
Suggested citation: Ben Robinson, Malcolm Murray, James Ginns and Marta Krzeminska (2025), ‘Why frontier AI safety frameworks need to include risk governance’, The Centre for Long-Term Resilience