
The UK Defence AI Strategy: ensuring safe and responsible use of AI

by Jess Whittlestone (Centre for Long-Term Resilience), Diane Cooke (Centre for the Study of Existential Risk/CSER), Shahar Avin (CSER), Kayla Matteucci (CSER), Sean O hEigeartaigh (CSER), Haydn Belfield (CSER), and Markus Anderljung (Centre for the Governance of AI)


Summary


We are pleased to see many important commitments to safe and responsible use of AI in the UK Defence AI Strategy, which was published last week. We particularly welcome commitments to carefully consider where risks could arise, to establish robust assurance processes, and to lead internationally in establishing best practice for safe and responsible use.


To deliver on these commitments in the coming years, the MoD will need to carefully consider a number of important issues. These range from how existing safety and assurance processes need to adapt to the challenges raised by AI, to determining when different types of AI technology may - or may not - be appropriate for the context.


The MoD will also need to ensure it has the expertise needed to deploy AI safely and responsibly, establish clear processes for assigning responsibility for safety and ethics across the MoD, and put in place mechanisms for external accountability.


As the MoD begins to implement this strategy, we suggest that it should:

  1. Communicate in more detail how it will address the key challenges outlined below, including how it will reevaluate existing safety and regulation regimes in light of AI, ensure AI is deployed at a pace appropriate to the capabilities and limitations of AI systems, and carefully consider in what circumstances it would decide that the risks of using AI outweigh the benefits. These details could be covered in related strategic and policy materials, starting with the AI Technical Strategy expected to be published in the near future.

  2. Recruit and train expertise in responsible innovation, AI ethics, and AI safety specifically, in addition to technical AI capabilities.

  3. Consider implementing the “three lines” model, which represents best practice in the private sector, to assign responsibility for risk management.

  4. Establish an advisory panel specifically focused on extreme risks from AI in defence and how they can be identified, prioritised, and mitigated.




Last week the UK Ministry of Defence published the Defence AI Strategy, and an accompanying policy document laying out the UK’s approach to ensuring “Ambitious, Safe, and Responsible” use of AI in defence.


We had the opportunity to give feedback on elements of earlier drafts of the strategy, and have also been working with the MoD to consider what kinds of institutional structures, policies, and practices are needed to mitigate risks from the use of AI in defence.


We are pleased to see many important commitments in the strategy and accompanying policy document which reflect our recommendations, including:


  • To “maintain a broad perspective on implications and threats, considering extreme and even existential risks which may arise.” We are particularly pleased to see the recognition that defence use of AI may pose serious risks, including to global security dynamics and by changing the character of conflict. We believe that it is especially important for militaries to recognise that risks can arise from more than just misuse of AI by adversaries, and that it will be essential to consider the potential harms from unintended consequences of increasingly powerful AI systems.


  • To use AI responsibly and establish rigorous and robust safety and assurance processes. We are particularly pleased to see a commitment to provide teams with “clear frameworks to support the early identification and resolution of safety, legal, and ethical risks,” and to see the MoD outline what it expects from a responsible AI supplier (p. 45). Deciding what and where to procure from industry will be an essential part of using AI responsibly. We believe that clear frameworks, processes, and lines of accountability will be essential to translating these commitments into practice, as we discuss below.


  • To lead internationally and “promote a common vision for the safe, responsible and ethical use of these technologies globally.” We are particularly encouraged to see specific commitments to develop and promote best practice and codes of conduct, as well as the commitment to “engaging with potential adversaries and nations whose approach to adopting AI differs from our own.” We believe this kind of international dialogue, encompassing a range of countries and perspectives, is essential for ensuring safe use of AI globally, and that the UK is well-placed to be a leader here.


  • To “pioneer and champion innovative approaches to testing, evaluation, verification and validation (TEV&V)”. It is widely recognised that modern AI systems pose fundamentally different challenges for testing and assurance when compared with more traditional software systems, and that existing TEV&V methods have not been adequately modified to ensure the reliability of AI systems in many safety-critical applications. If the MoD is going to use AI safely and responsibly, improving these methods - and understanding the limitations of what we can robustly say about the behaviour of AI systems - will be essential.
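
To make the TEV&V point above concrete, the short Python sketch below shows one kind of check that conventional software testing rarely covers: a toy classifier that scores well on a held-out test set can still change its decisions under small input perturbations. The dataset, model, and noise level are purely illustrative assumptions on our part, not anything drawn from the strategy; the sketch is only meant to show why assuring an AI system's behaviour requires different methods from assuring traditional software.

# Illustrative sketch only (assumed data, model, and thresholds): a classifier
# that passes an ordinary accuracy check can still flip its decisions under
# small input perturbations, the kind of behaviour AI-specific TEV&V must probe.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for an operational dataset.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"Held-out accuracy: {model.score(X_test, y_test):.3f}")

# Perturbation check: repeat predictions on slightly noised inputs and count
# how often the decision changes. A conventional unit test would not catch this.
noise = rng.normal(scale=0.1, size=X_test.shape)
flip_rate = np.mean(model.predict(X_test) != model.predict(X_test + noise))
print(f"Fraction of decisions that change under small perturbations: {flip_rate:.3f}")

In a real assurance regime, checks of this kind would need to be far more extensive (covering distribution shift, adversarial inputs, and operational edge cases), but even this minimal example shows why test coverage for AI has to be defined over input behaviour rather than over code paths.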



There are also many things we believe the MoD will need to consider more thoroughly in the coming months and years in order to deliver on these commitments in practice, including addressing the following key questions:


  • How will existing safety and assurance processes need to adapt to the challenges raised by AI? Though it is encouraging to see a commitment to pioneer new approaches to safety and assurance for AI, the strategy also emphasises on multiple occasions how the MoD will continue to apply its existing safety and regulation regimes to AI. As mentioned above, these existing regimes will often be insufficient to assure the behaviour and safety of modern AI systems. We would like to see the MoD commit to reevaluating relevant existing safety and regulation regimes in light of the unique challenges posed by AI, as well as to communicating more clearly about how it is avoiding premature deployment of AI in safety-critical contexts (such as in command and control and weapons systems) without sufficient assurance. This would not only help build public trust in the MoD’s use of AI, but could also help set international standards for assurance and reduce the chance that adversaries prematurely deploy unsafe systems.


  • How will the MoD balance the need to deploy AI “at pace” with the need for fundamentally new assurance processes? The strategy consistently emphasises the need for rapid integration of AI into defence, but using AI for the benefit of UK defence (and the wider world) is not always the same thing as deploying AI as quickly as possible. Especially given the challenges of assuring safety and robustness in some contexts, providing leaders with a greater sense of system reliability and building public trust may require strategic patience and cautious deployment of AI. We would like to see more emphasis on how the MoD will adapt and create new assurance systems in order to deploy AI at the appropriate pace given the capabilities and limitations of AI systems, and more detail on how it is assessing whether current AI capabilities are reliable enough to offer a real advantage.


  • How will the appropriate level of technology be selected for the context? It is encouraging to see the MoD commit to “not adopt AI for its own sake” and to carefully consider “where AI is the appropriate tool to adopt”. Different types and levels of AI capability pose very different challenges and offer different risk-benefit tradeoffs: a linear regression model whose parameters are learned from data and then fixed is very different from a reinforcement learning system that continues to evolve post-deployment, for example (a minimal sketch of this distinction is given at the end of this post). We would like to see more detail in future publications on how these decisions will be made in practice, and in what circumstances the MoD would decide the risks of using AI outweigh the benefits. Though these decisions will to some extent need to be made on a case-by-case basis, we believe it is important for the MoD to lay out some considerations and red lines in advance to ensure clarity and consistency. Communicating clearly about these processes and decisions, where possible, will be essential for both ensuring responsible use and building public trust.


  • How will the MoD ensure it has the expertise needed to deploy AI safely and responsibly? The strategy emphasises the need to upskill the workforce across the MoD in order to develop and deliver AI capabilities, but places less explicit emphasis on how to recruit and develop the expertise needed to ensure AI is used safely and responsibly. In part this requires technical expertise: the ability to see where AI is the “right” solution and where it has limitations that may cause problems. Upskilling procurement functions in the MoD so that they can reliably evaluate the offerings of industry suppliers may be especially important here. We also suggest the MoD should prioritise recruiting and training expertise in responsible innovation, AI ethics, and AI safety specifically.


  • How will responsibility for safety and ethics be assigned across the MoD, and what mechanisms for external accountability will be in place? Though some detail on responsibilities is outlined in Section 6.2 of the strategy, we suggest that this could be strengthened by learning from the “three lines” model of risk management, which represents best practice in the private sector. The model provides accountability and coordination for risk management across organisations by distinguishing between functions that own and manage risks (“first line”), functions that oversee risks (“second line”), and functions that provide independent assurance (“third line”). The MoD might consider whether its current risk management systems adequately cover and clearly delineate between these three lines of responsibility - if not, clarifying responsibilities and ensuring each has sufficient capacity could help make risk management more efficient and effective. Given the recognition that defence use of AI will have impacts that go far beyond the MoD (and even the UK), external accountability, feedback and scrutiny will also be important, and we would encourage the MoD to build on its work with the CDEI and ethics advisory panel to establish clearer mechanisms here as part of the third-line assurance function. This would need to form part of the MoD’s overall risk management framework and, ideally, of an overarching framework for risk management across Government.


  • How will more extreme risks be identified and managed? We are glad to see the strategy recognise that it is in the MoD’s interests to take a broad and long-term perspective on the kinds of risks that may arise from the use of AI in defence, but it is not clear how the MoD will ensure these are recognised in practice. In particular, the kind of expertise needed to identify and manage these risks may be somewhat different from that needed to identify and mitigate more immediate risks. It could be helpful for the MoD to establish an advisory panel specifically focused on how catastrophic risks from AI in defence can be identified and mitigated, composed of experts in extreme risks and policy foresight – something we would be happy to support.
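
As promised above, here is a minimal sketch of the distinction between a model whose parameters are fixed after training and a system that keeps adapting after deployment. We use online regression as a simple stand-in for a continually learning system (a full reinforcement-learning example would be longer, but the assurance point is the same), and the data, models, and drift pattern are illustrative assumptions on our part rather than anything drawn from the strategy.

# Illustrative sketch only (assumed data and models): a model frozen at
# deployment versus one that keeps updating as new data arrives in the field.
import numpy as np
from sklearn.linear_model import LinearRegression, SGDRegressor

rng = np.random.default_rng(0)

def make_batch(n=200, drift=0.0):
    # Synthetic data whose underlying relationship shifts over time.
    X = rng.normal(size=(n, 3))
    y = X @ np.array([1.0, -2.0, 0.5 + drift]) + rng.normal(scale=0.1, size=n)
    return X, y

# 1) Fixed model: trained once, parameters frozen before deployment.
X0, y0 = make_batch()
fixed = LinearRegression().fit(X0, y0)

# 2) Continually learning model: updated on every new batch after deployment.
online = SGDRegressor(random_state=0)
online.partial_fit(X0, y0)

for step in range(1, 6):
    Xs, ys = make_batch(drift=0.2 * step)   # the operating environment drifts
    online.partial_fit(Xs, ys)              # behaviour changes with new data
    print(f"step {step}: fixed coefficient = {fixed.coef_[2]:.2f}, "
          f"online coefficient = {online.coef_[2]:.2f}")

# The fixed model can be assured once against a frozen artefact; the online
# model's future behaviour depends on data it has not yet encountered.

The practical implication is that assurance for the first kind of system can largely happen before deployment, whereas the second requires ongoing monitoring, periodic re-validation, and clear triggers for taking the system out of service.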
