Future of compute review – submission of evidence

Author(s): Dr Jess Whittlestone

Date: August 8th 2022

CLTR’s Head of AI Policy, Dr Jess Whittlestone, has co-authored a submission of evidence to the UK Government’s Future of Compute Review, recommending that the review should (i) explore ways to increase compute capacity for academia, and (ii) consider how the UK government could monitor and govern compute usage.

Prepared by:

Dr Jess Whittlestone, Centre for Long-Term Resilience (CLTR)

Dr Shahar Avin, Centre for the Study of Existential Risk (CSER), University of Cambridge

Katherine Collins, Computational and Biological Learning Lab (CBL), University of Cambridge

Jack Clark, Anthropic PBC

Jared Mueller, Anthropic PBC

Executive Summary

We are a group of experts in AI governance with experience across academia, industry, and government. Our evidence submission is framed around the question of how the UK’s compute strategy can help achieve the goals of the National AI Strategy: investing in the long-term needs of the AI ecosystem; ensuring AI benefits all; and governing AI effectively.

Access to increasingly large amounts of computing power has been a key driver of AI progress in recent years. To leverage the benefits of this progress for society and the economy, the UK government must manage the accompanying risks effectively and proactively. Compute-intensive AI progress is particularly likely to lead to systemic, high-stakes, and difficult-to-anticipate impacts, which require anticipatory governance approaches to manage.

By taking more proactive measures to understand and influence how large-scale compute is used to drive AI progress, the government can more effectively ensure that AI benefits all of society.

We make two specific recommendations for how the UK can do this, which we suggest this review should explore:

1. Our first recommendation is that the review should explore ways to increase compute capacity for academia, especially for researchers working on beneficial AI applications and on AI safety and security research (relevant to questions 1-3 and 9-11).

Given the importance of compute for AI progress, which groups have access to large-scale compute will determine the interests and incentives that shape AI development.

There is currently a large disparity between the computational resources available to AI researchers in academia and industry, and evidence of substantial latent demand for compute among academic researchers.

Increasing compute access for academia would strengthen the UK’s AI ecosystem while improving the scrutiny and accountability of commercial research and shifting incentives towards longer-term benefits for society.

We also outline additional measures which could support academics’ ability to contribute to compute-intensive AI research, including providing access to trained models via APIs.
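
As a minimal sketch of what API-based access to trained models looks like from a researcher’s side, the Python snippet below sends a single query to a hosted model. The endpoint URL, access token, and request fields are hypothetical placeholders rather than any specific provider’s interface.

```python
# Minimal sketch of structured access to a trained model via an API.
# The endpoint, token, and payload fields are hypothetical placeholders,
# not any specific provider's interface.
import requests

response = requests.post(
    "https://api.example-model-provider.com/v1/complete",  # hypothetical URL
    headers={"Authorization": "Bearer RESEARCHER_ACCESS_TOKEN"},
    json={
        "prompt": "Summarise recent compute-intensive AI progress.",
        "max_tokens": 128,
    },
    timeout=30,
)
print(response.json())
```

Access of this kind lets academics study, evaluate, and build on large models without needing the compute required to train them.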

2. Our second recommendation is that the review should consider how the UK government could monitor and govern compute usage in AI development more broadly (relevant to questions 7 and 12).

In order to effectively govern AI, the UK government needs better information about potential harms, and more effective tools for intervening to prevent harm.

Compute is a strategic leverage point that can help with both of these challenges: it provides information about potentially high-risk uses of AI, and it gives the government a practical lever for shaping AI progress.

Practically, compute is much more easily monitored and governed than other inputs such as data, algorithms, and talent, because it is relatively easy to quantify and its supply and use are highly centralised.
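
To illustrate how easily compute can be quantified: the training compute of a large dense model can be approximated from just two quantities, parameter count and training tokens, using the widely used rule of thumb of roughly six floating-point operations per parameter per token. The sketch below applies this approximation; the example figures are illustrative (roughly GPT-3-scale) and are not drawn from our submission.

```python
def estimate_training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate total training compute for a dense transformer
    using the common ~6 * parameters * tokens rule of thumb."""
    return 6.0 * n_params * n_tokens

# Illustrative, roughly GPT-3-scale figures: 175e9 parameters, 300e9 tokens.
flops = estimate_training_flops(175e9, 300e9)
print(f"Estimated training compute: {flops:.2e} FLOP")  # ~3.15e+23 FLOP
```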

We outline a ‘tiered’ approach to compute-indexed AI monitoring and governance, and explain how this could strongly support the government’s aim of establishing a pro-innovation approach to regulating AI, as well as the accurate assessment of long-term AI safety and risk.
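
As a minimal sketch of how compute-indexed tiering could work in practice, the snippet below maps a training run’s estimated FLOP count to escalating oversight requirements. The thresholds, tier names, and requirements are hypothetical illustrations, not figures proposed in our submission.

```python
# Hypothetical compute tiers mapping estimated training FLOP to
# oversight requirements; all thresholds and labels are illustrative.
TIERS = [
    (1e24, "Tier 3: pre-registration and independent safety review"),
    (1e22, "Tier 2: mandatory reporting of training-run details"),
    (0.0, "Tier 1: no additional requirements"),
]

def governance_tier(training_flops: float) -> str:
    """Return the highest tier whose threshold the run meets."""
    for threshold, requirement in TIERS:
        if training_flops >= threshold:
            return requirement
    return TIERS[-1][1]

print(governance_tier(3.15e23))  # -> Tier 2: mandatory reporting ...
```

Indexing requirements to compute in this way concentrates regulatory attention on the small number of frontier-scale training runs while leaving smaller-scale work largely unburdened, consistent with a pro-innovation approach.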

We first provide some background on compute as an important policy lever for shaping beneficial AI development, before outlining the two recommendations above in more detail.
