Date: Jul 10, 2023
Paper launch: “Frontier AI Regulation: Managing Emerging Risks to Public Safety”
Topic/Area: Artificial Intelligence
The past six months have seen rapid advances in the capabilities of general-purpose AI systems, often called “foundation models”. While these advances promise many benefits for society, there is already evidence of how they can cause harm, from reflecting bias and discrimination to being used to generate misinformation. As the frontier of AI capabilities advances, so do its risks, with experts warning that near-future systems could be used to design novel weapons, deploy unprecedented cyber offensive capabilities, and increasingly evade human control.
Industry self-regulation will increasingly be insufficient to mitigate these risks: we need independent government regulation to ensure that as AI capabilities continue to advance, they do so in a way that is safe and beneficial for everyone. This is the key message of a paper published today to which I contributed, “Frontier AI Regulation: Managing Emerging Risks to Public Safety.”
The paper brings together a range of voices – from across academia, think tanks, and industry – to make the case for why regulating the forefront of AI development is so important, and to lay out a framework for how this might work in practice. The paper also highlights a number of challenges and open questions that will need wider input from governments and independent experts to address.
There are two concrete things I believe governments can do today to begin regulating the frontier of AI development, and a third that needs more thought:
The capabilities of frontier foundation models are currently growing more quickly than governments can design and implement AI policy. To provide meaningful oversight of, and guardrails for, frontier AI development, the government first needs to know where and how this development is happening, and to be able to ask questions about it.
Earlier this month, the UK government secured pledges from three leading labs to provide “early access” to new models. This is a great step towards greater regulatory visibility, and in another paper published recently, we make a number of recommendations for how this could work concretely. In particular, we suggest that labs should share information with the government about the compute used to train models and about the capability evaluations that have been conducted to assess and mitigate risks.
More generally, we suggest the UK government should work towards mandatory reporting requirements for companies training AI models above a certain compute threshold, as is commonplace in other industries (such as financial services), and towards the ability to audit AI companies, directly or via third parties.
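To make the idea of a compute threshold more concrete, here is a minimal, hypothetical sketch (not from the paper) of how a developer might estimate whether a training run crosses a reporting threshold. It uses the common approximation of roughly 6 FLOPs per parameter per training token for dense transformer training; the threshold value and model figures are placeholders, not numbers proposed anywhere in the paper.

```python
# Illustrative sketch only: estimating training compute against a hypothetical
# reporting threshold. Uses the common ~6 FLOPs-per-parameter-per-token
# approximation for dense transformer training; all figures are placeholders,
# not numbers proposed in the paper.

def estimated_training_flops(num_parameters: float, num_training_tokens: float) -> float:
    """Rough training compute estimate: ~6 * parameters * tokens."""
    return 6.0 * num_parameters * num_training_tokens


# Hypothetical threshold, chosen purely for illustration.
REPORTING_THRESHOLD_FLOPS = 1e26

run_flops = estimated_training_flops(num_parameters=70e9, num_training_tokens=2e12)
print(f"Estimated training compute: {run_flops:.2e} FLOPs")
print("Reporting required" if run_flops >= REPORTING_THRESHOLD_FLOPS else "Below threshold")
```

In practice, any real threshold would need to be set and periodically revised by the regulator, but even a rough estimate of this kind would let both developers and the government know which training runs fall within scope of reporting.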
A huge challenge at present is that we don’t have established, effective methods for evaluating what risks might be posed by frontier AI systems, or what safe and responsible frontier AI development looks like. Developing these methods should be a government priority, and requires engagement from a wide range of actors including civil society and academia, to avoid capture by industry interests, or settling too early on suboptimal standards.
This work may be particularly well suited to the newly established UK Foundation Models Taskforce, which could convene a range of experts through workshops, focus groups, and calls for evidence to pioneer ways to evaluate models for risks and societal impact.
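As a purely illustrative sketch of what a structured risk evaluation could look like in practice (a hypothetical outline, not a method from the paper or the Taskforce), one could run a model over a battery of risk-relevant prompts and tally which responses a reviewer flags. The `query_model` and `flag_response` functions below stand in for a real model API and a real review process, whether human or automated.

```python
# Hypothetical sketch of a structured risk evaluation: run a model over a small
# battery of risk-relevant prompts and tally which responses a reviewer flags.
# "query_model" and "flag_response" stand in for a real model API and a real
# review process (human or automated); neither is specified by the paper.

from typing import Callable, Dict, List


def run_evaluation(prompts: List[str],
                   query_model: Callable[[str], str],
                   flag_response: Callable[[str], bool]) -> Dict[str, object]:
    """Return a summary of how many responses were flagged across the battery."""
    details = []
    for prompt in prompts:
        response = query_model(prompt)
        details.append({"prompt": prompt,
                        "response": response,
                        "flagged": flag_response(response)})
    flagged = sum(1 for item in details if item["flagged"])
    return {"total": len(details), "flagged": flagged, "details": details}


if __name__ == "__main__":
    # Stand-in model and reviewer, for illustration only.
    demo_prompts = ["risk-relevant prompt 1", "risk-relevant prompt 2"]
    summary = run_evaluation(demo_prompts,
                             query_model=lambda p: "stub response",
                             flag_response=lambda r: False)
    print(f"{summary['flagged']}/{summary['total']} responses flagged")
```

The hard part, of course, is not the harness but agreeing on which prompts, behaviours, and flagging criteria actually capture the risks that matter, which is exactly where wider expert input is needed.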
Ensuring compliance with these standards is perhaps the most important, but also most challenging, part of the puzzle. The stakes are too high, and industry has too many competing incentives, to rely on voluntary compliance with standards for safe and responsible development. However, how best to enforce compliance in practice raises important questions that require more input from a wider range of independent, expert stakeholders.
One important issue is how to create meaningful regulatory oversight without inadvertently concentrating power with the small set of companies currently leading in AI development. Proposals like licensing have the advantage of being anticipatory – requiring companies to apply for a government license prior to training a model creates an opportunity for government oversight of high-risk models before they are released into the world. However, licensing proposals have also reasonably been criticised as anti-competitive, creating barriers to entry and making it easier for existing companies to continue dominating AI development.
When trying to manage this tradeoff, one aspect to consider is how safety regulation could best be complemented by other pro-competition regulations which seek to identify and mitigate abuses of dominance, as well as by support for widespread access to AI systems deemed sufficiently low-risk and beneficial to society. Another option worth considering is some form of tort liability that would hold developers liable for some types of harm even if they have complied with licensing requirements, helping avoid a situation where those requirements simply become a “tick box” exercise.
A related concern is that of regulatory capture: industry actors are currently very prominent voices in the conversation about AI regulation, and while they do have relevant knowledge and expertise, they also have a clear conflict of interest when it comes to setting the details and terms of regulation. We must design regulatory institutions that can limit and challenge the influence of private interests, and seek detailed input from academia and civil society before implementing a regulatory approach, especially something like licensing.
My personal view is that some form of licensing for frontier AI development will likely be needed to mitigate risks. That said, it should be carefully considered and complemented with both pro-competition regulation to prevent abuses of power, and some kind of liability regime to ensure that frontier AI developers are accountable for some harms even when licensing requirements are met (to avoid the latter simply becoming a “tick box” exercise). We plan to do more work in the coming months to convene a range of expert perspectives on these particular issues and further develop practical solutions.