Updated: Sep 12, 2021
*** Please note that the application deadline for this role has now passed, and we are no longer receiving applications. Thank you to those who have applied – we will be in touch in the coming days. ***
Job title: Head of AI Policy
Reports to: Chief Executive
Salary: £48k to £80k, depending on skill set and experience
Location and travel requirements: Flexible location within the UK, with the expectation of attending our central London office on Mondays, and being able to travel to Westminster for in-person meetings with government and other stakeholders as needed. Currently, much of the role can be conducted from a remote location, but we anticipate that the requirement to travel to Westminster for meetings will rise to an average of two or three days per week over time, as more policymakers and other key stakeholders return to the office in the coming months.
Application form: Submit here.
Deadline: Sunday 12 September 2021, 11pm London time.
For any questions about the role, please contact Angus Mercer at email@example.com
About the Centre for Long-Term Resilience
The Centre for Long-Term Resilience (CLTR) is a UK-based think tank that focuses on three of the most crucial long-term challenges of our time – the safe development of AI, pandemic preparedness, and improving the way governments manage these types of extreme risks. We recently launched our first report, Future Proof, which was co-authored by Oxford’s Toby Ord (author of The Precipice) and received coverage in The Times, The Telegraph, Sky News, Wired and UnHerd.
In our first two years, we have helped bring these challenges to the attention of senior policymakers in Westminster. The UK’s recent Integrated Review highlighted the need to boost resilience to catastrophic-impact events, and extreme risks appear to be a key theme of the Government’s upcoming National Resilience Strategy. As the UK begins to develop its longer-term policy response to Covid-19, there is now an excellent opportunity to transform the country’s resilience to the full range of extreme risks we face.
We're a small non-profit with big plans. We have just secured the funding required to make our first full-time hires beyond the founding team. We will likely be making four new hires in the coming months: three Heads of Policy to lead policy development and advocacy across each of our three focus areas, and an Operations Manager / Strategy Adviser to help manage our transition to a six-person organisation.
About this role
We are looking for a smart, driven and senior Head of AI Policy who would report directly to the Chief Executive. This person would lead the development of, and advocacy for, CLTR’s key AI policy recommendations, and ensure that the responsible development of AI is given the attention it deserves in senior UK policy circles. This is a one-off opportunity to join our young and dynamic organisation at the ground level, at a time when significant policy progress in our priority areas looks to be achievable.
The AI policy development work would likely be about 50% of the role, and would consist of:
Working with our network of academic and technical experts, and with UK policymakers and other stakeholders, to develop and update the AI recommendations set out in Future Proof.
Working with our network of academic and technical experts, and with UK policymakers and other stakeholders, to develop further concrete, actionable policy recommendations.
Identifying the most effective way to communicate these recommendations to senior stakeholders.
Building and maintaining trusted relationships with senior academics, technical experts and policymakers in this space.
The AI advocacy work would likely be about 45% of the role, and would consist of:
Identifying opportunities to influence government decision making in priority areas of AI risk policy.
Identifying and building relationships with key stakeholders, including senior academics, technical experts, government decision makers and supportive partners.
Developing and delivering plans to make the case for our policy recommendations to government decision makers, both in private (through written advice notes, meetings, workshops and crisis simulations) and, as needed, through more public campaigns (supported by a public affairs / communications agency).
Being responsible for our thought leadership efforts on the issue of AI risk by writing articles, conducting interviews and participating in relevant public and private events that help further our core advocacy priorities (supported by a public affairs / communications agency).
The remaining 5% of the role would involve helping set CLTR’s strategic direction.
Skills and experience required
We want the best people and we don’t want biases holding us back. We strongly encourage people of every colour, orientation, age, gender, origin, and ability to apply. If you are passionate about CLTR’s mission and think you have what it takes to be successful in this role even if you don’t check all the boxes, please apply. We’d appreciate the opportunity to consider your application. Our intention when hiring is to optimise for talent, values and potential rather than experience.
Strong alignment with CLTR’s core mission and values.
Ability to lead our strategic thinking about CLTR’s AI policy development and advocacy priorities.
Outstanding advocate and communicator, with a proven:
understanding of how to advocate and communicate policy recommendations to senior stakeholders;
ability to plan for, lead and facilitate meetings with very senior stakeholders including government decision makers, academics and other partners.
Strong understanding of the UK AI policy space and the key players within it, ideally with a good existing network in this space.
Significant AI policy expertise, especially in domains relevant to the mitigation of AI risk. Examples include (but are not limited to) the issue areas referenced in the AI section of Future Proof:
Improving foresight and progress tracking in AI research;
Bringing more AI technical expertise into government;
The risk of incorporating AI systems into nuclear command, control and communications technology;
How to stress-test AI system security;
The optimum definition of Lethal Autonomous Weapons Systems;
Semiconductor supply chains and their role in foreign and security policy;
How governments can develop machine-learning-relevant compute resources for socially beneficial applications and AI safety; and
How governments can effectively boost spending on AI safety research and development.
A thought leader, or future thought leader, in the UK AI policy space.
Excellent writing skills, and ability to bring together information from a range of sources and present it clearly and compellingly to a non-expert audience.
Strong self-direction, proactivity and work ethic. Comfortable with working independently on solo projects a significant portion of the time. Eagerness for discussion and feedback.
A talented and reliable project manager, capable of both managing a range of workstreams simultaneously, and managing ‘up’ and ‘down’ the organisation as needed.
Experience in managing a team, and contributing to the hiring process where necessary. At the outset, the role will involve managing a public affairs / communications agency that will support your work, but over the coming years the role will likely also involve managing (and helping to hire and onboard) an in-house policy team.
Directness and openness in giving and receiving feedback.
Comfort working within a fast-paced environment, and with flexing priorities at short notice if a time-critical project comes in.
Ability to help drive a positive team culture, in an advocacy environment where the feedback loops between policy advocacy and policy impact are often slow.
A growth mindset; a willingness to learn and improve.
Excitement about working for a relatively new organisation whose processes are developing as we grow the team.
In this role, we are looking for ‘T-shaped people’ who are both generalists (highly skilled at a broad set of valuable things) and deep experts in a narrow discipline.
Skills and experience preferred
Experience engaging with UK media – for instance, writing and pitching articles, conducting interviews and fielding questions from journalists.
Location and travel requirements
Flexible location within the UK, with the expectation of attending our central London office on Mondays, and being able to travel to Westminster for in-person meetings with government and other stakeholders as needed. Currently, much of the role can be conducted from a remote location, but we anticipate that the requirement to travel to Westminster for meetings will rise to an average of two or three days per week over time, as more policymakers and other key stakeholders return to the office in the coming months.
Salary and benefits
Salary: £48k to £80k, depending on skill set and experience.
Benefits package includes generous parental leave entitlements, an employer pension contribution scheme, a cycle-to-work scheme and an annual grant for your learning and development.
A commitment from CLTR to:
Care deeply about your wellbeing, career development and overall experience working with us;
Respect your preferred working patterns, wherever possible; and
Support you to carve out time for deep, uninterrupted work and avoid the hyperactive hive mind!