
Locations: London, UK; New York City, New York, US; San Francisco, California, US
Artificial Intelligence could be one of humanity’s most useful inventions. At Google DeepMind, we’re a team of scientists, engineers, machine learning experts and more, working together to advance the state of the art in artificial intelligence. We use our technologies for widespread public benefit and scientific discovery, and collaborate with others on critical challenges, ensuring safety and ethics are the highest priority.
Our team identifies, assesses, and mitigates potential catastrophic risks from current and future AI systems. As a member of technical staff, you will design, implement, and empirically validate approaches to assessing and managing catastrophic risk from current and future frontier AI systems. At the moment, these risks range from loss of control of advanced AI systems or automated ML R&D to misuse of AI for widespread CBRN or cyber harm.
The Risk Assessment team measures and assesses the possible risks posed by frontier systems, making sure that GDM knows the capabilities and propensities of frontier models and that adequate mitigations are in place. We also check that those mitigations do enough to manage the risks.
But the risks posed by frontier systems are, themselves, unclear. Forecasting the possible risk pathways is challenging, as is designing and implementing sensors that could reliably detect emerging risks before we actually have real-world examples. We focus on building decision‑relevant and trustworthy evaluation systems that prioritise compute and effort on risk measurements with the highest value of information. We then need to be able to assess the extent to which proposed and implemented mitigations actually cover the identified risks, and to measure how successfully they generalise to novel settings.
The Risk Assessment team is part of Frontier Safety, which is responsible for measuring and managing severe potential risks from current and next-generation frontier models. Our approach is to adaptively scale our risk assessment and mitigation processes to the risks of the near future. We are part of GDM’s AGI Safety and Alignment Team, whose other members focus on research aimed at enabling systems further in the future to be aligned and safe; those areas include interpretability, scalable oversight, control, and incentives.
We are seeking two Research Engineers for the Frontier Safety Risk Assessment team within the AGI Safety and Alignment Team.
In this role, you will contribute novel research towards our ability to measure and assess risk from frontier models.
Your work will involve complex conceptual thinking as well as engineering. You should be comfortable with research that is uncertain, under‑constrained, and which does not have an achievable “right answer”. You should also be skilled at engineering, especially using Python, and able to rapidly familiarise yourself with internal and external codebases. Lastly, you should be able to adapt to pragmatic constraints around compute and researcher time that require us to prioritise effort based on the value of information.
Although this job description is written for a Research Engineer, all members of this team are better thought of as members of technical staff. We expect everyone to contribute to the research as well as the engineering and to be strong in both areas.
Success in the role will depend mostly on your general ability to assess and manage future risks rather than on specialist knowledge within the risk domains; insofar as specialist knowledge is helpful, familiarity with the ML R&D and loss-of-control risk domains is likely to be the most valuable.
In order to set you up for success as a Research Engineer at Google DeepMind, we look for the following skills and experience:
You have extensive research experience with deep learning and/or foundation models (for example, but not necessarily, a PhD in machine learning).
At Google DeepMind, we value diversity of experience, knowledge, backgrounds and perspectives and harness these qualities to create extraordinary impact. We are committed to equal employment opportunity regardless of sex, race, religion or belief, ethnic or national origin, disability, age, citizenship, marital, domestic or civil partnership status, sexual orientation, gender identity, pregnancy, or related condition (including breastfeeding) or any other basis as protected by applicable law. If you have a disability or additional need that requires accommodation, please do not hesitate to let us know.
At Google DeepMind, we want employees and their families to live happier and healthier lives, both in and out of work, and our benefits reflect that. Some select benefits we offer: enhanced maternity, paternity, adoption, and shared parental leave; private medical and dental insurance for yourself and any dependents; and flexible working options. We strive to continually improve our working environment and provide you with excellent facilities such as healthy food, an on-site gym, faith rooms, and terraces.
We are also open to relocating candidates and offer a bespoke service and immigration support to make it as easy as possible (depending on eligibility).
The US base salary range for this full-time position is $136,000–$245,000 + bonus + equity + benefits. Your recruiter can share more about the specific salary range for your targeted location during the hiring process.
Form CC-305, OMB Control Number 1250-0005, Expires 04/30/2026
Why are you being asked to complete this form?
We are a federal contractor or subcontractor. The law requires us to provide equal employment opportunity to qualified people with disabilities. We have a goal of having at least 7% of our workers as people with disabilities. The law says we must measure our progress towards this goal. To do this, we must ask applicants and employees if they have a disability or have ever had one. People can become disabled, so we need to ask this question at least every five years.
Completing this form is voluntary, and we hope that you will choose to do so. Your answer is confidential. No one who makes hiring decisions will see it. Your decision to complete the form and your answer will not harm you in any way. If you want to learn more about the law or this form, visit the U.S. Department of Labor’s Office of Federal Contract Compliance Programs (OFCCP) website at www.dol.gov/ofccp.
How do you know if you have a disability?
A disability is a condition that substantially limits one or more of your “major life activities.” If you have or have ever had such a condition, you are a person with a disability.
PUBLIC BURDEN STATEMENT: According to the Paperwork Reduction Act of 1995, no persons are required to respond to a collection of information unless such collection displays a valid OMB control number. This survey should take about 5 minutes to complete.