
A cutting-edge technology firm in London is actively seeking an Engineering Manager for its Safeguards team. This role involves leading the team responsible for developing the data infrastructure critical for deploying AI models responsibly. The ideal candidate should have extensive engineering management experience and a strong grasp of data privacy principles. This position offers competitive compensation and a hybrid work policy, requiring in-office attendance at least 25% of the time.
London, UK
Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.
Anthropic's Safeguards team is responsible for the systems that allow us to deploy powerful AI models responsibly — and the data infrastructure underneath those systems is foundational to getting that right. The Safeguards Data Infrastructure team owns the offline data stack that underpins our safeguards work: the storage layer for sensitive user data, the tooling built on top of it, and the interfaces that let the rest of the Safeguards organization access that data safely and ergonomically.
As Engineering Manager of this team, you'll be responsible for ensuring full portability of our safeguards data stack across an expanding set of deployment environments, building privacy-preserving data interfaces that enable ML and training workflows, and driving compliance with data regulations including HIPAA. This is a role at the intersection of infrastructure engineering, data privacy, and enterprise product requirements — and it sits at a critical juncture as Anthropic scales into new cloud environments and geographies.
The annual compensation range for this role is listed below.
Education requirements: We require at least a Bachelor's degree in a related field or equivalent experience.
Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.
Visa sponsorship: We do sponsor visas. However, we aren't able to sponsor for every role and every candidate. If we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.
We encourage you to apply even if you do not meet every single qualification. Not all strong candidates will meet every single qualification as listed. We think AI systems like the ones we're building have enormous social and ethical implications, so we strive to include a range of diverse perspectives on our team.
Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters will only contact you from @anthropic.com email addresses. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you're ever unsure about a communication, don't click any links; visit anthropic.com/careers directly to confirm open positions.
We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. We value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with computer science. We’re an extremely collaborative group, and we host frequent research discussions to ensure we are pursuing the highest-impact work at any given time. We greatly value communication skills.
The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.
Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues.