
A cutting-edge AI company in Greater London is looking for an MLOps Engineer who will focus on deploying and operating large-scale machine learning models. The successful candidate will optimize performance, work closely with ML researchers, and ensure advanced models function efficiently across distributed systems. Ideal applicants have strong experience with PyTorch and ML infrastructure, are comfortable with model lifecycle management, and are inspired by making a significant real-world impact in scientific applications. A highly competitive salary and equity ownership are offered.
Boltz is a public benefit company building the next generation of AI‑powered molecular modeling tools to make biology programmable and accelerate drug discovery, while keeping frontier capabilities broadly accessible.
Boltz‑1, Boltz‑2, and BoltzGen are open models trusted by 100,000+ scientists across biotech and academia, and used in programs at every Top 20 pharma as well as leading agrichemical and industrial research organizations.
We deliver these capabilities through Boltz Lab, our platform for running our latest models and design agents as reliable, production‑grade tools. Boltz Lab is designed around real chemistry and biology workflows, so teams can start from a target and a hypothesis and quickly generate, evaluate, and rank candidate molecules. We provide the compute, the scalable infrastructure, and the collaboration layer, so scientists can iterate faster and stay focused.
You can read more about our mission, research, and product vision in our manifesto.
As an MLOps Engineer, you will focus on optimizing, deploying, and operating the large‑scale machine learning models that power Boltz Lab. Your primary responsibility will be to ensure that advanced models for molecular modeling and design run efficiently, reliably, and cost‑effectively across distributed systems.
You will work closely with ML Researchers to take trained models and turn them into production‑ready services by optimizing training and inference performance, reducing memory and compute overhead, and scaling workloads across multi‑GPU and cloud environments. This includes profiling, improving model throughput and latency, and hardening systems for long‑running and high‑volume workloads.
This role is ideal for someone who thrives on technical ownership and operational excellence, enjoys working close to systems and infrastructure, and is motivated by deploying high‑impact machine learning systems at scale for real‑world scientific use.