Data Quality Engineer

Reflection

Greater London

On-site

GBP 60,000 - 80,000

Full time

Today

Job summary

A tech company in London seeks a Data Quality Engineer to ensure high data standards for training AI models. Responsibilities include owning upstream data quality, designing automated QA, and collaborating with researchers. Ideal candidates are proficient in Python, experienced with large datasets, and strong communicators. The role offers top-tier compensation and comprehensive health benefits, supporting impactful work and work-life balance.

Benefits

Top-tier compensation
Comprehensive medical, dental, and vision insurance
Fully paid parental leave
Paid time off and relocation support
Daily lunch and team celebrations

Qualifications

  • Strong engineering fundamentals with experience building data pipelines.
  • Detail-oriented with an analytical mindset for detecting data quality issues.
  • Experience designing and validating automated quality checks.

Responsibilities

  • Own upstream data quality for LLM post-training and evaluation.
  • Partner with research teams to translate requirements into measurable quality signals.
  • Design, validate, and scale automated QA methods.

Skills

Proficiency in Python
Experience with ML / LLM workflows
Working with large datasets
Excellent communication skills

Job description

Our Mission

Reflection’s mission is to build open superintelligence and make it accessible to all.

We’re developing open-weight models for individuals, agents, enterprises, and even nation states. Our team of AI researchers and company builders comes from DeepMind, OpenAI, Google Brain, Meta, Character.AI, Anthropic, and beyond.

About the Role

Data is playing an increasingly crucial role at the frontier of AI innovation. Many of the most meaningful advances in recent years have come not from new architectures, but from better data.

As a member of the Data Team, your mission is to ensure that the data used to train and evaluate our models meets a high bar for quality, reliability, and downstream impact. You will directly shape how our models perform on critical capabilities: agentic tool use, long-horizon reasoning, and robust safety alignment.

Working with world-class researchers on our post-training teams, you’ll help turn fuzzy notions of “good data” into concrete, measurable standards that scale across large data campaigns. We’re looking for engineers who combine strong engineering fundamentals with a deep curiosity about data quality and its impact on model behavior.

Working closely with our post-training teams, you will:

  • Own upstream data quality for LLM post-training and evaluation by analyzing expert-developed datasets and operationalizing quality standards for reasoning, alignment, and agentic use cases
  • Partner closely with research and post-training teams to translate requirements into measurable quality signals, and provide actionable feedback to external data vendors
  • Design, validate, and scale automated QA methods, including LLM-as-a-Judge frameworks, to reliably measure data quality across large campaigns (a minimal sketch follows this list)
  • Build reusable QA pipelines that reliably deliver high-quality data to post-training teams for model training and evaluation
  • Monitor and report on data quality over time, driving continuous iteration on quality standards, processes, and acceptance criteria
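
As a concrete illustration of the LLM-as-a-Judge methods mentioned above, here is a minimal hypothetical sketch of scoring a single training sample with a judge model. The rubric, the judge model, and the use of the OpenAI chat completions client are illustrative assumptions, not a description of Reflection’s actual pipeline.

```python
# Hypothetical sketch: scoring one (prompt, response) training sample with an
# LLM judge. The rubric, model choice, and score scale are assumptions.
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

RUBRIC = (
    "Rate the assistant response on a 1-5 scale for correctness and "
    "instruction_following. Return JSON: "
    '{"correctness": int, "instruction_following": int, "rationale": str}'
)

def judge_sample(prompt: str, response: str) -> dict:
    """Ask a judge model to score a sample against the rubric."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # stand-in judge model
        temperature=0,        # deterministic scoring for reproducibility
        response_format={"type": "json_object"},
        messages=[
            {"role": "system", "content": RUBRIC},
            {"role": "user", "content": f"PROMPT:\n{prompt}\n\nRESPONSE:\n{response}"},
        ],
    )
    return json.loads(resp.choices[0].message.content)

if __name__ == "__main__":
    print(judge_sample("What is 2 + 2?", "2 + 2 = 4."))
```

In practice, a judge like this would be calibrated against human labels on a held-out slice before being trusted to gate large data campaigns.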

About You

  • Strong engineering fundamentals with experience building data pipelines, QA systems, or evaluation workflows for post-training data and agentic environments
  • Detail-oriented with an analytical mindset, able to identify failure modes, inconsistencies, and subtle issues that affect data quality
  • Solid understanding of how data quality impacts training (SFT and RL) and evaluation, with the ability to translate quality concerns into concrete signals, decisions, and feedback
  • Experience designing and validating automated quality checks, including rule-based systems, statistical methods, or model-assisted approaches such as LLM-as-a-Judge (see the rule-based sketch after this list)
  • Comfortable working autonomously, owning problems end-to-end, and collaborating effectively with researchers, engineers, and operations partners
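
For illustration, here is a minimal hypothetical sketch of the kind of rule-based and statistical checks described above. The field names, rules, and acceptance threshold are assumptions for the example, not actual acceptance criteria for this role.

```python
# Hypothetical sketch: simple rule-based and statistical checks over a batch
# of SFT samples. Field names and thresholds are illustrative assumptions.
from collections import Counter

def check_sample(sample: dict) -> list[str]:
    """Return rule violations for one {"prompt": ..., "response": ...} record."""
    failures = []
    prompt = sample.get("prompt", "")
    response = sample.get("response", "")
    if not prompt.strip():
        failures.append("empty_prompt")
    if not response.strip():
        failures.append("empty_response")
    if len(response) > 20_000:  # truncation / runaway-generation guard
        failures.append("response_too_long")
    return failures

def audit(batch: list[dict], max_failure_rate: float = 0.02) -> bool:
    """Gate a delivery on per-rule tallies plus a duplicate-prompt check."""
    tallies = Counter(f for s in batch for f in check_sample(s))
    prompts = Counter(s.get("prompt", "") for s in batch)
    dupes = sum(n - 1 for n in prompts.values() if n > 1)
    failed = sum(1 for s in batch if check_sample(s)) + dupes
    rate = failed / max(len(batch), 1)
    print(f"failure rate: {rate:.1%}, rules: {dict(tallies)}, duplicates: {dupes}")
    return rate <= max_failure_rate  # accept or reject the batch

if __name__ == "__main__":
    batch = [
        {"prompt": "Explain DNS.", "response": "DNS maps names to IP addresses."},
        {"prompt": "", "response": "Hello!"},
    ]
    print("accepted:", audit(batch))
```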

Skills and Qualifications

  • Proficiency in Python and experience building ML / LLM workflows; must be comfortable debugging and writing scalable code
  • Experience working with large datasets and automated evaluation or quality-checking systems
  • Familiarity with how LLMs work, with the ability to describe how models are trained and evaluated
  • Excellent communication skills with the ability to clearly articulate complex technical concepts across teams

What We Offer

We believe that to build open superintelligence that is truly open, you need to start at the foundation. Joining Reflection means building from the ground up as part of a small, talent-dense team. You will help define our future as a company and the frontier of open foundational models.

We want you to do the most impactful work of your career with the confidence that you and the people you care about most are supported.

  • Top-tier compensation: Salary and equity structured to recognize and retain the best talent globally.
  • Health & wellness: Comprehensive medical, dental, vision, life, and disability insurance.
  • Life & family: Fully paid parental leave for all new parents, including adoptive and surrogate journeys. Financial support for family planning.
  • Benefits & balance: Paid time off when you need it, relocation support, and more perks that optimize your time.
  • Opportunities to connect with teammates: Lunch and dinner are provided daily. We have regular off-sites and team celebrations.