Senior Research Engineer - Interactive Avatars (London)

Synthesia

Greater London

Hybrid

GBP 60,000 - 90,000

Full time



Job summary

A leading video AI company is seeking a Research Engineer in Greater London to advance innovative AI video technology. This role focuses on adapting and developing cutting-edge video diffusion models, with responsibilities including improving visual quality and developing streaming capabilities. Candidates should possess strong machine learning and computer vision skills, particularly with diffusion models. Competitive compensation and a hybrid work environment are offered.

Benefits

Competitive compensation including stock options
Hybrid work setting
25 days of annual leave + public holidays
Great company culture
Regular planning and socials

Qualifications

  • Expert in machine learning and diffusion models.
  • Hands-on experience with avatar-centric or video-focused models.
  • Strong Python skills and commitment to maintainable code.

Responsibilities

  • Adapt diffusion models for diverse conditioning signals.
  • Develop streaming methods for long video sequences.
  • Improve lip-sync accuracy and visual quality in video models.

Skills

Machine Learning
Diffusion Models
Computer Vision
Python Engineering
Git and Version Control

Experience

Experience with Diffusion Models
Strong publication record in relevant fields

Tools

PyTorch
Modern ML Frameworks

Job description

Welcome to the video-first world

From your everyday PowerPoint presentations to Hollywood movies, AI will transform the way we create and consume content.

Today, people want to watch and listen, not read — both at home and at work. If you’re reading this and nodding, check out our brand video.

Despite the clear preference for video, communication and knowledge sharing in the business environment are still dominated by text, largely because high‑quality video production remains complex and challenging to scale—until now.

Meet Synthesia

We’re on a mission to make video easy for everyone. Born in an AI lab, our AI video communications platform simplifies the entire video production process, making it easy for everyone, regardless of skill level, to create, collaborate, and share high‑quality videos. Whether it’s for delivering essential training to employees and customers or marketing products and services, Synthesia enables large organizations to communicate and share knowledge through video quickly and efficiently. We’re trusted by leading brands such as Heineken, Zoom, Xerox, McDonald’s and more. Read stories from happy customers and what 1,200+ people say on G2.

In February 2024, G2 named us as the fastest growing company in the world. Today, we’re at a $2.1bn valuation and we recently raised our Series D. This brings our total funding to over $330M from top‑tier investors, including Accel, Nvidia, Kleiner Perkins, Google and top founders and operators including Stripe, Datadog, Miro, Webflow, and Facebook.

What you’ll do at Synthesia:

As a Research Engineer, you will join a team of 40+ Researchers and Engineers within the R&D Department working on cutting‑edge challenges in the Generative AI space, with a focus on avatar‑centric interactive video diffusion models. Within the team you’ll have the opportunity to work on the applied side of our research efforts and directly impact our solutions, which are used worldwide by over 60,000 businesses.

This is a unique opportunity for experts in machine learning and diffusion models to shape the future of AI video agents that can think, act, and react like humans. As part of our Interactive Avatars Team, you’ll work on cutting‑edge research with a clear focus on turning breakthrough ideas into real product capabilities. You’ll join a team that moves fast, iterates often, and builds models that ship and make a meaningful impact. Example tasks and responsibilities include:

  • Adapt diffusion models to incorporate diverse conditioning signals (e.g., audio, motion, interaction cues).

  • Develop methods for streaming infinitely long video sequences at real‑time rates.

  • Work on the perceptual layer of interactive agents, including understanding user audio and generating appropriate contextual reactions.

  • Improve lip‑sync accuracy, motion realism, and overall visual quality in video diffusion models.

  • Build robust evaluation frameworks and test suites to enable continuous quality tracking.

  • Collaborate closely with our data team to define data needs and ensure high‑quality datasets.

  • Stay up to date with research in world models, interactive human/agent modeling, diffusion models, and related areas.

What we’re looking for:

  • Comfortable owning and executing on the responsibilities listed above.

  • Strong ML (e.g., diffusion, GANs, VAEs) and computer vision background with relevant industry experience.

  • Hands‑on experience with diffusion models (ideally avatar‑centric or video‑focused) and up to date with recent advances.

  • Proficient in PyTorch and familiar with modern ML frameworks and tooling.

  • Strong Python engineering skills, confident with git and version control, and a commitment to clean, maintainable research code.

  • Outcome‑driven, detail‑oriented, and motivated to push state‑of‑the‑art research into real product impact.

  • Clear communicator of hypotheses, experiments, and results.

What will make you stand out:

  • Experience with audio‑conditioned video diffusion models and deep knowledge of recent video DiT architectures.

  • Demonstrated ability to own the full model development pipeline end to end, from data preparation to model design, training, and evaluation.

  • A strong publication record in areas such as world models, interactive agents, or video diffusion models.

Why join us?

We’re living in the golden age of AI. The next decade will yield the next iconic companies, and we dare to say we have what it takes to become one. Here’s why:

Our culture

At Synthesia we’re passionate about building, not talking, planning or politicising. We strive to hire the smartest, kindest and most unrelenting people and let them do their best work without distractions. Our work principles serve as our charter for how we make decisions, give feedback and structure our work to empower everyone to go as fast as possible. You can find out more about these principles here.

Serving 50,000+ customers (and 50% of the Fortune 500)


Proprietary AI technology

Since 2017, we’ve been pioneering advancements in Generative AI. Our AI technology is built in-house by a team of world‑class AI researchers and engineers. Learn more about our AI Research Lab and the team behind it.

AI Safety, Ethics and Security

AI safety, ethics, and security are fundamental to our mission. While the full scope of Artificial Intelligence's impact on our society is still unfolding, our position is clear: People first. Always. Learn more about our commitments to AI Ethics, Safety & Security.

The good stuff...

  • Competitive compensation (salary + stock options + bonus)

  • Hybrid work setting with an office in London, Amsterdam, Zurich, Munich, or remote in Europe.

  • 25 days of annual leave + public holidays

  • Great company culture with the option to join regular planning and socials at our hubs

  • + other benefits depending on your location

You can see more about Who we are and How we work here: https://www.synthesia.io/careers
