Data Engineer (Brahma)

DNEG Group

London

On-site

GBP 40,000 - 70,000

Full time

30+ days ago

Job summary

Join a pioneering AI company as a Data Engineer and play a vital role in architecting and maintaining the data pipelines behind its products. Your expertise in Python, SQL, and scalable systems will directly contribute to optimising workflows and integrating advanced AI-driven tools.

Skills

Python
Linux
SQL
Docker
Data pipeline management
Problem-solving
Continuous learning

Tools

Docker
Kubernetes
ETL tools
Cloud platforms
Job description

Brahma is a pioneering enterprise AI company developing Astras, AI-native products built to help enterprises and creators innovate at scale. Brahma enables teams to break creative bottlenecks, accelerate storytelling, and deliver standout content with speed and efficiency. Part of the DNEG Group, Brahma brings together Hollywood’s leading creative technologists, innovators in AI and Generative AI, and thought leaders in the ethical creation of AI content.

As a Data Engineer, you’ll architect and maintain the pipelines that power our products and services. You’ll work at the intersection of ML, media processing, and infrastructure, owning the data tooling and automation layer that enables scalable, high-quality training and inference. If you’re a developer who loves solving tough problems and building efficient systems, we want you on our team.

Key Responsibilities

  • Design and maintain scalable pipelines for ingesting, processing, and validating datasets, with a primary focus on visual and voice data.
  • Work with other teams to identify workflow optimisation opportunities, then design and develop automation tools using AI-driven tooling, custom model integrations, and scripts.
  • Write and maintain tests for pipeline reliability.
  • Build and maintain observability tooling in collaboration with other engineers to track data pipeline health and system performance.
  • Collaborate with data scientists, operators, and product teams to deliver data solutions.
  • Debug and resolve complex data issues to ensure system performance.
  • Optimise storage, retrieval, and caching strategies for large media assets across environments.
  • Deploy scalable data infrastructure across cloud platforms and on-premise environments using containerisation.
  • Deepen your knowledge of machine learning workflows to support AI projects.
  • Stay current with industry trends and integrate modern tools into our stack.

Must Haves

  • 3+ years in data engineering or related backend/infrastructure role.
  • Strong programming skills in Python or similar languages.
  • Experience with software development lifecycle (SDLC) and CI/CD pipelines.
  • Proven experience building and testing data pipelines in production.
  • Proficiency in Linux.
  • Solid SQL knowledge.
  • Experience with Docker or other containerisation technologies.
  • Proactive approach to solving complex technical challenges.
  • Passion for system optimisation and continuous learning.
  • Ability to adapt solutions for multimedia data workflows.

Nice to Have

  • Experience with Kubernetes (k8s).
  • Knowledge of machine learning or AI concepts.
  • Familiarity with ETL tools or big data frameworks.
  • Familiarity with cloud platforms (e.g., AWS, GCP, Azure).

About You

  • Innovative
  • Enjoys a challenge
  • Adaptable
  • Calm under pressure
  • Strong communicator