Platform Data Engineer

Runware Inc.

Remote

GBP 60,000 - 80,000

Full time

22 days ago

Job summary

A leading AI media-creation company in the United Kingdom is seeking a Mid-Senior level Data Engineer to build and maintain robust data infrastructure. The ideal candidate will have solid experience in managing data pipelines and supporting observability efforts. This role is fully remote with flexible working hours, generous paid time off, and meaningful stock options.

Benefits

Generous paid time off
Meaningful stock options
Flexible hours
Family leave
Company retreats

Qualifications

  • Solid experience as a Data Engineer or similar role in a production environment.
  • Strong understanding of data pipelines, streaming vs batch processing, and data modelling.
  • Comfortable digging through logs and metrics.

Responsibilities

  • Design, build, and maintain schemas and data models.
  • Build robust pipelines for API logs and system metrics.
  • Contribute to debugging, RCA, and performance optimisation initiatives.

Skills

Data pipeline management
Data modelling
Log analysis
ETL/ELT workflows
Monitoring tools integration
Collaboration with cross-functional teams

Tools

ClickHouse
Prometheus
Grafana
Datadog
OpenTelemetry

Job description

Runware is building a high-performance AI media‑creation platform powering instant generation of text, image, video, 3D, and audio. As our platform scales and integrations grow, we need robust, reliable, and high‑throughput data systems.

Mission

Build, optimise, and maintain Runware's data infrastructure.

You will ensure that logs, metrics, performance data, and events are efficiently ingested, processed, stored, and ready to be analysed by engineering, ML, and product teams.

This role is central to:

  • Supporting observability & platform reliability
  • Enabling deep log & performance analytics
  • Powering internal dashboards and customer reporting
  • Providing clean, structured data to the Data Expert and all stakeholders

What You Will Do

Architecture & Ownership

  • Design, build, and maintain schemas and data models
  • Optimise table layout, partitioning, indexing, and compression for high‑volume data (see the sketch after this list)
  • Ensure fast, efficient querying for logs, requests, metrics, and performance traces
  • Maintain ingestion pipelines for billions of records
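
To make the schema and partitioning work above concrete, here is a minimal sketch of the kind of ClickHouse table one might define for API logs. The table name, columns, retention, and connection details are illustrative assumptions, not Runware's actual schema.

# Illustrative only: a possible ClickHouse table for raw API logs.
# Names, partitioning, and TTL are assumptions for this sketch.
import clickhouse_connect  # pip install clickhouse-connect

API_LOGS_DDL = """
CREATE TABLE IF NOT EXISTS api_logs
(
    ts          DateTime64(3),
    service     LowCardinality(String),
    endpoint    LowCardinality(String),
    status_code UInt16,
    latency_ms  Float64,
    request_id  String,
    payload     String CODEC(ZSTD(3))   -- heavier compression for large blobs
)
ENGINE = MergeTree
PARTITION BY toDate(ts)                 -- one partition per day keeps merges and TTL cheap
ORDER BY (service, endpoint, ts)        -- sort key drives fast range scans over logs
TTL toDateTime(ts) + INTERVAL 90 DAY    -- drop raw rows after 90 days
"""

client = clickhouse_connect.get_client(host="localhost")  # connection details assumed
client.command(API_LOGS_DDL)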

Data Engineering & Pipelines

  • Build robust pipelines for API logs, model inference logs, error events, usage & integration events, GPU & system metrics
  • Implement ETL/ELT workflows to transform raw data into analytics‑ready structures (a minimal sketch follows this list)
  • Ensure quality, reliability, and real‑time availability of data sources
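
As an illustration of the ETL/ELT work described in the list above (not Runware's actual pipeline), a minimal batch transform might parse raw JSON log lines into flat, typed rows and group them into batches for bulk insertion. The field names are assumptions.

# Hypothetical ETL step: raw JSON log lines -> flat rows ready for an analytics table.
import json
from datetime import datetime, timezone

def transform(raw_line: str) -> dict | None:
    """Parse one raw log line into an analytics-ready row, or None if malformed."""
    try:
        event = json.loads(raw_line)
        return {
            "ts": datetime.fromtimestamp(event["timestamp"], tz=timezone.utc),
            "service": event.get("service", "unknown"),
            "endpoint": event.get("endpoint", ""),
            "status_code": int(event.get("status", 0)),
            "latency_ms": float(event.get("latency_ms", 0.0)),
        }
    except (json.JSONDecodeError, KeyError, TypeError, ValueError):
        return None  # a real pipeline would count and quarantine malformed records

def batches(raw_lines, size=10_000):
    """Yield fixed-size batches of transformed rows for efficient bulk inserts."""
    buf = []
    for line in raw_lines:
        row = transform(line)
        if row is not None:
            buf.append(row)
        if len(buf) >= size:
            yield buf
            buf = []
    if buf:
        yield buf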

Performance & Log Analysis Infrastructure

  • Build tooling to support large‑scale log analysis
  • Enable deep investigation into latency, throughput, errors, and bottlenecks (see the example after this list)
  • Provide the raw data foundation for E2E inference‑time monitoring
  • Help debug production issues using logs and traces
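
As a toy example of the latency investigations this infrastructure supports (data shapes assumed from the sketch above, not actual tooling), per-endpoint percentiles can be computed straight from parsed log rows:

# Toy latency analysis over parsed log rows; the row fields are assumptions.
from collections import defaultdict
from statistics import quantiles

def latency_percentiles(rows):
    """Return p50/p95/p99 latency in milliseconds per endpoint."""
    by_endpoint = defaultdict(list)
    for row in rows:
        by_endpoint[row["endpoint"]].append(row["latency_ms"])
    report = {}
    for endpoint, samples in by_endpoint.items():
        if len(samples) < 2:
            continue  # quantiles() needs at least two data points
        cuts = quantiles(samples, n=100)  # 99 cut points between percentiles
        report[endpoint] = {"p50": cuts[49], "p95": cuts[94], "p99": cuts[98]}
    return report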

Tooling & Observability Infrastructure

  • Work closely with DevOps, ML, and backend engineering
  • Integrate pipelines with monitoring tools (Prometheus, Grafana, Datadog, OpenTelemetry); see the sketch after this list
  • Automate ingestion and cleanup tasks
  • Build internal libraries or utilities to support monitoring and debugging workflows
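
To give a flavour of the monitoring-tool integration mentioned above, a pipeline worker could expose its own health metrics via the Prometheus Python client. The metric names and port below are illustrative assumptions, not an existing Runware setup.

# Hypothetical example: exposing pipeline metrics for Prometheus to scrape.
from prometheus_client import Counter, Histogram, start_http_server

ROWS_INGESTED = Counter("pipeline_rows_ingested_total", "Rows successfully ingested")
ROWS_REJECTED = Counter("pipeline_rows_rejected_total", "Rows dropped by the pipeline")
BATCH_SECONDS = Histogram("pipeline_batch_seconds", "Wall-clock time per insert batch")

def process_batch(rows, insert_fn):
    """Insert one batch while recording throughput and timing metrics."""
    with BATCH_SECONDS.time():
        ok = insert_fn(rows)
    if ok:
        ROWS_INGESTED.inc(len(rows))
    else:
        ROWS_REJECTED.inc(len(rows))

start_http_server(9100)  # metrics served at http://<host>:9100/metrics; port is an assumption
# ...run the ingestion loop here, calling process_batch() for each batch...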

Collaboration & Cross‑Functional Support

  • Provide clean data interfaces for the Data Expert (dashboards, monitoring, analytics)
  • Support engineering teams by exposing the right logs and metrics
  • Contribute to debugging, RCA, and performance optimisation initiatives

Requirements

  • Solid experience as a Data Engineer or similar role in a production environment
  • Strong understanding of data pipelines, streaming vs batch processing, and data modelling
  • Experience working with analytical databases (ClickHouse is a plus, but not mandatory)
  • Comfortable digging through logs, metrics, and platform data to understand system behaviour
  • Familiarity with event‑based systems, monitoring, and observability concepts
  • Pragmatic mindset: care about usefulness, reliability, and performance over theory
  • Comfortable working cross‑functionally with backend, infra, and data profiles
  • Startup / scale‑up experience is a plus

Nice to Have

  • Experience with high‑throughput or real‑time systems
  • Exposure to cost monitoring, performance analytics, or platform observability
  • Background in AI, ML platforms, or data‑heavy products

Benefits

We’re a remote‑first collective, meeting in person twice a year to plan, brainstorm, celebrate wins, and enjoy some face‑to‑face time. We have core hours for collaborative work and calls, but outside of those your calendar is yours. Work the hours that let you perform at your peak while also building a healthy life.

Our release cycles are fast and intense, but they’re followed by real downtime. After big pushes we expect the team to unplug, recharge, and come back stronger than ever for the next leap.

  • Generous paid time off – vacation, sick days, public holidays
  • Meaningful stock options – share in the upside you create
  • Remote‑first setup – work from anywhere we can employ you
  • Flexible hours – own your schedule outside core collaboration blocks
  • Family leave – paid maternity, paternity, and caregiver time
  • Company retreats – twice‑yearly gatherings in inspiring locations

Please note: We are unable to offer visa sponsorship in the UK at this time. Candidates must already have the right to work in the UK.

Seniority level

Mid‑Senior level

Employment type

Full‑time

Job function

Other

Industries

IT Services and IT Consulting
