
Data Engineer

Data Freelance Hub

Greater London

Hybrid

GBP 60,000 - 80,000

Full time

Today

Job summary

A leading data consultancy is seeking an experienced Data Engineer for a 6-month contract in Central London. The role involves designing and managing cloud data pipelines and requires strong skills in Python and cloud object storage (GCS/S3), along with practical experience in PyTorch. Ideal candidates will have a strong commercial background in data engineering and an interest in ML workflows. This position is based in Central London, with an agile working pattern of 2 to 3 days in-office, offering £750 per day.

Qualifications

  • Strong commercial experience as a Data Engineer.
  • Hands-on experience with cloud object storage (GCS preferred, or AWS S3).
  • Practical PyTorch experience for training pipelines and dataset handling.

Responsibilities

  • Design, build, and maintain scalable cloud data pipelines supporting ML workloads.
  • Manage large volumes of unstructured data using cloud object storage.
  • Support PyTorch-based data loading and dataset management.

Skills

Python development
Cloud object storage (GCS/S3)
PyTorch

Tools

BigQuery
SQL databases

Job description

⭐ - Featured Role | Apply direct with Data Freelance Hub

This role is for a Data Engineer on a 6-month contract, paying £750 per day, based in Central London (2-3 days in-office). Key skills include strong Python, cloud object storage (GCS/S3), and PyTorch experience.

Location: Central London, England

Contract: 6 months | Rate: £750 per day | Inside IR35

Key Responsibilities
  • Design, build, and maintain scalable cloud data pipelines supporting ML workloads
  • Manage large volumes of unstructured data using cloud object storage (GCS / S3)
  • Support PyTorch-based data loading and dataset management in production environments (a brief sketch follows this list)
  • Work closely with ML practitioners to enable training and inference pipelines
  • Ensure efficient memory usage and performance when handling large datasets
  • Integrate data from SQL-based systems into cloud and ML pipelines
  • Apply best practices around reliability, monitoring, and scalability
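
For illustration only, and not part of the advertised spec: the PyTorch-plus-object-storage work described above usually amounts to something like the minimal sketch below, in which a Dataset lists objects under a GCS prefix via gcsfs and streams each sample on demand, so DataLoader workers never need the full dataset in local memory. The bucket name, file layout, and torch.load serialisation are placeholder assumptions.

    # Minimal, illustrative sketch only; bucket/prefix and serialisation
    # format are hypothetical placeholders, not details from this role.
    import io

    import gcsfs
    import torch
    from torch.utils.data import DataLoader, Dataset


    class GCSTensorDataset(Dataset):
        """One sample per object under a GCS prefix, loaded lazily."""

        def __init__(self, prefix: str):
            self.fs = gcsfs.GCSFileSystem()      # picks up ambient GCP credentials
            self.paths = self.fs.ls(prefix)      # list object keys once, up front

        def __len__(self) -> int:
            return len(self.paths)

        def __getitem__(self, idx: int) -> torch.Tensor:
            # Stream one object's bytes and deserialise it into a tensor.
            with self.fs.open(self.paths[idx], "rb") as f:
                return torch.load(io.BytesIO(f.read()))


    if __name__ == "__main__":
        ds = GCSTensorDataset("my-bucket/training-shards/")    # hypothetical path
        loader = DataLoader(ds, batch_size=32, num_workers=4)  # workers stream in parallel
        for batch in loader:
            pass

Swapping gcsfs for s3fs gives the equivalent S3 variant of the same pattern.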

Required Experience
  • Strong commercial experience as a Data Engineer
  • Strong Python development skills
  • Hands‑on experience with cloud object storage (GCS preferred, or AWS S3)
  • Practical PyTorch experience (e.g. supporting training pipelines, dataset handling, data loaders)
  • Experience working in cloud environments with large‑scale file‑based data

Desired Experience
  • BigQuery (GCP); see the short staging example after this list
  • SQL databases (Microsoft SQL Server preferred; PostgreSQL also acceptable)
  • Memory management and performance optimisation
  • Exposure to ML workflows (without being a dedicated ML Engineer)
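
Again purely illustrative rather than part of the spec: integrating SQL-based data into a cloud or ML pipeline often starts by staging a BigQuery (or SQL Server/PostgreSQL) result set as Parquet on GCS so that a training pipeline like the one sketched above can read it. The project, table, and bucket names below are placeholders.

    # Illustrative sketch: stage a BigQuery result to GCS as Parquet.
    # Project, table, and bucket names are hypothetical placeholders.
    from google.cloud import bigquery

    client = bigquery.Client(project="my-project")   # assumed project id

    sql = """
        SELECT sample_id, feature_a, feature_b, label
        FROM `my-project.training.samples`
    """
    df = client.query(sql).to_dataframe()            # requires pyarrow/db-dtypes

    # pandas resolves gs:// URIs through gcsfs.
    df.to_parquet("gs://my-bucket/staging/samples.parquet", index=False)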

Nice to Have
  • Broader GCP experience (Cloud Run, Cloud SQL, Cloud Scheduler, etc.)
  • Pharma or life sciences domain exposure (or strong interest in the space)
  • TensorFlow experience (acceptable alternative to PyTorch)

Freelance data hiring powered by an engaged, trusted community — not a CV database.

85 Great Portland Street, London, England, W1W 7LT
