AWS Data Engineer

N Consulting Limited

Greater London

Hybrid

GBP 70,000 - 75,000

Full time

Today

Job summary

A leading consulting firm in the UK is seeking an experienced AWS Data Engineer for a 12-month contract. The role involves designing scalable data pipelines using Python and Apache Spark, orchestrating data workflows with AWS tools, and collaborating with business teams. Candidates should be proficient in AWS services and data engineering principles, and comfortable collaborating in Agile teams. The position offers a hybrid work model with a competitive salary of £70,000 to £75,000 per year.

Qualifications

  • Expert in Python with experience in writing clean, maintainable code.
  • Familiar with data engineering and batch processing principles.
  • Understanding of the AWS data stack, including S3 and Glue.

Responsibilities

  • Design and develop scalable data pipelines using Python and Spark.
  • Orchestrate workflows with AWS tools like Glue and EMR Serverless.
  • Collaborate with business teams to provide data-driven solutions.

Skills

Python programming
Apache Spark
AWS Services
Data Engineering Basics
Collaboration in Agile teams

Tools

AWS Glue
AWS Lambda
Apache Iceberg

Job description

AWS Data Engineer at N Consulting Ltd
Location: London, England, United Kingdom
Salary: £70,000 - £75,000 per year
Job Type: Contract (12-month fixed term)
Work Mode: Hybrid
Date Posted: January 14th, 2026

Expertise needed in Python, PySpark, and AWS cloud services and components.

What you'll do:

  • Designing and developing scalable, testable data pipelines using Python and Apache Spark
  • Orchestrating data workflows with AWS tools like Glue, EMR Serverless, Lambda, and S3
  • Applying modern software engineering practices: version control, CI/CD, modular design, and automated testing
  • Contributing to the development of a lakehouse architecture using Apache Iceberg
  • Collaborating with business teams to translate requirements into data-driven solutions
  • Building observability into data flows and implementing basic quality checks
  • Participating in code reviews, pair programming, and architecture discussions
  • Continuously learning about the financial indices domain and sharing insights with the team

What you'll bring:

  • Writes clean, maintainable Python code (ideally with type hints, linters, and tests like pytest)
  • Understands data engineering basics: batch processing, schema evolution, and building ETL pipelines
  • Has experience with, or is eager to learn, Apache Spark for large-scale data processing
  • Is familiar with the AWS data stack (e.g. S3, Glue, Lambda, EMR)
  • Enjoys learning the business context and working closely with stakeholders
  • Works well in Agile teams and values collaboration over solo heroics

Nice-to-haves (great, but not required):

  • Experience with Apache Iceberg or similar table formats
  • Familiarity with CI/CD tools like GitLab CI, Jenkins, or GitHub Actions
  • Exposure to data quality frameworks like Great Expectations or Deequ
  • Curiosity about financial markets, index data, or investment analytics
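
To give a concrete flavour of the day-to-day work described above, here is a minimal PySpark sketch of the kind of batch pipeline the role involves: reading raw data from S3, applying a transform, running a basic quality check, and writing partitioned output. The bucket names, paths, and field names are hypothetical placeholders, not details from this posting.

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("daily-index-batch").getOrCreate()

    # Hypothetical raw landing zone; in this role the job might run on
    # Glue or EMR Serverless rather than a long-lived cluster.
    raw = spark.read.json("s3://example-raw-bucket/indices/2026-01-14/")

    # Normalise a column name and derive a partition column from a timestamp.
    cleaned = (
        raw.withColumnRenamed("IndexCode", "index_code")
           .withColumn("trade_date", F.to_date("event_time"))
    )

    # A basic quality check of the kind the posting mentions:
    # fail fast if any rows are missing the key field.
    null_keys = cleaned.filter(F.col("index_code").isNull()).count()
    if null_keys > 0:
        raise ValueError(f"{null_keys} rows missing index_code")

    # Write partitioned Parquet to a curated bucket; an Apache Iceberg
    # table write against a configured catalog would use df.writeTo(...)
    # in much the same way.
    (cleaned.write
            .mode("overwrite")
            .partitionBy("trade_date")
            .parquet("s3://example-curated-bucket/indices/"))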