Lead Data Engineer

Hitachi Digital LLC

Birmingham

On-site

GBP 70,000 - 90,000

Full time

Job summary

A technology company in Birmingham is hiring a Lead Data Engineer to architect and optimize data pipelines and warehouses, specifically working with BigQuery, Spanner, and Neo4j. Candidates should possess strong SQL skills and experience in data modeling for large-scale analytics. This role involves leading the data architecture and integrating AI/ML pipelines, focusing on data governance and security. The company offers competitive salary and flexible working arrangements.

Benefits

Flexible arrangements
Competitive salary
Comprehensive health plans
Professional development opportunities

Qualifications

  • 8+ years of experience in data engineering.
  • 3+ years of experience with BigQuery, Spanner, and Neo4j.
  • Hands-on experience with GCP services relevant to data engineering.

Responsibilities

  • Architect, develop, and optimize data pipelines.
  • Design and maintain data warehouses and databases.
  • Integrate graph databases with AI/ML pipelines.

Skills

SQL
Cypher
Data modeling
Graph algorithms
Analytical skills
Problem-solving skills

Education

Bachelor’s or Master’s in Computer Science, Data Engineering, or a related field

Tools

BigQuery
Spanner
Neo4j
GCP services
Looker

Job description

Hitachi Digital is building enterprise-scale data platforms to power AI-driven insights and advanced analytics. We leverage Google Cloud Platform (GCP), BigQuery, Spanner, and graph technologies to deliver secure, scalable, and intelligent data ecosystems.

Role Overview

As a Lead Data Engineer, you will architect, develop, and optimize data pipelines, data warehouses, and graph-based solutions. You will lead data modeling, ETL strategies, and integration workflows for structured and graph data, enabling advanced analytics and AI applications across global OpCos.

Key Responsibilities
Data Architecture & Engineering
  • Design and maintain BigQuery data warehouses and Spanner databases for large-scale analytics.
  • Develop graph schemas and knowledge graphs using Neo4j for contextual data relationships.
  • Implement ETL pipelines, data transformations, and ingestion strategies for structured and unstructured data.
Performance & Optimization
  • Optimize SQL queries, Cypher queries, and indexing for high-volume datasets.
  • Monitor and tune BigQuery cost-performance and Neo4j performance.
  • Ensure high availability, disaster recovery, and backup strategies for critical data assets.
Integration & AI Enablement
  • Integrate graph databases with AI/ML pipelines for semantic search and contextual reasoning.
  • Enable RAG workflows and knowledge graph-driven AI for enterprise applications.
  • Collaborate with data science teams to embed graph intelligence into analytics platforms.
Governance & Security
  • Enforce data governance policies, lineage tracking, and compliance standards.
  • Implement IAM roles, VPC-SC, and data protection controls on GCP.
  • Document architecture decisions, query patterns, and best practices.
Required Qualifications
  • Bachelor’s or Master’s in Computer Science, Data Engineering, or related field.
  • 8+ years in data engineering; 3+ years with BigQuery, Spanner, and Neo4j.
  • Strong proficiency in SQL, Cypher, and data modeling for relational and graph databases.
  • Hands‑on experience with GCP services (BigQuery, Spanner, IAM, VPC‑SC, Cloud Storage).
  • Familiarity with graph algorithms, knowledge graphs, and semantic modeling.
  • Understanding of ETL pipelines, data governance, and cloud security.
Preferred Qualifications
  • Certifications: Google Professional Data Engineer, Neo4j Certified Professional.
  • Experience with AI/ML integration, RAG pipelines, and agentic workflows.
  • Exposure to Looker BI, Power BI, or Tableau for visualization.
  • Knowledge of LangChain, vector databases, and embedding strategies.
  • Strong analytical and problem‑solving skills.
  • Ability to design scalable, secure, and performant data architectures.
  • Collaborative mindset with excellent communication skills.
  • Passion for graph-driven AI and cloud-native data platforms.
Success Metrics
  • Deployment of optimized BigQuery and Spanner environments for enterprise analytics.
  • Integration of Neo4j knowledge graphs into AI workflows for contextual intelligence.
  • Reduction in query latency and cost across large-scale datasets.
  • Positive impact on data quality, governance, and compliance readiness.
What You’ll Work With
  • Graph DB: Neo4j, Cypher, APOC procedures.
  • Data Tools: Python, SQL, ETL frameworks, BI tools (Looker preferred).
Benefits

Flexible arrangements, competitive salary, comprehensive health plans, professional development opportunities, and a culture that values innovation and diversity.

We’re proud to say we’re an equal‑opportunity employer and welcome all applicants for employment without regard to race, colour, religion, sex, sexual orientation, gender identity, national origin, veteran status, age, disability, or any other protected characteristic. If you need reasonable accommodations during the recruitment process, please let us know so we can support you.
