Data Engineer (Databricks, Neo4j)

Datamatics Technologies

Remote

EUR 70,000 - 90,000

Full time

Job summary

A leading technology company is seeking an experienced Data Engineer with a strong background in Databricks, Teradata, and Neo4j. This remote role requires candidates based in Europe. Responsibilities include designing data pipelines, optimizing ETL processes, and collaborating with cross-functional teams. The ideal candidate has 5-7 years of experience, mandatory skills in Databricks and Neo4j, and a solid understanding of cloud platforms. Join a global team focused on cutting-edge data technologies.

Qualifications

  • 5–7 years of experience as a Data Engineer.
  • Strong hands-on experience with Databricks.
  • Mandatory expertise in Neo4j.

Responsibilities

  • Design and optimize scalable data pipelines using Databricks.
  • Build and maintain ETL/ELT processes across data environments.
  • Integrate structured and unstructured datasets.

Skills

Databricks
Neo4j
Teradata
Python
Cloud platforms (Azure/AWS/GCP)
ETL/ELT concepts
Data modeling
Problem-solving
Communication skills

Tools

Spark
Kafka
Event Hub

Job description

Job Title: Data Engineer (Databricks, Teradata & Neo4j)

Location: Remote (Candidates must be based in Europe)

Experience: 5–7 Years

Employment Type: Full-Time

Client Location: Sweden

Position Overview

We are looking for an experienced Data Engineer with strong hands‑on expertise in Databricks, Teradata, and Neo4j to join a leading technology‑driven team in Sweden. This is a remote role; however, candidates must currently reside in Europe due to project compliance and collaboration needs. The ideal candidate will have a solid background in building scalable data pipelines, integrating complex data sources, and working with modern data platforms.

Key Responsibilities
Data Engineering & Development
  • Design, develop, and optimize scalable data pipelines using Databricks (PySpark/Spark); a minimal sketch follows this list.
  • Build, maintain, and enhance ETL/ELT processes across multiple data environments.
  • Integrate structured and unstructured datasets for downstream analytics and consumption.
  • Develop and optimize data models on Teradata for performance and reliability.
  • Implement graph‑based data solutions using Neo4j.
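
To make the pipeline responsibilities above more concrete, here is a minimal, illustrative PySpark sketch of the kind of job a Databricks pipeline might run: read raw JSON, apply basic cleansing, and write a partitioned Delta table. The paths, table name, and columns are hypothetical placeholders, not details from this posting.

```python
# Minimal Databricks/PySpark pipeline sketch (illustrative only).
# Paths, table names, and columns are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_etl").getOrCreate()

# Read raw, semi-structured input landed in cloud storage.
raw = spark.read.json("/mnt/raw/orders/")

# Basic cleansing and typing before downstream consumption.
orders = (
    raw.withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("order_date", F.to_date("order_ts"))
       .withColumn("amount", F.col("amount").cast("double"))
       .dropDuplicates(["order_id"])
       .filter(F.col("amount").isNotNull())
)

# Persist as a partitioned Delta table for downstream analytics
# (assumes a Databricks workspace where Delta Lake is available).
(
    orders.write.format("delta")
          .mode("overwrite")
          .partitionBy("order_date")
          .saveAsTable("analytics.orders_clean")
)
```

On Databricks a SparkSession is already provided in notebooks; getOrCreate() simply keeps the sketch self-contained.
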
Solution Design & Architecture
  • Collaborate with solution architects and business teams to understand data needs and design robust solutions.
  • Participate in system design sessions and contribute to architecture improvements.
  • Ensure data quality, validation, and governance throughout the data lifecycle.
Performance & Optimization
  • Troubleshoot and optimize Spark jobs, Teradata SQL queries, and data workflows.
  • Ensure highly available and high-performance data pipelines.
  • Monitor data operations and automate workflows where possible.
Collaboration & Communication
  • Work with cross‑functional teams including BI, Data Science, and Platform Engineering.
  • Document technical designs, pipelines, and solutions clearly and thoroughly.
  • Communicate effectively with remote stakeholders in a multicultural environment.
Required Skills & Qualifications
  • 5–7 years of experience as a Data Engineer.
  • Strong, hands‑on experience with Databricks (Spark, PySpark, Delta Lake).
  • Mandatory expertise in Neo4j (graph modeling, Cypher queries); an illustrative sketch follows this list.
  • Solid experience with Teradata (SQL, performance tuning, data modeling).
  • Strong scripting and coding experience in Python.
  • Experience working with cloud platforms (Azure/AWS/GCP) is preferred; Azure experience is a plus.
  • Strong understanding of ETL/ELT concepts, data modeling, and distributed data processing.
  • Excellent analytical, problem‑solving, and communication skills.
  • Ability to work independently in remote, cross‑cultural teams.
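
As a rough illustration of the Neo4j requirement above, the sketch below uses the official Neo4j Python driver (5.x) to upsert a small purchase graph with Cypher MERGE. The node labels, relationship type, connection URI, and credentials are assumptions made for the example only.

```python
# Sketch: loading a simple graph into Neo4j via the official Python driver.
# Labels, properties, URI, and credentials are hypothetical placeholders.
from neo4j import GraphDatabase

URI = "neo4j://localhost:7687"
AUTH = ("neo4j", "password")

UPSERT_PURCHASE = """
MERGE (c:Customer {id: $customer_id})
MERGE (p:Product  {sku: $sku})
MERGE (c)-[r:PURCHASED {order_id: $order_id}]->(p)
SET r.amount = $amount
"""

def _upsert(tx, row):
    # One idempotent MERGE per input row.
    tx.run(UPSERT_PURCHASE, **row)

def load_purchases(rows):
    with GraphDatabase.driver(URI, auth=AUTH) as driver:
        with driver.session(database="neo4j") as session:
            for row in rows:
                session.execute_write(_upsert, row)

if __name__ == "__main__":
    load_purchases([
        {"customer_id": "c-1", "sku": "sku-42", "order_id": "o-100", "amount": 19.9},
    ])
```
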
Preferred Qualifications
  • Experience with CI/CD pipelines for data workflows.
  • Knowledge of data governance, data quality frameworks, and metadata management.
  • Exposure to real‑time data processing technologies (Kafka, Event Hub, etc.) is an advantage.
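
For the real‑time processing exposure mentioned in the last point, a minimal Spark Structured Streaming sketch that reads from Kafka and appends to a Delta table could look like the following. The broker address, topic, schema, and paths are hypothetical, and the Kafka connector (spark-sql-kafka) is assumed to be available on the cluster.

```python
# Sketch: streaming ingestion from Kafka into a Delta table (illustrative only).
# Broker, topic, schema, and checkpoint path are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.appName("events_stream").getOrCreate()

event_schema = StructType([
    StructField("event_id", StringType()),
    StructField("event_type", StringType()),
    StructField("amount", DoubleType()),
])

# Parse the Kafka value payload (JSON) into typed columns.
events = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "broker:9092")
         .option("subscribe", "events")
         .load()
         .select(F.from_json(F.col("value").cast("string"), event_schema).alias("e"))
         .select("e.*")
)

# Append to a Delta table with checkpointing for fault-tolerant, incremental loads.
query = (
    events.writeStream.format("delta")
          .option("checkpointLocation", "/mnt/checkpoints/events")
          .outputMode("append")
          .toTable("analytics.events_raw")
)
query.awaitTermination()
```
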
Additional Information
  • Remote role – Europe‑based candidates only due to project requirements.
  • Opportunity to work with a global team on cutting‑edge data technologies.