Senior PySpark Data Engineer

LUXOFT
Italy
EUR 30,000 - 50,000
Job description

Project description

Join our dynamic team working on exciting projects in the thriving Middle East region. We offer a multitude of opportunities across various domains. Our diverse team comprises skilled professionals, including front-end and back-end developers, data analysts, data scientists, architects, and project managers. We are currently seeking a talented Data Engineer proficient in Python programming.

Responsibilities

  1. Actively engage in requirements clarification and contribute to sprint planning sessions.
  2. Design and architect technical solutions that align with project objectives.
  3. Develop comprehensive unit and integration tests to ensure the robustness and reliability of the codebase.
  4. Provide valuable support to QA teammates during the acceptance process, addressing and resolving issues promptly.
  5. Continuously assess and refine best practices to optimize development processes and code quality.
  6. Collaborate with cross-functional teams to ensure seamless integration of components and efficient project delivery.
  7. Stay abreast of industry trends, emerging technologies, and best practices to contribute to ongoing process improvement initiatives.
  8. Contribute to documentation efforts, ensuring clear and comprehensive records of technical solutions and best practices.
  9. Actively participate in code reviews, providing constructive feedback and facilitating knowledge sharing within the team.

SKILLS

Must have

  1. 5+ years of relevant experience in a Senior Data Engineer role.
  2. Big Data Technologies: Familiarity with big data technologies such as Hadoop, Apache Spark, or other distributed computing frameworks.
  3. Data Security and Governance: Comprehensive understanding of data security principles and practices to ensure the confidentiality and integrity of sensitive information, coupled with knowledge of data governance frameworks and practices for ensuring data quality, compliance, and proper data management.
  4. Python and PySpark: Demonstrated strong expertise in both Python and PySpark for efficient data processing and analytics.
  5. Advanced SQL Knowledge: Proficient in SQL with the ability to handle complex queries and database operations.
  6. ETL Experience: Prior experience working with Extract, Transform, Load (ETL) processes.
  7. Data Pipelines: Familiarity with data cleansing, data profiling, data lineage, and adherence to best practices in data engineering.
  8. Familiarity with Data Analysis Approaches: Some experience with various data analysis methodologies.
  9. Python Libraries: Familiarity with building libraries in Python for enhanced functionality.
  10. API Integration: Knowledge of integrating data pipelines with various APIs for seamless data exchange between systems.
  11. Version Control: Proficiency in version control systems, such as Git, for tracking changes in code and collaborative development.
  12. Cloud Technology Experience: Prior exposure to cloud technologies, particularly Azure or any leading cloud platform.
  13. Data Visualization: Some exposure to data visualization tools like Tableau, Power BI, or others to create meaningful insights from data.
  14. Collaboration Tools: Familiarity with collaboration tools such as Azure DevOps, Jira, Confluence, or others to enhance teamwork and project documentation.
  15. Educational Background: A degree in computer science, mathematics, statistics, or a related technical discipline.
  16. Financial Markets Knowledge: Familiarity with financial markets, portfolio theory, and risk management is a plus.

Non-technical skills

  1. Problem-Solving: Strong problem-solving skills to tackle complex data engineering challenges.
  2. Data Storytelling: Ability to convey insights effectively through compelling data storytelling.
  3. Quality Focus: Keen attention to delivering high-quality solutions within specified timelines.
  4. Team Collaboration: Proven ability to work collaboratively within a team, taking a proactive approach to problem resolution and process improvement.
  5. Communication Skills: Excellent communication skills to articulate technical concepts clearly and concisely.

Nice to have

  1. Streaming Data Processing: Exposure to streaming data processing technologies like Apache Kafka for real-time data ingestion and processing.
  2. Containerization: Knowledge of containerization technologies like Docker for creating, deploying, and running applications consistently across various environments.
  3. Data Modeling and Evaluation: Extensive experience in data modeling and the evaluation of large datasets.
  4. Model Training, Deployment, and Maintenance: Background in training, deploying, and maintaining models for effective data-driven decision-making.
  5. Requirements for Machine Learning: Experience in developing and implementing machine learning algorithms, Natural Language Processing (NLP), and Neural Networks.
  6. Applied Mathematics: Proficiency in applied mathematics, including but not limited to linear algebra, probability, statistics, and distributions.