Data Engineer (Kafka, Hadoop, Spark, Python, DBT)
Location: Manchester
Contract: 6 months
Hours: 37.5 hours per week
This is a *Manchester*-based role offering an immediate start with a global technology client, *designing and delivering scalable data pipelines and helping to build a modern, high-performance data platform.*
Overview
Our client is looking for an experienced Data Engineer to design and deliver scalable data pipelines and help build a modern, high-performance data platform. You will work with cross-functional teams to ensure data is reliable, secure, and easily accessible for analytics and product development.
Key Responsibilities
Build and maintain scalable data pipelines and data models.
Ensure data quality, governance, monitoring, and security best practices.
Troubleshoot and optimise data workflows.
Support analytics teams with data access and insights.
Provide technical guidance and mentor junior engineers where needed.
Required Skills
5+ years' data engineering experience.
Strong experience with Kafka, Hadoop, Spark, DBT (Python also considered).
Data modelling experience (Dimensional or Data Vault).
CI/CD and Agile experience.
Cloud experience (AWS preferred).
Strong communication and collaboration skills.
Education
Relevant degree or equivalent experience.
Please send your CV or call Toni to discuss further.
We are an equal opportunities employment agency and welcome applications from all suitably qualified persons regardless of race, sex, disability, religion/belief, sexual orientation, or age.
We champion diversity in technology recruitment and work with clients who actively wish to diversify their talent pool - ALL applicants are welcome to apply.