to support and scale our data infrastructure. This role is ideal for a proactive problem-solver who thrives in dynamic environments and is comfortable working on both pipeline development and platform optimisation.
Key Responsibilities:
Design, build, and maintain scalable data pipelines using Azure Data Factory and Databricks
Integrate and transform data from various sources to support analytics and reporting requirements
Optimise data workflows for performance and cost-effectiveness in a cloud-based environment
Collaborate with analysts, data scientists, and business stakeholders to deliver clean, reliable data
Monitor data quality and ensure robust data governance practices
Participate in code reviews, DevOps processes, and continuous improvement initiatives
Required Skills & Experience:
3+ years' experience as a Data Engineer or in a similar role
Proven expertise in Azure Data Factory and Databricks (Spark, Python, SQL)
Experience building ETL/ELT pipelines in a cloud-based architecture
Strong proficiency in SQL and working with structured/unstructured data
Familiarity with Azure Synapse, Data Lake, and version control tools (e.g., Git)
Solid understanding of data security and governance best practices
Nice to Have:
Experience in CI/CD and infrastructure-as-code (e.g., Terraform, Azure DevOps)
Exposure to machine learning or real-time data processing
In order to comply with the POPI Act, we require your permission to retain your personal details on our database for future career opportunities. By completing and returning this form, you give PBT your consent.*
If you have not received any feedback after 2 weeks, please consider your application unsuccessful.*