Blue Bean Software is a premier custom software and product development company, delivering tailor-made solutions for large enterprises as well as dynamic start-ups.
We pride ourselves on taking on and solving complex problems and high-stakes projects through a balanced combination of technical savvy and a deep understanding of our clients' needs.
We have a prominent presence in the financial services industry and have started to branch out into other industries such as agritech and healthtech.
Who we are
At Blue Bean Software, we believe in creating an environment where like-minded software engineers are able to express themselves freely and pursue their individual and professional growth. We further encourage individuals to master their respective skill sets whilst effectively working within teams to overcome challenges and accomplish set goals.
At Blue Bean Software, we firmly believe in maintaining a culture of self-motivation, integrity and trust to drive productivity.
How we work
We have a flat organisational structure and value collaboration between our teams. We further believe in empowering individual team members to ensure agile decision-making and streamlined communication across all teams to deliver efficient and effective customer service at all times.
We are looking for a Senior Data Engineer to join our team of professionals.
Job Summary:
We are looking for an experienced Intermediate to Senior Data Engineer to join our growing team. The ideal candidate is passionate about designing and building scalable data solutions, has a strong grasp of data architecture, and is comfortable working across modern cloud-based data platforms. You'll be responsible for building, optimising, and maintaining efficient data pipelines that drive business value.
Key Requirements:
5+ years of professional experience in data engineering or related fields.
Strong experience building ETL pipelines in cloud environments (preferably AWS).
Proficiency in Python for scripting, data manipulation, and automation.
Experience with Apache Spark and knowledge of the broader big data ecosystem.
Hands-on experience with streaming technologies such as Kafka or Kinesis.
Working knowledge of AWS services like S3, Glue, Lake Formation, Athena, and IAM.
Familiarity with CI/CD tools (e.g. GitHub Actions, Azure DevOps) and version control (Git).
Experience with Terraform or other infrastructure-as-code frameworks.
Exposure to Lakehouse/Data Mesh architectures.
Understanding of security protocols including encryption, OAuth, SAML, and identity providers (AD/LDAP/Kerberos).
Exposure to containerisation and orchestration tools like Docker and Kubernetes is advantageous.
Familiarity with both relational and NoSQL databases.
Additional Information:
Advantageous:
Experience with dbt (Data Build Tool).
Exposure to Snowflake or similar cloud-native data warehouse platforms.
Relevant certifications such as AWS Certified Data Analytics or Azure Data Engineer Associate.
Experience with monitoring and observability tools for data pipelines.
Competencies:
Strong analytical thinking and attention to detail.
Comfortable working in high-pressure environments with time-sensitive data.
Excellent problem-solving and debugging abilities.
Team-oriented with strong communication skills.
A proactive, solutions-driven mindset.
Embraces change and thrives in Agile environments.