Specialist Data Engineer

Cape Town, Western Cape, South Africa

Job Description


Job Summary

What you will be doing:

  • Understand the technical landscape and bank-wide architecture connected to or dependent on the supported business area in order to effectively design and deliver data solutions (architecture, pipelines, etc.)
  • Translate and interpret the data architecture direction and associated business requirements, and leverage expertise in analytical and creative problem solving to produce data solution designs (building a solution from its components) beyond the analysis of the problem
  • Participate in design thinking processes to successfully deliver data solution blueprints
  • Leverage open-source relational and NoSQL databases, as well as integration and streaming platforms, to deliver sustainable, business-specific data solutions
  • Design data retrieval, storage and distribution solutions (or components thereof), including contributing to all phases of the development lifecycle
  • Develop high-quality data processing, retrieval, storage and distribution designs in a test-driven and domain-driven / cross-domain environment
  • Assemble large, complex data sets that meet business requirements and manage the data pipeline
  • Build analytics tools that use the data pipeline by quickly producing well-organised, optimised and documented source code and algorithms to deliver technical data solutions
  • Automate tasks through appropriate tools and scripting technologies, e.g. Terraform
  • Debug existing source code and polish feature sets
  • Build infrastructure to automate extremely high volumes of data delivery
  • Create data tools for analytics and data science teams that assist them in building and optimising data sets for the benefit of the business
  • Ensure designs and solutions support the technical organisation principles of self-service, repeatability, testability, scalability and resilience
  • Apply general design patterns and paradigms to deliver technical solutions
  • Inform and support the infrastructure build required for optimal extraction, transformation and loading of data from a wide variety of data sources
  • Support the continuous optimisation, improvement and automation of data processing, retrieval, storage and distribution processes
  • Ensure the quality assurance and testing of all data solutions, aligned to the QA Engineering and broader architectural guidelines and standards of the organisation
  • Implement and align to the Group Security standards and practices to ensure the indisputable separation, security and quality of the organisation's data
  • Meaningfully contribute to, and ensure solutions align with, the design and direction of Group Architecture, in particular data standards, principles, preferences and practices; short-term deployment must align to strategic long-term delivery
  • Monitor the performance of data solution designs and ensure ongoing optimisation of data solutions
  • Stay ahead of the curve on data processing, retrieval, storage and distribution technologies and processes (global best practices and trends) to ensure best practice
  • Conduct peer reviews, testing and problem solving within and across the broader team
  • Build data science team capability in the use of data solutions
  • Identify technical risks and mitigate them (pre-, during and post-deployment)
  • Update / design all application documentation aligned to the organisation's technical standards and risk / governance frameworks
  • Create business cases and solution specifications for various governance processes (e.g. CTO approvals)
  • Participate in incident management and DR activity, applying critical thinking, problem solving and technical expertise to get to the bottom of major incidents
  • Deliver on time and on budget

What we are looking for:

  • Completed BSc / IT degree or other related field
  • 5 years' production experience
  • 3-5 years' production experience designing and building BI systems and complex data ecosystems
  • 3+ years' production experience working in a Big Data environment (advantageous for all, a must for high-volume environments), optimising and building Big Data pipelines, architectures and data sets with e.g. Java, Scala, Python, Hadoop, Apache Spark and Kafka
  • Most of the broader team are .NET Core developers integrating with Kafka, so .NET Core and C# experience will be advantageous
  • Well versed in Big Data and event-driven architecture best practices
  • Thorough knowledge of web applications, and specifically of the data design patterns used within web applications
  • Production experience designing and setting up Hadoop and Kafka clusters in AWS
  • Experience setting up automated onboarding for these clusters
  • Experience with test-driven development and domain-driven design
  • Experience with appropriate unit testing framework(s)
  • Solid understanding of messaging protocols and web services such as REST
  • Experience with open-source relational and NoSQL databases
  • Advanced knowledge of SQL, e.g. query authoring
  • Familiar with CI/CD tools (preferably Azure DevOps / TFS) and artifact management (preferably JFrog Artifactory)

Please note that if you do not hear from us within 3 weeks, you should consider your application unsuccessful. Please note that most of our positions are remote; however, candidates should reside within travelling distance, as the circumstances of the opportunity can change.

Psybergate

Recruiter

Job Mail

Job Detail

  • Job Id
    JD1243567
  • Industry
    Not mentioned
  • Total Positions
    1
  • Job Type
    Full Time
  • Salary
    Not mentioned
  • Employment Status
    Permanent
  • Job Location
    Cape Town, Western Cape, South Africa
  • Education
    Not mentioned