Relevant bachelor's degree in Computer Science, or an equivalent level of industry certification in a technical field, is essential, provided minimum requirements of experience and practical application in the following areas are evident:
3-5 years of production experience designing and building BI systems and complex data ecosystems
3+ years of production experience working in Big Data environments (advantageous for all, a must for high-volume environments), optimizing and building Big Data pipelines, architectures, and data sets with e.g. Java, Scala, Python, Hadoop, Apache Spark, and Kafka. Most of the broader team are .NET Core developers integrating with Kafka, so .NET Core and C# experience will be advantageous.
Architecture:
Well-versed in Big Data and Event-Driven Architecture best practices.
Thorough knowledge of web applications, specifically the data design patterns used within them
Infrastructure:
Production experience in designing and setting up Hadoop and Kafka clusters in AWS
Experience in setting up automated onboarding for these clusters
Logical & physical design of data models and entity relations
Testing:
Experience with test-driven development and domain-driven design
Experience with an appropriate unit testing framework(s)
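For illustration, a minimal sketch of the kind of unit test expected in a test-driven workflow, using Python's built-in unittest framework (the function under test and its name are hypothetical):

```python
import unittest


def dedupe_events(event_ids):
    """Drop duplicate event IDs while preserving arrival order."""
    seen = set()
    result = []
    for event_id in event_ids:
        if event_id not in seen:
            seen.add(event_id)
            result.append(event_id)
    return result


class DedupeEventsTest(unittest.TestCase):
    def test_removes_duplicates_preserving_order(self):
        self.assertEqual(dedupe_events(["a", "b", "a", "c"]), ["a", "b", "c"])

    def test_empty_input(self):
        self.assertEqual(dedupe_events([]), [])


if __name__ == "__main__":
    unittest.main(exit=False)
```

In a TDD workflow the tests above would be written first, fail, and then drive the implementation.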
Messaging:
Solid understanding of messaging protocols and web services like REST
Databases:
Experience with open-source relational and NoSQL databases
Advanced knowledge of SQL e.g. query authoring
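As an illustrative example of the query-authoring level implied here, a window-function query run against an in-memory SQLite database from Python (the table and column names are hypothetical):

```python
import sqlite3

# In-memory database with a hypothetical transactions table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE txn (account TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO txn VALUES (?, ?)",
    [("A", 100.0), ("A", 50.0), ("B", 200.0), ("B", 25.0)],
)

# Running total per account, ordered by insertion order (rowid).
rows = conn.execute(
    """
    SELECT account,
           amount,
           SUM(amount) OVER (
               PARTITION BY account ORDER BY rowid
           ) AS running_total
    FROM txn
    ORDER BY account, rowid
    """
).fetchall()

for account, amount, running_total in rows:
    print(account, amount, running_total)
```

Window functions require SQLite 3.25 or later, which ships with current Python releases.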
Thorough knowledge of operating systems such as Windows and Linux
SSL security, including expertise in encryption methods and secure data transfer across systems
Strong understanding of version control and related concepts and techniques, particularly Git
Familiar with OAuth, OpenID Connect, and SAML, preferably with an understanding of AD / LDAP / Kerberos
Familiar with containerisation technologies like Docker and/or orchestrators like Kubernetes
Automation:
Familiar with CI/CD tools (preferably Azure DevOps / TFS) and Artifact Management (preferably JFrog Artifactory)
Familiar with scripting languages like Bash and/or Python
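To illustrate the scripting expectation, a small standard-library Python snippet that renders a pipeline config from environment variables at deploy time (the variable and config names are hypothetical):

```python
import os
from string import Template

# Hypothetical config template filled in at deploy time.
CONFIG_TEMPLATE = Template(
    "bootstrap.servers=$brokers\n"
    "group.id=$group\n"
)


def render_config(env):
    """Fill the config template from an environment-style mapping,
    falling back to local defaults when a variable is unset."""
    return CONFIG_TEMPLATE.substitute(
        brokers=env.get("KAFKA_BROKERS", "localhost:9092"),
        group=env.get("CONSUMER_GROUP", "default-group"),
    )


if __name__ == "__main__":
    print(render_config(os.environ))
```

The same pattern applies whether the script runs in a CI/CD pipeline step or locally during development.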
Data Architecture & Specialist Data Engineering
Understand the technical landscape and bank-wide architecture that is connected to or dependent on the business area supported, in order to effectively design and deliver data solutions (architecture, pipeline, etc.)
Translate and interpret the data architecture direction and associated business requirements, and leverage expertise in analytical and creative problem solving to produce data solution designs (build a solution from its components) beyond the analysis of the problem.
Participate in design thinking processes to successfully deliver data solution blueprints
Leverage open-source relational and NoSQL databases as well as integration and streaming platforms to deliver sustainable business-specific data solutions.
Design data retrieval, storage & distribution solutions (or components thereof) including contributing to all phases of the development lifecycle.
Develop high-quality data processing, retrieval, storage, and distribution designs in a test-driven and domain-driven / cross-domain environment.
Assemble large, complex data sets that meet business requirements, and manage the data pipeline.
Build analytics tools that use the data pipeline by quickly producing well-organized, optimized, and documented source code and algorithms to deliver technical data solutions.
Automate tasks through appropriate tools and scripting technologies, e.g. Terraform.
Debug existing source code and polish feature sets.
ExecutivePlacements.com