See Yourself in the Team
As a member of the Operations Data Engineering team, you will perform day-to-day Hadoop platform performance tuning and provide support, ensuring optimal system performance and reliability. You will build ETL applications, manage large-scale, complex big data applications using cloud-native technologies, and collaborate with stakeholders in the Data and Platform crew to drive our Data Strategy and achieve our vision of becoming the best AI Bank in the world.
We Are Interested in Hearing from People Who:
Have hands-on experience building cloud-native applications on AWS.
Bring hands-on experience with Infrastructure as Code using tools such as Terraform and CloudFormation for repeatable AWS deployments.
Are comfortable designing CI/CD pipelines with GitHub Actions.
Focus on automation and building efficient systems.
Are familiar with tools like Hadoop, Spark, Kafka, and other distributed processing frameworks for handling large datasets.
Have knowledge of financial markets and/or corporate & institutional lending products (advantageous).
Tech Skills
We use a broad range of tools, languages, and frameworks. While we don't expect you to know them all, experience with, or a willingness to learn, these skill sets (or equivalents) will set you up for success in our team:
AWS Certified Data Engineer – Associate
Strong SQL experience
Understanding of ETL data pipelines using Ab Initio (or any other ETL tool)
AWS EMR or Glue
Integration knowledge of message queues (MQs), REST APIs, and Kafka
Data warehousing with Snowflake
Infrastructure as Code using Terraform
Designing and implementing secure CI/CD pipelines using GitHub Actions
As this role sits within the operations team, it includes an on-call support component.
If you're interested in this role, click 'apply now' to forward an up-to-date copy of your CV. If this job isn't quite right for you, but you are looking for a new position, please contact us for a confidential discussion on your career.
LHS 297508