Cloud Data Engineer
Req. VR-116201
Support one of the top Australian banks as it seeks to modernise its data and analytics platform.
You will work directly with IT and business stakeholders in the Data and Platform team to implement the bank's data strategy and its ambition to become the best AI bank in the world.
Roles & Responsibilities:
Design, build, and deliver a new cloud data solution to transform how we meet our international regulatory reporting requirements
Lead the design and delivery of cost-effective, scalable data solutions that align with strategic goals and meet performance, security, and operational requirements
Drive solution architecture decisions, ensuring alignment with enterprise architecture principles and business priorities
Engineer robust data product assets and pipelines in AWS (S3, Glue, Iceberg, Kinesis, Airflow, SageMaker, Redshift) that integrate with other applications, including SaaS reporting applications, e.g. Axiom
Provide technical data governance and risk management
Lead a team of data engineers, providing technical guidance, reviewing work, and mentoring team members to deliver high-quality data products
Define and implement engineering standards, including data modelling, ingestion, transformation, and egress patterns and reviews
Collaborate across teams to ensure a secure, efficient, and well-documented solution
Learn and contribute to continuous improvement initiatives within the team
We would like to hear from individuals who:
Have strong experience in Data Engineering using Agile practices and DevSecOps
Are experienced in designing, building, and delivering “greenfield” data solutions in the AWS Cloud using cloud-native technologies, producing data products or data assets with proper data quality assurance and security controls
Are passionate technology leaders who can mentor and build a community of engaged and curious engineers
Have strong solution design capabilities, consistently driving cost-effective and technologically feasible solutions and steering solution decisions across the group to meet both operational and strategic goals
Have excellent verbal and written communication skills
Have the ability to engage and manage internal stakeholders and external suppliers
Have a problem-solving mindset with a focus on automation and continuous process improvement
Must have
4 to 7 years of total experience across the following skills:
We use a broad range of tools, languages, and frameworks. We don’t expect you to know them all, but experience with or exposure to some of these (or equivalents) will set you up for success in this team:
Extensive experience in designing, building, and delivering enterprise-wide data ingestion, data integration, and data pipeline solutions
Strong Data Architecture expertise, including different data modelling techniques and design patterns (conceptual, logical, and physical; semantic preferred)
Strong knowledge of data governance, such as data lineage, technical metadata, data quality, and reconciliation
Ability to drive platform efficiency through automation and AI capabilities
AWS Data Stack: EMR, Glue, Redshift, Athena, S3, Lambda, ECS
Data Orchestration & Pipelines: Airflow, Dataform
Data Formats & Modelling: Iceberg, JSON, XML, CSV, Data Modelling
Programming & DevOps: Python, SQL, Git, GitHub Actions, TeamCity, Jenkins, Octopus, Unix shell scripting
ETL & Ingestion: File ingress/egress solutions in AWS
Security and Observability: DevSecOps, Artifactory, Observability tooling
Testing & Automation: test automation frameworks, Jupyter Notebooks
Familiarity with data warehousing, with build experience in Teradata and Oracle
Experience with visualisation tools such as Power BI and Tableau
Familiarity and experience with Agile processes
AWS Data Engineer Associate certification
Nice to have
AWS Solutions Architect certification
Containerisation (Docker, Kubernetes)
Data visualisation tools and integration with Tableau and Power BI
Alation
Observability tools (e.g., Observe, Splunk, or Prometheus/Grafana)
Ab Initio or dbt tooling
Experience with the Parquet file format and Iceberg tables
Glue Data Catalog & AWS DataZone
Markets domain knowledge
Languages
English: C2 Proficient
Seniority
Regular
Location
Bengaluru, India
DevOps
BCM Industry
19/11/2025