Data Engineer

Location: Guadalajara, Mexico
Specialization: DWH Development
Industry: HLS & Consumer
Date: 26/01/2026
Req. VR-120227

Project description

The project focuses on the development, maintenance, and enhancement of a cloud-based data platform supporting Procter & Gamble (P&G) analytics and data processing use cases. The platform operates across Microsoft Azure and Google Cloud Platform, enabling scalable data ingestion, transformation, storage, and analytics for enterprise and Point of Sale (POS) data.

The engineer will be part of a data engineering team responsible for designing and implementing robust data pipelines and ELT processes, building data models, and ensuring high-quality, reliable data delivery. The role requires strong software engineering practices, including CI/CD, automated testing, code quality standards, and version control.

On the Azure side, the solution leverages Azure Databricks, Azure SQL Server, Azure Machine Learning, and Azure Blob Storage, primarily using Python and PySpark. On Google Cloud Platform, the stack includes BigQuery, DataProc, Airflow, and GCP Buckets, also with a strong emphasis on Python and PySpark development.
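
As a purely illustrative sketch of this kind of PySpark work (the storage paths, container names, column names, and business rule below are invented placeholders, not details from the project):

    from pyspark.sql import SparkSession, functions as F

    # Hypothetical example: read raw POS files from object storage, apply a
    # light transformation, and write a curated, partitioned dataset.
    spark = SparkSession.builder.appName("pos_elt_sketch").getOrCreate()

    raw = (
        spark.read.option("header", "true")
        .csv("abfss://raw@examplestore.dfs.core.windows.net/pos/")  # placeholder path
    )

    curated = (
        raw.withColumn("sale_date", F.to_date("sale_date", "yyyy-MM-dd"))
        .withColumn("amount", F.col("amount").cast("double"))
        .dropDuplicates(["transaction_id"])  # placeholder deduplication rule
    )

    curated.write.mode("overwrite").partitionBy("sale_date").parquet(
        "abfss://curated@examplestore.dfs.core.windows.net/pos/"  # placeholder path
    )

The same read-transform-write shape would apply on the GCP side, with BigQuery tables or GCS bucket paths in place of Blob Storage.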

The engineer will actively contribute throughout the software development lifecycle, including requirements definition, code reviews, deployment, release engineering, and operational support. CI/CD pipelines are implemented using GitHub and GitHub Actions, following enterprise security and compliance standards.
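
For illustration only, here is the kind of unit test such a pipeline might run on every pull request; the transformation under test and all names are hypothetical, not taken from the project:

    import pytest
    from pyspark.sql import SparkSession

    def dedupe_transactions(df):
        # Hypothetical transformation under test: keep one row per transaction_id.
        return df.dropDuplicates(["transaction_id"])

    @pytest.fixture(scope="module")
    def spark():
        # Local Spark session so the test can run inside a CI job.
        return SparkSession.builder.master("local[1]").appName("ci-tests").getOrCreate()

    def test_dedupe_transactions(spark):
        df = spark.createDataFrame(
            [("t1", 10.0), ("t1", 10.0), ("t2", 5.0)],
            ["transaction_id", "amount"],
        )
        assert dedupe_transactions(df).count() == 2

A GitHub Actions workflow would then typically check out the repository, install dependencies, and run pytest alongside a linter, with coverage reported as a quality gate.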

The role requires compliance with P&G IT policies, completion of mandatory P&G training, and authorization to work with POS data. The engineer must have access to both Azure and GCP P&G cloud environments, including virtual machines with connectivity to PGI endpoints. A macOS or Linux development environment is preferred.

From a collaboration perspective, the engineer works closely with P&G stakeholders and distributed teams. Candidates located in Latin America are preferred, with Spanish language skills considered a plus. The engineer should work in the same time zone as the P&G team, although a minimum 4-hour daily overlap may be negotiated for candidates from other regions.

Responsibilities
- Design, develop, and maintain scalable data pipelines and ELT processes across Azure and Google Cloud Platform environments.
- Implement and optimize data solutions using Python and PySpark on platforms such as Azure Databricks, BigQuery, DataProc, and Airflow (an illustrative orchestration sketch follows this list).
- Develop and maintain data models to support analytics, reporting, and downstream data consumers.
- Ensure high code quality by applying software engineering best practices, including unit testing, code coverage, linting, and peer code reviews.
- Build, maintain, and enhance CI/CD pipelines using GitHub and GitHub Actions.
- Participate in the full software development lifecycle, including requirements analysis, design, implementation, deployment, and release management.
- Collaborate closely with cross-functional teams and P&G stakeholders to deliver reliable, high-quality data solutions.
- Monitor, troubleshoot, and optimize data workflows to ensure performance, scalability, and reliability.
- Adhere to P&G IT policies, security standards, and compliance requirements, including when working with POS data.
- Contribute to documentation, knowledge sharing, and continuous improvement of engineering processes and standards.
- Support cloud infrastructure usage on Azure and GCP, including working with virtual machines and secure access to PGI endpoints.
- Actively identify opportunities for automation, process improvements, and reduction of operational defects.
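
As referenced in the list above, a minimal orchestration sketch: an Airflow DAG chaining an extract step to a PySpark transform step. The DAG id, schedule, and callables are invented for illustration, and the Airflow 2.x API is assumed:

    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def extract():
        # Placeholder: pull raw POS files from object storage.
        print("extracting raw POS files")

    def transform():
        # Placeholder: submit a PySpark job to clean and model the data.
        print("running PySpark transformation")

    with DAG(
        dag_id="example_pos_pipeline",  # hypothetical name
        start_date=datetime(2026, 1, 1),
        schedule="@daily",  # Airflow 2.4+ argument; older versions use schedule_interval
        catchup=False,
    ):
        extract_task = PythonOperator(task_id="extract", python_callable=extract)
        transform_task = PythonOperator(task_id="transform", python_callable=transform)
        extract_task >> transform_task

In practice the transform step would more likely submit a Databricks or DataProc job than run locally; the dependency structure of the DAG is the point of the sketch.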

Skills

Must have

- Strong proficiency in Python and PySpark, with hands-on experience in production-grade data solutions.
- Solid knowledge of data engineering concepts, including object storage, data pipelines, ELT processes, and data modeling.
- Hands-on experience with cloud-based data platforms on Microsoft Azure and Google Cloud Platform.
- Practical experience with tools and services such as Azure Databricks, Azure Blob Storage, Azure SQL Server, BigQuery, DataProc, Airflow, and GCP Buckets.
- Strong understanding of software engineering best practices, including:
  - Continuous Integration and Continuous Deployment (CI/CD)
  - Unit testing, test coverage, and code quality standards
  - Linting and static code analysis
  - Version control systems (Git)
- Experience with GitHub and GitHub Actions for source control and CI/CD automation.
- Understanding of the Software Development Lifecycle (SDLC), including requirements definition, code reviews, release engineering, and deployments.
- Ability to work with enterprise-scale data, including sensitive datasets such as Point of Sale (POS) data, while adhering to security and compliance requirements.
- Familiarity with working in enterprise cloud environments, including access via virtual machines and secure endpoints.
- Ability to work effectively in a distributed team environment and collaborate with multiple stakeholders.
- Availability to work in the same time zone as the P&G team or ensure a minimum 4-hour daily overlap.

Nice to have

- Experience working in enterprise or regulated environments, preferably within large global organizations.
- Knowledge of Azure Machine Learning and/or advanced analytics workloads.
- Experience with workflow orchestration and scheduling beyond basic pipelines (e.g., advanced Airflow patterns).
- Familiarity with monitoring, logging, and alerting for data pipelines and cloud-based applications.
- Experience with data quality frameworks and automated data validation.
- Knowledge of infrastructure-as-code concepts (e.g., Terraform, ARM templates, or similar tools).
- Understanding of cost optimization strategies in cloud environments.
- Experience with cross-cloud or hybrid cloud architectures (Azure + GCP).
- Prior exposure to POS (Point of Sale) data or retail/consumer data domains.
- Spanish language skills, especially for collaboration with teams in Latin America.
- Experience working in Agile/Scrum environments.
- Familiarity with Linux or macOS development environments.

Other
Languages
Spanish: B2 Upper Intermediate; English: B2 Upper Intermediate

Seniority
Senior

