The client is a global technology company that touches nearly every aspect of travel. Their innovative software enables more than a billion people around the world to plan, book and experience their travel at a time and price that is right for them. By delivering the technology behind travel, the client is working magic behind the scenes every day to make the world a better place, one journey at a time.
Responsibilities
• Build and maintain data pipelines, data transformations, and analytical code.
• Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, and re-designing code for greater scalability, performance, reliability, and maintainability.
• Build analytics software products that utilize the data pipeline to provide actionable insights.
• Design and develop applications in Java, Scala, and Python using a variety of frameworks and tools, such as Spark and Apache Beam.
• Design and develop software solutions in Talend, Dataflow, Dataproc, and other ETL tools.
• Understand and translate business needs into data models supporting long-term solutions.
• Work with the application development team to implement the data strategies, build data flows and develop conceptual data models.
• Perform reverse engineering of code and physical data models from Java, databases and SQL scripts.
• Validate business data objects for accuracy and completeness.
• Analyze data-related system integration challenges and propose appropriate solutions.
• Assemble large, complex data sets that meet functional / non-functional business requirements.
• Work with the development manager on a day-to-day basis.
• Work with other team members to accomplish key software development tasks.
• Work with the operations support team on transition and stabilization.
Requirements
• Bachelor's degree in Computer Science, Engineering, Statistics, Informatics, Information Systems, or another quantitative field.
• The candidate must have experience using the following software/tools:
o Experience with big data tools: Hadoop, Spark, Kafka, etc.
o Experience with GCP and/or AWS cloud services: Dataproc, Dataflow, BigQuery, Data Fusion, VMs, Kubernetes
o Experience with stream-processing systems: Apache Beam, Storm, Spark Streaming, etc.
o Experience with object-oriented and functional scripting languages: Python, Java, Scala, etc.
o Experience with ETL tools: Talend, Oracle Data Integrator, Informatica
o Experience with relational SQL and NoSQL databases
o Experience with data pipeline and workflow management tools: Airflow, Luigi, etc.
• Excellent written and verbal communication skills in English; ability to work on multiple projects
• Ability to work effectively in a fast-paced environment
• Self-starter with a high degree of self-management and commitment to delivery timelines
• Proven interpersonal, communication and presentation skills
• Ability to clearly explain technical concepts and analysis implications to a wide audience
Nice to have
• Experience with and a good understanding of the travel business is a plus
• Familiarity with Agile and DevOps frameworks and processes
English: B2 Upper Intermediate
If needed, we can help you with the relocation process.