Work in a fun environment, learn new things, and take on cool challenges. The frontend developer will design and develop a responsive, stylish web UI for our new portal, built on a modern, cloud-based platform that includes predictive analytics, data mapping, and visualisation. You will play a key role in helping to design and build our new unified portal.
• Hands-on experience implementing data integration processes, designing and developing data models, and building detailed ETL/ELT processes or programs.
• Contributed to at least 2 phases of the SDLC, with experience in Big Data, data warehouse, data analytics, data migration, change management, and/or other IM (Information Management) related work.
• Experience with Hadoop technologies such as HDFS/MapR-FS, MapReduce (v2), advanced HDFS ACLs, Hive, Cassandra, Spark, and Kafka.
• Hands-on experience with Spark, Spark SQL, HiveQL, Impala, Spark DataFrames, and Flink CEP as ETL frameworks.
• Hands-on programming skills in Scala or PySpark using the Spark or Flink framework.
• Strong knowledge of Big Data stream ingestion and stream processing using Kafka, Spark Structured Streaming, and Flink.
• Good working knowledge of HCatalog and the Hive metastore.
• Knowledge and experience in data visualisation concepts using tools such as Tableau, Microsoft Power BI, or QlikView will be an advantage.
• Knowledge of Spark memory management, both with and without YARN, and of the Cloudera and Hortonworks distributions.
• Experience working with RDBMS technologies such as MySQL and MariaDB, and NoSQL databases such as Cassandra, MongoDB, CouchDB/Couchbase, and Elasticsearch.
• Ability to create a positive learning culture and a growth mindset.
• 8 to 10 years of experience in data warehouse, data analytics, and change management projects, and/or other IM (Information Management) related work.
• Preferably with experience in implementation best practices involving data management, data reconciliation, data de-duplication, scheduling, etc.
• Able to assess design considerations relating to data management and integration.
• Experience with Agile/Scrum/Kanban software implementation methodologies.
• Good knowledge of DevOps engineering using continuous integration/delivery tools such as Docker, Jenkins, Puppet, Chef, GitHub, and Atlassian Jira.
• Certification in any Hadoop/Big Data tool or technology, data integration, data management, or visualisation tool is an added advantage.
• Knowledge of Collibra metadata management is an added advantage.
• Knowledge of Apache Airflow is an added advantage.
• Knowledge of infrastructure fundamentals such as operating systems and networking is an added advantage.
Nice to have
English: C2 Proficient
If needed, we can help you with the relocation process.