Knowing that the existing data structure does not meet the established requirements or quality standards, the deliverables will focus on establishing an iterative process that casts a wide net across the many systems, data warehouses, and data lakes to create a data set that can be mined for applicable information. This iterative data processing will build a centralized metadata and table-information repository.
Focusing on standardizing and simplifying access to the required data, the deliverables will implement a data governance model and a knowledge management repository to improve the searchability of the data, with the aim of eventually migrating to the Azure cloud environment.
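As an illustration of the kind of iterative harvest described above, the sketch below collects table and column metadata from a source database's catalog and loads it into a central repository table. This is a minimal, hypothetical example: it uses in-memory SQLite for both the source and the repository, and the table name `table_metadata` and helper names are assumptions, not part of the actual deliverables.

```python
import sqlite3

def harvest_metadata(source: sqlite3.Connection, system_name: str):
    """Return (system, table, column, type) rows from the source's catalog."""
    rows = []
    tables = source.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table'"
    ).fetchall()
    for (table,) in tables:
        # PRAGMA table_info yields (cid, name, type, notnull, default, pk)
        for cid, col, col_type, *_ in source.execute(f"PRAGMA table_info({table})"):
            rows.append((system_name, table, col, col_type))
    return rows

def load_repository(repo: sqlite3.Connection, rows):
    """Upsert harvested rows into the central metadata repository."""
    repo.execute(
        """CREATE TABLE IF NOT EXISTS table_metadata (
               system_name TEXT, table_name TEXT,
               column_name TEXT, data_type TEXT,
               PRIMARY KEY (system_name, table_name, column_name))"""
    )
    # INSERT OR REPLACE makes repeated harvest passes idempotent.
    repo.executemany(
        "INSERT OR REPLACE INTO table_metadata VALUES (?, ?, ?, ?)", rows
    )
    repo.commit()

# One pass of the iterative harvest over a single (simulated) source system.
source = sqlite3.connect(":memory:")
source.execute("CREATE TABLE customers (id INTEGER, email TEXT)")
repo = sqlite3.connect(":memory:")
load_repository(repo, harvest_metadata(source, "crm"))
found = repo.execute("SELECT COUNT(*) FROM table_metadata").fetchone()[0]
```

In a real environment, each source system (warehouse, lake, operational database) would get its own harvest pass against its catalog views, and the repository would live in a governed central store rather than in memory.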
5+ years of software engineering experience with object-oriented design, coding, and testing patterns on large-scale data infrastructure.
• Advanced SQL programming
• Object-oriented programming in Python/Java
• Advanced HTML/CSS/JavaScript programming
• Web development
• Building data structures/dictionaries.
• Data manipulation/cleaning/wrangling/munging
• Data Modeling/ETL/Data Migration/Data Lineage
• PowerShell / Bash Scripting
• Ability to work with data pipelines.
• 5+ years of experience in software development using SQL and/or Python.
• 3+ years of experience with cloud-based Data Warehouse and Data Lake environments (Snowflake)
• 2+ years of experience with migration methods to cloud solutions.
• Azure blob storage, Azure Data Factory, Azure-Dev Ops.
• 2+ years of experience with ETL tools.
• Experience working on CI/CD processes and source control tools such as GitHub.
Nice to have
• Big Data Programming Tools such as Apache Spark/Scala/Hive.
• Azure Data Services and DevOps.
• Knowledge/Experience in building CI/CD pipelines within cloud services.
• Git/GitHub
• Teradata Database Systems.
• Azure Databricks
• Microsoft Power BI Development
English: C2 Proficient
If needed, we can help you with the relocation process.
SQL (MySQL, T-SQL, Teradata, etc.)