Senior Databricks Data Engineer
Req. VR-121533
We are seeking a Senior Data Engineer with strong hands-on expertise in Databricks, PySpark, and cloud-based data platforms to support the development, migration, and optimization of our enterprise data platform within the investment domain.
This role will focus on building and maintaining scalable data pipelines and lakehouse data models that support investment analytics, portfolio management, risk analysis, and trading data workflows.
The successful candidate will work closely with data engineers, quantitative analysts, and investment stakeholders to deliver high-quality, reliable, and performant data solutions. Experience with financial datasets such as market data, portfolio holdings, transactions, pricing data, and risk metrics is highly valued.
Key Responsibilities:
Develop and maintain scalable data pipelines and transformation workflows using Databricks, PySpark, and SQL.
Support the migration of datasets, pipelines, and transformation logic from Palantir Foundry to Databricks Delta Lake.
Work with investment and financial datasets, including market data, portfolio positions, transactions, pricing data, and risk metrics.
Build and maintain ETL/ELT pipelines that enable data availability for investment analytics, reporting, and portfolio management systems.
Implement data validation, reconciliation, and quality frameworks to ensure financial data accuracy and consistency.
Optimize Spark jobs, cluster configurations, and storage formats to improve performance and cost efficiency.
Maintain data lineage, documentation, and governance practices to meet financial industry standards.
Support monitoring, troubleshooting, and performance tuning of production data pipelines.
Must have
Advanced Databricks Experience. Deep knowledge of Databricks architecture, Delta Lake, job orchestration, cluster management, and performance tuning.
Experience working with investment data such as market data, portfolio holdings, transactions, pricing data, risk metrics, and financial instruments.
Expert-Level PySpark & Python Skills. Strong ability to design, optimize, and refactor distributed data processing workflows.
Advanced SQL & Data Modeling Expertise. Experience in dimensional modeling, lakehouse architecture patterns, and query optimization.
Cloud Platform Experience (Azure preferred). Hands-on experience deploying and managing data platforms in cloud environments, including storage, security, and networking considerations.
Nice to have
Strong Hands-on Expertise in Palantir Foundry. Proven experience with Foundry pipelines, ontologies, data lineage, transformations, and platform governance.
Proven Migration Experience from Palantir / to Databricks. Demonstrated experience leading or executing platform migrations, including pipeline conversion, data model redesign, and production cutover.
Familiarity with Dynatrace or Datadog for system observability and monitoring.
Databricks certification, cloud certifications (Azure/AWS), or enterprise data architecture certifications.
Languages
English: C1 Advanced
Seniority
Senior
*Eligibility for the above benefits depends on the form of cooperation. Benefits apply only to those employed under a contract of employment.
**Please note that relocation is not available for all open positions. At Luxoft Poland it is possible to work remotely only from the territory of Poland.
***Options offered by the Polish government.
Warsaw, Poland
Other Consulting
BCM Industry
16/03/2026