Senior Data Engineer

Mumbai, India (and 2 more locations) | Job ID: 21012706


We are hiring a Senior Data Engineer to join our team. At Kroll, we are building a strong, forward-looking data practice that integrates artificial intelligence, machine learning, and advanced analytics. You will design, build, and integrate data pipelines from diverse sources and collaborate with teams that serve the world’s largest financial institutions, law enforcement bodies, and government agencies. This role partners with, and has a dotted-line relationship to, the Alternative Asset Advisory service line, supporting its strategic data initiatives and client delivery goals.

The day-to-day responsibilities include, but are not limited to:

  • Design and build robust, scalable organizational data infrastructure and architecture.

  • Identify and implement process improvements (e.g., infrastructure redesign, automation of data workflows, performance optimizations).

  • Select appropriate tools, services, and technologies to build resilient pipelines for data ingestion, transformation, and distribution.

  • Develop and manage ELT/ETL pipelines and related applications.

  • Collaborate with global teams to deliver fault-tolerant, high-quality data engineering solutions.

  • Perform monthly code quality audits and peer reviews to ensure consistency, readability, and maintainability across the engineering codebase.

Requirements:

  • Proven experience building and managing ETL/ELT pipelines.

  • Advanced proficiency with Azure, AWS, and Databricks, with a focus on data services.

  • Deep knowledge of Python, the Spark ecosystem (PySpark, Spark SQL), and relational databases.

  • Experience building REST APIs, Python SDKs, libraries, and Spark-based data services.

  • Hands-on expertise with modern frameworks and tools such as FastAPI, Pydantic, Polars, Pandas, Delta Lake, Docker, and Kubernetes.

  • Understanding of Lakehouse architecture, Medallion architecture, and data governance.

  • Experience with pipeline orchestration tools (e.g., Airflow, Azure Data Factory).

  • Strong communication skills and the ability to work cross-functionally with international teams.

  • Skilled in data profiling, cataloging, and mapping for technical data flows.

  • Understanding of API product management principles, including lifecycle strategy, documentation standards, and versioning.

Desired Skills:

  • Deep understanding of cloud architecture (compute, storage, networking, security, cost optimization).

  • Experience tuning complex SQL/Spark queries and pipelines for performance.

  • Hands-on experience building Lakehouse solutions using Azure Databricks, ADLS, PySpark, etc.

  • Familiarity with OOP, asynchronous programming, and batch processing paradigms.

  • Experience with CI/CD, Git, and DevOps best practices.

About Kroll

In a world of disruption and increasingly complex business challenges, our professionals bring truth into focus with the Kroll Lens. Our sharp analytical skills, paired with the latest technology, allow us to give our clients clarity, not just answers, in all areas of business. We value the diverse backgrounds and perspectives that enable us to think globally. As part of One Team, One Kroll, you will contribute to a supportive and collaborative work environment that empowers you to excel.

Kroll is the premier global valuation and corporate finance advisor with expertise in complex valuation, disputes and investigations, M&A, restructuring, and compliance and regulatory consulting. Our professionals balance analytical skills, deep market insight and independence to help our clients make sound decisions. As an organization, we think globally—and encourage our people to do the same.

Kroll is committed to equal opportunity and diversity, and recruits people based on merit.

To be considered for a position, you must formally apply via careers.kroll.com.

#LI-Hybrid

#LI-AT1