Manager - Data Engineering

Date: 17 Mar 2026

Location: Bangalore, KA, IN, 560024

Company: Tata Consumer Products Limited


About the Job: Manager – Data Engineering

 

Function: Digital

Location: Bangalore

Reporting To: Senior Manager – Data Platform Architect

 

At Tata Consumer Products Ltd, we stand #ForBetter – Planet, Sourcing, Nutrition, Communities. And #ForBetter Opportunities… Here’s an exciting one!

 

How does this Job align with our Strategy?

At the core of Tata Consumer Products' business approach lie six strategic pillars that serve as the foundation for its growth and success: Strengthening & Accelerating our Core Business, Driving Digital and Innovation, Unlocking Synergies, Creating a Future-Ready Organization, Exploring New Opportunities and Embedding Sustainability.

This job opportunity closely aligns with the key strategic pillar of driving digital and innovation. We are looking for a Data Engineer to build and operate reliable, scalable data pipelines and curated datasets that power enterprise analytics, reporting, and AI/ML initiatives. You will work on modern cloud data platforms to ingest, transform, and deliver trusted data products with strong quality, performance, and operational discipline. This role requires hands-on engineering depth, a mindset for automation, and strong collaboration with analytics and data science teams.

Top Dimensions:

Geography: Global

Direct Reports: NA

Complexity of the Role (Optional):

Matrix Reports: NA

Type of Role: Individual Contributor

Primary Stakeholders (Optional):

What are the Key Deliverables in this role?

Data Pipelines & ELT/ETL Engineering

  • Design, build, and maintain end-to-end ETL/ELT pipelines for structured and semi-structured data.
  • Develop and enhance integrations using SnapLogic and AWS Glue with reusable components and standard patterns.
  • Implement scalable transformations using SQL and Python/PySpark, ensuring correctness and maintainability.
  • Support incremental and batch processing patterns with robust error handling and recovery.
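
To make the incremental batch pattern above concrete, here is a minimal, hypothetical Python sketch of a high-water-mark step: only rows newer than the last successful watermark are processed, and the watermark advances only when the batch completes, so a failed run can simply be retried from the same point. The function name, the `updated_at` field, and the row shape are illustrative assumptions, not part of any specific TCPL pipeline.

```python
def incremental_batch(rows, last_watermark, transform):
    """Return (transformed_rows, new_watermark) for rows newer than last_watermark.

    Illustrative sketch: `rows` are dicts carrying an `updated_at` value.
    If `transform` raises, the exception propagates and the caller keeps
    the old watermark, so the batch is safely re-runnable.
    """
    # Pick up only rows changed since the last successful run.
    batch = [r for r in rows if r["updated_at"] > last_watermark]

    out = []
    for r in batch:
        out.append(transform(r))  # a failure here leaves the watermark unchanged

    # Advance the watermark only over rows we actually processed.
    new_watermark = max((r["updated_at"] for r in batch), default=last_watermark)
    return out, new_watermark
```

A rerun with the same watermark reprocesses exactly the failed batch and nothing earlier, which is the recovery property the bullet above asks for.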

 

Cloud Data Platform (S3 + Snowflake)

  • Build cloud-native data solutions using AWS S3 for storage and Snowflake for analytics-ready datasets.
  • Develop efficient loading patterns into Snowflake and optimize performance and cost.
  • Create curated, consumption-ready datasets that serve dashboards, reporting, and downstream ML workloads.
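
As one illustration of an efficient Snowflake loading pattern from S3, the sketch below renders a bulk-load `COPY INTO` statement for files on an external stage. The helper itself, and the table, stage, and file-format names, are hypothetical examples for this posting, not a Snowflake API or a TCPL standard.

```python
def copy_into_sql(table, stage, file_format="parquet_fmt", pattern=None):
    """Build a Snowflake COPY INTO statement for loading staged files.

    `table`, `stage`, and `file_format` name objects that are assumed to
    already exist in the target account; `pattern` optionally restricts
    the load to matching file paths on the stage.
    """
    sql = (
        f"COPY INTO {table} FROM @{stage} "
        f"FILE_FORMAT = (FORMAT_NAME = '{file_format}')"
    )
    if pattern:
        sql += f" PATTERN = '{pattern}'"  # load only matching staged files
    return sql
```

For example, `copy_into_sql("curated.orders", "raw_stage/orders", pattern=".*[.]parquet")` limits the load to Parquet files under the stage path.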

Workflow Orchestration & Automation (Airflow)

  • Orchestrate and monitor production workflows using Apache Airflow (scheduling, dependencies, retries, alerts).
  • Implement operational automation to improve reliability, reduce manual effort, and increase repeatability.
  • Troubleshoot failures quickly and drive resolution to meet defined delivery SLAs.
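
Airflow configures retries declaratively per task; as a plain-Python sketch of the same retry-with-backoff semantics (illustrative names only, not the Airflow API):

```python
import time

def run_with_retries(task, retries=3, base_delay=1.0, sleep=time.sleep):
    """Run `task` up to `retries` + 1 times with exponential backoff.

    Hypothetical sketch of orchestrator-style retry handling: transient
    failures are retried; the last error is re-raised for alerting once
    retries are exhausted. `sleep` is injectable for testing.
    """
    last_err = None
    for attempt in range(retries + 1):
        try:
            return task()
        except Exception as err:
            last_err = err
            if attempt < retries:
                sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...
    raise last_err
```

In Airflow itself the equivalent knobs are per-task retry count and retry delay plus failure callbacks for alerting; the point of the sketch is the semantics, not the mechanism.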

Data Quality, Governance & Operational Excellence

  • Leverage AI/ML innovation and learnings from the market to create a seamless customer experience.
  • Champion agile methodologies, steering digital transformation projects with precision and adaptability, ensuring that we deliver value fast and pivot even faster when needed.

Collaboration & Delivery

  • Partner with analytics, reporting, and data science teams to understand consumption needs and deliver trusted data.
  • Participate in design and code reviews; contribute to improving engineering standards and platform practices.
  • Support release management practices including versioning, testing, and controlled rollouts.

 

 

What are the Critical success factors for the Role?

  • 5–6 years of hands-on experience in Data Engineering / ETL / ELT roles.
  • Proven experience building and operating production-grade pipelines with monitoring and incident resolution.
  • Experience working with cloud-scale datasets and performance-oriented data processing.

 

What are the Desirable success factors for the Role?

  • Exposure to AI/ML fundamentals and enabling ML-ready datasets (feature consistency, stable metrics).
  • Familiarity with data governance / metadata practices (catalog, lineage, stewardship).
  • Experience with additional AWS services (e.g., Lambda, Redshift) or Azure/GCP.
  • Relevant certifications: AWS / Snowflake / SnapLogic.

Core Technical Skills

  • SQL: Advanced SQL for transformations, reconciliation, and query optimization.
  • Python / PySpark: Production-quality coding for scalable processing and transformations.
  • ETL/ELT Tools: Hands-on experience with SnapLogic and/or similar integration tools.
  • AWS Data Services: Strong working knowledge of AWS Glue and S3 (ingestion, processing, storage).
  • Snowflake: Experience with Snowflake warehousing, loading strategies, and performance tuning.
  • Orchestration: Production usage of Apache Airflow for scheduling, dependency control, and monitoring.
  • Reliability & Ops: Ability to troubleshoot failures, improve observability, and stabilize pipelines.
  • Version Control: Strong Git proficiency and standard code review / branching workflows.