Senior Data Engineer

Experience: 6-7 years

Salary: 15-17 Lacs

Posted: 6 days ago | Platform: LinkedIn


Work Mode

On-site

Job Type

Full Time

Job Description

About The Opportunity

This role sits within the fast-paced enterprise technology and data engineering sector, delivering high-impact solutions in cloud computing, big data, and advanced analytics. We design, build, and optimize robust data platforms that power AI, BI, and digital products for leading Fortune 500 clients across industries such as finance, retail, and healthcare. As a Senior Data Engineer, you will play a key role in shaping scalable, production-grade data solutions built on modern cloud and data technologies.

Role & Responsibilities
  • Architect and Develop Data Pipelines: Design and implement end-to-end data pipelines (ingestion → transformation → consumption) using Databricks, Spark, and cloud object storage.
  • Data Warehouse & Data Mart Design: Create scalable data warehouses/marts that empower self-service analytics and machine learning workloads.
  • Database Modeling & Optimization: Translate logical models into efficient physical schemas, ensuring optimal partitioning and performance management.
  • ETL/ELT Workflow Automation: Build, automate, and monitor robust data ingestion and transformation processes with best practices in reliability and observability.
  • Performance Tuning: Optimize Spark jobs and SQL queries through careful tuning of configurations, indexing strategies, and resource management.
  • Mentorship and Continuous Improvement: Provide production support, mentor team members, and champion best practices in data engineering and DevOps methodology.
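To make the ingestion → transformation → consumption pattern above concrete, here is a minimal pure-Python sketch (the record schema and helper names are illustrative only; a production pipeline of the kind this role describes would use PySpark on Databricks with cloud object storage):

```python
# Toy end-to-end pipeline: ingest raw rows, build a small "data mart",
# then expose it for analytics consumption. Pure stdlib, for illustration.

def ingest(raw_rows):
    """Ingestion: parse raw CSV-like lines into typed records."""
    records = []
    for line in raw_rows:
        order_id, region, amount = line.split(",")
        records.append({"order_id": order_id,
                        "region": region,
                        "amount": float(amount)})
    return records

def transform(records):
    """Transformation: aggregate revenue per region (a toy data mart)."""
    mart = {}
    for rec in records:
        mart[rec["region"]] = mart.get(rec["region"], 0.0) + rec["amount"]
    return mart

def consume(mart):
    """Consumption: regions ranked by revenue, ready for a BI layer."""
    return sorted(mart.items(), key=lambda kv: kv[1], reverse=True)

raw = ["1,APAC,120.0", "2,EMEA,75.5", "3,APAC,30.0"]
result = consume(transform(ingest(raw)))
print(result)  # [('APAC', 150.0), ('EMEA', 75.5)]
```

In Spark the same three stages would typically map to `spark.read` (ingestion), DataFrame transformations with `groupBy`/`agg` (transformation), and a write to a warehouse table or mart (consumption).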

Skills & Qualifications

Must-Have

  • 6-7 years of hands-on experience building production-grade data platforms, including at least 3 years with Apache Spark/Databricks.
  • Expert proficiency in PySpark, Python, and advanced SQL with a record of performance tuning distributed jobs.
  • Proven expertise in data modeling, data warehouse/mart design, and managing ETL/ELT pipelines using tools like Airflow or dbt.
  • Hands-on experience with major cloud platforms such as AWS or Azure, and familiarity with modern lakehouse/data-lake patterns.
  • Strong analytical, problem-solving, and mentoring skills with a DevOps mindset and commitment to code quality.

Preferred

  • Experience with AWS analytics services (Redshift, Glue, S3) or the broader Hadoop ecosystem.
  • Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
  • Exposure to streaming pipelines (Kafka, Kinesis, Delta Live Tables) and real-time analytics solutions.
  • Familiarity with ML feature stores, MLOps workflows, or data governance frameworks.
  • Relevant certifications (Databricks, AWS, Azure) or active contributions to open source projects.

Location:

India

Employment Type:

Full Time

Skills: agile methodologies, team leadership, performance tuning, SQL, ELT, Airflow, AWS, data modeling, Apache Spark, PySpark, data, Hadoop, Databricks, Python, dbt, big data technologies, ETL, Azure
