Data Engineer

Experience: 4 years


Posted: 3 days ago | Platform: LinkedIn


Work Mode: On-site

Job Type: Full Time

Job Description

Job Title: Data Engineer

Experience: 4-9 Years

Location: Noida, Chennai & Pune

Skills: Python, PySpark, Snowflake & Redshift

Key Responsibilities

Migration & Modernization:

• Lead the migration of data pipelines, models, and workloads from Redshift to Snowflake/Yellowbrick.

• Design and implement landing, staging, and curated data zones to support scalable ingestion and consumption patterns.

• Evaluate and recommend tools and frameworks for migration, including file formats, ingestion tools, and orchestration.

• Design and build robust ETL/ELT pipelines using Python, PySpark, SQL, and orchestration tools (e.g., Airflow, dbt); a minimal batch PySpark sketch appears after this list.

• Support both batch and streaming pipelines, with real-time processing via Kafka, Snowpipe, or Spark Structured Streaming; see the streaming sketch after this list.

• Build modular, reusable, and testable pipeline components that handle high volume and ensure data integrity.

• Define and implement data modeling strategies (star, snowflake, normalization/denormalization) for analytics and BI layers.

• Implement strategies for data versioning, late-arriving data, and slowly changing dimensions; an illustrative SCD Type 2 merge appears after this list.

• Implement automated data validation and anomaly detection (using tools like dbt tests, Great Expectations, or custom checks).

• Build logging and alerting into pipelines to monitor SLA adherence, data freshness, and pipeline health.

• Contribute to data governance initiatives including metadata tracking, data lineage, and access control.
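
To ground the ETL/ELT bullet above, here is a minimal PySpark batch sketch that moves data from a landing zone to a curated zone. The S3 paths, column names, and cleansing rules are illustrative assumptions, not details taken from this role.

```python
# Minimal PySpark batch ETL sketch: landing zone -> curated zone.
# Paths, schema, and rules below are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_landing_to_curated").getOrCreate()

# Read raw files from a hypothetical landing zone.
raw = spark.read.parquet("s3://data-lake/landing/orders/")

# Basic cleansing: deduplicate on the key, normalize types, drop bad rows.
curated = (
    raw.dropDuplicates(["order_id"])
       .withColumn("order_ts", F.to_timestamp("order_ts"))
       .filter(F.col("amount") > 0)
)

# Append to the curated zone, partitioned by ingestion date.
(curated.withColumn("ingest_date", F.current_date())
        .write.mode("append")
        .partitionBy("ingest_date")
        .parquet("s3://data-lake/curated/orders/"))
```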
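For the streaming bullet, a hedged Spark Structured Streaming sketch that lands Kafka events as Parquet; the broker address, topic, and checkpoint path are placeholders, and the job assumes the spark-sql-kafka connector package is available.

```python
# Hypothetical streaming ingestion: Kafka -> Parquet landing zone.
# Requires the spark-sql-kafka connector; all endpoints are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("events_stream").getOrCreate()

events = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "broker:9092")
         .option("subscribe", "events")
         .load()
)

# Kafka delivers the payload as binary; cast it to a string here
# (a production job would parse JSON/Avro against a registered schema).
parsed = events.select(F.col("value").cast("string").alias("payload"))

query = (
    parsed.writeStream.format("parquet")
          .option("path", "s3://data-lake/landing/events/")
          .option("checkpointLocation", "s3://data-lake/checkpoints/events/")
          .trigger(processingTime="1 minute")
          .start()
)
query.awaitTermination()
```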
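The slowly-changing-dimensions item is commonly handled as an SCD Type 2 merge. Below is an illustrative two-step pattern in Snowflake SQL driven from snowflake-connector-python; the connection parameters and the dim_customer/stg_customer tables and columns are hypothetical.

```python
# Illustrative SCD Type 2 pattern for Snowflake; connection parameters and
# all table/column names are hypothetical placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    user="USER", password="PASSWORD", account="ACCOUNT",
    warehouse="WH", database="DB", schema="ANALYTICS",
)
cur = conn.cursor()

# Step 1: close out current rows whose tracked attributes changed.
cur.execute("""
    MERGE INTO dim_customer d
    USING stg_customer s
      ON d.customer_id = s.customer_id AND d.is_current
    WHEN MATCHED AND (d.email <> s.email OR d.segment <> s.segment) THEN
      UPDATE SET d.is_current = FALSE, d.valid_to = CURRENT_TIMESTAMP()
""")

# Step 2: insert a fresh current row for new or changed customers.
cur.execute("""
    INSERT INTO dim_customer
      (customer_id, email, segment, valid_from, valid_to, is_current)
    SELECT s.customer_id, s.email, s.segment, CURRENT_TIMESTAMP(), NULL, TRUE
    FROM stg_customer s
    LEFT JOIN dim_customer d
      ON d.customer_id = s.customer_id AND d.is_current
    WHERE d.customer_id IS NULL
""")
conn.commit()
```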

Required Skills & Experience

• 10+ years in data engineering roles with increasing responsibility.

• Proven experience leading data migration or re-platforming projects.

• Strong command of Python, SQL, and PySpark for data pipeline development.

• Hands-on experience with modern data platforms like Snowflake, Redshift, Yellowbrick, or BigQuery.

• Proficient in building streaming pipelines with tools like Kafka, Flink, or Snowpipe.

• Deep understanding of data modeling, partitioning, indexing, and query optimization.

• Expertise with ETL orchestration tools (e.g., Apache Airflow, Prefect, Dagster, or dbt); a minimal Airflow DAG sketch follows this list.

• Comfortable working with large datasets and solving performance bottlenecks.

• Experience in designing data validation frameworks and implementing data-quality (DQ) rules; see the data-quality sketch after this list.
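
For the orchestration point, a minimal Apache Airflow 2.x DAG wiring three Python tasks in sequence; the dag_id, schedule, and task bodies are assumptions for illustration only.

```python
# Minimal Airflow 2.x DAG sketch; dag_id, schedule, and task logic are
# illustrative assumptions, not a prescribed design.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull from source")      # placeholder task body

def transform():
    print("clean and model")       # placeholder task body

def load():
    print("publish to warehouse")  # placeholder task body

with DAG(
    dag_id="orders_daily",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",  # Airflow >= 2.4; use schedule_interval on older versions
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    t_extract >> t_transform >> t_load
```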
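And for the validation item, a sketch of the "custom checks" option from the responsibilities list: a few hand-rolled data-quality rules in PySpark. Column names and rules are placeholders; a real pipeline would route the failure into its logging/alerting channel rather than simply raising.

```python
# Illustrative custom data-quality checks on a curated table; the path,
# key column, and rules are placeholder assumptions.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dq_checks").getOrCreate()
df = spark.read.parquet("s3://data-lake/curated/orders/")

checks = {
    "no_null_keys": df.filter(F.col("order_id").isNull()).count() == 0,
    "no_duplicate_keys": df.count() == df.select("order_id").distinct().count(),
    "amount_positive": df.filter(F.col("amount") <= 0).count() == 0,
}

failed = [name for name, passed in checks.items() if not passed]
if failed:
    # In production this would emit a metric/alert and fail the task run.
    raise ValueError(f"Data-quality checks failed: {failed}")
```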
