Senior Data Engineer

4 - 6 years

18 - 22 Lacs

Posted: 6 hours ago | Platform: Naukri

Work Mode

Hybrid

Job Type

Full Time

Job Description

Job Summary:

We are seeking a skilled and motivated Data Engineer to join our growing team. In this role, you will be responsible for designing, building, and maintaining robust data pipelines and infrastructure on both on-premises and cloud platforms. You will leverage your expertise in PySpark, Spark, Python, and Apache Airflow to process and orchestrate large-scale data workloads, ensuring data quality, efficiency, and scalability. If you have a passion for data engineering and a desire to make a significant impact, we encourage you to apply!

Key Responsibilities:

Data Engineering & Data Pipeline Development:

  • Design, develop, and optimize scalable data workflows using Python, PySpark, and Airflow (see the pipeline sketch after this list)
  • Implement real-time and batch data processing using Spark
  • Enforce best practices for data quality, governance, and security throughout the data lifecycle
  • Ensure data availability, reliability, and performance through monitoring and automation.
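
For illustration, a minimal sketch of the kind of batch pipeline this role covers: a PySpark job with a simple data-quality gate before writing curated output. The paths, column names, and 1% null threshold are hypothetical.

```python
# Minimal batch PySpark job with a basic data-quality gate.
# All paths and column names are illustrative assumptions.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily_orders_batch").getOrCreate()

orders = spark.read.parquet("/data/raw/orders/")

# Quality gate: fail fast if too many rows are missing the key column.
total = orders.count()
missing = orders.filter(F.col("order_id").isNull()).count()
if total == 0 or missing / total > 0.01:
    raise ValueError(f"Quality gate failed: {missing}/{total} rows missing order_id")

# Transform: daily revenue per customer.
daily = (
    orders
    .withColumn("order_date", F.to_date("order_ts"))
    .groupBy("order_date", "customer_id")
    .agg(F.sum("amount").alias("revenue"))
)

# Write date-partitioned output for downstream consumers.
daily.write.mode("overwrite").partitionBy("order_date").parquet(
    "/data/curated/daily_revenue/"
)
spark.stop()
```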

Cloud Data Engineering:

  • Manage cloud infrastructure and cost optimization for data processing workloads
  • Implement CI/CD pipelines for data workflows to ensure smooth and reliable deployments.
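
In practice, CI/CD for data workflows often includes a parse test over the Airflow DAG folder that runs before every deployment. A hedged sketch (the dags/ path is an assumption):

```python
# CI sanity check: import every DAG file the way the scheduler would and
# fail the build on any import error. Runs under pytest in any CI system.
import pytest
from airflow.models import DagBag

@pytest.fixture(scope="session")
def dag_bag():
    return DagBag(dag_folder="dags/", include_examples=False)

def test_no_import_errors(dag_bag):
    # Syntax errors, missing dependencies, and DAG cycles all surface here.
    assert dag_bag.import_errors == {}, f"DAG import errors: {dag_bag.import_errors}"

def test_dags_present(dag_bag):
    # Guard against deploying an empty DAG folder.
    assert len(dag_bag.dags) > 0
```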

Big Data & Analytics:

  • Build and optimize large-scale data processing pipelines using Apache Spark and PySpark
  • Implement data partitioning, caching, and performance tuning for Spark-based workloads (a tuning sketch follows this list).
  • Work with diverse data formats (structured and unstructured) to support advanced analytics and machine learning initiatives.
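
To make the tuning items above concrete, a minimal PySpark sketch of repartitioning, caching a reused DataFrame, and hinting a broadcast join; the table paths, column names, and partition count are hypothetical.

```python
# Common Spark tuning moves: repartition by the join key, cache a reused
# DataFrame, and broadcast a small dimension table to avoid a shuffle.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("tuning_sketch").getOrCreate()

events = spark.read.parquet("/data/events/")       # large fact table (assumed)
products = spark.read.parquet("/data/products/")   # small dimension table (assumed)

# Repartition on the join key to spread the shuffle evenly.
events = events.repartition(200, "product_id")

# Broadcast the small table so the join skips a shuffle; cache the result
# because two aggregations below reuse it.
joined = events.join(F.broadcast(products), "product_id")
joined.cache()

per_product = joined.groupBy("product_id").agg(F.sum("amount").alias("revenue"))
per_category = joined.groupBy("category").agg(F.avg("amount").alias("avg_amount"))

per_product.write.mode("overwrite").parquet("/data/out/per_product/")
per_category.write.mode("overwrite").parquet("/data/out/per_category/")

joined.unpersist()
spark.stop()
```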

Workflow Orchestration (Airflow):

  • Design and maintain DAGs (Directed Acyclic Graphs) in Airflow to automate complex data workflows (a minimal DAG sketch follows this list)
  • Monitor, troubleshoot, and optimize job execution and dependencies
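
A minimal sketch of such a DAG: three Python tasks wired extract -> transform -> load on a daily schedule with retries. Task names, schedule, and retry settings are assumptions; the schedule= argument assumes Airflow 2.4+ (older versions use schedule_interval=).

```python
# Minimal Airflow DAG: extract -> transform -> load, daily, with retries.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull raw data")          # placeholder task body

def transform():
    print("clean and aggregate")    # placeholder task body

def load():
    print("publish to warehouse")   # placeholder task body

with DAG(
    dag_id="daily_revenue_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
    default_args={"retries": 2, "retry_delay": timedelta(minutes=5)},
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    # The >> operator declares the edges of the acyclic graph.
    t_extract >> t_transform >> t_load
```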

Required Skills & Experience:

  • 4+ years of experience in data engineering, with expertise in PySpark and Spark.
  • Strong programming skills in Python and SQL, with the ability to write efficient and maintainable code
  • Deep understanding of Spark internals (RDDs, DataFrames, DAG execution, partitioning, etc.; see the plan-inspection sketch after this list)
  • Experience with Airflow DAGs, scheduling, and dependency management
  • Knowledge of Git, Docker, and Kubernetes, with the ability to apply DevOps best practices to CI/CD workflows
  • Experience with a cloud platform such as Azure or AWS is an advantage.
  • Excellent problem-solving skills and ability to optimize large-scale data processing.
  • Experience in Agile/Scrum environments
  • Strong communication and collaboration skills, with the ability to work effectively with remote teams
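
As a concrete example of the Spark-internals knowledge listed above, a small sketch that inspects the physical plan behind a DataFrame aggregation and the partitioning of its underlying RDD; the data here is synthetic.

```python
# Peek under the DataFrame API: query plan and RDD partitioning.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("internals_sketch").getOrCreate()

df = spark.range(1_000_000).withColumn("bucket", F.col("id") % 10)
agg = df.groupBy("bucket").count()

# explain() prints the physical plan; the Exchange node is the shuffle
# that the groupBy triggers across partitions.
agg.explain()

# DataFrames compile down to RDDs; the RDD view exposes partition counts.
print("input partitions:", df.rdd.getNumPartitions())

spark.stop()
```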

Bonus Points:

  • Experience with data modeling and data warehousing concepts
  • Familiarity with data visualization tools and techniques
  • Knowledge of machine learning algorithms and frameworks

CIRCANA

Market Research / Analytics

Chicago
