Experience

8 - 12 years

Salary

20 - 35 Lacs

Posted: 1 day ago | Platform: Naukri

Work Mode

Work from Office

Job Type

Full Time

Job Description

Job Summary:

We are seeking an experienced Data Engineer to design, build, and operate reliable data pipelines, data models, and large-scale data platforms in the cloud, supporting both analytical and machine learning use cases.

Key Responsibilities:

  • Data Pipeline Development:

    Design, develop, and maintain efficient and reliable ETL/ELT pipelines for ingesting, processing, and transforming large volumes of structured and unstructured data from diverse sources (e.g., databases, APIs, streaming sources, cloud storage); a minimal orchestration sketch follows this list.

  • Data Architecture & Modeling:

    Design and maintain optimal data models, schemas, and database structures for relational (SQL) and NoSQL databases, data warehouses, and data lakes to support analytical and operational use cases.

  • Big Data & Cloud Solutions:

    Work with big data technologies such as Apache Spark, Hadoop, Kafka, or Flink for large-scale data processing (batch and streaming); a short batch-processing sketch also follows this list.

  • Collaboration & Support:

    Collaborate closely with data scientists to productionize machine learning models and ensure data availability for model training and inference.

  • Data Governance & Security:

    Implement and enforce data security controls, access management policies, and best practices to protect sensitive information.

  • Monitoring & Optimization:

    Monitor data pipeline performance, troubleshoot issues, and implement optimizations to enhance reliability, efficiency, and cost-effectiveness.

  • Documentation:

    Document technical designs, data flows, workflows, and best practices to facilitate knowledge sharing and maintain comprehensive system documentation.
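
Since the qualifications below call out Apache Airflow and name Python as the preferred language, here is a minimal sketch of how an ETL/ELT pipeline like the one described above might be orchestrated with Airflow's TaskFlow API (Airflow 2.x assumed). The DAG name, sample records, and conversion factor are hypothetical and exist only to show the extract/transform/load shape, not any actual pipeline from this role.

```python
# Illustrative sketch only: DAG name, sample data, and the conversion
# rate below are hypothetical, not taken from the job posting.
from datetime import datetime

from airflow.decorators import dag, task


@dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False)
def orders_elt():
    @task
    def extract() -> list[dict]:
        # A real task would pull from a database, API, or cloud storage;
        # here we return a static sample.
        return [{"order_id": 1, "amount": 120.0}, {"order_id": 2, "amount": 75.5}]

    @task
    def transform(rows: list[dict]) -> list[dict]:
        # Example transformation: enrich each record with a derived field.
        return [{**r, "amount_usd": round(r["amount"] * 0.012, 2)} for r in rows]

    @task
    def load(rows: list[dict]) -> None:
        # A real task would write to a warehouse table; we just log.
        print(f"Loading {len(rows)} rows into the warehouse")

    load(transform(extract()))


orders_elt()
```

In production, the extract and load steps would typically go through Airflow connections and hooks rather than passing lists between tasks via XCom.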

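As a sketch of the batch side of the Spark responsibilities above, the following PySpark job aggregates raw JSON events into a partitioned Parquet table. The bucket paths and the event_ts/user_id column names are hypothetical; a streaming variant would swap spark.read for spark.readStream.

```python
# Illustrative sketch only: file paths and column names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("daily-batch-example").getOrCreate()

# Read raw event data as a batch; a streaming job would use spark.readStream.
events = spark.read.json("s3a://example-bucket/raw/events/")

# Aggregate events per user per day.
daily = (
    events
    .withColumn("event_date", F.to_date("event_ts"))
    .groupBy("user_id", "event_date")
    .agg(F.count("*").alias("event_count"))
)

# Write partitioned Parquet for downstream analytics.
daily.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3a://example-bucket/curated/daily_events/"
)
```
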
Required Qualifications:

  • 8+ years of proven experience as a Data Engineer, Software Engineer with a data focus, or in a similar role.
  • Strong proficiency in at least one programming language commonly used in data engineering (Python is highly preferred; Java or Scala also considered).
  • Expertise in SQL and strong experience with relational databases (e.g., PostgreSQL, MySQL, SQL Server) and database design.
  • Hands-on experience with building and optimizing ETL/ELT pipelines.
  • Familiarity with big data technologies (e.g., Apache Spark, Hadoop, Kafka).
  • Experience with at least one major cloud platform (e.g., AWS, GCP, or Azure) and its data services.
  • Solid understanding of data warehousing concepts and data modeling techniques (dimensional modeling, Kimball, Inmon); a small star-schema sketch follows this list.
  • Experience with workflow orchestration tools (e.g., Apache Airflow) and transformation frameworks such as dbt.
  • Proficiency with version control systems (e.g., Git).
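
To illustrate the dimensional modeling expertise expected above, here is a tiny star schema (one fact table joined to two dimensions) built in SQLite from Python. All table and column names are hypothetical; a real warehouse design would follow Kimball conventions at much larger scale.

```python
# A minimal star schema, purely to illustrate dimensional modeling;
# table and column names are hypothetical, not from the posting.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

cur.executescript("""
CREATE TABLE dim_customer (
    customer_key INTEGER PRIMARY KEY,
    customer_name TEXT
);
CREATE TABLE dim_date (
    date_key INTEGER PRIMARY KEY,   -- smart key, e.g. 20240115
    full_date TEXT
);
CREATE TABLE fact_sales (
    customer_key INTEGER REFERENCES dim_customer(customer_key),
    date_key INTEGER REFERENCES dim_date(date_key),
    amount REAL
);
""")

cur.execute("INSERT INTO dim_customer VALUES (1, 'Acme Corp')")
cur.execute("INSERT INTO dim_date VALUES (20240115, '2024-01-15')")
cur.execute("INSERT INTO fact_sales VALUES (1, 20240115, 250.0)")

# A typical dimensional query: facts joined to dimensions, then aggregated.
cur.execute("""
SELECT c.customer_name, d.full_date, SUM(f.amount) AS total_sales
FROM fact_sales f
JOIN dim_customer c ON c.customer_key = f.customer_key
JOIN dim_date d ON d.date_key = f.date_key
GROUP BY c.customer_name, d.full_date
""")
print(cur.fetchall())  # [('Acme Corp', '2024-01-15', 250.0)]
```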
