Data Engineer

4 - 6 years

15 - 25 Lacs

Platform: Naukri


Work Mode

Hybrid

Job Type

Full Time

Job Description

(*Note: This is a requirement for one of Uplers' clients.)

  • Design, develop, and maintain robust ETL/ELT pipelines using Azure Data Factory (ADF), Databricks (PySpark/Scala), and SQL Server.
  • Work on the Bronze-Silver-Gold (Medallion) architecture to organize data ingestion, cleansing, and transformation layers efficiently.
  • Ingest and process both structured and unstructured data from diverse sources such as APIs, cloud storage, transactional systems, and logs.
  • Collaborate with data scientists, analysts, and business teams to ensure data availability and reliability for downstream use cases.
  • Implement CI/CD for data pipelines using DevOps best practices (Azure DevOps / GitHub Actions).
  • Optimize data processing performance on Databricks using Delta Lake, caching, Z-Ordering, and cluster tuning.
  • Ensure data quality, observability, and lineage through tools like Great Expectations, OpenLineage, and dbt.
  • Support metadata management and documentation for governance and compliance using tools like Apache Atlas or Unity Catalog.
  • Explore and integrate open-source data tooling to supplement cloud-native workflows.
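The Bronze-Silver-Gold (Medallion) layering in the responsibilities above can be sketched as follows. This is a simplified, runnable illustration in plain Python; in practice each layer would be a Delta table read and written with PySpark on Databricks, and all table and column names here are illustrative only.

```python
# Simplified illustration of the Bronze-Silver-Gold (Medallion) flow.
# In a real pipeline each layer is a Delta table managed via PySpark on
# Databricks; plain Python lists/dicts stand in so the sketch is runnable.

# Bronze: raw ingested events, kept as-is (duplicates, bad rows and all).
bronze = [
    {"order_id": 1, "amount": "120.50", "region": "EU"},
    {"order_id": 1, "amount": "120.50", "region": "EU"},   # duplicate
    {"order_id": 2, "amount": None,     "region": "US"},   # bad amount
    {"order_id": 3, "amount": "75.00",  "region": "US"},
]

def to_silver(rows):
    """Cleanse and conform: drop null amounts, cast types, dedupe on key."""
    seen, silver = set(), []
    for r in rows:
        if r["amount"] is None or r["order_id"] in seen:
            continue
        seen.add(r["order_id"])
        silver.append({**r, "amount": float(r["amount"])})
    return silver

def to_gold(rows):
    """Aggregate to a business-ready view: revenue per region."""
    gold = {}
    for r in rows:
        gold[r["region"]] = gold.get(r["region"], 0.0) + r["amount"]
    return gold

silver = to_silver(bronze)
gold = to_gold(silver)
print(gold)  # {'EU': 120.5, 'US': 75.0}
```

The key design point of the Medallion pattern is that Bronze is never mutated: cleansing and business logic live in reproducible Silver and Gold transformations, so a bug fix means re-running a layer rather than re-ingesting source data.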

Technical Skills Required

  • Cloud Platform - Azure (Data Factory, Data Lake Gen2, Blob Storage, Azure SQL DB, Key Vault, Azure DevOps)
  • Compute & Processing - Databricks (Spark, PySpark, Delta Lake), SQL Server
  • Orchestration - Azure Data Factory, dbt, Airflow (nice to have)
  • Data Modeling - Dimensional Modeling, Data Vault (plus), Star/Snowflake schemas
  • Programming Languages - Python, SQL, Spark (PySpark/Scala)
  • CI/CD & DevOps - Azure DevOps, Git, GitHub Actions
  • Data Observability & Quality - Great Expectations, Soda SQL, Monte Carlo (plus)
  • Metadata Management - Unity Catalog, Apache Atlas
  • Monitoring & Logging - Datadog, Azure Monitor, Log Analytics
  • Open-Source Tools (Plus) - dbt, Apache Kafka, Apache Airflow, OpenLineage
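The data-observability and quality tooling listed above (Great Expectations, Soda SQL) automates checks like the hand-rolled sketch below. This is not the actual API of either tool, just the underlying idea: declarative expectations evaluated against a dataset, returning pass/fail plus the offending rows.

```python
# Hand-rolled sketch of the kind of checks Great Expectations or Soda
# automate; this is NOT their real API, only the underlying idea.

def expect_column_values_not_null(rows, column):
    """Fail if any row has a null in the given column."""
    failures = [i for i, r in enumerate(rows) if r.get(column) is None]
    return {"success": not failures, "failed_rows": failures}

def expect_column_values_between(rows, column, low, high):
    """Fail if any non-null value falls outside [low, high]."""
    failures = [i for i, r in enumerate(rows)
                if r.get(column) is not None and not (low <= r[column] <= high)]
    return {"success": not failures, "failed_rows": failures}

rows = [{"amount": 120.5}, {"amount": None}, {"amount": -3.0}]
print(expect_column_values_not_null(rows, "amount"))
print(expect_column_values_between(rows, "amount", 0, 10_000))
```

In production, such checks typically run as a pipeline step after each Medallion layer, with failures blocking promotion of data to the next layer and surfacing in monitoring dashboards.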

Qualifications

  • Bachelor's or Master's degree in Computer Science, Information Systems, or a related field.
  • 4-6 years of hands-on experience in data engineering with strong exposure to Azure and Databricks ecosystems.
  • Experience working in a modern data stack and familiarity with open-source best practices.
  • Strong understanding of data warehouse concepts, ETL frameworks, and cloud-native data pipelines.
  • Familiarity with data governance, lineage, cataloging, and access control.
  • Strong problem-solving skills, attention to detail, and ability to work in agile, fast-paced environments.

Good to Have

  • Certification as an Azure Data Engineer Associate or Databricks Data Engineer Associate/Professional.
  • Exposure to Data Mesh, Lakehouse architectures, and streaming systems like Kafka or Azure Event Hubs.
  • Familiarity with machine learning pipeline orchestration (e.g., MLflow, Feature Store).


Uplers

Digital Services

Ahmedabad
