
Databricks Developer

6 - 11 years

15 - 30 Lacs

Posted: 13 hours ago | Platform: Naukri


Work Mode

Work from Office

Job Type

Full Time

Job Description

We are seeking an experienced Databricks Developer / Data Engineer to design, develop, and optimize data pipelines, ETL workflows, and big data solutions using Databricks. The ideal candidate should have expertise in Apache Spark, PySpark, SQL, and cloud-based data platforms (Azure, AWS, GCP). This role involves working with large-scale datasets, data lakes, and data warehouses to drive business intelligence and analytics.
Key Responsibilities

• Design, build, and optimize ETL/ELT pipelines using Databricks and Apache Spark.
• Work with big data processing frameworks (PySpark, Scala, SQL) for data transformation and analytics.
• Implement Delta Lake architecture for data reliability, ACID transactions, and schema evolution.
• Integrate Databricks with cloud services such as Azure Data Lake, AWS S3, GCP BigQuery, and Snowflake.
• Develop and maintain data models, data lakes, and data warehouse solutions.
• Optimize Spark performance through tuning, job scheduling, and cluster configuration.
• Work with Azure Synapse, AWS Glue, or GCP Dataflow to enable seamless data integration.
• Implement CI/CD automation for data pipelines using Azure DevOps, GitHub Actions, or Jenkins.
• Perform data quality checks, validation, and governance using Databricks Unity Catalog.
• Collaborate with data scientists, analysts, and business teams to support analytics and AI/ML models.

Required Skills & Qualifications

• 6+ years of experience in data engineering and big data technologies.
• Strong expertise in Databricks, Apache Spark, and PySpark/Scala.
• Hands-on experience with SQL, NoSQL, and structured/unstructured data processing.
• Experience with cloud platforms (Azure, AWS, GCP) and their data services.
• Proficiency in Python, SQL, and Spark optimization.
• Experience with Delta Lake, Lakehouse architecture, and metadata management.
• Strong understanding of ETL/ELT processes, data lakes, and data warehousing concepts.
• Experience with streaming data processing (Kafka, Event Hubs, Kinesis, etc.).
• Knowledge of security best practices, role-based access control (RBAC), and compliance.
• Experience with Agile methodologies and working in cross-functional teams.

Preferred Qualifications

• Databricks certifications (Databricks Certified Data Engineer Associate/Professional).
• Experience with Machine Learning and AI/ML pipelines on Databricks.
• Hands-on experience with Terraform, CloudFormation, or other Infrastructure as Code (IaC) tools.

Hanker Systems (India)

Information Technology

Gurgaon

51 - 200 Employees

140 Jobs

Key People

• Amit Verma, CEO
• Ritika Sharma, CTO
