Experience: 3 - 7 years

Salary: 10 - 20 Lacs

Posted: 4 hours ago | Platform: Naukri

Work Mode: Hybrid

Job Type: Full Time

Job Description

Job Summary

KloudPortal Technology Solutions is looking for an experienced PySpark developer to build and optimise large-scale data pipelines on Databricks and AWS.

Key Responsibilities

  • Develop and optimise scalable PySpark applications on Databricks (see the sketch after this list).
  • Work with AWS services (S3, EMR, Glue, Lambda) for data processing.
  • Integrate streaming and batch data sources using Kafka.
  • Tune Spark jobs for performance and cost efficiency.
  • Collaborate with DevOps, product, and analytics teams.
  • Ensure data governance, lineage, and quality compliance.
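
The responsibilities above centre on combining batch and streaming sources in PySpark on Databricks. As a purely illustrative sketch (not part of the role description), the snippet below reads a batch table from S3, consumes a Kafka topic with Structured Streaming, joins the two, and writes Parquet; every bucket, broker, topic, and column name is a hypothetical placeholder.

    # Illustrative sketch only: integrates a Kafka stream with batch S3 data.
    # All paths, broker addresses, topics, and columns are hypothetical.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("orders-pipeline").getOrCreate()

    # Batch source: a dimension table landed in S3 as Parquet.
    orders = spark.read.parquet("s3://example-bucket/silver/orders/")

    # Streaming source: order events arriving on a Kafka topic.
    events = (
        spark.readStream
        .format("kafka")
        .option("kafka.bootstrap.servers", "broker:9092")
        .option("subscribe", "order-events")
        .load()
        .select(
            F.get_json_object(F.col("value").cast("string"), "$.order_id").alias("order_id"),
            F.col("value").cast("string").alias("payload"),
        )
    )

    # Stream-static join: enrich each streaming event with batch attributes.
    enriched = events.join(orders, on="order_id", how="left")

    # Land the enriched stream as Parquet with a checkpoint for recovery.
    (enriched.writeStream
        .format("parquet")
        .option("path", "s3://example-bucket/bronze/order_events/")
        .option("checkpointLocation", "s3://example-bucket/_checkpoints/order_events/")
        .start())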

Required Skills

  • 3-7 years of strong PySpark development experience.
  • Hands-on Databricks (Spark UI, performance tuning).
  • Good understanding of AWS services (S3, EMR, Glue, Lambda).
  • Experience with Kafka for streaming/batch processing.
  • Spark optimisation (partitioning, caching, joins); a brief sketch follows this list.
  • Data formats: Parquet, Avro, ORC.
  • Orchestration: Airflow / Databricks Workflows.
  • Scala is a strong plus.
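
Since the list names specific optimisation levers (partitioning, caching, broadcast-style joins), here is a minimal, hedged sketch of how they typically appear in PySpark code; the dataset paths, column names, and partition count of 200 are assumptions made for illustration, not values from this posting.

    # Illustrative sketch of the Spark optimisation techniques named above.
    # Paths, column names, and the partition count are hypothetical assumptions.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("tuning-demo").getOrCreate()

    facts = spark.read.parquet("s3://example-bucket/silver/transactions/")
    dims = spark.read.parquet("s3://example-bucket/silver/merchants/")

    # Partitioning: repartition on the join key to balance shuffle work.
    facts = facts.repartition(200, "merchant_id")

    # Caching: persist a DataFrame that several downstream actions reuse.
    facts.cache()

    # Joins: broadcast the small dimension table to avoid a shuffle join.
    enriched = facts.join(F.broadcast(dims), on="merchant_id", how="left")

    daily = enriched.groupBy("merchant_id", "txn_date").agg(
        F.sum("amount").alias("total_amount")
    )

    # Columnar output: Parquet here; Avro and ORC use the same writer API
    # via .format("avro") or .format("orc").
    (daily.write.mode("overwrite")
        .partitionBy("txn_date")
        .parquet("s3://example-bucket/gold/daily_totals/"))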

Qualifications

  • Bachelor’s or Master’s in Computer Science / IT.
