
Experience: 5 - 10 years

Salary: 3 - 15 Lacs

Posted: 6 days ago | Platform: GlassDoor

Work Mode

On-site

Job Type

Full Time

Job Description

Job Title: PySpark Developer
Experience: 5–10 Years
Location: Hyderabad, Bangalore & Chennai
Employment Type: Full Time, Permanent

We are hiring a Senior Python & PySpark Developer with 5–10 years of experience in building scalable, high-performance data solutions. The ideal candidate should have expertise in data processing, ETL development, and performance optimization using Python and PySpark in big data environments.

You will be responsible for designing and developing robust data pipelines, working with large datasets, and ensuring data quality and consistency across systems.

Key Responsibilities:

  • Design, develop, and maintain data pipelines and ETL workflows using Python and PySpark.
  • Optimize PySpark jobs for performance, scalability, and reliability in distributed data environments.
  • Collaborate with data engineers, architects, and analysts to transform business requirements into technical solutions.
  • Implement data validation, cleansing, and transformation processes.
  • Integrate data from multiple structured and unstructured sources.
  • Monitor, debug, and troubleshoot performance issues in ETL workflows and big data systems.
  • Ensure best practices in code quality, testing, and documentation.
  • Participate in code reviews, agile ceremonies, and technical discussions.

Required Skills:

  • Strong hands-on experience with Python and Apache Spark (PySpark).
  • Deep understanding of ETL concepts, data warehousing, and data integration.
  • Proficient in Spark SQL, RDDs, and DataFrames.
  • Experience working with large-scale data in distributed environments (HDFS, Hive, etc.).
  • Strong performance tuning and debugging skills for big data pipelines.
  • Solid understanding of data structures, algorithms, and parallel processing.
  • Experience with version control tools like Git and workflow tools like Apache Airflow.

Preferred/Good to Have:

  • Familiarity with cloud platforms (AWS EMR, Azure Data Lake, or GCP BigQuery).
  • Exposure to Kafka, NiFi, or similar data streaming tools.
  • Experience with CI/CD pipelines and DevOps practices in data engineering.
  • Knowledge of SQL and NoSQL databases.
  • Understanding of data security, governance, and compliance standards.

Education:

  • UG: Any Graduate in Computer Science, Engineering, or related field

Job Type: Full-time

Pay: ₹335,248.84 - ₹1,500,731.21 per year

Schedule:

  • Day shift

Application Question(s):

  • If selected, in how many days can you join?

Work Location: In person
