Work from Office
Full Time
• Develop and optimize scalable PySpark applications on Databricks.
• Work with AWS services (S3, EMR, Lambda, Glue) for cloud-native data processing.
• Integrate streaming and batch data sources, especially using Kafka.
• Tune Spark jobs for performance, memory, and compute efficiency.
• Collaborate with DevOps, product, and analytics teams on delivery and deployment.
• Ensure data governance, lineage, and quality compliance across all pipelines.
• 3-6 years of hands-on development in PySpark.
• Experience with Databricks and performance tuning using Spark UI.
• Strong understanding of AWS services, Kafka, and distributed data processing.
• Proficient in partitioning, caching, join optimization, and resource configuration.
• Familiarity with data formats like Parquet, Avro, and ORC.
• Exposure to orchestration tools (Airflow, Databricks Workflows).
• Scala experience is a strong plus.
Allegis Group
Hyderabad, Chennai, Bengaluru
12.0 - 22.0 Lacs P.A.