Posted: 4 hours ago
Work from Office | Full Time
We are seeking a highly skilled Data Engineer with expertise in ETL, PySpark, AWS, and big data technologies. The ideal candidate will have in-depth knowledge of Apache Spark, Python, and Java programming (Java 8 and above, including Lambdas, Streams, Exception Handling, Collections, etc.). Responsibilities include developing data processing pipelines using PySpark, creating Spark jobs for data transformation and aggregation, and optimizing query performance using file formats such as ORC, Parquet, and Avro. Candidates must also have hands-on experience with Spring Core, Spring MVC, Spring Boot, REST APIs, and cloud services such as AWS. This role involves designing scalable pipelines for batch and real-time analytics, performing data enrichment, and integrating with SQL databases.
Location: Pan India - Delhi NCR, Bangalore, Chennai, Pune, Kolkata, Ahmedabad, Mumbai, Hyderabad
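For illustration only (not part of the posting), the following is a minimal PySpark sketch of the kind of batch pipeline the responsibilities describe: read columnar data, transform and enrich it, aggregate, and write a partitioned, query-friendly output. The bucket paths, column names, and application name are hypothetical.

# Minimal batch ETL sketch in PySpark; all paths and columns are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("example-etl").getOrCreate()

# Read source data stored as Parquet (ORC and Avro sources are read similarly
# via spark.read.orc(...) or spark.read.format("avro")).
orders = spark.read.parquet("s3://example-bucket/raw/orders/")

# Transformation and enrichment: derive a date column and drop invalid records.
enriched = (
    orders
    .withColumn("order_date", F.to_date("order_ts"))
    .filter(F.col("amount") > 0)
)

# Aggregation: daily totals per region.
daily_revenue = (
    enriched
    .groupBy("region", "order_date")
    .agg(F.sum("amount").alias("total_amount"),
         F.count("*").alias("order_count"))
)

# Write the result partitioned by date in Parquet for efficient downstream queries.
daily_revenue.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-bucket/curated/daily_revenue/"
)

spark.stop()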