Posted: 21 hours ago
Platform:
Work from Office
Full Time
1. Design, develop, and maintain robust ETL/ELT pipelines using PySpark and other big data technologies. Optimize Spark jobs for performance and scalability.
2. Implement data transformations, aggregations, and joins over large datasets. Perform batch and real-time data processing tasks.
3. Collaborate with data scientists, analysts, and other engineers to understand requirements and deliver quality solutions. Integrate PySpark solutions with data warehouses (like Hive, Redshift, Snowflake) and other data stores.
4. Write clean, maintainable, and well-documented code. Follow version control and CI/CD practices using tools like Git, Jenkins, or Azure DevOps.
5. Troubleshoot data quality and performance issues. Monitor pipeline health and ensure SLAs are met.
6. Work with tools and platforms like Hadoop, Hive, HDFS, AWS EMR, Databricks, or Azure Synapse as required.
Tata Consultancy Services
Locations: Chennai (Tamil Nadu), Greater Kolkata Area, Bengaluru (Karnataka), Hyderabad (Telangana), Pune (Maharashtra)
Experience: Not specified
Salary: 10.0 - 30.0 Lacs P.A.