Job Description: Hadoop & ETL Developer
Location: Shastri Park, Delhi
Experience: 3+ years
Education: B.E./B.Tech/MCA/M.Sc. (IT or CS)/MS
Salary: Up to ₹80,000 (final offer depends on the interview and experience)
Notice Period: Immediate to 20 days
Only candidates from Delhi/NCR will be preferred
Job Summary:
We are looking for a Hadoop & ETL Developer with strong expertise in big data processing, ETL pipelines, and workflow automation. The ideal candidate will have hands-on experience in the Hadoop ecosystem, including HDFS, MapReduce, Hive, Spark, HBase, and PySpark, as well as expertise in real-time data streaming and workflow orchestration.
This role requires proficiency in designing and optimizing large-scale data pipelines to support enterprise data processing needs.
Key Responsibilities
Design, develop, and optimize ETL pipelines leveraging Hadoop ecosystem technologies.
Work extensively with HDFS, MapReduce, Hive, Sqoop, Spark, HBase, and PySpark for data processing and transformation.
Implement real-time and batch data ingestion using Apache NiFi, Kafka, and Airbyte.
Develop and manage workflow orchestration using Apache Airflow.
Perform data integration across structured and unstructured data sources, including MongoDB and Hadoop-based storage.
Optimize MapReduce and Spark jobs for performance, scalability, and efficiency.
Ensure data quality, governance, and consistency across the pipeline.
Collaborate with data engineering teams to build scalable and high-performance data solutions.
Monitor, debug, and enhance big data workflows to improve reliability and efficiency.
Required Skills & Experience:
3+ years of experience in Hadoop ecosystem (HDFS, MapReduce, Hive, Sqoop, Spark, HBase, PySpark).
Strong expertise in ETL processes, data transformation, and data warehousing.
Hands-on experience with Apache NiFi, Kafka, Airflow, and Airbyte.
Proficiency in SQL and handling structured and unstructured data.
Experience with NoSQL databases like MongoDB.
Strong programming skills in Python or Scala for scripting and automation.
Experience in optimizing Spark and MapReduce jobs for high-performance computing.
Good understanding of data lake architectures and big data best practices.
Preferred Qualifications
Experience in real-time data streaming and processing.
Familiarity with Docker/Kubernetes for deployment and orchestration.
Strong analytical and problem-solving skills with the ability to debug and optimize data workflows.
If you have a passion for big data, ETL, and large-scale data processing, we’d love to hear from you!
Job Types: Full-time, Contractual / Temporary
Pay: From ₹400,000.00 per year
Work Location: In person
NetProphets Cyberworks Pvt. Ltd.