5.0 - 8.0 years
20 - 30 Lacs
Hyderabad, Telangana, India
On-site
Hiresquad Resources is looking for talented Data Engineers to join our client's team. If you have solid experience with Python, Kafka Streams, PySpark, and Azure Databricks, and are ready to design and implement real-time data pipelines, we encourage you to apply for an immediate start in Hyderabad, Bengaluru, or Noida.

Roles and Responsibilities
- Lead the design, development, and implementation of real-time data pipelines using Kafka, Python, and Azure Databricks (a minimal sketch of such a pipeline follows this listing).
- Architect scalable data streaming and processing solutions to support complex healthcare data workflows.
- Develop, optimize, and maintain ETL/ELT pipelines for both structured and unstructured healthcare data.
- Ensure data integrity, security, and compliance with critical healthcare regulations (such as HIPAA and HITRUST).
- Collaborate with fellow data engineers, analysts, and business stakeholders to understand requirements and translate them into robust technical solutions.
- Troubleshoot and optimize Kafka streaming applications, Python scripts, and Databricks workflows.
- Mentor junior engineers, conduct thorough code reviews, and ensure adherence to best practices in data engineering.
- Stay current with the latest cloud technologies, big data frameworks, and industry trends.

Qualifications
- Experience: 5+ years of experience in data engineering.
- Mandatory tools: Python, Kafka Streams, PySpark, and Azure Databricks.

Skills
- Core proficiency: strong command of Kafka and Python.
- Real-time data processing: expertise in Kafka Streams, Kafka Connect, and Schema Registry.
- Cloud platforms: experience with Azure Databricks (or willingness to learn and adopt it quickly); hands-on experience with other cloud platforms (Azure preferred; AWS or GCP is a plus).
- Database and modeling: proficiency in SQL, NoSQL databases, and data modeling for big data processing.
- DevOps practices: knowledge of containerization (Docker, Kubernetes) and CI/CD pipelines for data applications.
- Domain knowledge (plus): experience with healthcare data (EHR, claims, HL7, FHIR, etc.) is a significant plus.
- Problem solving: strong analytical skills, a problem-solving mindset, and the ability to lead complex data projects.
- Communication: excellent communication and stakeholder management skills.

Interested? Call Rose at 9873538143 (or WhatsApp 8595800635), or email your resume to [HIDDEN TEXT].
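For context on the stack this role names, the following is a minimal, illustrative sketch of the kind of real-time pipeline described above: a PySpark Structured Streaming job that reads JSON events from a Kafka topic and appends them to a Delta table on Databricks. The broker address, topic name, event schema, and paths are all hypothetical placeholders, not the employer's actual configuration.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import from_json, col
from pyspark.sql.types import StructType, StructField, StringType, TimestampType

# On Databricks, `spark` already exists; getOrCreate() is a no-op there.
spark = SparkSession.builder.appName("kafka-to-delta").getOrCreate()

# Hypothetical event schema for illustration only.
event_schema = StructType([
    StructField("patient_id", StringType()),
    StructField("event_type", StringType()),
    StructField("event_time", TimestampType()),
])

# Read the raw Kafka stream (placeholder broker and topic).
raw = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "clinical-events")
    .option("startingOffsets", "latest")
    .load()
)

# Kafka values arrive as bytes; cast to string and parse the JSON payload.
events = (
    raw.select(from_json(col("value").cast("string"), event_schema).alias("e"))
    .select("e.*")
)

# Append parsed events to a Delta table, with a checkpoint for exactly-once bookkeeping.
(
    events.writeStream
    .format("delta")
    .option("checkpointLocation", "/tmp/checkpoints/clinical-events")
    .outputMode("append")
    .start("/tmp/delta/clinical_events")
)
```

The Kafka source connector ships with Databricks runtimes; on plain Spark it requires the spark-sql-kafka package on the classpath.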
Posted 2 weeks ago
4.0 - 6.0 years
20 - 35 Lacs
Noida, Hyderabad, Bengaluru
Hybrid
Hi all, greetings for the day!

We are currently hiring a Data Engineer (Python, PySpark, and Azure Databricks) for Emids (MNC) at the Bangalore location.

Role: Data Engineer
Experience: 5 to 8 years
Location: Bangalore, Noida, and Hyderabad (hybrid; two days per week in office required)
Notice period: immediate to 15 days (immediate joiners preferred)
Note: Candidates must have experience in Python, Kafka Streams, PySpark, and Azure Databricks.

Role Overview: We are looking for a highly skilled engineer with expertise in Kafka, Python, and Azure Databricks (preferred) to drive our healthcare data engineering projects. The ideal candidate will have deep experience in real-time data streaming, cloud-based data platforms, and large-scale data processing. This role requires strong technical leadership, problem-solving abilities, and the ability to collaborate with cross-functional teams.

Key Responsibilities:
- Lead the design, development, and implementation of real-time data pipelines using Kafka, Python, and Azure Databricks.
- Architect scalable data streaming and processing solutions to support healthcare data workflows.
- Develop, optimize, and maintain ETL/ELT pipelines for structured and unstructured healthcare data.
- Ensure data integrity, security, and compliance with healthcare regulations (HIPAA, HITRUST, etc.).
- Collaborate with data engineers, analysts, and business stakeholders to understand requirements and translate them into technical solutions.
- Troubleshoot and optimize Kafka streaming applications, Python scripts, and Databricks workflows.
- Mentor junior engineers, conduct code reviews, and ensure best practices in data engineering.
- Stay current with the latest cloud technologies, big data frameworks, and industry trends.

Required Skills & Qualifications:
- 4+ years of experience in data engineering, with strong proficiency in Kafka and Python.
- Expertise in Kafka Streams, Kafka Connect, and Schema Registry for real-time data processing (see the Schema Registry sketch after this listing).
- Experience with Azure Databricks (or willingness to learn and adopt it quickly).
- Hands-on experience with cloud platforms (Azure preferred; AWS or GCP is a plus).
- Proficiency in SQL, NoSQL databases, and data modeling for big data processing.
- Knowledge of containerization (Docker, Kubernetes) and CI/CD pipelines for data applications.
- Experience working with healthcare data (EHR, claims, HL7, FHIR, etc.) is a plus.
- Strong analytical skills, a problem-solving mindset, and the ability to lead complex data projects.
- Excellent communication and stakeholder management skills.

Note: This is not a contract position; it is a permanent position with Emids.

Interested candidates can share their updated profile to Ravi.chekka@emids.com with the following details: name, CCTC, ECTC, notice period, and offers in hand.
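As an illustration of the Schema Registry work this posting calls out, here is a minimal sketch of producing an Avro-encoded record with Confluent's confluent-kafka Python client. The Schema Registry URL, broker address, topic name, and the Claim schema are all invented for the example; a real deployment would pull these from configuration.

```python
from confluent_kafka import Producer
from confluent_kafka.schema_registry import SchemaRegistryClient
from confluent_kafka.schema_registry.avro import AvroSerializer
from confluent_kafka.serialization import SerializationContext, MessageField

# Hypothetical Avro schema for a healthcare claim event.
schema_str = """
{
  "type": "record",
  "name": "Claim",
  "fields": [
    {"name": "claim_id", "type": "string"},
    {"name": "amount", "type": "double"}
  ]
}
"""

# Placeholder endpoints; substitute your own cluster and registry.
sr_client = SchemaRegistryClient({"url": "http://localhost:8081"})
serializer = AvroSerializer(sr_client, schema_str)
producer = Producer({"bootstrap.servers": "localhost:9092"})

# Serialize the record against the registered schema, then produce it.
payload = serializer(
    {"claim_id": "C-1001", "amount": 250.0},
    SerializationContext("claims", MessageField.VALUE),
)
producer.produce(topic="claims", value=payload)
producer.flush()
```

Registering the schema centrally lets downstream consumers validate and evolve the record format without coordinating code deployments.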
Posted 3 weeks ago
6.0 - 7.0 years
11 - 14 Lacs
Mumbai, Delhi / NCR, Bengaluru
Work from Office
Location: Remote / Pan India (Mumbai, Delhi / NCR, Bengaluru, Kolkata, Chennai, Hyderabad, Ahmedabad, Pune)
Notice period: Immediate

iSource Services is hiring for one of their clients for the position of Java Kafka Developer.

We are seeking a highly skilled and motivated Confluent Certified Developer for Apache Kafka to join our growing team. The ideal candidate will possess a deep understanding of Kafka architecture, development best practices, and the Confluent platform. You will be responsible for designing, developing, and maintaining scalable and reliable Kafka-based data pipelines and applications. Your expertise will be crucial in ensuring the efficient and robust flow of data across our organization.

Responsibilities:
- Develop Kafka producers, consumers, and stream processing applications (a minimal consumer sketch follows this listing).
- Implement Kafka Connect connectors and configure Kafka clusters.
- Optimize Kafka performance and troubleshoot related issues.
- Utilize Confluent tools such as Schema Registry, Control Center, and ksqlDB.
- Collaborate with cross-functional teams and ensure compliance with data policies.

Qualifications:
- Bachelor's degree in Computer Science or a related field.
- Confluent Certified Developer for Apache Kafka certification.
- Strong programming skills in Java/Python.
- In-depth Kafka architecture and Confluent platform experience.
- Experience with cloud platforms and containerization (Docker, Kubernetes) is a plus.
- Experience with data warehousing and data lake technologies.
- Experience with CI/CD pipelines and DevOps practices.
- Experience with Infrastructure as Code tools such as Terraform or CloudFormation.
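For reference, the consumer side of the producer/consumer work described above looks roughly like the following minimal sketch, written with Confluent's confluent-kafka Python client (the posting lists Java/Python; Python is used here for consistency with the other examples). The broker address, consumer group, and topic name are placeholders.

```python
from confluent_kafka import Consumer

# Placeholder broker, group, and topic; replace with real cluster settings.
consumer = Consumer({
    "bootstrap.servers": "localhost:9092",
    "group.id": "claims-audit",
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["claims"])

try:
    while True:
        msg = consumer.poll(1.0)  # block up to 1 second for a message
        if msg is None:
            continue
        if msg.error():
            print(f"Consumer error: {msg.error()}")
            continue
        # Print topic, partition, offset, and raw bytes of each record.
        print(f"{msg.topic()}[{msg.partition()}]@{msg.offset()}: {msg.value()}")
finally:
    consumer.close()  # commit final offsets and leave the group cleanly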
Posted 1 month ago