8.0 - 13.0 years
25 - 30 Lacs
Hyderabad, Chennai, Bengaluru
Hybrid
Develop and maintain Kafka-based data pipelines for real-time processing. Implement Kafka producer and consumer applications for efficient data flow. Optimize Kafka clusters for performance, scalability, and reliability. Design and manage Grafana dashboards for monitoring Kafka metrics. Integrate Grafana with Elasticsearch or other data sources. Set up alerting mechanisms in Grafana for Kafka system health monitoring. Collaborate with DevOps, data engineers, and software teams. Ensure security and compliance in Kafka and Grafana implementations.
Requirements: 8+ years of experience configuring Kafka, Elasticsearch, and Grafana. Strong understanding of Apache Kafka architecture and Grafana visualization. Proficiency in .NET or Python for Kafka development. Experience with distributed systems and message-oriented middleware. Knowledge of time-series databases and monitoring tools. Familiarity with data serialization formats such as JSON. Expertise in Azure platforms and Kafka monitoring tools. Good problem-solving and communication skills.
Mandate: creation of Kafka dashboards; Python/.NET.
Note: Candidate must be an immediate joiner.
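For a sense of the producer work this posting describes, here is a minimal, hedged sketch of a reliability-tuned Kafka producer. The role asks for .NET or Python, but the client API is analogous across languages, so Java is used here for concreteness; the broker address, topic, and payload are placeholders.

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class ReliableProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092"); // placeholder
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.ACKS_CONFIG, "all");              // wait for all in-sync replicas
        props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true); // avoid duplicates on retry
        props.put(ProducerConfig.LINGER_MS_CONFIG, 10);            // small batching window

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("events", "key-1", "{\"status\":\"ok\"}"),
                (metadata, e) -> {
                    if (e != null) e.printStackTrace(); // surface delivery failures
                });
            producer.flush();
        }
    }
}
```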
Posted 19 hours ago
6.0 - 7.0 years
11 - 14 Lacs
Mumbai, Delhi / NCR, Bengaluru
Work from Office
Location: Remote / Pan India (Mumbai, Delhi / NCR, Bengaluru, Kolkata, Chennai, Hyderabad, Ahmedabad, Pune). Notice period: Immediate. iSource Services is hiring for one of its clients for the position of Java Kafka Developer. We are seeking a highly skilled and motivated Confluent Certified Developer for Apache Kafka to join our growing team. The ideal candidate will possess a deep understanding of Kafka architecture, development best practices, and the Confluent platform. You will be responsible for designing, developing, and maintaining scalable and reliable Kafka-based data pipelines and applications. Your expertise will be crucial in ensuring the efficient and robust flow of data across our organization. Develop Kafka producers, consumers, and stream processing applications. Implement Kafka Connect connectors and configure Kafka clusters. Optimize Kafka performance and troubleshoot related issues. Utilize Confluent tools like Schema Registry, Control Center, and ksqlDB. Collaborate with cross-functional teams and ensure compliance with data policies. Qualifications: Bachelor's degree in Computer Science or a related field. Confluent Certified Developer for Apache Kafka certification. Strong programming skills in Java/Python. In-depth Kafka architecture and Confluent platform experience. Experience with cloud platforms and containerization (Docker, Kubernetes) is a plus. Experience with data warehousing and data lake technologies. Experience with CI/CD pipelines and DevOps practices. Experience with Infrastructure as Code tools such as Terraform or CloudFormation.
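As a hedged illustration of the consumer side of such pipelines, here is a minimal Java consumer with manual offset commits, the usual pattern for at-least-once processing; the broker address, topic, and group id are placeholders.

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class OrderConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092"); // placeholder
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "order-processors");      // placeholder
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false); // commit only after processing

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("orders"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> r : records) {
                    System.out.printf("offset=%d key=%s value=%s%n", r.offset(), r.key(), r.value());
                }
                consumer.commitSync(); // at-least-once: commit after the batch is handled
            }
        }
    }
}
```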
Posted 3 days ago
4.0 - 9.0 years
5 - 13 Lacs
Thane, Goregaon, Mumbai (All Areas)
Work from Office
Opening with a leading insurance company. **Looking for an immediate joiner or 30 days' notice.** Key Responsibilities: Kafka Infrastructure Management: Design, implement, and manage Kafka clusters to ensure high availability, scalability, and security. Monitor and maintain Kafka infrastructure, including topics, partitions, brokers, Zookeeper, and related components. Perform capacity planning and scaling of Kafka clusters based on application needs and growth. Data Pipeline Development: Develop and optimize Kafka data pipelines to support real-time data streaming and processing. Collaborate with internal application development and data engineers to integrate Kafka with various HDFC Life data sources. Implement and maintain schema registry and serialization/deserialization protocols (e.g., Avro, Protobuf). Security and Compliance: Implement security best practices for Kafka clusters, including encryption, access control, and authentication mechanisms (e.g., Kerberos, SSL). Documentation and Support: Create and maintain documentation for Kafka setup, configurations, and operational procedures. Collaboration: Provide technical support and guidance to application development teams regarding Kafka usage and best practices. Collaborate with stakeholders to ensure alignment with business objectives. Interested candidates may share their resume at snehal@topgearconsultants.com
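To make the topic and partition management concrete, here is a small, hedged sketch using Kafka's Java AdminClient to provision a topic with an explicit replication factor and min-ISR. The topic name, partition count, and retention are illustrative, not a sizing recommendation.

```java
import java.util.List;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;
import org.apache.kafka.common.config.TopicConfig;

public class TopicProvisioner {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092"); // placeholder

        try (AdminClient admin = AdminClient.create(props)) {
            // 12 partitions, replication factor 3 (example values)
            NewTopic topic = new NewTopic("policy-events", 12, (short) 3)
                .configs(Map.of(
                    TopicConfig.MIN_IN_SYNC_REPLICAS_CONFIG, "2",    // survive one broker loss
                    TopicConfig.RETENTION_MS_CONFIG, "604800000"));  // 7-day retention
            admin.createTopics(List.of(topic)).all().get(); // block until the broker confirms
        }
    }
}
```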
Posted 3 days ago
8.0 - 13.0 years
5 - 12 Lacs
Hyderabad, Chennai, Bengaluru
Work from Office
Role & responsibilities: Looking for 8+ years of experience as a Kafka Administrator. Mandatory skills: ksqlDB development, with hands-on experience writing KSQL queries; Kafka Connect development experience; Kafka client stream application development; Confluent Terraform Provider. Skills: 8+ years of experience across development and support projects. 3+ years of hands-on experience in Kafka. Understanding of event streaming patterns and when to apply them. Designing, building, and operating in-production big data, stream processing, and/or enterprise data integration solutions using Apache Kafka. Working with different database solutions for data extraction, updates, and insertions. Identity and Access Management, including relevant protocols and standards such as OAuth, OIDC, SAML, and LDAP. Knowledge of networking protocols such as TCP, HTTP/2, and WebSockets. The candidate must work Australia hours [AWST]. Interview mode will be face to face. Interested candidates may share their updated resume at recruiter.wtr26@walkingtree.in
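For flavor, a hedged sketch of the kind of KSQL the mandate refers to: declaring a stream over an existing topic and materializing a persistent aggregate from it. The stream, topic, and column names are invented for illustration.

```sql
-- Declare a stream over an existing Kafka topic (schema is illustrative)
CREATE STREAM payments (account_id VARCHAR, amount DOUBLE)
  WITH (KAFKA_TOPIC = 'payments', VALUE_FORMAT = 'JSON');

-- Persistent query: running total per account, materialized as a table
CREATE TABLE account_totals AS
  SELECT account_id, SUM(amount) AS total_amount
  FROM payments
  GROUP BY account_id
  EMIT CHANGES;
```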
Posted 4 days ago
3.0 - 7.0 years
9 - 14 Lacs
Gurugram
Remote
Kafka/MSK, Linux. In-depth understanding of Kafka broker configurations, ZooKeeper, and connectors. Understanding of Kafka topic design and creation. Good knowledge of replication and high availability for Kafka systems. ElasticSearch/OpenSearch. Perks and benefits: PF, annual bonus, health insurance.
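As a hedged sketch of the broker-side replication and high-availability settings this role revolves around, a few representative server.properties entries; the values are examples only, and appropriate numbers depend on cluster size and durability goals.

```properties
# Illustrative broker settings for replication and high availability
# (example values, not prescriptions)

# Replicate every auto-created topic across 3 brokers
default.replication.factor=3
# With acks=all, a write needs 2 in-sync replicas to succeed
min.insync.replicas=2
# Never elect an out-of-sync replica as leader (prefer unavailability to data loss)
unclean.leader.election.enable=false
```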
Posted 5 days ago
3.0 - 7.0 years
10 - 15 Lacs
Pune
Work from Office
Responsibilities: * Manage Kafka clusters, brokers & messaging architecture * Collaborate with development teams on data pipelines * Monitor Kafka performance & troubleshoot issues. Benefits: health insurance, provident fund, annual bonus.
Posted 5 days ago
5.0 - 10.0 years
15 - 25 Lacs
Hyderabad, Pune, Gurugram
Work from Office
GSPANN is looking for an experienced Kafka Developer with strong Java skills to join our growing team. If you have hands-on experience with Kafka components and are ready to work in a dynamic, client-facing environment, we'd love to hear from you! Key Responsibilities: Develop and maintain Kafka Producers, Consumers, Connectors, KStreams, and KTables. Collaborate with stakeholders to gather requirements and deliver customized solutions. Troubleshoot production issues and participate in Agile ceremonies. Optimize system performance and support deployments. Mentor junior team members and ensure coding best practices. Required Skills: 4+ years of experience as a Kafka Developer. Proficiency in Java. Strong debugging skills (Splunk experience is a plus). Experience in client-facing projects. Familiarity with Agile and DevOps practices. Good to Have: Knowledge of Google Cloud Platform (Dataflow, BigQuery, Kubernetes). Experience with production support and monitoring tools. Ready to join a collaborative and innovative team? Send your CV to heena.ruchwani@gspann.com
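A minimal, hedged sketch of the KStream/KTable work named above: counting events per key with Kafka Streams. The application id, broker address, and topic names are placeholders.

```java
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Produced;

public class ClickCounter {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "click-counter");   // placeholder app id
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092"); // placeholder broker
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        // KStream: the raw event flow, keyed by user id
        KStream<String, String> clicks = builder.stream("clicks");
        // KTable: continuously updated count per key, backed by a changelog topic
        KTable<String, Long> countsByUser = clicks.groupByKey().count();
        // Emit every table update to an output topic, with an explicit Long value serde
        countsByUser.toStream().to("clicks-per-user",
            Produced.with(Serdes.String(), Serdes.Long()));

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close)); // clean shutdown
        streams.start();
    }
}
```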
Posted 1 week ago
10.0 - 16.0 years
15 - 30 Lacs
Bengaluru, Mumbai (All Areas)
Work from Office
Gracenote, a Nielsen company, is dedicated to connecting audiences to the entertainment they love, powering a better media future for all people. Gracenote is the content data business unit of Nielsen that powers innovative entertainment experiences for the world's leading media companies. Our entertainment metadata and connected IDs deliver advanced content navigation and discovery to connect consumers to the content they love and help them discover new content. Gracenote's industry-leading datasets cover TV programs, movies, sports, music, and podcasts in 80 countries and 35 languages. Common identifiers, universally adopted by the world's leading media companies, deliver powerful cross-media entertainment experiences. Machine-driven, human-validated, best-in-class data and images fuel new search and discovery experiences across every screen. Gracenote's Data Organization is a dynamic and innovative group that is essential in delivering business outcomes through data, insights, and predictive & prescriptive analytics: an extremely motivated team that values creativity and experimentation through continuous learning in an agile and collaborative manner. From designing, developing, and maintaining data architecture that satisfies our business goals to managing data governance and region-specific regulations, the data team oversees the whole data lifecycle. Role Overview: We are seeking an experienced Senior Data Engineer with 10-12 years of experience to join our Video engineering team at Gracenote. In this role, you will design, build, and maintain our data processing systems and pipelines. You will work closely with product managers, architects, analysts, and other stakeholders to ensure data is accessible, reliable, and optimized for business, analytical, and operational needs. Key Responsibilities: Design, develop, and maintain scalable data pipelines and ETL processes. Architect and implement data warehousing solutions and data lakes. Optimize data flow and collection for cross-functional teams. Build infrastructure required for optimal extraction, transformation, and loading of data. Ensure data quality, reliability, and integrity across all data systems. Collaborate with data scientists and analysts to help implement models and algorithms. Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, etc. Create and maintain comprehensive technical documentation. Mentor junior engineers and provide technical leadership. Evaluate and integrate new data management technologies and tools. Implement optimization strategies to enable and maintain sub-second latency. Oversee data infrastructure to ensure robust deployment and monitoring of pipelines and processes. Stay ahead of emerging trends in data and cloud, integrating new research into practical applications. Mentor and grow a team of junior data engineers. Required Qualifications and Skills: Expert-level proficiency in Python, SQL, and big data tools (Spark, Kafka, Airflow).
Bachelor's degree in Computer Science, Engineering, or a related field; Master's degree preferred. Expert knowledge of SQL and experience with relational databases (e.g., PostgreSQL, Redshift, TiDB, MySQL, Oracle, Teradata). Extensive experience with big data technologies (e.g., Hadoop, Spark, Hive, Flink). Proficiency in at least one programming language such as Python, Java, or Scala. Experience with data modeling, data warehousing, and building ETL pipelines. Strong knowledge of data pipeline and workflow management tools (e.g., Airflow, Luigi, NiFi). Experience with cloud platforms (AWS, Azure, or GCP) and their data services; AWS preferred. Hands-on experience building streaming pipelines with Flink, Kafka, or Kinesis; Flink preferred. Understanding of data governance and data security principles. Experience with version control systems (e.g., Git) and CI/CD practices. Proven leadership skills in grooming data engineering teams. Preferred Skills: Experience with containerization and orchestration tools (Docker, Kubernetes). Basic knowledge of machine learning workflows and MLOps. Experience with NoSQL databases (MongoDB, Cassandra, etc.). Familiarity with data visualization tools (Tableau, Power BI, etc.). Experience with real-time data processing. Knowledge of data governance frameworks and compliance requirements (GDPR, CCPA, etc.). Experience with infrastructure-as-code tools (Terraform, CloudFormation).
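Since the posting prefers Flink-based streaming pipelines on Kafka, here is a hedged, minimal Java sketch of a Flink job reading from a Kafka topic, using the KafkaSource connector API found in recent Flink releases. The broker, topic, and group names are placeholders, and the print sink stands in for real transforms.

```java
import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.kafka.source.KafkaSource;
import org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class KafkaToStdout {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        KafkaSource<String> source = KafkaSource.<String>builder()
            .setBootstrapServers("broker1:9092")              // placeholder
            .setTopics("program-updates")                     // placeholder topic
            .setGroupId("metadata-pipeline")                  // placeholder group
            .setStartingOffsets(OffsetsInitializer.latest())
            .setValueOnlyDeserializer(new SimpleStringSchema())
            .build();

        env.fromSource(source, WatermarkStrategy.noWatermarks(), "kafka-source")
           .print();                                          // stand-in for real transforms/sinks

        env.execute("kafka-to-stdout");
    }
}
```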
Posted 1 week ago
8.0 - 13.0 years
25 - 40 Lacs
Chennai
Work from Office
Architect & Build Scalable Systems: Design and implement petabyte-scale lakehouse architectures to unify data lakes and warehouses. Real-Time Data Engineering: Develop and optimize streaming pipelines using Kafka, Pulsar, and Flink. Required Candidate Profile: Data engineering experience with large-scale systems. Expert proficiency in Java for data-intensive applications. Hands-on experience with lakehouse architectures, stream processing, and event streaming.
Posted 1 week ago
4.0 - 9.0 years
10 - 15 Lacs
Bengaluru
Work from Office
Senior Software Engineer - DevOps, Bangalore, India. Who we are: INVIDI Technologies Corporation is the world's leading developer of software transforming television all over the world. Our two-time Emmy Award-winning technology is widely deployed by cable, satellite, and telco operators. We provide a device-agnostic solution delivering ads to the right household no matter what program or network you're watching, how you're watching, or whether you're in front of your TV, laptop, cell phone or any other device. INVIDI created the multi-billion-dollar addressable television business that today is growing rapidly globally. INVIDI is right at the heart of the very exciting and fast-paced world of commercial television; companies benefiting from our software include DirecTV and Dish Network, networks such as CBS/Viacom and A&E, advertising agencies such as Ogilvy and Publicis, and advertisers such as Chevrolet and Verizon. INVIDI's world-class technology solutions are known for their flexibility and adaptability. These traits allow INVIDI partners to transform their video content delivery network, revamping legacy systems without significant capital or hardware investments. Our clients count on us to provide superior capabilities, excellent service, and ease of use. The goal of developing a unified video ad tech platform is a big one, and the right DevOps Engineer -- like you -- will flourish in INVIDI's creative, inspiring, and supportive culture. It is a demanding, high-energy, and fast-paced environment. INVIDI's developers are self-motivated quick studies, can-do individuals who embrace the challenge of solving difficult and complex problems. About the role: We are a modern agile product organization looking for an excellent DevOps engineer who can support and offload a remote product development team. Our platform handles tens of thousands of requests/second with sub-second response times across the globe. We serve ads to some of the biggest live events in the world, providing reports and forecasts based on billions of log rows. These are some of the complex challenges that make development and operational work at INVIDI interesting and rewarding. To accomplish this, we use the best frameworks and tools out there or, when they are not good enough, we write our own. Most of the code we write is Java or Kotlin on top of Dropwizard, but every problem is unique, and we always evaluate the best tools for the job. We work with technologies such as Kafka, Google Cloud (GKE, Pub/Sub), BigTable, Terraform, Jsonnet, and a lot more. The position will report directly to the Technical Manager of Software Development and will be based in our Chennai, India office. Key responsibilities: You will maintain, deploy and operate backend services in Java and Kotlin that are scalable, durable and performant. You will proactively evolve deployment pipelines and artifact generation. You will have a commitment to Kubernetes and infrastructure maintenance. You will troubleshoot incoming issues from support and clients, fixing and resolving what you can. You will collaborate closely with peers and product owners in your team. You will help other team members grow as engineers through code review, pairing, and mentoring. Our Requirements: You are an outstanding DevOps Engineer who loves to work with distributed high-volume systems. You care about the craft and cherish the opportunity to work with smart, supportive, and highly motivated colleagues.
You are curious; you like to learn new things, mentor, and share knowledge with team members. Like us, you strive to handle complexity by keeping things simple and elegant. As a part of the DevOps team, you will be on-call for the services and clusters that the team owns. You are on call for one week, approximately once or twice per month. While on-call, you are required to be reachable by telephone and able to act upon alarms using your laptop. Skills and qualifications: Master's degree in computer science, or equivalent. 4+ years of experience in the computer science industry. Strong development and troubleshooting skill sets. Ability to support a SaaS environment to meet service objectives. Ability to collaborate effectively and work well in an Agile environment. Excellent oral and written communication skills in English. Ability to quickly learn new technologies and work in a fast-paced environment. Highly Preferred: Experience building service applications with Dropwizard/Spring Boot. Experience with cloud services such as GCP and/or AWS. Experience with Infrastructure as Code tools such as Terraform. Experience in a Linux environment. Experience working with technologies such as SQL, Kafka, Kafka Streams. Experience with Docker. Experience with SCM and CI/CD tools such as Git and Bitbucket. Experience with build tools such as Gradle or Maven. Experience in writing Kubernetes deployment manifests and troubleshooting cluster and application-level issues. Physical Requirements: INVIDI is a conscious, clean, well-organized, and supportive office environment. Prolonged periods of sitting at a desk and working on a computer are normal. Note: Final candidates must successfully pass INVIDI's background screening requirements. Final candidates must be legally authorized to work in India. INVIDI has reopened its offices on a flexible hybrid model. Ready to join our team? Apply today!
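The qualifications above mention writing Kubernetes deployment manifests; here is a minimal, hedged example of such a manifest for a stateless backend service. The service name, image, and resource figures are placeholders.

```yaml
# Minimal Deployment manifest of the kind the posting mentions
# (image, names, and resource numbers are placeholders)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ad-decision-service
spec:
  replicas: 3
  selector:
    matchLabels:
      app: ad-decision-service
  template:
    metadata:
      labels:
        app: ad-decision-service
    spec:
      containers:
        - name: app
          image: registry.example.com/ad-decision-service:1.0.0
          ports:
            - containerPort: 8080
          readinessProbe:          # keep unready pods out of the Service
            httpGet:
              path: /healthcheck
              port: 8080
          resources:
            requests: { cpu: 250m, memory: 512Mi }
            limits: { memory: 1Gi }
```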
Posted 1 week ago
5.0 - 8.0 years
22 - 30 Lacs
Noida, Hyderabad, Bengaluru
Hybrid
Role: Data Engineer. Exp: 5 to 8 years. Location: Bangalore, Noida, and Hyderabad (hybrid; two days per week in office). NP: Immediate to 15 days (immediate joiners preferred). Note: Candidates must have experience in Python, Kafka Streams, PySpark, and Azure Databricks; candidates with experience only in PySpark and not Python will not be considered. Job Title: SSE - Kafka, Python, and Azure Databricks (Healthcare Data Project). Experience: 5 to 8 years. Role Overview: We are looking for a highly skilled engineer with expertise in Kafka, Python, and Azure Databricks (preferred) to drive our healthcare data engineering projects. The ideal candidate will have deep experience in real-time data streaming, cloud-based data platforms, and large-scale data processing. This role requires strong technical leadership, problem-solving abilities, and the ability to collaborate with cross-functional teams. Key Responsibilities: Lead the design, development, and implementation of real-time data pipelines using Kafka, Python, and Azure Databricks. Architect scalable data streaming and processing solutions to support healthcare data workflows. Develop, optimize, and maintain ETL/ELT pipelines for structured and unstructured healthcare data. Ensure data integrity, security, and compliance with healthcare regulations (HIPAA, HITRUST, etc.). Collaborate with data engineers, analysts, and business stakeholders to understand requirements and translate them into technical solutions. Troubleshoot and optimize Kafka streaming applications, Python scripts, and Databricks workflows. Mentor junior engineers, conduct code reviews, and ensure best practices in data engineering. Stay updated with the latest cloud technologies, big data frameworks, and industry trends. Required Skills & Qualifications: 4+ years of experience in data engineering, with strong proficiency in Kafka and Python. Expertise in Kafka Streams, Kafka Connect, and Schema Registry for real-time data processing. Experience with Azure Databricks (or willingness to learn and adopt it quickly). Hands-on experience with cloud platforms (Azure preferred; AWS or GCP is a plus). Proficiency in SQL, NoSQL databases, and data modeling for big data processing. Knowledge of containerization (Docker, Kubernetes) and CI/CD pipelines for data applications. Experience working with healthcare data (EHR, claims, HL7, FHIR, etc.) is a plus. Strong analytical skills, a problem-solving mindset, and the ability to lead complex data projects. Excellent communication and stakeholder management skills. Email: Sam@hiresquad.in
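Given the emphasis on Schema Registry above, here is a hedged Java sketch of producer settings wired to Confluent's Avro serializer, a common way to enforce schemas on event streams like the healthcare feeds this role describes. The addresses are placeholders.

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.ProducerConfig;

public class AvroProducerConfig {
    // Illustrative Confluent serializer settings; URLs and names are placeholders
    public static Properties build() {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                  "org.apache.kafka.common.serialization.StringSerializer");
        // KafkaAvroSerializer registers/validates schemas against Schema Registry,
        // which is how pipelines catch breaking schema changes early
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                  "io.confluent.kafka.serializers.KafkaAvroSerializer");
        props.put("schema.registry.url", "http://schema-registry:8081");
        return props;
    }
}
```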
Posted 1 week ago
5.0 - 8.0 years
20 - 30 Lacs
Noida, Hyderabad, Bengaluru
Hybrid
Looking for Data Engineers, immediate joiners only, for Hyderabad, Bengaluru, and Noida locations. *Must have experience in Python, Kafka Streams, PySpark, and Azure Databricks.* Role and responsibilities: Lead the design, development, and implementation of real-time data pipelines using Kafka, Python, and Azure Databricks. Architect scalable data streaming and processing solutions to support healthcare data workflows. Develop, optimize, and maintain ETL/ELT pipelines for structured and unstructured healthcare data. Ensure data integrity, security, and compliance with healthcare regulations (HIPAA, HITRUST, etc.). Collaborate with data engineers, analysts, and business stakeholders to understand requirements and translate them into technical solutions. Troubleshoot and optimize Kafka streaming applications, Python scripts, and Databricks workflows. Mentor junior engineers, conduct code reviews, and ensure best practices in data engineering. Stay updated with the latest cloud technologies, big data frameworks, and industry trends. Preferred candidate profile: 5+ years of experience in data engineering, with strong proficiency in Kafka and Python. Expertise in Kafka Streams, Kafka Connect, and Schema Registry for real-time data processing. Experience with Azure Databricks (or willingness to learn and adopt it quickly). Hands-on experience with cloud platforms (Azure preferred; AWS or GCP is a plus). Proficiency in SQL, NoSQL databases, and data modeling for big data processing. Knowledge of containerization (Docker, Kubernetes) and CI/CD pipelines for data applications. Experience working with healthcare data (EHR, claims, HL7, FHIR, etc.) is a plus. Strong analytical skills, a problem-solving mindset, and the ability to lead complex data projects. Excellent communication and stakeholder management skills. Interested, call: Rose (9873538143 / WA: 8595800635) rose2hiresquad@gmail.com
Posted 1 week ago
8.0 - 13.0 years
13 - 17 Lacs
Bengaluru
Work from Office
We are currently seeking a Cloud Solution Delivery Lead Consultant to join our team in Bengaluru, Karnataka (IN-KA), India (IN). Data Engineer Lead: Robust hands-on experience with industry-standard tooling and techniques, including SQL, Git, and CI/CD pipelines (mandatory). Management, administration, and maintenance of data streaming tools such as Kafka/Confluent Kafka and Flink. Experience with software support for applications written in Python & SQL. Administration, configuration, and maintenance of Snowflake & dbt. Experience with data product environments that use tools such as Kafka Connect, Snyk, Confluent Schema Registry, Atlan, IBM MQ, SonarQube, Apache Airflow, Apache Iceberg, DynamoDB, Terraform, and GitHub. Debugging issues, root cause analysis, and applying fixes. Management and maintenance of ETL processes (bug fixing and batch job monitoring). Training & Certification: Apache Kafka Administration; Snowflake Fundamentals/Advanced Training. Experience: 8 years of experience in a technical role working with AWS; at least 2 years in a leadership or management role.
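As a hedged illustration of the Kafka Connect administration mentioned above, here is a minimal sink-connector definition using the stock FileStreamSink connector; the topic, file path, and connector name are placeholders. It would typically be registered by POSTing the JSON to the Connect REST API, e.g. `curl -X POST -H "Content-Type: application/json" --data @sink.json http://connect:8083/connectors`.

```json
{
  "name": "events-file-sink",
  "config": {
    "connector.class": "org.apache.kafka.connect.file.FileStreamSinkConnector",
    "tasks.max": "1",
    "topics": "events",
    "file": "/tmp/events.out",
    "value.converter": "org.apache.kafka.connect.json.JsonConverter",
    "value.converter.schemas.enable": "false"
  }
}
```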
Posted 1 week ago
5.0 - 10.0 years
7 - 14 Lacs
Mumbai, Goregaon, Mumbai (All Areas)
Work from Office
Opening with an insurance company. **Looking for someone with a 30-day notice period.** Location: Mumbai (Lower Parel). Key Responsibilities: Kafka Infrastructure Management: Design, implement, and manage Kafka clusters to ensure high availability, scalability, and security. Monitor and maintain Kafka infrastructure, including topics, partitions, brokers, Zookeeper, and related components. Perform capacity planning and scaling of Kafka clusters based on application needs and growth. Data Pipeline Development: Develop and optimize Kafka data pipelines to support real-time data streaming and processing. Collaborate with internal application development and data engineers to integrate Kafka with various HDFC Life data sources. Implement and maintain schema registry and serialization/deserialization protocols (e.g., Avro, Protobuf). Security and Compliance: Implement security best practices for Kafka clusters, including encryption, access control, and authentication mechanisms (e.g., Kerberos, SSL). Documentation and Support: Create and maintain documentation for Kafka setup, configurations, and operational procedures. Collaboration: Provide technical support and guidance to application development teams regarding Kafka usage and best practices. Collaborate with stakeholders to ensure alignment with business objectives. Interested candidates may share their resume at snehal@topgearconsultants.com
Posted 1 week ago
5.0 - 10.0 years
15 - 30 Lacs
Hyderabad, Pune, Bengaluru
Work from Office
EPAM has a presence across 40+ countries globally, with 55,000+ professionals and numerous delivery centers. Key locations are North America, Eastern Europe, Central Europe, Western Europe, APAC, the Middle East, and development centers in India (Hyderabad, Pune & Bangalore). Location: Gurgaon/Pune/Hyderabad/Bengaluru/Chennai. Work Mode: Hybrid (2-3 days in office per week). Job Description: 5-14 years of experience in Big Data and related technologies. Expert-level understanding of distributed computing principles. Expert-level knowledge of and experience in Apache Spark. Hands-on programming with Python. Proficiency with Hadoop v2, MapReduce, HDFS, and Sqoop. Experience building stream-processing systems using technologies such as Apache Storm or Spark Streaming. Good understanding of Big Data querying tools such as Hive and Impala. Experience with integration of data from multiple data sources such as RDBMS (SQL Server, Oracle), ERP, and files. Good understanding of SQL queries, joins, stored procedures, and relational schemas. Experience with NoSQL databases such as HBase, Cassandra, and MongoDB. Knowledge of ETL techniques and frameworks. Performance tuning of Spark jobs. Experience with native cloud data services (AWS). Ability to lead a team efficiently. Experience with designing and implementing Big Data solutions. Practitioner of Agile methodology. WE OFFER: Opportunity to work on technical challenges that may impact across geographies. Vast opportunities for self-development: online university, knowledge-sharing opportunities globally, learning opportunities through external certifications. Opportunity to share your ideas on international platforms. Sponsored tech talks & hackathons. Possibility to relocate to any EPAM office for short- and long-term projects. Focused individual development. Benefit package: health benefits, medical benefits, retirement benefits, paid time off, flexible benefits. Forums to explore passions beyond work (CSR, photography, painting, sports, etc.).
Posted 2 weeks ago
4.0 - 9.0 years
9 - 13 Lacs
Bengaluru
Work from Office
About us: As a Fortune 50 company with more than 400,000 team members worldwide, Target is an iconic brand and one of America's leading retailers. Joining Target means promoting a culture of mutual care and respect and striving to make the most meaningful and positive impact. Becoming a Target team member means joining a community that values different voices and lifts each other up. Here, we believe your unique perspective is important, and you'll build relationships by being authentic and respectful. Overview about TII: At Target, we have a timeless purpose and a proven strategy, and that hasn't happened by accident. Some of the best minds from different backgrounds come together at Target to redefine retail in an inclusive learning environment that values people and delivers world-class outcomes. That winning formula is especially apparent in Bengaluru, where Target in India operates as a fully integrated part of Target's global team and has more than 4,000 team members supporting the company's global strategy and operations. Pyramid overview: A role with Target Data Science & Engineering means the chance to help develop and manage state-of-the-art predictive algorithms that use data at scale to automate and optimize decisions at scale. Whether you join our Statistics, Optimization or Machine Learning teams, you'll be challenged to harness Target's impressive data breadth to build the algorithms that power solutions our partners in Marketing, Supply Chain Optimization, Network Security and Personalization rely on. Position Overview: As a Senior Engineer on the Search Team, you serve as a specialist in the engineering team that supports the product. You help develop and gain insight into the application architecture. You can distill an abstract architecture into concrete design and influence the implementation. You show expertise in applying the appropriate software engineering patterns to build robust and scalable systems. You are an expert in programming and apply your skills in developing the product. You have the skills to design and implement the architecture on your own, but choose to influence your fellow engineers by proposing software designs and providing feedback on software designs and/or implementations. You leverage data science in solving complex business problems. You make decisions based on data. You show good problem-solving skills and can help the team in triaging operational issues. You leverage your expertise in eliminating repeat occurrences. About You: 4-year degree in a quantitative discipline (Science, Technology, Engineering, Mathematics) or equivalent experience. Experience with search engines like Solr and Elasticsearch. Strong hands-on programming skills in Java, Kotlin, Micronaut, and Python. Experience with PySpark, SQL, and Hadoop/Hive is an added advantage. Experience with streaming systems like Kafka; experience with Kafka Streams is an added advantage. Experience in MLOps is an added advantage. Experience in data engineering is an added advantage. Strong analytical thinking skills with an ability to creatively solve business problems, innovating new approaches where required. Able to produce reasonable documents/narratives suggesting actionable insights. Self-driven and results-oriented. Strong team player with the ability to collaborate effectively across geographies/time zones. Know More About Us here: Life at Target - https://india.target.com/ Benefits - https://india.target.com/life-at-target/workplace/benefits Culture - https://india.target.com/life-at-target/belonging
Posted 2 weeks ago
3.0 - 6.0 years
10 - 17 Lacs
Pune
Remote
Kafka/MSK, Linux. In-depth understanding of Kafka broker configurations, ZooKeeper, and connectors. Understanding of Kafka topic design and creation. Good knowledge of replication and high availability for Kafka systems. ElasticSearch/OpenSearch.
Posted 2 weeks ago
6.0 - 11.0 years
12 - 30 Lacs
Hyderabad
Work from Office
Proficient in Java 8 and Kafka. Must have experience with JUnit test cases. Strong in Spring Boot, microservices, SQL, ActiveMQ & RESTful APIs.
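Since this role pairs Kafka with JUnit, here is a hedged sketch of one common unit-testing pattern: Kafka's in-memory MockProducer, which lets a test assert on sends without a running broker. The topic, key, and payload are invented for illustration.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.apache.kafka.clients.producer.MockProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;
import org.junit.jupiter.api.Test;

class PublisherTest {
    @Test
    void sendsOneRecordToExpectedTopic() {
        // MockProducer records sends in memory; autoComplete=true acks immediately
        MockProducer<String, String> producer =
            new MockProducer<>(true, new StringSerializer(), new StringSerializer());

        // Stand-in for the code under test, which would receive the producer
        producer.send(new ProducerRecord<>("orders", "id-1", "created"));

        assertEquals(1, producer.history().size());
        assertEquals("orders", producer.history().get(0).topic());
    }
}
```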
Posted 2 weeks ago
4.0 - 8.0 years
27 - 42 Lacs
Hyderabad
Work from Office
Job Summary: We are looking for an experienced Infra Dev Specialist with 4 to 8 years of experience to join our team. The ideal candidate will have expertise in KSQL, Kafka Schema Registry, Kafka Connect, and Kafka. This role involves working in a hybrid model with day shifts and does not require travel. The candidate will play a crucial role in developing and maintaining our infrastructure to ensure seamless data flow and integration. Responsibilities: Develop and maintain infrastructure solutions using KSQL, Kafka Schema Registry, Kafka Connect, and Kafka. Oversee the implementation of data streaming and integration solutions to ensure high availability and performance. Provide technical support and troubleshooting for Kafka-related issues to minimize downtime and ensure data integrity. Collaborate with cross-functional teams to design and implement scalable and reliable data pipelines. Monitor and optimize the performance of Kafka clusters to meet the demands of the business. Ensure compliance with security and data governance policies while managing Kafka infrastructure. Implement best practices for data streaming and integration to enhance system efficiency. Conduct regular reviews and updates of the infrastructure to align with evolving business needs. Provide training and support to team members on Kafka-related technologies and best practices. Develop and maintain documentation for infrastructure processes and configurations. Participate in code reviews and contribute to the continuous improvement of the development process. Stay updated with the latest trends and advancements in Kafka and related technologies. Contribute to the overall success of the team by delivering high-quality infrastructure solutions. Qualifications: Possess strong experience in KSQL, Kafka Schema Registry, Kafka Connect, and Kafka. Demonstrate a solid understanding of data streaming and integration concepts. Have a proven track record of troubleshooting and resolving Kafka-related issues. Show expertise in designing and implementing scalable data pipelines. Exhibit knowledge of security and data governance practices in managing Kafka infrastructure. Display proficiency in monitoring and optimizing Kafka cluster performance. Have experience in providing technical support and training to team members. Be skilled in developing and maintaining infrastructure documentation. Stay informed about the latest trends in Kafka and related technologies. Possess excellent communication and collaboration skills. Have a proactive approach to problem-solving and continuous improvement. Demonstrate the ability to work effectively in a hybrid work model. Show commitment to delivering high-quality infrastructure solutions. Certifications Required: Certified Apache Kafka Developer.
Posted 3 weeks ago
4.0 - 7.0 years
8 - 13 Lacs
Pune
Work from Office
Responsibilities: * Monitor Kafka clusters for performance & availability * Manage Kafka broker instances & replication strategies * Collaborate with dev teams on data pipeline design & implementation. Benefits: food allowance, health insurance, provident fund, annual bonus.
Posted 3 weeks ago
5.0 - 8.0 years
16 - 25 Lacs
Bengaluru
Hybrid
Senior Software Engineer (Backend-Java), India. SRS is unlocking the possibilities of the new electric mobility future by delivering innovative software and services that empower utilities, cities, communities, and automakers to deploy EV charging infrastructure at scale. Our technology is connecting people to their destinations in a safer, cleaner, and smarter way. Headquartered in Los Angeles, CA, the company's global footprint spans three continents with deployments in 13 different countries. At SRS, we are looking for candidates who want to be a part of something bigger than themselves. What you will do: The ideal candidate is an integral part of a fast-paced development team that builds an integrated product suite of enterprise applications in the EV charging network domain. The candidate will participate in the technical design and implementation of one or more components of the product, and will work closely with the rest of the cross-functional team to produce design documents, implement product features, and develop and execute unit tests. Responsible for designing, developing, and delivering web and microservice API based applications. Develop consumer-facing features and architectural components to meet company demands. Collaborate with cross-functional teams, including our global engineering teams, in an Agile development environment. Proven experience successfully optimizing applications for scalability. Utilize problem-solving skills to implement creative solutions to tough problems. Advocate for best-in-class technology solutions for large-scale enterprise applications. Who we're looking for: If you find passion in the Company's mission, and your qualifications and interest align with the expectations below, we would love to talk to you about this position. • Bachelor's degree in Computer Science/Engineering or equivalent experience required. • 5-8 years of software development experience. • 5+ years of Java server-side design and development experience. Must Have: • Excellent knowledge of RESTful APIs • High proficiency in J2EE, Spring, Spring Boot and Hibernate • Experience with data models, SQL, and NoSQL. • Experience with AWS, RDS, Docker, Kubernetes. • Distributed caching (Redis), queuing technologies (ActiveMQ, Kafka), Elasticsearch • Excellent knowledge of microservices architecture and implementation. • Experience working in a small team setting along with an offshore development team. • Strong verbal and written communication skills; proven ability to lead both vertically and horizontally to achieve results; thrives in a dynamic, fast-paced environment and does what it takes to deliver results. Good to Have: • Experience with APM tools like Stackify or New Relic. • Experience in electric grid management solutions. • Experience in Angular or similar JavaScript frameworks. • Experience with GitHub/Bitbucket, Jira, Scrum, SonarCloud and CI/CD processes. • Working knowledge of Linux • Experience working on software-as-a-service (SaaS), large-scale distributed systems and relational/NoSQL databases
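As a hedged sketch of how the Spring Boot and Kafka pieces above typically meet, here is a minimal spring-kafka consumer. The topic, group, and class names are invented, and broker settings would normally come from spring.kafka.* application properties.

```java
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

@Component
public class ChargeSessionListener {

    // Spring Boot auto-configures the underlying consumer from spring.kafka.*
    // properties; topic and group names here are placeholders
    @KafkaListener(topics = "charge-sessions", groupId = "billing-service")
    public void onMessage(String payload) {
        // Hand off to the billing workflow; keep listener logic thin
        System.out.println("received: " + payload);
    }
}
```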
Posted 3 weeks ago
10.0 - 15.0 years
12 - 16 Lacs
Pune, Bengaluru
Work from Office
We are seeking a talented and experienced Kafka Architect with migration experience to Google Cloud Platform (GCP) to join our team. As a Kafka Architect, you will be responsible for designing, implementing, and managing our Kafka infrastructure to support our data processing and messaging needs, while also leading the migration of our Kafka ecosystem to GCP. You will work closely with our engineering and data teams to ensure seamless integration and optimal performance of Kafka on GCP. Responsibilities: Discovery, analysis, planning, design, and implementation of Kafka deployments on GKE, with a specific focus on migrating Kafka from AWS to GCP. Design, architect, and implement scalable, high-performance Kafka architectures and clusters to meet our data processing and messaging requirements. Lead the migration of our Kafka infrastructure from on-premises or other cloud platforms to Google Cloud Platform (GCP). Conduct thorough discovery and analysis of existing Kafka deployments on AWS. Develop and implement best practices for Kafka deployment, configuration, and monitoring on GCP. Develop a comprehensive migration strategy for moving Kafka from AWS to GCP. Collaborate with engineering and data teams to integrate Kafka into our existing systems and applications on GCP. Optimize Kafka performance and scalability on GCP to handle large volumes of data and high throughput. Plan and execute the migration, ensuring minimal downtime and data integrity. Test and validate the migrated Kafka environment to ensure it meets performance and reliability standards. Ensure Kafka security on GCP by implementing authentication, authorization, and encryption mechanisms. Troubleshoot and resolve issues related to Kafka infrastructure and applications on GCP. Ensure seamless data flow between Kafka and other data sources/sinks. Implement monitoring and alerting mechanisms to ensure the health and performance of Kafka clusters. Stay up to date with Kafka developments and GCP services to recommend and implement new features and improvements. Requirements: Bachelor's degree in computer science, engineering, or a related field (Master's degree preferred). Proven experience as a Kafka Architect or similar role, with a minimum of [5] years of experience. Deep knowledge of Kafka internals and ecosystem, including Kafka Connect, Kafka Streams, and KSQL. In-depth knowledge of Apache Kafka architecture, internals, and ecosystem components. Proficiency in scripting and automation for Kafka management and migration. Hands-on experience with Kafka administration, including cluster setup, configuration, and tuning. Proficiency in Kafka APIs, including Producer, Consumer, Streams, and Connect. Strong programming skills in Java, Scala, or Python. Experience with Kafka monitoring and management tools such as Confluent Control Center, Kafka Manager, or similar. Solid understanding of distributed systems, data pipelines, and stream processing. Experience leading migration projects to Google Cloud Platform (GCP), including migrating Kafka workloads. Familiarity with GCP services such as Google Kubernetes Engine (GKE), Google Cloud Storage, Google Cloud Pub/Sub, and BigQuery. Excellent communication and collaboration skills. Ability to work independently and manage multiple tasks in a fast-paced environment.
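The posting does not prescribe a replication tool, but MirrorMaker 2 (bundled with Apache Kafka) is one common way to mirror topics between clusters during a migration like the AWS-to-GKE cutover described above. A hedged sketch of its properties file follows; cluster aliases and broker addresses are placeholders, and it would be run with Kafka's bundled connect-mirror-maker.sh driver.

```properties
# connect-mirror-maker.properties sketch: replicate from an AWS (MSK) cluster
# to a GKE-hosted cluster during cutover (addresses are placeholders)
clusters = aws, gcp
aws.bootstrap.servers = msk-broker1:9092
gcp.bootstrap.servers = gke-kafka-broker1:9092

# One-way replication aws -> gcp, covering all topics and consumer groups
aws->gcp.enabled = true
aws->gcp.topics = .*
aws->gcp.groups = .*

# Keep replicated data as durable as the source
replication.factor = 3
```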
Posted 3 weeks ago
5.0 - 7.0 years
0 Lacs
Hyderabad / Secunderabad, Telangana, India
On-site
5+ years of working experience with industry-standard messaging systems: Apache Kafka, Apache Pulsar, RabbitMQ. Experience configuring Kafka for large-scale deployments is desirable. Hands-on experience with building reactive microservices using any popular Java stack. Experience building applications using Java 11 best practices and functional interfaces. Understanding of Kubernetes custom operators is desirable. Experience building stateful streaming applications using Kafka Streams or Apache Flink is a plus. Experience with OpenTelemetry / tracing / Jaeger is a plus. The role requires proven experience in managing Kafka in large deployments and distributed architectures, and extensive knowledge of configuring Kafka using various industry-driven architectural patterns. You must be passionate about building distributed messaging cloud services running on Oracle Cloud Infrastructure. Experience with pub-sub architectures using Kafka or Pulsar, or point-to-point messaging with queues, is desirable. Experience building distributed systems with traceability in a high-volume messaging environment. Each team owns its service deployment pipeline to production. Career Level - IC3
Posted 3 weeks ago
6.0 - 10.0 years
15 - 30 Lacs
Pune
Work from Office
Role & responsibilities: Mandatory Skills: Kafka Streams. Mandatory Skills Description: • Strong, in-depth, and hands-on knowledge and understanding of core & advanced Java concepts. • Good knowledge and hands-on experience working with Spring/Spring Boot/Camel, JUnit, and Hibernate to build microservices. • Experienced and familiar with shell scripts, Unix, PL/SQL, and databases (Oracle/Postgres), IBM MQ & JMS. • Good knowledge of and working experience in cloud components. • Hands-on experience in building distributed systems, Java messaging technologies (Kafka), and databases. • Strong object-oriented analysis & design skills. • Good domain knowledge of investment banking: trade settlement systems & payments. • Good communication & presentation skills. • Hands-on experience with Agile methodologies & metrics like velocity, burndown charts, story points, etc. • Strong organizational and quality assurance skills. • Exposure to tools like GitLab and DevOps tooling like TeamCity, Nexus, and Maven/Gradle.
Posted 3 weeks ago