4.0 - 8.0 years
0 Lacs
Hyderabad, Telangana
On-site
You will be responsible for designing, implementing, and maintaining scalable event-streaming architectures that support real-time data.

Key Responsibilities:
Design, build, and manage Kafka clusters using Confluent Platform and managed Kafka services (AWS MSK, Confluent Cloud).
Develop and maintain Kafka topics, schemas (Avro/Protobuf), and connectors for data ingestion and processing pipelines.
Monitor and ensure the reliability, scalability, and security of the Kafka infrastructure.
Collaborate with application and data engineering teams to integrate Kafka with other AWS-based services (e.g., Lambda, S3, EC2, Redshift).
Implement and manage Kafka Connect, Kafka Streams, and ksqlDB where applicable.
Optimize Kafka performance, troubleshoot issues, and manage incidents.

Required Skills & Experience:
At least 3-5 years of experience working with Apache Kafka and Confluent Kafka.
Strong knowledge of Kafka internals: brokers, ZooKeeper, partitions, replication, and offsets.
Experience with Kafka Connect, Schema Registry, REST Proxy, and Kafka security.
Hands-on experience with AWS services such as EC2, IAM, CloudWatch, S3, Lambda, VPC, and load balancers.
Proficiency in scripting and automation using tools such as Terraform or Ansible.
Familiarity with DevOps practices and tools: CI/CD pipelines and monitoring tools such as Prometheus/Grafana, Splunk, and Datadog.
Experience with containerization using Docker and Kubernetes is an advantage.

Additional Assets:
Confluent Certified Developer or Administrator certification, AWS certification, experience with CI/CD tools such as AWS CodePipeline and Harness, and knowledge of containers (Docker, Kubernetes).
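For orientation, a minimal sketch of the kind of Java producer such a platform runs. The broker address, topic name, and key are illustrative assumptions, not details from this posting:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class OrderEventProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Placeholder address: on Confluent Cloud or MSK this comes from the cluster
        // config, along with SASL credentials.
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker-1:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        // Idempotence plus acks=all trades a little latency for deduplicated,
        // in-order delivery per partition.
        props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true");
        props.put(ProducerConfig.ACKS_CONFIG, "all");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Keying by entity id sends all events for one order to the same partition,
            // which preserves their order.
            producer.send(new ProducerRecord<>("orders", "order-42", "{\"status\":\"CREATED\"}"),
                    (metadata, e) -> {
                        if (e != null) e.printStackTrace();
                        else System.out.printf("wrote to %s-%d@%d%n",
                                metadata.topic(), metadata.partition(), metadata.offset());
                    });
        } // close() flushes buffered records before returning
    }
}
```

Keying by entity id is usually the first topic-design decision such pipelines make, since ordering is only guaranteed within a partition.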
Posted 1 week ago
8.0 - 13.0 years
8 - 13 Lacs
Bengaluru, Karnataka, India
On-site
Job Description
We are seeking a seasoned Full Stack Software Engineer with a strong background in backend engineering and proficiency in frontend development. The ideal candidate will have extensive experience in Java and Kotlin programming, a deep understanding of functional programming principles, and expertise in real-time data streaming with Apache Kafka. Proficiency in UI programming using either React or Angular is also essential.

Key Responsibilities:
Kafka Expertise: Develop and maintain data streaming solutions using Apache Kafka, and ensure their seamless integration with other systems.
Backend Development: Design, develop, and maintain robust and scalable backend systems using Kotlin and Java.
Frontend Development: Develop and maintain user interfaces using React or Angular, collaborating with UI/UX designers to implement responsive and intuitive designs, ensure their technical feasibility, and optimize applications for speed and scalability.
Java Development: Write clean, maintainable, and efficient Java code. Lead the development of key components and services.
Collaboration: Work closely with product managers, software engineers, and other stakeholders to deliver high-quality software solutions.
Performance Tuning: Identify and address performance bottlenecks, and implement solutions to enhance system performance and scalability.
Monitoring and Troubleshooting: Implement monitoring and logging solutions to ensure the health and performance of applications; troubleshoot and resolve issues as they arise.
Continuous Improvement: Stay up to date with the latest industry trends and technologies, and continuously seek opportunities to improve existing processes and solutions.

Required Qualifications:
Education: Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
Experience: Minimum of 8 years of experience in application architecture and software development, with a proven performance track record.
Technical Skills:
Java: Strong understanding of Java SE and EE, including multithreading, concurrency, and design patterns.
Frameworks: Experience with Spring, Spring Boot, Hibernate, and JPA.
JavaScript/TypeScript: Proficiency in modern JavaScript (ES6+) and TypeScript.
UI/UX Principles: Knowledge of responsive design, cross-browser compatibility, and web accessibility standards.
Event-Driven Architecture and Kafka: In-depth knowledge of Apache Kafka, including setup, configuration, partitioning, replication, producers, consumers, and Kafka Connect; experience with Kafka topic design, retention policies, and offset management; ability to design and implement stream processing applications using the Kafka Streams DSL (Domain Specific Language) and Processor API.
Architecture: Solid understanding of microservices architecture and RESTful API design.
CI/CD: Experience with CI/CD pipelines and tools (e.g., Jenkins, GitLab CI).
Cloud: Familiarity with cloud platforms (e.g., AWS, GCP, Azure) is a plus.
Soft Skills: Excellent problem-solving and analytical skills; strong communication and interpersonal skills; ability to work effectively in a team-oriented environment; demonstrated ability to lead and mentor junior engineers.
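Since the posting calls out the Kafka Streams DSL specifically, here is a minimal sketch of a DSL topology. The topic names and the filtering rule are hypothetical, chosen only to show how DSL operators compose:

```java
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;

public class UsdPaymentsTopology {
    public static void main(String[] args) {
        Properties props = new Properties();
        // application.id doubles as the consumer group id and the prefix for state stores.
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "usd-payments-filter");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "broker-1:9092"); // placeholder
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> payments = builder.stream("payments");
        // A toy rule keeps only USD payments; real code would deserialize with
        // Avro/JSON serdes rather than match on the raw string.
        payments.filter((key, value) -> value.contains("\"currency\":\"USD\""))
                .to("payments-usd");

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
        streams.start();
    }
}
```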
Posted 2 weeks ago
5.0 - 9.0 years
5 - 9 Lacs
Kolkata, West Bengal, India
On-site
Job Description
We are seeking a seasoned Full Stack Software Engineer with a strong background in backend engineering and proficiency in frontend development. The ideal candidate will have extensive experience in Java and Kotlin programming, a deep understanding of functional programming principles, and expertise in real-time data streaming with Apache Kafka. Proficiency in UI programming using either React or Angular is also essential.

Key Responsibilities:
Kafka Expertise: Develop and maintain data streaming solutions using Apache Kafka, and ensure their seamless integration with other systems.
Backend Development: Design, develop, and maintain robust and scalable backend systems using Kotlin and Java.
Frontend Development: Develop and maintain user interfaces using React or Angular, collaborating with UI/UX designers to implement responsive and intuitive designs, ensure their technical feasibility, and optimize applications for speed and scalability.
Java Development: Write clean, maintainable, and efficient Java code. Lead the development of key components and services.
Collaboration: Work closely with product managers, software engineers, and other stakeholders to deliver high-quality software solutions.
Performance Tuning: Identify and address performance bottlenecks, and implement solutions to enhance system performance and scalability.
Monitoring and Troubleshooting: Implement monitoring and logging solutions to ensure the health and performance of applications; troubleshoot and resolve issues as they arise.
Continuous Improvement: Stay up to date with the latest industry trends and technologies, and continuously seek opportunities to improve existing processes and solutions.

Required Qualifications:
Education: Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
Experience: Minimum of 6 years of experience in application architecture and software development, with a proven performance track record.
Technical Skills:
Java: Strong understanding of Java SE and EE, including multithreading, concurrency, and design patterns.
Frameworks: Experience with Spring, Spring Boot, Hibernate, and JPA.
JavaScript/TypeScript: Proficiency in modern JavaScript (ES6+) and TypeScript.
UI/UX Principles: Knowledge of responsive design, cross-browser compatibility, and web accessibility standards.
Event-Driven Architecture and Kafka: In-depth knowledge of Apache Kafka, including setup, configuration, partitioning, replication, producers, consumers, and Kafka Connect; experience with Kafka topic design, retention policies, and offset management; ability to design and implement stream processing applications using the Kafka Streams DSL (Domain Specific Language) and Processor API.
Architecture: Solid understanding of microservices architecture and RESTful API design.
CI/CD: Experience with CI/CD pipelines and tools (e.g., Jenkins, GitLab CI).
Cloud: Familiarity with cloud platforms (e.g., AWS, GCP, Azure) is a plus.
Soft Skills: Excellent problem-solving and analytical skills; strong communication and interpersonal skills; ability to work effectively in a team-oriented environment; demonstrated ability to lead and mentor junior engineers.
Posted 2 weeks ago
5.0 - 9.0 years
5 - 9 Lacs
Bengaluru, Karnataka, India
On-site
Job Description
We are seeking a seasoned Full Stack Software Engineer with a strong background in backend engineering and proficiency in frontend development. The ideal candidate will have extensive experience in Java and Kotlin programming, a deep understanding of functional programming principles, and expertise in real-time data streaming with Apache Kafka. Proficiency in UI programming using either React or Angular is also essential.

Key Responsibilities:
Kafka Expertise: Develop and maintain data streaming solutions using Apache Kafka, and ensure their seamless integration with other systems.
Backend Development: Design, develop, and maintain robust and scalable backend systems using Kotlin and Java.
Frontend Development: Develop and maintain user interfaces using React or Angular, collaborating with UI/UX designers to implement responsive and intuitive designs, ensure their technical feasibility, and optimize applications for speed and scalability.
Java Development: Write clean, maintainable, and efficient Java code. Lead the development of key components and services.
Collaboration: Work closely with product managers, software engineers, and other stakeholders to deliver high-quality software solutions.
Performance Tuning: Identify and address performance bottlenecks, and implement solutions to enhance system performance and scalability.
Monitoring and Troubleshooting: Implement monitoring and logging solutions to ensure the health and performance of applications; troubleshoot and resolve issues as they arise.
Continuous Improvement: Stay up to date with the latest industry trends and technologies, and continuously seek opportunities to improve existing processes and solutions.

Required Qualifications:
Education: Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
Experience: Minimum of 6 years of experience in application architecture and software development, with a proven performance track record.
Technical Skills:
Java: Strong understanding of Java SE and EE, including multithreading, concurrency, and design patterns.
Frameworks: Experience with Spring, Spring Boot, Hibernate, and JPA.
JavaScript/TypeScript: Proficiency in modern JavaScript (ES6+) and TypeScript.
UI/UX Principles: Knowledge of responsive design, cross-browser compatibility, and web accessibility standards.
Event-Driven Architecture and Kafka: In-depth knowledge of Apache Kafka, including setup, configuration, partitioning, replication, producers, consumers, and Kafka Connect; experience with Kafka topic design, retention policies, and offset management; ability to design and implement stream processing applications using the Kafka Streams DSL (Domain Specific Language) and Processor API.
Architecture: Solid understanding of microservices architecture and RESTful API design.
CI/CD: Experience with CI/CD pipelines and tools (e.g., Jenkins, GitLab CI).
Cloud: Familiarity with cloud platforms (e.g., AWS, GCP, Azure) is a plus.
Soft Skills: Excellent problem-solving and analytical skills; strong communication and interpersonal skills; ability to work effectively in a team-oriented environment; demonstrated ability to lead and mentor junior engineers.
Posted 2 weeks ago
8.0 - 10.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
At Skillsoft, we propel organizations and people to grow together through transformative learning experiences. We believe every team member has the potential to be AMAZING. Join us in our quest to transform learning and help individuals unleash their edge.

What You'll Do:
Lead the development of scalable data infrastructure solutions.
Leverage your data engineering expertise to support data stakeholders and mentor less experienced data engineers.
Design and optimize new and existing data pipelines.
Collaborate with cross-functional product engineering teams and data stakeholders to deliver on Codecademy's data needs.

What You'll Need:
8 to 10 years of hands-on experience building and maintaining large-scale ETL systems.
Deep understanding of database design and data structures, both SQL and NoSQL.
Fluency in Python.
Experience working with cloud-based data platforms (we use AWS).
SQL and data warehousing skills: able to write clean and efficient queries.
Ability to make pragmatic engineering decisions in a short amount of time.
Strong project management skills; a proven ability to gather and translate requirements from stakeholders across functions and teams into tangible results.

What Will Make You Stand Out:
Experience with tools in our current data stack: Apache Airflow, Snowflake, dbt, FastAPI, S3, and Looker.
Experience with Kafka, Kafka Connect, and Spark or other data streaming technologies.
Familiarity with the database technologies we use in production: Snowflake, Postgres, and MongoDB.
Comfort with containerization technologies: Docker, Kubernetes, etc.

More About Skillsoft
Skillsoft delivers online learning, training, and talent solutions to help organizations unleash their edge. Leveraging immersive, engaging content, Skillsoft enables organizations to unlock the potential in their best assets, their people, and build teams with the skills they need for success. Empowering 36 million learners and counting, Skillsoft democratizes learning through an intelligent learning experience and a customized, learner-centric approach to skills development, with resources for Leadership Development, Business Skills, Technology & Development, Digital Transformation, and Compliance. Skillsoft is partner to thousands of leading global organizations, including many Fortune 500 companies. The company features three award-winning systems that support learning, performance, and success: Skillsoft learning content and the Percipio intelligent learning experience platform, which offers measurable impact across the entire employee lifecycle. Learn more at www.skillsoft.com.

Thank you for taking the time to learn more about us. If this opportunity intrigues you, we would love for you to apply!

NOTE TO EMPLOYMENT AGENCIES: We value the partnerships we have built with our preferred vendors. Skillsoft does not accept unsolicited resumes from employment agencies. All resumes submitted by employment agencies directly to any Skillsoft employee or hiring manager in any form without a signed Skillsoft Employment Agency Agreement on file and search engagement for that position will be deemed unsolicited in nature. No fee will be paid in the event the candidate is subsequently hired as a result of the referral or through other means.

Skillsoft is an equal opportunity employer. We evaluate qualified applicants without regard to race, color, religion, sex, age, national origin, disability, veteran status, genetic information, and other legally protected categories.
Posted 3 weeks ago
12.0 - 14.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Company Description
Global Technology Partners is a premier partner for digital transformation, with a diverse team of software engineering experts in the US and India. They combine strategic thinking, innovative design, and robust engineering to deliver exceptional results for their clients.

Job Summary
We are seeking a highly experienced and visionary Principal/Lead Java Architect to play a pivotal role in designing and evolving our next-generation, high-performance, and scalable event-driven platforms. This role demands deep expertise in Java, extensive experience with Kafka as a core component of event streaming architectures, and a proven track record of leading architectural design and implementation across complex enterprise systems. You will be instrumental in defining technical strategy, establishing best practices, and mentoring engineering teams to deliver robust and resilient solutions.

Key Responsibilities:
Architectural Leadership:
Lead the design, development, and evolution of highly scalable, resilient, and performant event-driven architectures using Java and Kafka.
Define architectural patterns, principles, and standards for event sourcing, CQRS, stream processing, and microservices integration with Kafka.
Drive the technical vision and strategy for our core platforms, ensuring alignment with business objectives and the long-term technology roadmap.
Conduct architectural reviews, identify technical debt, and propose solutions for continuous improvement.
Stay abreast of emerging technologies and industry trends, evaluating their applicability and recommending adoption where appropriate.
Design & Development:
Design and implement robust, high-throughput Kafka topics, consumers, producers, and streams (Kafka Streams/KSQL).
Architect and design Java-based microservices that integrate effectively with Kafka for event communication and data synchronization.
Lead the selection and integration of appropriate technologies and frameworks for event processing, data serialization, and API development.
Develop proofs of concept (POCs) and prototypes to validate architectural choices and demonstrate technical feasibility.
Contribute hands-on to critical-path development when necessary, demonstrating coding excellence and leading by example.
Kafka Ecosystem Expertise:
Deep understanding of Kafka internals, distributed systems concepts, and high-availability configurations.
Experience with Kafka Connect for data integration, Schema Registry for data governance, and KSQL/Kafka Streams for real-time stream processing.
Proficiency in monitoring, optimizing, and troubleshooting Kafka clusters and related applications.
Knowledge of Kafka security best practices (authentication, authorization, encryption).
Technical Governance & Mentorship:
Establish and enforce architectural governance, ensuring adherence to design principles and coding standards.
Mentor and guide engineering teams on best practices for event-driven architecture, Kafka usage, and Java development.
Foster a culture of technical excellence, collaboration, and continuous learning within the engineering organization.
Communicate complex technical concepts effectively to both technical and non-technical stakeholders.
Performance, Scalability & Reliability:
Design for high availability, fault tolerance, and disaster recovery.
Define and implement strategies for performance optimization, monitoring, and alerting across the event-driven ecosystem.
Ensure solutions scale to handle significant data volumes and transaction rates.
Required Skills & Experience:
12+ years of progressive experience in software development, with at least 5+ years in an architect role designing and implementing large-scale enterprise solutions.
Expert-level proficiency in Java (Java 8+, Spring Boot, Spring Framework).
Deep and extensive experience with Apache Kafka:
Designing and implementing Kafka topics, producers, and consumers.
Hands-on experience with the Kafka Streams API or KSQL for real-time stream processing.
Familiarity with Kafka Connect, Schema Registry, and Avro/Protobuf.
Understanding of Kafka cluster operations, tuning, and monitoring.
Strong understanding and practical experience with Event-Driven Architecture (EDA) principles and patterns: Event Sourcing, CQRS, Saga, Choreography vs. Orchestration.
Extensive experience with microservices architecture principles and patterns.
Proficiency in designing RESTful APIs and asynchronous communication mechanisms.
Experience with relational and NoSQL databases (e.g., PostgreSQL, MongoDB, Cassandra).
Solid understanding of cloud platforms (AWS, Azure, GCP) and containerization technologies (Docker, Kubernetes).
Experience with CI/CD pipelines (e.g., Jenkins, GitLab CI, Azure DevOps).
Strong problem-solving skills, analytical thinking, and attention to detail.
Excellent communication, presentation, and interpersonal skills.
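To make the event-driven patterns named above concrete, a minimal at-least-once Java consumer sketch with manual offset commits; the broker address, group id, and topic are illustrative assumptions:

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class OrderEventConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker-1:9092"); // placeholder
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "order-projector");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        // Commit offsets only after processing succeeds: at-least-once delivery,
        // so handlers must be idempotent.
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("orders"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    // Apply the event to a read model / projection here.
                    System.out.printf("%s@%d: %s%n", record.key(), record.offset(), record.value());
                }
                consumer.commitSync(); // a crash before this line replays the batch rather than losing it
            }
        }
    }
}
```

Exactly-once variants replace the manual commit with Kafka transactions, trading extra configuration for stronger guarantees.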
Posted 3 weeks ago
4.0 - 8.0 years
3 - 12 Lacs
Mumbai, Maharashtra, India
On-site
4+ years of experience developing medium-to-large Java applications.
Experience working with Git.
Experience working in a CI/CD environment.
Experience with streaming data applications using Kafka.
Experience with Docker/Kubernetes and development of containerized applications.
Experience working in an Agile development methodology.
Experience with project management tools: Rally, JIRA, Confluence, Bitbucket.
Excellent communication skills, verbal and written.
Self-motivated, passionate, well-organized individual with demonstrated problem-solving skills.
Experience building distributed machine learning systems.
Posted 1 month ago
5.0 - 10.0 years
10 - 20 Lacs
Bengaluru
Work from Office
Key Skills: Confluent Kafka, Kafka Connect, Schema Registry, Kafka Brokers, KSQL, KStreams, Java/J2EE, Troubleshooting, RCA, Production Support.

Roles & Responsibilities:
Design and develop Kafka pipelines.
Perform unit testing of the code and prepare test plans as required.
Analyze, design, and develop programs in a development environment.
Support applications and jobs in the production environment for issues or failures.
Develop operational documents for applications, including DFD, ICD, HLD, etc.
Troubleshoot production issues and provide solutions within the defined SLA.
Prepare RCA (Root Cause Analysis) documents for production issues.
Provide permanent fixes to production issues.

Experience Requirements:
5-10 years of experience working with Confluent Kafka.
Hands-on experience with Kafka Connect using Schema Registry.
Strong knowledge of Kafka brokers and KSQL.
Familiarity with Kafka Control Center, ZooKeeper, and KStreams is good to have.
Experience with Java/J2EE is a plus.

Education: B.E., B.Tech.
Posted 1 month ago
7.0 - 11.0 years
0 Lacs
Pune, Maharashtra
On-site
You are a results-driven Data Project Manager (PM) responsible for leading data initiatives within a regulated banking environment, focusing on leveraging Databricks and Confluent Kafka. Your role involves overseeing the successful end-to-end delivery of complex data transformation projects aligned with business and regulatory requirements.

You will lead the planning, execution, and delivery of enterprise data projects using Databricks and Confluent. This includes developing detailed project plans, delivery roadmaps, and work breakdown structures, as well as managing resource allocation, budgeting, and adherence to timelines and quality standards.

Collaboration with data engineers, architects, business analysts, and platform teams is essential to align on project goals. You will act as the primary liaison between business units, technology teams, and vendors, facilitating regular updates, steering committee meetings, and issue/risk escalations.

Your technical oversight responsibilities include managing solution delivery on Databricks for data processing, ML pipelines, and analytics, as well as overseeing real-time data streaming pipelines via Confluent Kafka. Ensuring alignment with data governance, security, and regulatory frameworks such as GDPR, CBUAE, and BCBS 239 is crucial. Risk and compliance management are also key aspects of the role: ensuring regulatory reporting data flows comply with local and international financial standards, and managing controls and audit requirements in collaboration with the Compliance and Risk teams.

Required Skills & Experience:
7+ years of project management experience within the banking or financial services sector.
Proven experience leading data platform projects.
Strong understanding of data architecture, pipelines, and streaming technologies.
Experience managing cross-functional teams.
Proficiency in Agile/Scrum and Waterfall methodologies.
Technical exposure to Databricks (Delta Lake, MLflow, Spark), Confluent Kafka (Kafka Connect, kSQL, Schema Registry), Azure or AWS cloud platforms, integration tools, CI/CD pipelines, and Oracle ERP implementation.

Preferred Qualifications:
PMP/Prince2/Scrum Master certification.
Familiarity with regulatory frameworks and a strong understanding of data governance principles.
Bachelor's or Master's degree in Computer Science, Information Systems, Engineering, or a related field.

Key performance indicators for this role include on-time, on-budget delivery of data initiatives; uptime and SLAs of data pipelines; user satisfaction; and compliance with regulatory milestones.
Posted 1 month ago
7.0 - 12.0 years
12 - 18 Lacs
Pune, Chennai
Work from Office
Key Responsibilities:
Implement Confluent Kafka-based CDC solutions to support real-time data movement across banking systems.
Implement event-driven and microservices-based data solutions for enhanced scalability, resilience, and performance.
Integrate CDC pipelines with core banking applications, databases, and enterprise systems.
Ensure data consistency, integrity, and security, adhering to banking compliance standards (e.g., GDPR, PCI-DSS).
Lead the adoption of Kafka Connect, Kafka Streams, and Schema Registry for real-time data processing.
Optimize data replication, transformation, and enrichment using CDC tools such as Debezium, GoldenGate, or Qlik Replicate.
Collaborate with the infrastructure team, data engineers, DevOps teams, and business stakeholders to align data streaming capabilities with business objectives.
Provide technical leadership in troubleshooting, performance tuning, and capacity planning for CDC architectures.
Stay updated with emerging technologies and drive innovation in real-time banking data solutions.

Required Skills & Qualifications:
Extensive experience with Confluent Kafka and Change Data Capture (CDC) solutions.
Strong expertise in Kafka Connect, Kafka Streams, and Schema Registry.
Hands-on experience with CDC tools such as Debezium, Oracle GoldenGate, or Qlik Replicate.
Hands-on experience with IBM Analytics.
Solid understanding of core banking systems, transactional databases, and financial data flows.
Knowledge of cloud-based Kafka implementations (AWS MSK, Azure Event Hubs, or Confluent Cloud).
Proficiency in SQL and NoSQL databases (e.g., Oracle, MySQL, PostgreSQL, MongoDB) with CDC configurations.
Strong experience with event-driven architectures, microservices, and API integrations.
Familiarity with security protocols, compliance, and data governance in banking environments.
Excellent problem-solving, leadership, and stakeholder communication skills.
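For context on how such CDC pipelines are typically wired up, below is a sketch that registers a Debezium MySQL connector with a Kafka Connect worker through Connect's REST API. The host names, database coordinates, and connector name are hypothetical, and the config is abridged (a real Debezium deployment also needs schema-history settings, among others):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class RegisterCdcConnector {
    public static void main(String[] args) throws Exception {
        // Debezium connector config as JSON; tables and credentials are placeholders.
        String config = """
            {
              "name": "accounts-cdc",
              "config": {
                "connector.class": "io.debezium.connector.mysql.MySqlConnector",
                "database.hostname": "core-banking-db",
                "database.port": "3306",
                "database.user": "cdc_user",
                "database.password": "***",
                "database.server.id": "184054",
                "topic.prefix": "corebank",
                "table.include.list": "bank.accounts,bank.transactions"
              }
            }""";

        HttpRequest request = HttpRequest.newBuilder()
            .uri(URI.create("http://connect-worker:8083/connectors")) // Connect REST endpoint
            .header("Content-Type", "application/json")
            .POST(HttpRequest.BodyPublishers.ofString(config))
            .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
            .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}
```

Once registered, the connector emits one change-event topic per captured table under the configured prefix, which downstream consumers or stream processors can then enrich.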
Posted 1 month ago
5.0 - 10.0 years
5 - 12 Lacs
Hyderabad, Chennai, Bengaluru
Work from Office
Role & Responsibilities: Looking for 5+ years of experience as a Kafka Administrator.

Kafka Administrator Required Skills & Experience:
Hands-on experience in Kafka cluster management.
Proficiency with Kafka Connect.
Knowledge of Cluster Linking and MirrorMaker.
Experience setting up Kafka clusters from scratch.
Experience with Terraform/Ansible scripts.
Ability to install and configure the Confluent Platform.
Understanding of rebalancing, Schema Registry, and REST Proxies.
Familiarity with RBAC (Role-Based Access Control) and ACLs (Access Control Lists).

Interested candidates, please share your updated resume at recruiter.wtr26@walkingtree.in.
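As a flavor of the day-to-day automation this role involves, a minimal Java AdminClient sketch that provisions a topic. The topic name, partition count, and retention settings are illustrative; in practice a team using Terraform or Ansible would declare the same settings there:

```java
import java.util.List;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;
import org.apache.kafka.common.config.TopicConfig;

public class ProvisionTopic {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "broker-1:9092"); // placeholder

        try (AdminClient admin = AdminClient.create(props)) {
            // 12 partitions for consumer parallelism; replication factor 3 so the
            // cluster tolerates a broker loss with min.insync.replicas=2 still met.
            NewTopic topic = new NewTopic("payments", 12, (short) 3)
                .configs(Map.of(
                    TopicConfig.RETENTION_MS_CONFIG, "604800000",      // 7 days
                    TopicConfig.MIN_IN_SYNC_REPLICAS_CONFIG, "2",
                    TopicConfig.CLEANUP_POLICY_CONFIG, TopicConfig.CLEANUP_POLICY_DELETE));
            admin.createTopics(List.of(topic)).all().get(); // blocks until the controller confirms
            System.out.println("created: " + topic.name());
        }
    }
}
```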
Posted 1 month ago
3.0 - 6.0 years
20 - 30 Lacs
Bengaluru
Work from Office
Job Title: Data Engineer II (Python, SQL)
Experience: 3 to 6 years
Location: Bangalore, Karnataka (work from office, 5 days a week)

Role: Data Engineer II (Python, SQL)
As a Data Engineer II, you will design, build, and maintain scalable data pipelines. You'll collaborate across data analytics, marketing, data science, and product teams to drive insights and AI/ML integration using robust and efficient data infrastructure.

Key Responsibilities:
Design, develop, and maintain end-to-end data pipelines (ETL/ELT).
Ingest, clean, transform, and curate data for analytics and ML usage.
Work with orchestration tools like Airflow to schedule and manage workflows.
Implement data extraction using batch, CDC, and real-time tools (e.g., Debezium, Kafka Connect).
Build data models and enable real-time and batch processing using Spark and AWS services.
Collaborate with DevOps and architects on system scalability and performance.
Optimize Redshift-based data solutions for performance and reliability.

Must-Have Skills & Experience:
3+ years in data engineering or data science with strong ETL and pipeline experience.
Expertise in Python and SQL.
Strong experience with data warehousing, data lakes, data modeling, and ingestion.
Working knowledge of Airflow or similar orchestration tools.
Hands-on experience with data extraction techniques such as CDC and batch-based extraction, using Debezium, Kafka Connect, or AWS DMS.
Experience with AWS services: Glue, Redshift, Lambda, EMR, Athena, MWAA, SQS, etc.
Knowledge of Spark or similar distributed systems.
Experience with queuing/messaging systems such as SQS, Kinesis, and RabbitMQ.
Posted 2 months ago
8.0 - 13.0 years
5 - 12 Lacs
Hyderabad, Chennai, Bengaluru
Work from Office
Role & Responsibilities: Looking for 8+ years of experience as a Kafka Administrator.

Mandatory Skills:
ksqlDB developers with hands-on experience writing KSQL queries.
Kafka Connect development experience.
Kafka client stream application development.
Confluent Terraform Provider.

Skills:
8+ years of experience across development and support projects.
3+ years of hands-on experience with Kafka.
Understanding of event streaming patterns and when to apply them.
Designing, building, and operating in-production big data, stream processing, and/or enterprise data integration solutions using Apache Kafka.
Working with different database solutions for data extraction, updates, and insertions.
Identity and access management, including relevant protocols and standards such as OAuth, OIDC, SAML, and LDAP.
Knowledge of networking protocols such as TCP, HTTP/2, and WebSockets.

Candidates must work Australian hours (AWST). The interview will be conducted face to face.
Interested candidates, please share your updated resume at recruiter.wtr26@walkingtree.in.
Posted 2 months ago
5.0 - 8.0 years
22 - 30 Lacs
Noida, Hyderabad, Bengaluru
Hybrid
Role: Data Engineer
Experience: 5 to 8 years
Location: Bangalore, Noida, and Hyderabad (hybrid; 2 days per week in office required)
Notice Period: Immediate to 15 days (only immediate joiners preferred)
Note: Candidates must have experience in Python, Kafka Streams, PySpark, and Azure Databricks. We are not looking for candidates who have experience only in PySpark and not in Python.

Job Title: SSE - Kafka, Python, and Azure Databricks (Healthcare Data Project)

Role Overview:
We are looking for a highly skilled engineer with expertise in Kafka, Python, and Azure Databricks (preferred) to drive our healthcare data engineering projects. The ideal candidate will have deep experience in real-time data streaming, cloud-based data platforms, and large-scale data processing. This role requires strong technical leadership, problem-solving abilities, and the ability to collaborate with cross-functional teams.

Key Responsibilities:
Lead the design, development, and implementation of real-time data pipelines using Kafka, Python, and Azure Databricks.
Architect scalable data streaming and processing solutions to support healthcare data workflows.
Develop, optimize, and maintain ETL/ELT pipelines for structured and unstructured healthcare data.
Ensure data integrity, security, and compliance with healthcare regulations (HIPAA, HITRUST, etc.).
Collaborate with data engineers, analysts, and business stakeholders to understand requirements and translate them into technical solutions.
Troubleshoot and optimize Kafka streaming applications, Python scripts, and Databricks workflows.
Mentor junior engineers, conduct code reviews, and ensure best practices in data engineering.
Stay updated with the latest cloud technologies, big data frameworks, and industry trends.

Required Skills & Qualifications:
4+ years of experience in data engineering, with strong proficiency in Kafka and Python.
Expertise in Kafka Streams, Kafka Connect, and Schema Registry for real-time data processing.
Experience with Azure Databricks (or willingness to learn and adopt it quickly).
Hands-on experience with cloud platforms (Azure preferred; AWS or GCP is a plus).
Proficiency in SQL, NoSQL databases, and data modeling for big data processing.
Knowledge of containerization (Docker, Kubernetes) and CI/CD pipelines for data applications.
Experience working with healthcare data (EHR, claims, HL7, FHIR, etc.) is a plus.
Strong analytical skills, a problem-solving mindset, and the ability to lead complex data projects.
Excellent communication and stakeholder management skills.

Email: Sam@hiresquad.in
Posted 2 months ago
8.0 - 10.0 years
0 Lacs
Bengaluru / Bangalore, Karnataka, India
On-site
NTT DATA strives to hire exceptional, innovative and passionate individuals who want to grow with us. If you want to be part of an inclusive, adaptable, and forward-thinking organization, apply now. We are currently seeking a Cloud Solution Delivery Lead Consultant to join our team in Bangalore, Karnataka (IN-KA), India (IN).

Data Engineer Lead
Robust hands-on experience with industry-standard tooling and techniques, including SQL, Git, and CI/CD pipelines (mandatory).
Management, administration, and maintenance of data streaming tools such as Kafka/Confluent Kafka and Flink.
Experience with software support for applications written in Python and SQL.
Administration, configuration, and maintenance of Snowflake and dbt.
Experience with data product environments that use tools such as Kafka Connect, Snyk, Confluent Schema Registry, Atlan, IBM MQ, SonarQube, Apache Airflow, Apache Iceberg, DynamoDB, Terraform, and GitHub.
Debugging issues, root cause analysis, and applying fixes.
Management and maintenance of ETL processes (bug fixing and batch job monitoring).

Training & Certification:
Apache Kafka Administration.
Snowflake Fundamentals/Advanced Training.

Experience:
8 years of experience in a technical role working with AWS.
At least 2 years in a leadership or management role.

About NTT DATA
NTT DATA is a $30 billion trusted global innovator of business and technology services. We serve 75% of the Fortune Global 100 and are committed to helping clients innovate, optimize and transform for long-term success. As a Global Top Employer, we have diverse experts in more than 50 countries and a robust partner ecosystem of established and start-up companies. Our services include business and technology consulting, data and artificial intelligence, industry solutions, as well as the development, implementation and management of applications, infrastructure and connectivity. We are one of the leading providers of digital and AI infrastructure in the world. NTT DATA is a part of NTT Group, which invests over $3.6 billion each year in R&D to help organizations and society move confidently and sustainably into the digital future.

NTT DATA endeavors to make its website accessible to any and all users. If you would like to contact us regarding the accessibility of our website or need assistance completing the application process, please contact us. This contact information is for accommodation requests only and cannot be used to inquire about the status of applications. NTT DATA is an equal opportunity employer. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability or protected veteran status.
Posted 2 months ago
5.0 - 10.0 years
5 - 10 Lacs
Hyderabad, Chennai, Bengaluru
Work from Office
Role & Responsibilities: Looking for 5+ years of experience as a Confluent Kafka Administrator (Technology Lead).

Kafka Administrator Required Skills & Experience:
Hands-on experience in Kafka cluster management.
Proficiency with Kafka Connect.
Knowledge of Cluster Linking and MirrorMaker.
Experience setting up Kafka clusters from scratch.
Experience with Terraform/Ansible scripts.
Ability to install and configure the Confluent Platform.
Understanding of rebalancing, Schema Registry, and REST Proxies.
Familiarity with RBAC (Role-Based Access Control) and ACLs (Access Control Lists).

Interested candidates, please share your updated resume at recruiter.wtr26@walkingtree.in.
Posted 2 months ago
5.0 - 10.0 years
6 - 11 Lacs
Hyderabad, Chennai, Bengaluru
Work from Office
Role & Responsibilities: Kafka Administrator.

Required Skills & Experience:
Around 5+ years of experience.
Hands-on experience in Kafka cluster management.
Proficiency with Kafka Connect.
Knowledge of Cluster Linking and MirrorMaker.
Experience setting up Kafka clusters from scratch.
Experience with Terraform/Ansible scripts.
Ability to install and configure the Confluent Platform.
Understanding of rebalancing, Schema Registry, and REST Proxies.
Familiarity with RBAC (Role-Based Access Control) and ACLs (Access Control Lists).

Please share your updated resume at recruiter.wtr26@walkingtree.in.
Posted 2 months ago