5.0 - 8.0 years
9 - 14 Lacs
Bengaluru
Work from Office
Role Purpose
The purpose of the role is to support process delivery by ensuring daily performance of the Production Specialists, resolving technical escalations, and developing technical capability within the Production Specialists.

Responsibilities
Oversee and support the process by reviewing daily transactions on performance parameters:
* Review the performance dashboard and the scores for the team
* Support the team in improving performance parameters by providing technical support and process guidance
* Record, track, and document all queries received, problem-solving steps taken, and total successful and unsuccessful resolutions
* Ensure standard processes and procedures are followed to resolve all client queries
* Resolve client queries as per the SLAs defined in the contract
* Develop understanding of the process/product to facilitate better client interaction and troubleshooting
* Document and analyze call logs to spot the most frequent trends and prevent future problems
* Identify red flags and escalate serious client issues to the Team Leader in cases of untimely resolution
* Ensure all product information and disclosures are given to clients before and after call/email requests
* Avoid legal challenges by monitoring compliance with service agreements

Handle technical escalations through effective diagnosis and troubleshooting of client queries:
* Manage and resolve technical roadblocks/escalations as per SLA and quality requirements
* If unable to resolve an issue, escalate it to TA & SES in a timely manner
* Provide product support and resolution to clients by performing question diagnosis while guiding users through step-by-step solutions
* Troubleshoot all client queries in a user-friendly, courteous, and professional manner
* Offer alternative solutions to clients (where appropriate) with the objective of retaining customers' and clients' business
* Organize ideas and communicate oral messages appropriate to listeners and situations
* Follow up and make scheduled callbacks to customers to record feedback and ensure compliance with contract SLAs

Build people capability to ensure operational excellence and maintain superior customer service levels for the existing account/client:
* Mentor and guide Production Specialists on improving technical knowledge
* Collate trainings to be conducted as triage to bridge the skill gaps identified through interviews with the Production Specialists
* Develop and conduct trainings (triages) within products for Production Specialists as per target
* Inform the client about the triages being conducted
* Undertake product trainings to stay current with product features, changes, and updates
* Enroll in product-specific and any other trainings per client requirements/recommendations
* Identify and document the most common problems and recommend appropriate resolutions to the team
* Update job knowledge by participating in self-learning opportunities and maintaining personal networks

Mandatory Skills: Kafka Integration
Experience: 5-8 Years
Posted 1 month ago
3.0 - 10.0 years
18 - 22 Lacs
Hyderabad
Work from Office
WHAT YOU'LL DO
* Lead the development of scalable data infrastructure solutions
* Leverage your data engineering expertise to support data stakeholders and mentor less experienced Data Engineers
* Design and optimize new and existing data pipelines
* Collaborate with cross-functional product engineering teams and data stakeholders to deliver on Codecademy’s data needs

WHAT YOU'LL NEED
* 8 to 10 years of hands-on experience building and maintaining large-scale ETL systems
* Deep understanding of database design and data structures (SQL & NoSQL)
* Fluency in Python
* Experience working with cloud-based data platforms (we use AWS)
* SQL and data warehousing skills -- able to write clean and efficient queries
* Ability to make pragmatic engineering decisions in a short amount of time
* Strong project management skills; a proven ability to gather and translate requirements from stakeholders across functions and teams into tangible results

WHAT WILL MAKE YOU STAND OUT
* Experience with tools in our current data stack: Apache Airflow, Snowflake, dbt, FastAPI, S3, & Looker
* Experience with Kafka, Kafka Connect, and Spark or other data streaming technologies
* Familiarity with the database technologies we use in production: Snowflake, Postgres, and MongoDB
* Comfort with containerization technologies: Docker, Kubernetes, etc.
Posted 1 month ago
2.0 - 7.0 years
12 - 16 Lacs
Hyderabad
Work from Office
WHAT YOU'LL DO
* Build scalable data infrastructure solutions
* Design and optimize new and existing data pipelines
* Integrate new data sources into our existing data architecture
* Collaborate with cross-functional product engineering teams and data stakeholders to deliver on Codecademy’s data needs

WHAT YOU'LL NEED
* 3 to 5 years of hands-on experience building and maintaining large-scale ETL systems
* Deep understanding of database design and data structures (SQL & NoSQL)
* Fluency in Python
* Experience working with cloud-based data platforms (we use AWS)
* SQL and data warehousing skills -- able to write clean and efficient queries
* Ability to make pragmatic engineering decisions in a short amount of time
* Strong project management skills; a proven ability to gather and translate requirements from stakeholders across functions and teams into tangible results

WHAT WILL MAKE YOU STAND OUT
* Experience with tools in our current data stack: Apache Airflow, Snowflake, dbt, FastAPI, S3, & Looker
* Experience with Kafka, Kafka Connect, and Spark or other data streaming technologies
* Familiarity with the database technologies we use in production: Snowflake, Postgres, and MongoDB
* Comfort with containerization technologies: Docker, Kubernetes, etc.
Posted 1 month ago
5.0 - 9.0 years
10 - 17 Lacs
Bengaluru
Work from Office
Role name: Kafka Platform Engineer
Years of relevant experience: 5+

Detailed JD
We are seeking a highly skilled and motivated Kafka Platform Engineer to join our team. As a Kafka Platform Engineer, you will be responsible for operating and managing our Kafka cluster, ensuring its scalability, reliability, and security. You will collaborate with cross-functional teams to design, implement, and optimize Kafka solutions that meet the needs of our business. This is a key role in modernizing our application infrastructure and adopting industry best practices.

Primary Skills:
* Strong expertise in operating and administering Kafka clusters
* Experience in performance tuning and troubleshooting of middleware technologies, applying them to infrastructure
* Proficiency in shell scripting and/or Python, with specific experience in administering Kafka
* Experience with Java application servers on cloud platforms is a significant advantage
* Provide operational support for the Kafka cluster, ensuring high availability and stability 24/7 (on-call support)
* Utilize infrastructure as code (IaC) principles to provision and manage Kafka infrastructure

Work Location: Bangalore (no remote work; must operate from base location)
Client Interview / F2F Applicable: Yes
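Administering Kafka clusters, as described above, relies on understanding how keyed records map onto topic partitions. A simplified Python sketch of the default-partitioner idea follows; real Kafka clients use murmur2 hashing, so `hashlib.md5` here is only an illustrative stand-in, and the key and partition count are hypothetical.

```python
import hashlib

def partition_for(key: bytes, num_partitions: int) -> int:
    """Map a record key to a partition deterministically, as Kafka's
    default partitioner does (same key -> same partition -> per-key ordering).
    Real clients hash with murmur2; md5 is a stand-in for illustration."""
    digest = hashlib.md5(key).digest()
    return int.from_bytes(digest[:4], "big") % num_partitions

# All records for one key land on one partition, preserving their order.
p1 = partition_for(b"account-42", 12)
p2 = partition_for(b"account-42", 12)
assert p1 == p2 and 0 <= p1 < 12
```

This determinism is why repartitioning (changing the partition count) breaks key-to-partition affinity and is usually avoided on live topics.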
Posted 1 month ago
9.0 - 14.0 years
15 - 30 Lacs
Bengaluru
Work from Office
We are seeking an experienced Tech Lead with a passion for crafting scalable backend services and intuitive front-end experiences. This is an exciting opportunity to contribute to the design and development of complex, high-performance enterprise applications, particularly within our loyalty platform ecosystem. You will work in a collaborative Agile environment, take ownership of technical components, and mentor junior engineers.

How You'll Make an Impact:
* Agile Development: Actively participate in all phases of Agile development, including planning, backlog grooming, coding, testing, and retrospectives
* End-to-End Ownership: Own the development and integration of loyalty platform components, including REST APIs, batch jobs, and message queues
* Domain Expertise: Serve as a domain expert in at least one technology area, demonstrating leadership and ownership across feature development
* Cross-Functional Collaboration: Collaborate closely with Product Owners and QA engineers to understand and refine acceptance criteria and technical specifications
* Design & Architecture Leadership: Drive design and architecture discussions, contributing simple yet scalable solutions to complex business problems
* Documentation: Create and maintain detailed documentation for business logic, configuration settings, and integration points
* TDD & Testing: Develop unit and integration tests using TDD practices and frameworks like JUnit and Mockito
* Mentorship: Guide junior developers through code reviews, pair programming, and knowledge-sharing sessions
* Coding Best Practices: Promote coding best practices, clean architecture, and SOLID principles across the team
* Effort Estimation: Accurately estimate effort, flag risks early, and ensure timely delivery of features within scope
* Continuous Improvement: Proactively identify areas for improvement in code quality, performance, and DevOps practices
* Production Support: Support application deployment, monitoring, and issue resolution in production environments

What You Need to Be Successful:

Technical Skills:
* Backend Development: Proficient in Java (preferably JDK 17+), Spring Boot, and Spring Batch
* Frontend Development: Experience with Angular (version 7+), HTML5, CSS3, and TypeScript
* REST APIs: Strong experience in designing and developing RESTful APIs using JSON
* Cloud & Microservices: Hands-on experience building cloud-native microservices on AWS, Azure, or Oracle Cloud
* Database: Strong SQL skills, including experience with multi-table queries and query optimization using execution plans (preferably with Oracle or PostgreSQL)
* Messaging Queues: Familiarity with RabbitMQ, Kafka Streams, or ActiveMQ
* Containerization & Orchestration: Exposure to Docker, Kubernetes, and using kubectl for cluster configuration
* DevOps & CI/CD: Experience with Git/Bitbucket, Gradle, Bamboo, or similar CI/CD tools

Soft Skills:
* Excellent written and verbal communication skills
* Strong analytical and problem-solving capabilities
* Ability to work independently and collaboratively in a cross-functional team environment
* Mentorship mindset and willingness to support peer development
* A proactive attitude toward continuous learning and innovation

Preferred Experience:
* Prior experience in Loyalty, Banking, Accounting, or other transactional domains
* Working knowledge of monitoring tools, debugging distributed systems, and performance tuning
Posted 1 month ago
3.0 - 8.0 years
5 - 8 Lacs
Mumbai
Work from Office
Role Overview: Seeking an experienced Apache Airflow specialist to design and manage data orchestration pipelines for batch/streaming workflows in a Cloudera environment.

Key Responsibilities:
* Design, schedule, and monitor DAGs for ETL/ELT pipelines
* Integrate Airflow with Cloudera services and external APIs
* Implement retries, alerts, logging, and failure recovery
* Collaborate with data engineers and DevOps teams

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
* Experience: 3-8 years
* Expertise in Airflow 2.x, Python, Bash
* Knowledge of CI/CD for Airflow DAGs
* Proven experience with Cloudera CDP, Spark/Hive-based data pipelines
* Integration with Kafka, REST APIs, databases
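The retry, alert, and failure-recovery behaviour listed above is typically declared per task in Airflow (e.g. the `retries`, `retry_delay`, and `on_failure_callback` operator arguments). As a rough, framework-free sketch of what that policy does at run time (all names here are hypothetical, not Airflow's internals):

```python
import time

def run_with_retries(task, retries=3, retry_delay=1.0, on_failure=None):
    """Run a callable, retrying on failure in the spirit of Airflow's
    per-task `retries`/`retry_delay`; fire an alert callback when exhausted."""
    for attempt in range(1, retries + 2):  # first try + `retries` retries
        try:
            return task()
        except Exception as exc:
            if attempt > retries:
                if on_failure:
                    on_failure(exc)  # e.g. Slack/pager alert in a real pipeline
                raise
            time.sleep(retry_delay)

calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient failure")
    return "ok"

result = run_with_retries(flaky, retries=3, retry_delay=0)
assert result == "ok" and calls["n"] == 3
```

In Airflow itself the scheduler owns this loop; the point of the sketch is only the semantics: a bounded number of re-attempts with a delay, then an alert and a hard failure.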
Posted 1 month ago
8.0 - 13.0 years
20 - 35 Lacs
Chennai
Hybrid
We are looking for someone with:
* Strong and demonstrable problem-solving ability
* Comfort with self-management and on-the-job learning
* Ability to share knowledge across teams
* Demonstrable initiative and logical thinking
* Passion for emerging technologies and self-development
* Strong computer science fundamentals
* Collaborative work ethic
* Strong problem-solving and analytical skills
* Excellent communication skills
* Knowledge of applying object-oriented and functional programming styles to real-world problems

Ideally (but not restrictive) you should have:
* Hands-on experience (5+ years) using Java and/or Scala
* Knowledge of continuous integration and continuous delivery
* Knowledge of microservice architecture
* Working experience with TDD & BDD
* Experience building REST APIs
* Experience working with Docker
* General knowledge of agile software development concepts and processes
* Proficient understanding of code versioning tools, such as Git
* Working experience with Jira and Confluence

Nice to haves:
* Special interest in functional programming
* Knowledge of the Reactive Manifesto
* Knowledge of streaming data
* Experience with Akka, Play Framework, or Lagom
* Experience working with Kafka
* Knowledge of NoSQL
* Cloud-based development with AWS, Microsoft Azure, Google Cloud, etc.
* Commercial exposure to the ELK stack
Posted 1 month ago
5.0 - 8.0 years
9 - 14 Lacs
Bengaluru
Work from Office
Role Purpose
The purpose of the role is to support process delivery by ensuring daily performance of the Production Specialists, resolving technical escalations, and developing technical capability within the Production Specialists.

Responsibilities
Oversee and support the process by reviewing daily transactions on performance parameters:
* Review the performance dashboard and the scores for the team
* Support the team in improving performance parameters by providing technical support and process guidance
* Record, track, and document all queries received, problem-solving steps taken, and total successful and unsuccessful resolutions
* Ensure standard processes and procedures are followed to resolve all client queries
* Resolve client queries as per the SLAs defined in the contract
* Develop understanding of the process/product to facilitate better client interaction and troubleshooting
* Document and analyze call logs to spot the most frequent trends and prevent future problems
* Identify red flags and escalate serious client issues to the Team Leader in cases of untimely resolution
* Ensure all product information and disclosures are given to clients before and after call/email requests
* Avoid legal challenges by monitoring compliance with service agreements

Handle technical escalations through effective diagnosis and troubleshooting of client queries:
* Manage and resolve technical roadblocks/escalations as per SLA and quality requirements
* If unable to resolve an issue, escalate it to TA & SES in a timely manner
* Provide product support and resolution to clients by performing question diagnosis while guiding users through step-by-step solutions
* Troubleshoot all client queries in a user-friendly, courteous, and professional manner
* Offer alternative solutions to clients (where appropriate) with the objective of retaining customers' and clients' business
* Organize ideas and communicate oral messages appropriate to listeners and situations
* Follow up and make scheduled callbacks to customers to record feedback and ensure compliance with contract SLAs

Build people capability to ensure operational excellence and maintain superior customer service levels for the existing account/client:
* Mentor and guide Production Specialists on improving technical knowledge
* Collate trainings to be conducted as triage to bridge the skill gaps identified through interviews with the Production Specialists
* Develop and conduct trainings (triages) within products for Production Specialists as per target
* Inform the client about the triages being conducted
* Undertake product trainings to stay current with product features, changes, and updates
* Enroll in product-specific and any other trainings per client requirements/recommendations
* Identify and document the most common problems and recommend appropriate resolutions to the team
* Update job knowledge by participating in self-learning opportunities and maintaining personal networks

Deliver (No. / Performance Parameter / Measure)
1. Process: No. of cases resolved per day; compliance to process and quality standards; meeting process-level SLAs; Pulse score; customer feedback; NSAT/ESAT
2. Team Management: Productivity; efficiency; absenteeism
3. Capability Development: Triages completed; Technical Test performance

Mandatory Skills: Kafka Integration
Experience: 5-8 Years
Posted 1 month ago
8.0 - 10.0 years
13 - 17 Lacs
Hyderabad
Work from Office
Role Purpose
The purpose of the role is to create exceptional integration architectural solution design and thought leadership and enable delivery teams to provide exceptional client engagement and satisfaction.

Responsibilities

1. Define integration architecture for new deals/major change requests in existing deals
a. Create an enterprise-wide integration architecture that ensures systems are seamlessly integrated while being scalable, reliable, and manageable.
b. Provide solutioning for digital integration for RFPs received from clients and ensure overall design assurance:
i. Analyse applications, exchange points, data formats, connectivity requirements, technology environment, enterprise specifics, and client requirements to set an integration solution design framework/architecture
ii. Provide technical leadership for the design, development, and implementation of integration solutions through thoughtful use of modern technology
iii. Define and understand current-state integration solutions and identify improvements, options, and tradeoffs to define target-state solutions
iv. Clearly articulate, document, and use integration patterns, best practices, and processes
v. Evaluate and recommend products and solutions to integrate with the overall technology ecosystem
vi. Work closely with various IT groups to transition tasks, ensure performance, and manage issues through to resolution
vii. Document the integration architecture covering logical, deployment, and data views, describing all artefacts in detail
viii. Validate the integration solution/prototype from technology, cost-structure, and customer-differentiation points of view
ix. Identify problem areas, perform root cause analysis of integration architectural designs and solutions, and provide relevant solutions
x. Collaborate with sales, program/project, and consulting teams to reconcile solutions to the architecture
xi. Track industry integration trends and relate them to planning current and future IT needs
c. Provide technical and strategic input during the project planning phase in the form of technical architectural designs and recommendations
d. Collaborate with all relevant parties to review the objectives and constraints of solutions and determine conformance with the Enterprise Architecture
e. Identify implementation risks and potential impacts

2. Enable delivery teams by providing optimal delivery solutions/frameworks
a. Build and maintain relationships with technical leaders, product owners, peer architects, and other stakeholders to become a trusted advisor
b. Develop and establish relevant integration metrics (KPI/SLA) to drive results
c. Identify risks related to integration and prepare a risk mitigation plan
d. Ensure quality assurance of integration architecture or design decisions and provide technical mitigation support to the delivery teams
e. Lead the development and maintenance of the integration framework and related artefacts
f. Develop trust and build effective working relationships through respectful, collaborative engagement across individual product teams
g. Ensure integration architecture principles, patterns, and standards are consistently applied to all projects
h. Ensure optimal client engagement:
i. Support the pre-sales team while presenting the entire solution design and its principles to the client
ii. Coordinate with client teams to ensure all requirements are met and create an effective integration solution
iii. Demonstrate thought leadership with strong technical capability in front of the client to win confidence and act as a trusted advisor

3. Competency building and branding
a. Ensure completion of necessary trainings and certifications on integration middleware
b. Develop Proofs of Concept (POCs), case studies, demos, etc. for new growth areas and solve new customer problems based on market and customer research
c. Develop and present a Wipro point of view on digital integration by writing white papers, blogs, etc.
d. Help attain market recognition through analyst rankings, client testimonials, and partner credits
e. Be the voice of Wipro's thought leadership by speaking in forums (internal and external)
f. Mentor developers, designers, and junior architects in the project for their further career development and enhancement
g. Contribute to the integration practice by conducting selection interviews, etc.

4. Team management
a. Resourcing:
i. Anticipate new talent requirements as per market/industry trends or client requirements
ii. Support hiring adequate and right resources for the team by conducting interviews
b. Talent management:
i. Ensure adequate onboarding and training for team members to enhance capability and effectiveness
c. Performance management:
i. Provide inputs to the project manager in setting appraisal objectives for the team, conduct timely performance reviews, and provide constructive feedback to own direct reports (if present)

Deliver (No. / Performance Parameter / Measure)
1. Support sales team to create wins: % of proposals with Quality Index > 7; timely support of proposals; identifying opportunities/leads to sell services within/outside the account (lead generation); no. of proposals led
2. Delivery responsibility in projects/programs and accounts: (a) solution acceptance of the integration architecture (from client and/or internal Wipro architecture leadership), and (b) effective implementation of the integration approach/solution component by way of sufficient integration design, methods guidelines, and technical know-how of the team
3. Delivery support: CSAT; delivery as per cost, quality, and timelines; identify and develop reusable components; recommend tools for reuse and automation for improved productivity and reduced cycle times
4. Capability development: % trainings and certifications completed; increase in ACE certifications; thought leadership content developed (white papers, Wipro PoVs)

Mandatory Skills: Kafka Integration
Experience: 8-10 Years
Posted 1 month ago
10.0 - 20.0 years
30 - 40 Lacs
Hyderabad
Work from Office
Job Description: Kafka/Integration Architect

Position brief:
The Kafka/Integration Architect is responsible for designing, implementing, and managing Kafka-based streaming data pipelines and messaging solutions. This role involves configuring, deploying, and monitoring Kafka clusters to ensure the high availability and scalability of data streaming services. The Kafka Architect collaborates with cross-functional teams to integrate Kafka into various applications and ensures optimal performance and reliability of the data infrastructure. Kafka/Integration Architects play a critical role in driving data-driven decision-making and enabling real-time analytics, contributing directly to the company’s agility, operational efficiency, and ability to respond quickly to market changes. Their work supports key business initiatives by ensuring that data flows seamlessly across the organization, empowering teams with timely insights and enhancing the customer experience.

Location: Hyderabad

Primary Role & Responsibilities:
* Design, implement, and manage Kafka-based data pipelines and messaging solutions to support critical business operations and enable real-time data processing
* Configure, deploy, and maintain Kafka clusters, ensuring high availability and scalability to maximize uptime and support business growth
* Monitor Kafka performance and troubleshoot issues to minimize downtime and ensure uninterrupted data flow, enhancing decision-making and operational efficiency
* Collaborate with development teams to integrate Kafka into applications and services
* Develop and maintain Kafka connectors such as JDBC, MongoDB, and S3 connectors, along with topics and schemas, to streamline data ingestion from databases, NoSQL data stores, and cloud storage, enabling faster data insights
* Implement security measures to protect Kafka clusters and data streams, safeguarding sensitive information and maintaining regulatory compliance
* Optimize Kafka configurations for performance, reliability, and scalability
* Automate Kafka cluster operations using infrastructure-as-code tools like Terraform or Ansible to increase operational efficiency and reduce manual overhead
* Provide technical support and guidance on Kafka best practices to development and operations teams, enhancing their ability to deliver reliable, high-performance applications
* Maintain documentation of Kafka environments, configurations, and processes to ensure knowledge transfer, compliance, and smooth team collaboration
* Stay updated with the latest Kafka features, updates, and industry best practices to continuously improve the data infrastructure and stay ahead of industry trends

Required Soft Skills:
* Strong analytical and problem-solving skills
* Excellent communication and collaboration skills
* Ability to translate business requirements into technical solutions

Working Experience and Qualifications:
Education: Bachelor's or master's degree in Computer Science, Information Technology, or a related field
Experience: Proven experience of 8-10 years as a Kafka Architect or in a similar role

Skills:
* Strong knowledge of Kafka architecture, including brokers, topics, partitions, and replicas
* Experience with Kafka security, including SSL, SASL, and ACLs
* Proficiency in configuring, deploying, and managing Kafka clusters in cloud and on-premises environments
* Experience with Kafka stream processing using tools like Kafka Streams, KSQL, or Apache Flink
* Solid understanding of distributed systems, data streaming, and messaging patterns
* Proficiency in Java, Scala, or Python for Kafka-related development tasks
* Familiarity with DevOps practices, including CI/CD pipelines, monitoring, and logging
* Experience with tools like ZooKeeper, Schema Registry, and Kafka Connect
* Strong problem-solving skills and the ability to troubleshoot complex issues in a distributed environment
* Experience with cloud platforms like AWS, Azure, or GCP
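The connector work described above (JDBC, MongoDB, S3 connectors with topics and schemas) is usually expressed as a JSON document registered with the Kafka Connect REST API. A minimal sketch of a JDBC source connector config follows; the connector name, database URL, and column are hypothetical, while the config keys are the standard ones for Confluent's JDBC source connector.

```python
import json

# Hypothetical JDBC source connector config of the kind POSTed to the
# Kafka Connect REST API at /connectors. Name and URL are illustrative.
connector = {
    "name": "orders-jdbc-source",
    "config": {
        "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
        "connection.url": "jdbc:postgresql://db.example.com:5432/orders",
        "mode": "incrementing",               # poll only rows with new ids
        "incrementing.column.name": "id",
        "topic.prefix": "pg-",                # table `orders` -> topic `pg-orders`
        "tasks.max": "1",
    },
}

payload = json.dumps(connector)  # body for: POST /connectors
assert json.loads(payload)["config"]["mode"] == "incrementing"
```

Registering this against a running Connect cluster would start a task that streams new `orders` rows into the `pg-orders` topic; schema management then typically flows through the Schema Registry mentioned above.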
Preferred Skills (Optional):
* Kafka certification or related credentials, such as:
  - Confluent Certified Administrator for Apache Kafka (CCAAK)
  - Cloudera Certified Administrator for Apache Kafka (CCA-131)
  - AWS Certified Data Analytics - Specialty (with a focus on streaming data solutions)
* Knowledge of containerization technologies like Docker and Kubernetes
* Familiarity with other messaging systems like RabbitMQ or Apache Pulsar
* Experience with data serialization formats like Avro, Protobuf, or JSON

Company Profile:
WAISL is an ISO 9001:2015, ISO 20000-1:2018, ISO 22301:2019 certified and CMMI Level 3 appraised digital transformation partner for businesses across industries, with a core focus on aviation and related adjacencies. We transform airports and related ecosystems through digital interventions with a strong service excellence culture. As a leader in our chosen space, we deliver world-class services focused on airports and their related domains, enabled through outcome-focused, next-gen digital/technology solutions. At present, WAISL is the primary technology solutions partner for Indira Gandhi International Airport, Delhi; Rajiv Gandhi International Airport, Hyderabad; Manohar International Airport, Goa; Kannur International Airport, Kerala; and Kuwait International Airport, and we expect to soon provide similar services for other airports in India and globally. WAISL, as a digital transformation partner, brings proven credibility in managing and servicing 135+ Mn passengers and 80+ airlines, with core integration, deployment, and real-time management experience across 2000+ applications, vendor-agnostically, in highly complex, technology-converging ecosystems. This excellence in managed services has enabled WAISL's customer airports to be rated among the best-in-class service providers by Skytrax and ACI awards, and to win many innovation and excellence awards.
Posted 1 month ago
7.0 - 8.0 years
13 - 18 Lacs
Hyderabad
Work from Office
Primary Skills: Data Engineer with Python, Kafka Streaming & Spark
Years of experience: 6 to 10 years

Job Description

Key Responsibilities:
• Develop real-time data streaming applications using Apache Kafka and Kafka Streams
• Build and optimize large-scale batch and stream processing pipelines with Apache Spark
• Containerize applications and manage deployments using OpenShift and Kubernetes
• Collaborate with DevOps teams to ensure CI/CD pipelines are robust and scalable
• Write unit tests and conduct code reviews to maintain code quality and reliability
• Work closely with Product and Data Engineering teams to understand requirements and translate them into technical solutions
• Troubleshoot and debug production issues across multiple environments

Required qualifications to be successful in this role:
• Strong programming skills in Java/Python
• Hands-on experience with Apache Kafka, Kafka Streams, and event-driven architecture
• Solid knowledge of Apache Spark (batch and streaming)
• Experience with OpenShift, Kubernetes, and container orchestration
• Familiarity with microservices architecture, RESTful APIs, and distributed systems
• Experience with build tools such as Maven or Gradle
• Familiarity with Git, Jenkins, CI/CD pipelines, and Agile development practices
• Excellent problem-solving skills and ability to work in a fast-paced environment

Education & Experience:
• Bachelor's or Master's degree in Computer Science, Engineering, or a related field
• Minimum 6 years of experience in backend development with Java and related technologies

Preferred Skills (Nice to Have):
• Knowledge of cloud platforms like AWS, Azure, or GCP
• Understanding of security best practices in cloud-native environments
• Familiarity with SQL/NoSQL databases (e.g., PostgreSQL, Cassandra, MongoDB)
• Experience with Scala or Python for Spark jobs is a plus
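Kafka Streams itself is a Java library, but the core idea behind the "real-time data streaming applications" above — a keyed, continuously updated aggregate (e.g. `groupByKey().count()`) — can be sketched in plain Python over an in-memory list of records. The event names and the in-memory "state store" are illustrative assumptions, not Kafka Streams internals.

```python
from collections import defaultdict

def count_by_key(events):
    """Plain-Python sketch of a Kafka Streams groupByKey().count():
    a per-key running count over a stream of (key, value) records."""
    state = defaultdict(int)  # stands in for the RocksDB-backed state store
    changelog = []            # each state update is emitted downstream
    for key, _value in events:
        state[key] += 1
        changelog.append((key, state[key]))
    return dict(state), changelog

events = [("page_view", 1), ("click", 1), ("page_view", 1)]
totals, changelog = count_by_key(events)
assert totals == {"page_view": 2, "click": 1}
assert changelog[-1] == ("page_view", 2)
```

In the real library the input is an unbounded topic rather than a list, state is fault-tolerant via a compacted changelog topic, and the updated counts are emitted as a KTable; the per-key update loop is the part this sketch preserves.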
Location: Hyderabad only
Notice period: Immediate to 30 days
Shift: 1 PM to 10 PM; initial 8 weeks WFO, later hybrid
No. of positions: 8
Posted 1 month ago
5.0 - 10.0 years
6 - 15 Lacs
Bengaluru
Work from Office
Greetings!!! If you're interested, please apply via the link below:
https://bloomenergy.wd1.myworkdayjobs.com/BloomEnergyCareers/job/Bangalore-Karnataka/Staff-Engineer---Streaming-Analytics_JR-19447

Our team at Bloom Energy embraces the unprecedented opportunity to change the way companies utilize energy. Our technology empowers businesses and communities to responsibly take charge of their energy. Our energy platform has three key value propositions: resiliency, sustainability, and predictability. We provide infrastructure that is flexible for the evolving net-zero ecosystem. We have deployed more than 30,000 fuel cell modules since our first commercial shipments in 2009, sending energy platforms to data centers, hospitals, manufacturing facilities, biotechnology facilities, major retail stores, financial institutions, telecom facilities, utilities, and other critical infrastructure customers around the world. Our mission is to make clean, reliable energy affordable globally. We never stop striving to improve our technology, to expand and improve our company performance, and to develop and support the many talented employees that serve our mission!

Role & responsibilities:
* Assist in developing distributed learning algorithms
* Build real-time analytics on cloud and edge devices
* Develop scalable data pipelines and analytics tools
* Solve challenging data and architectural problems using cutting-edge technology
* Collaborate cross-functionally with data science, data engineering, and firmware controls teams

Skills and Experience:
* Strong Java/Scala programming and debugging ability and a clear understanding of design patterns; Python is a bonus
* Understanding of Kafka/Spark/Flink/Hadoop/HBase internals (hands-on experience in one or more preferred)
* Experience implementing data wrangling, transformation, and processing solutions, with demonstrated experience working with large datasets
* Know-how of cloud computing platforms like AWS/GCP/Azure is beneficial
* Exposure to data lakes and data warehousing concepts, SQL, and NoSQL databases
* Working knowledge of REST APIs and gRPC is good to have
* Ability to adapt quickly to new technologies, concepts, approaches, and environments
* Problem-solving and analytical skills
* A learning attitude and improvement mindset
Posted 1 month ago
7.0 - 12.0 years
12 - 18 Lacs
Pune, Chennai
Work from Office
Key Responsibilities:
• Implement Confluent Kafka-based CDC solutions to support real-time data movement across banking systems.
• Implement event-driven and microservices-based data solutions for enhanced scalability, resilience, and performance.
• Integrate CDC pipelines with core banking applications, databases, and enterprise systems.
• Ensure data consistency, integrity, and security, adhering to banking compliance standards (e.g., GDPR, PCI-DSS).
• Lead the adoption of Kafka Connect, Kafka Streams, and Schema Registry for real-time data processing.
• Optimize data replication, transformation, and enrichment using CDC tools like Debezium, GoldenGate, or Qlik Replicate.
• Collaborate with the infrastructure team, data engineers, DevOps teams, and business stakeholders to align data streaming capabilities with business objectives.
• Provide technical leadership in troubleshooting, performance tuning, and capacity planning for CDC architectures.
• Stay updated on emerging technologies and drive innovation in real-time banking data solutions.

Required Skills & Qualifications:
• Extensive experience with Confluent Kafka and Change Data Capture (CDC) solutions.
• Strong expertise in Kafka Connect, Kafka Streams, and Schema Registry.
• Hands-on experience with CDC tools such as Debezium, Oracle GoldenGate, or Qlik Replicate.
• Hands-on experience with IBM Analytics.
• Solid understanding of core banking systems, transactional databases, and financial data flows.
• Knowledge of cloud-based Kafka implementations (AWS MSK, Azure Event Hubs, or Confluent Cloud).
• Proficiency in SQL and NoSQL databases (e.g., Oracle, MySQL, PostgreSQL, MongoDB) with CDC configurations.
• Strong experience in event-driven architectures, microservices, and API integrations.
• Familiarity with security protocols, compliance, and data governance in banking environments.
• Excellent problem-solving, leadership, and stakeholder communication skills.
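The CDC pipelines described here typically consume Debezium-style change events, whose envelope carries an `op` code ("c" create, "u" update, "d" delete, "r" snapshot read) plus `before`/`after` row images. The sketch below replays such events into an in-memory table keyed by id; the field names follow Debezium's documented envelope, but the replay logic itself is a simplified illustration, not a production consumer (real pipelines run through Kafka Connect and handle schemas, tombstones, and ordering):

```python
import json

def apply_cdc_event(table: dict, raw: bytes) -> dict:
    """Apply one Debezium-style change event to an in-memory table.

    Simplified sketch: upsert on create/update/snapshot, delete on "d".
    """
    event = json.loads(raw)
    op = event["op"]
    if op in ("c", "u", "r"):                 # upsert the new row image
        after = event["after"]
        table[after["id"]] = after
    elif op == "d":                            # remove the deleted row
        table.pop(event["before"]["id"], None)
    return table

state = {}
apply_cdc_event(state, json.dumps(
    {"op": "c", "after": {"id": 1, "balance": 100}}).encode())
apply_cdc_event(state, json.dumps(
    {"op": "u", "before": {"id": 1, "balance": 100},
     "after": {"id": 1, "balance": 250}}).encode())
```

Replaying the full change stream from offset zero reconstructs the source table, which is what makes CDC topics usable as a system of record for downstream analytics.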
Posted 1 month ago
5.0 - 8.0 years
10 - 17 Lacs
Noida, Gurugram, Greater Noida
Work from Office
5+ years in Python, Django, microservices architecture, and API development. Deployment via Kubernetes; Redis for performance tuning; Celery for distributed tasks. Databases: PostgreSQL, time-series. Redis, Celery, RabbitMQ/Kafka. Microservices architecture experience.
Posted 1 month ago
5.0 - 10.0 years
12 - 18 Lacs
Hyderabad, Pune, Bengaluru
Work from Office
Design and develop Kafka pipelines. Perform unit testing of the code and prepare test plans as required. Analyze, design, and develop programs in the development environment. Support applications and jobs in the production environment for abends or issues.
Posted 1 month ago
7.0 - 12.0 years
20 - 35 Lacs
Dubai, Pune, Chennai
Hybrid
Job Title: Confluent CDC System Analyst

Role Overview: A leading bank in the UAE is seeking an experienced Confluent Change Data Capture (CDC) System Analyst / Tech Lead to implement real-time data streaming solutions. The role involves implementing robust CDC frameworks using Confluent Kafka, ensuring seamless data integration between core banking systems and analytics platforms. The ideal candidate will have deep expertise in event-driven architectures, CDC technologies, and cloud-based data solutions.

Key Responsibilities:
• Implement Confluent Kafka-based CDC solutions to support real-time data movement across banking systems.
• Implement event-driven and microservices-based data solutions for enhanced scalability, resilience, and performance.
• Integrate CDC pipelines with core banking applications, databases, and enterprise systems.
• Ensure data consistency, integrity, and security, adhering to banking compliance standards (e.g., GDPR, PCI-DSS).
• Lead the adoption of Kafka Connect, Kafka Streams, and Schema Registry for real-time data processing.
• Optimize data replication, transformation, and enrichment using CDC tools like Debezium, GoldenGate, or Qlik Replicate.
• Collaborate with the infrastructure team, data engineers, DevOps teams, and business stakeholders to align data streaming capabilities with business objectives.
• Provide technical leadership in troubleshooting, performance tuning, and capacity planning for CDC architectures.
• Stay updated on emerging technologies and drive innovation in real-time banking data solutions.

Required Skills & Qualifications:
• Extensive experience with Confluent Kafka and Change Data Capture (CDC) solutions.
• Strong expertise in Kafka Connect, Kafka Streams, and Schema Registry.
• Hands-on experience with CDC tools such as Debezium, Oracle GoldenGate, or Qlik Replicate.
• Hands-on experience with IBM Analytics.
• Solid understanding of core banking systems, transactional databases, and financial data flows.
• Knowledge of cloud-based Kafka implementations (AWS MSK, Azure Event Hubs, or Confluent Cloud).
• Proficiency in SQL and NoSQL databases (e.g., Oracle, MySQL, PostgreSQL, MongoDB) with CDC configurations.
• Strong experience in event-driven architectures, microservices, and API integrations.
• Familiarity with security protocols, compliance, and data governance in banking environments.
• Excellent problem-solving, leadership, and stakeholder communication skills.
Posted 1 month ago
8.0 - 13.0 years
4 - 8 Lacs
Hyderabad
Work from Office
This role will be instrumental in building and maintaining robust, scalable, and reliable data pipelines using Confluent Kafka, ksqlDB, Kafka Connect, and Apache Flink. The ideal candidate will have a strong understanding of data streaming concepts, experience with real-time data processing, and a passion for building high-performance data solutions. This role requires excellent analytical skills, attention to detail, and the ability to work collaboratively in a fast-paced environment.

Essential Responsibilities:
• Design and develop data pipelines for real-time and batch data ingestion and processing using Confluent Kafka, ksqlDB, Kafka Connect, and Apache Flink.
• Build and configure Kafka connectors to ingest data from various sources (databases, APIs, message queues, etc.) into Kafka.
• Develop Flink applications for complex event processing, stream enrichment, and real-time analytics.
• Develop and optimize ksqlDB queries for real-time data transformations, aggregations, and filtering.
• Implement data quality checks and monitoring to ensure data accuracy and reliability throughout the pipeline.
• Monitor and troubleshoot data pipeline performance, identify bottlenecks, and implement optimizations.
• Automate data pipeline deployment, monitoring, and maintenance tasks.
• Stay up to date with the latest advancements in data streaming technologies and best practices.
• Contribute to the development of data engineering standards and best practices within the organization.
• Participate in code reviews and contribute to a collaborative and supportive team environment.
• Work closely with other architects and tech leads in India and the US to create POCs and MVPs.
• Provide regular updates on tasks, status, and risks to the project manager.

The experience we are looking to add to our team

Required:
• Bachelor's degree or higher from a reputed university.
• 8 to 10 years of total experience, with the majority related to ETL/ELT, big data, Kafka, etc.
• Proficiency in developing Flink applications for stream processing and real-time analytics.
• Strong understanding of data streaming concepts and architectures.
• Extensive experience with Confluent Kafka, including Kafka brokers, producers, consumers, and Schema Registry.
• Hands-on experience with ksqlDB for real-time data transformations and stream processing.
• Experience with Kafka Connect and building custom connectors.
• Extensive experience implementing large-scale data ingestion and curation solutions.
• Good hands-on experience with the big data technology stack on any cloud platform.
• Excellent problem-solving, analytical, and communication skills.
• Ability to work independently and as part of a team.

Good to have:
• Experience in Google Cloud.
• Healthcare industry experience.
• Experience in Agile.
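The ksqlDB work above (real-time transformations, aggregations, and filtering) is expressed as SQL-like statements submitted to ksqlDB's REST endpoint. Below is a sketch of what such a statement and request body might look like; the stream, table, and column names are hypothetical, and the commented request assumes a ksqlDB server on its default port:

```python
import json

# A windowed aggregation over a hypothetical transactions stream.
statement = """
CREATE TABLE txn_totals AS
  SELECT account_id, SUM(amount) AS total
  FROM transactions_stream
  WINDOW TUMBLING (SIZE 5 MINUTES)
  GROUP BY account_id
  EMIT CHANGES;
""".strip()

# ksqlDB accepts statements as JSON over POST /ksql.
request_body = json.dumps({"ksql": statement, "streamsProperties": {}})

# With a running server this would be POSTed, e.g.:
# import urllib.request
# req = urllib.request.Request(
#     "http://localhost:8088/ksql",
#     data=request_body.encode(),
#     headers={"Content-Type": "application/vnd.ksql.v1+json"})
# urllib.request.urlopen(req)
```

The resulting table is continuously maintained by ksqlDB as new records arrive on the source stream, which is what distinguishes it from a one-shot SQL query.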
Posted 1 month ago
8.0 - 13.0 years
15 - 19 Lacs
Noida
Work from Office
About the Role: We are looking for a Staff Engineer, Real-time Data Processing, to design and develop highly scalable, low-latency data streaming platforms and processing engines. This role is ideal for engineers who enjoy building core systems and infrastructure that enable mission-critical analytics at scale. You'll work on solving some of the toughest data engineering challenges in healthcare.

A Day in the Life:
• Architect, build, and maintain a large-scale real-time data processing platform.
• Collaborate with data scientists, product managers, and engineering teams to define system architecture and design.
• Optimize systems for scalability, reliability, and low-latency performance.
• Implement robust monitoring, alerting, and failover mechanisms to ensure high availability.
• Evaluate and integrate open-source and third-party streaming frameworks.
• Contribute to the overall engineering strategy and promote best practices for stream and event processing.
• Mentor junior engineers and lead technical initiatives.

What You Need:
• 8+ years of experience in backend or data engineering roles, with a strong focus on building real-time systems or platforms.
• Hands-on experience with stream processing frameworks like Apache Flink, Apache Kafka Streams, or Apache Spark Streaming.
• Proficiency in Java, Scala, Python, or Go for building high-performance services.
• Strong understanding of distributed systems, event-driven architecture, and microservices.
• Experience with Kafka, Pulsar, or other distributed messaging systems.
• Working knowledge of containerization tools like Docker and orchestration tools like Kubernetes.
• Proficiency in observability tools such as Prometheus, Grafana, and OpenTelemetry.
• Experience with cloud-native architectures and services (AWS, GCP, or Azure).
• Bachelor's or Master's degree in Computer Science, Engineering, or a related field.

Here's What We Offer:
• Generous Leave Benefits: Enjoy generous leave benefits of up to 40 days.
• Parental Leave: Experience one of the industry's best parental leave policies to spend time with your new addition.
• Sabbatical Leave Policy: Want to focus on skill development, pursue an academic career, or just take a break? We've got you covered.
• Health Insurance: We offer health benefits and insurance to you and your family for medically related expenses related to illness, disease, or injury.
• Pet-Friendly Office*: Spend more time with your treasured friends, even when you're away from home. Bring your furry friends with you to the office and let your colleagues become their friends, too. (*Noida office only)
• Creche Facility for Children*: Say goodbye to worries and hello to a convenient and reliable creche facility that puts your child's well-being first. (*India offices)
Posted 1 month ago
4.0 - 8.0 years
6 - 12 Lacs
Bengaluru
Work from Office
We are seeking an experienced Data Engineer to join our dynamic product development team. In this role, you will be responsible for designing, building, and optimizing data pipelines that ensure efficient data processing and insightful analytics. You will work collaboratively with cross-functional teams, including data scientists, software developers, and product managers, to transform raw data into actionable insights while adhering to best practices in data architecture, security, and scalability.

Role & responsibilities:
* Design, build, and maintain scalable ETL processes to ingest, process, and store large datasets.
* Collaborate with cross-functional teams to integrate data from various sources, ensuring data consistency and quality.
* Leverage Microsoft Azure services for data storage, processing, and analytics, integrating with our CI/CD pipeline on Azure Repos.
* Continuously optimize data workflows for performance and scalability, identifying bottlenecks and implementing improvements.
* Deploy and monitor ML/GenAI models in production environments.
* Develop and enforce data quality standards and data validation checks, and ensure compliance with security and privacy policies.
* Work closely with backend developers (PHP/Node/Python) and DevOps teams to support seamless data operations and deployment.
* Stay current with industry trends and emerging technologies to continually enhance data strategies and methodologies.

Required Skills & Qualifications:
* Minimum of 4+ years in data engineering or a related field.
* In-depth understanding of streaming technologies like Kafka and Spark Streaming.
* Strong proficiency in SQL, Python, and Spark SQL for data manipulation, data processing, and automation.
* Solid understanding of ETL/ELT frameworks, data pipeline design, data modelling, data warehousing, and data governance principles.
* In-depth knowledge of performance tuning and optimizing data processing jobs, and debugging time-consuming jobs.
* Proficient in Azure technologies such as ADB, ADF, SQL (capable of writing complex SQL queries), PySpark, Python, Synapse, Fabric, Delta Tables, and Unity Catalog.
* Deep understanding of cloud platforms (e.g., AWS, Azure, Google Cloud) and data warehousing solutions (e.g., Snowflake, Redshift, BigQuery).
* Good knowledge of Agile and SDLC/CI-CD practices and tools, with a good understanding of distributed systems.
* Proven ability to work effectively in agile/scrum teams, collaborating across disciplines.
* Excellent analytical, troubleshooting, and problem-solving skills and attention to detail.

Preferred candidate profile:
* Experience with NoSQL databases and big data processing frameworks, e.g., Apache Spark.
* Knowledge of data visualization and reporting tools.
* Strong understanding of data security, governance, and compliance best practices.
* Effective communication skills with an ability to translate technical concepts to non-technical stakeholders.
* Knowledge of AI-Ops and LLM data pipelines.

Why Join GenXAI?
* Innovative Environment: Work on transformative projects in a forward-thinking, collaborative setting.
* Career Growth: Opportunities for professional development and advancement within a rapidly growing company.
* Cutting-Edge Tools: Gain hands-on experience with industry-leading technologies and cloud platforms.
* Collaborative Culture: Join a diverse team where your expertise is valued, and your ideas make an impact.
Posted 1 month ago
6.0 - 8.0 years
27 - 30 Lacs
Pune, Ahmedabad, Chennai
Work from Office
Technical Skills (Must Have):
• 8+ years overall IT industry experience, with 5+ years in a solution or technical architect role using service and hosting solutions such as private/public cloud IaaS, PaaS, and SaaS platforms.
• 5+ years of hands-on development experience with event-driven architecture-based implementations.
• One or more of the typical solution and technical architecture certifications, e.g., Microsoft Azure, TOGAF, AWS Cloud, SAFe, PMI, SAP, etc.
• Hands-on experience with:
  o Claims-based authentication (SAML/OAuth/OIDC), MFA, JIT, and/or RBAC / Ping, etc.
  o Architecting mission-critical technology components with DR capabilities.
  o Multi-geography, multi-tier service design and management.
  o Project financial management, solution plan development, and product cost estimation.
  o Supporting peer teams and their responsibilities, such as infrastructure, operations, engineering, and info-security.
  o Configuration management and automation tools such as Azure DevOps, Ansible, Puppet, Octopus, Chef, Salt, etc.
  o Software development full-lifecycle methodologies, patterns, frameworks, libraries, and tools.
  o Relational, graph, and/or unstructured data technologies such as SQL Server, Azure SQL, Cosmos, Azure Data Lake, HDInsight, Hadoop, Neo4j, etc.
  o Data management and data governance technologies.
  o Data movement and transformation technologies.
  o AI and machine learning tools such as Azure ML.
  o Architecting mobile applications that are either independent applications or supplementary add-ons (to intranet or extranet).
  o Cloud security controls including tenant isolation, encryption at rest, encryption in transit, key management, vulnerability assessments, application firewalls, SIEM, etc.
  o Apache Kafka, Confluent Kafka, Kafka Streams, and Kafka Connect.
• Proficient in NodeJS, Java, Scala, or Python.
Posted 1 month ago
5.0 - 8.0 years
5 - 9 Lacs
Hyderabad
Work from Office
Role Purpose: The purpose of the role is to support process delivery by ensuring daily performance of the Production Specialists, resolving technical escalations, and developing technical capability within the Production Specialists.

• Handle technical escalations through effective diagnosis and troubleshooting of client queries
• Manage and resolve technical roadblocks/escalations as per SLA and quality requirements
• If unable to resolve an issue, escalate it to TA & SES in a timely manner
• Provide product support and resolution to clients by performing question diagnosis while guiding users through step-by-step solutions
• Troubleshoot all client queries in a user-friendly, courteous, and professional manner
• Offer alternative solutions to clients (where appropriate) with the objective of retaining customers' and clients' business
• Organize ideas and effectively communicate oral messages appropriate to listeners and situations
• Follow up and make scheduled callbacks to customers to record feedback and ensure compliance with contract SLAs

Required skills:
• Over 7 years of experience as a Full Stack Engineer
• Experience in selected programming languages (e.g., Python) and the Java/J2EE platform
• Experience with UI technologies: React/Next.js
• Experience building REST APIs
• In-depth knowledge of relational databases (e.g., PostgreSQL, MySQL) and NoSQL databases (e.g., MongoDB)
• Experience in Snowflake and Databricks
• Experience with Cloud Tec
• Experience with Kafka technologies, Azure, Kubernetes, Snowflake, GitHub, Copilot
• Experience in large-scale implementation of open-source technologies
• Generative AI, Large Language Models, and chatbot technologies
• Strong knowledge of data integration
• Strong data analytics experience with application enablement
• Strong experience in driving customer experience
• Familiarity with agile development
• Experience in healthcare clinical domains

Deliver:
No. | Performance Parameter | Measure
1 | Process | No. of cases resolved per day, compliance to process and quality standards, meeting process-level SLAs, Pulse score, customer feedback, NSAT/ESAT
2 | Team Management | Productivity, efficiency, absenteeism
3 | Capability development | Triages completed, Technical Test performance

Mandatory Skills: Fullstack MERN. Experience: 5-8 years.
Posted 1 month ago
8.0 - 13.0 years
25 - 30 Lacs
Hyderabad, Chennai, Bengaluru
Hybrid
Develop and maintain Kafka-based data pipelines for real-time processing. Implement Kafka producer and consumer applications for efficient data flow. Optimize Kafka clusters for performance, scalability, and reliability. Design and manage Grafana dashboards for monitoring Kafka metrics. Integrate Grafana with Elasticsearch or other data sources. Set up alerting mechanisms in Grafana for Kafka system health monitoring. Collaborate with DevOps, data engineers, and software teams. Ensure security and compliance in Kafka and Grafana implementations.

Requirements:
• 8+ years of experience configuring Kafka, Elasticsearch, and Grafana
• Strong understanding of Apache Kafka architecture and Grafana visualization
• Proficiency in .NET or Python for Kafka development
• Experience with distributed systems and message-oriented middleware
• Knowledge of time-series databases and monitoring tools
• Familiarity with data serialization formats like JSON
• Expertise in Azure platforms and Kafka monitoring tools
• Good problem-solving and communication skills

Mandate: Create Kafka dashboards; Python/.NET. Note: Candidate must be an immediate joiner.
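A core metric for the Kafka dashboards mentioned above is consumer lag: the gap between a partition's log-end offset and the consumer group's last committed offset. The function below computes it from two offset maps; in a real exporter these values would come from the broker's admin API rather than hard-coded dicts, so the topic names and numbers here are purely illustrative:

```python
def consumer_lag(end_offsets: dict, committed: dict) -> dict:
    """Per-partition lag = log-end offset minus committed offset.

    Keys are (topic, partition) tuples; a partition with no commit
    yet is treated as lagging by its full log-end offset.
    """
    return {tp: end_offsets[tp] - committed.get(tp, 0) for tp in end_offsets}

lag = consumer_lag(
    {("events", 0): 1200, ("events", 1): 900},   # broker log-end offsets
    {("events", 0): 1150, ("events", 1): 900},   # group's committed offsets
)
# partition 0 is 50 records behind; partition 1 is fully caught up
```

A Grafana alert on this quantity (e.g., lag above a threshold for N minutes) is the usual early-warning signal that consumers are falling behind producers.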
Posted 2 months ago
6.0 - 7.0 years
11 - 14 Lacs
Mumbai, Delhi / NCR, Bengaluru
Work from Office
Location: Remote / Pan India (Mumbai, Delhi/NCR, Bengaluru, Kolkata, Chennai, Hyderabad, Ahmedabad, Pune). Notice Period: Immediate.

iSource Services is hiring for one of their clients for the position of Java Kafka Developer. We are seeking a highly skilled and motivated Confluent Certified Developer for Apache Kafka to join our growing team. The ideal candidate will possess a deep understanding of Kafka architecture, development best practices, and the Confluent platform. You will be responsible for designing, developing, and maintaining scalable and reliable Kafka-based data pipelines and applications. Your expertise will be crucial in ensuring the efficient and robust flow of data across our organization.

Responsibilities:
• Develop Kafka producers, consumers, and stream processing applications.
• Implement Kafka Connect connectors and configure Kafka clusters.
• Optimize Kafka performance and troubleshoot related issues.
• Utilize Confluent tools like Schema Registry, Control Center, and ksqlDB.
• Collaborate with cross-functional teams and ensure compliance with data policies.

Qualifications:
• Bachelor's degree in Computer Science or a related field.
• Confluent Certified Developer for Apache Kafka certification.
• Strong programming skills in Java/Python.
• In-depth Kafka architecture and Confluent platform experience.
• Experience with cloud platforms and containerization (Docker, Kubernetes) is a plus.
• Experience with data warehousing and data lake technologies.
• Experience with CI/CD pipelines and DevOps practices.
• Experience with Infrastructure as Code tools such as Terraform or CloudFormation.
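One property the Kafka producers above rely on is that records with the same key always land on the same partition, which is what preserves per-key ordering. The sketch below illustrates that routing rule with a deterministic hash; the real Java and librdkafka clients use murmur2/CRC32-based partitioners, so this is an illustration of the principle, not the actual algorithm:

```python
import hashlib

def pick_partition(key: bytes, num_partitions: int) -> int:
    """Route a keyed record to a partition deterministically.

    MD5 here is just a stable stand-in for the client's real hash;
    the point is that equal keys always map to equal partitions.
    """
    digest = hashlib.md5(key).digest()
    return int.from_bytes(digest[:4], "big") % num_partitions

p1 = pick_partition(b"customer-17", 6)
p2 = pick_partition(b"customer-17", 6)
assert p1 == p2  # stable routing is what gives per-key ordering
```

This is also why increasing a topic's partition count after the fact breaks key-to-partition affinity: the modulus changes, so existing keys may start routing to different partitions.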
Posted 2 months ago
4.0 - 9.0 years
5 - 13 Lacs
Thane, Goregaon, Mumbai (All Areas)
Work from Office
Opening for a leading insurance company. **Looking for immediate joiners (up to 30 days)**

Key Responsibilities:

Kafka Infrastructure Management:
• Design, implement, and manage Kafka clusters to ensure high availability, scalability, and security.
• Monitor and maintain Kafka infrastructure, including topics, partitions, brokers, ZooKeeper, and related components.
• Perform capacity planning and scaling of Kafka clusters based on application needs and growth.

Data Pipeline Development:
• Develop and optimize Kafka data pipelines to support real-time data streaming and processing.
• Collaborate with internal application development and data engineers to integrate Kafka with various HDFC Life data sources.
• Implement and maintain schema registry and serialization/deserialization protocols (e.g., Avro, Protobuf).

Security and Compliance:
• Implement security best practices for Kafka clusters, including encryption, access control, and authentication mechanisms (e.g., Kerberos, SSL).

Documentation and Support:
• Create and maintain documentation for Kafka setup, configurations, and operational procedures.

Collaboration:
• Provide technical support and guidance to application development teams regarding Kafka usage and best practices.
• Collaborate with stakeholders to ensure alignment with business objectives.

Interested candidates may share their resumes at snehal@topgearconsultants.com
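For the capacity-planning duty above, a common first-cut rule sizes a topic's partition count from target throughput divided by measured per-partition throughput, with headroom for growth. A minimal sketch (the 1.5x headroom factor and the throughput figures are assumptions for illustration, not Kafka defaults):

```python
import math

def partitions_needed(target_mb_s: float, per_partition_mb_s: float,
                      headroom: float = 1.5) -> int:
    """First-cut partition sizing: ceil(target * headroom / per-partition).

    per_partition_mb_s should be measured on the actual hardware;
    the default 1.5x headroom is an assumed growth buffer.
    """
    return math.ceil(target_mb_s * headroom / per_partition_mb_s)

# e.g. 100 MB/s target, 10 MB/s measured per partition -> 15 partitions
n = partitions_needed(100, 10)
```

Partition count also bounds consumer parallelism (one consumer per partition within a group), so this estimate is usually cross-checked against the expected consumer fleet size.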
Posted 2 months ago
8.0 - 13.0 years
5 - 12 Lacs
Hyderabad, Chennai, Bengaluru
Work from Office
Role & responsibilities: Looking for 8+ years of experience as a Kafka Administrator.

Mandatory skills:
• ksqlDB developers with hands-on experience writing ksqlDB queries
• Kafka Connect development experience
• Kafka client stream application development
• Confluent Terraform Provider

Skills:
• 8+ years of experience across development and support projects
• 3+ years of hands-on experience with Kafka
• Understanding of event streaming patterns and when to apply them
• Designing, building, and operating in-production big data, stream processing, and/or enterprise data integration solutions using Apache Kafka
• Working with different database solutions for data extraction, updates, and insertions
• Identity and Access Management, including relevant protocols and standards such as OAuth, OIDC, SAML, LDAP, etc.
• Knowledge of networking protocols such as TCP, HTTP/2, WebSockets, etc.

The candidate must work Australia hours (AWST), and the interview mode will be face-to-face. Interested candidates, please share your updated resume at recruiter.wtr26@walkingtree.in
Posted 2 months ago