8.0 - 13.0 years
0 - 3 Lacs
Bangalore Rural, Chennai, Bengaluru
Work from Office
Greetings from Sight Spectrum Technologies. We are hiring for a Data Lead position.
Experience: 8+ years
Location: Bangalore, Chennai
Description:
Required skills:
* Proficiency in multiple programming languages, ideally Python
* Proficiency in at least one cluster-computing framework (preferably Spark; alternatively Flink or Storm)
* Proficiency in at least one cloud data lakehouse platform (preferably AWS data lake services or Databricks; alternatively Hadoop), at least one relational data store (Postgres, Oracle, or similar), and at least one NoSQL data store (Cassandra, Dynamo, MongoDB, or similar)
* Proficiency in at least one scheduling/orchestration tool (preferably Airflow, illustrated in the sketch after this listing; alternatively AWS Step Functions or similar)
* Proficiency with data structures, data serialization formats (JSON, Avro, Protobuf, or similar), big-data storage formats (Parquet, Iceberg, or similar), data processing methodologies (batch, micro-batching, and stream), one or more data modelling techniques (Dimensional, Data Vault, Kimball, Inmon, etc.), Agile methodology (developing PI plans and roadmaps), TDD (or BDD), and CI/CD tools (Jenkins, Git)
* Strong organizational, problem-solving, and critical-thinking skills; strong documentation skills
Preferred skills:
* Proficiency in IaC (preferably Terraform; alternatively AWS CloudFormation)
If interested, kindly share your resume to roopavahini@sightspectrum.in
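As a hedged illustration of the Airflow orchestration skill this posting names, here is a minimal sketch of a two-step daily pipeline. The DAG id, task ids, and the extract/load callables are assumptions made up for the example, not anything from the posting.

```python
# Minimal Airflow DAG sketch: a daily extract -> load pipeline.
# dag_id, task ids, and the callables are illustrative placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract(**context):
    # Pull one day's partition from a source system (placeholder logic).
    print("extracting partition", context["ds"])


def load(**context):
    # Write the processed partition to the lakehouse (placeholder logic).
    print("loading partition", context["ds"])


with DAG(
    dag_id="daily_ingest",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)
    extract_task >> load_task  # load runs only after extract succeeds
```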
Posted 20 hours ago
8.0 - 13.0 years
85 - 90 Lacs
Noida
Work from Office
About the Role
We are looking for a Staff Engineer, Real-time Data Processing, to design and develop highly scalable, low-latency data streaming platforms and processing engines. This role is ideal for engineers who enjoy building core systems and infrastructure that enable mission-critical analytics at scale. You'll work on solving some of the toughest data engineering challenges in healthcare.
A Day in the Life
Architect, build, and maintain a large-scale real-time data processing platform. Collaborate with data scientists, product managers, and engineering teams to define system architecture and design. Optimize systems for scalability, reliability, and low-latency performance. Implement robust monitoring, alerting, and failover mechanisms to ensure high availability. Evaluate and integrate open-source and third-party streaming frameworks. Contribute to the overall engineering strategy and promote best practices for stream and event processing. Mentor junior engineers and lead technical initiatives.
What You Need
8+ years of experience in backend or data engineering roles, with a strong focus on building real-time systems or platforms. Hands-on experience with stream processing frameworks such as Apache Flink, Apache Kafka Streams, or Apache Spark Streaming (a Flink sketch follows this listing). Proficiency in Java, Scala, Python, or Go for building high-performance services. Strong understanding of distributed systems, event-driven architecture, and microservices. Experience with Kafka, Pulsar, or other distributed messaging systems. Working knowledge of containerization tools like Docker and orchestration tools like Kubernetes. Proficiency in observability tools such as Prometheus, Grafana, and OpenTelemetry. Experience with cloud-native architectures and services (AWS, GCP, or Azure). Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
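To ground the stream-processing requirement, here is a minimal PyFlink sketch of a keyed running aggregate. The sample events and key names are assumptions; a production job of the kind described would read from Kafka rather than an in-memory collection.

```python
# PyFlink sketch: keyed running sum over a stream of (entity_id, value) events.
# The events and keys are made-up illustrations, not from the posting.
from pyflink.common import Types
from pyflink.datastream import StreamExecutionEnvironment

env = StreamExecutionEnvironment.get_execution_environment()
env.set_parallelism(2)

# A bounded collection keeps the sketch runnable; a real job would use a Kafka source.
events = env.from_collection(
    [("patient-1", 3), ("patient-2", 5), ("patient-1", 4)],
    type_info=Types.TUPLE([Types.STRING(), Types.INT()]),
)

# Key by entity id and keep a running sum per key, emitting one update per event.
totals = events.key_by(lambda e: e[0]).reduce(lambda a, b: (a[0], a[1] + b[1]))
totals.print()

env.execute("running_totals_sketch")
```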
Posted 1 day ago
8.0 - 13.0 years
25 - 30 Lacs
Hyderabad, Chennai, Bengaluru
Hybrid
Develop and maintain Kafka-based data pipelines for real-time processing. Implement Kafka producer and consumer applications for efficient data flow (a minimal sketch follows this listing). Optimize Kafka clusters for performance, scalability, and reliability. Design and manage Grafana dashboards for monitoring Kafka metrics. Integrate Grafana with Elasticsearch or other data sources. Set up alerting mechanisms in Grafana for Kafka system health monitoring. Collaborate with DevOps, data engineers, and software teams. Ensure security and compliance in Kafka and Grafana implementations.
Requirements: 8+ years of experience configuring Kafka, Elasticsearch, and Grafana. Strong understanding of Apache Kafka architecture and Grafana visualization. Proficiency in .NET or Python for Kafka development. Experience with distributed systems and message-oriented middleware. Knowledge of time-series databases and monitoring tools. Familiarity with data serialization formats like JSON. Expertise in Azure platforms and Kafka monitoring tools. Good problem-solving and communication skills.
Mandate: Create Kafka dashboards; Python/.NET.
Note: Candidate must be an immediate joiner.
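As a hedged sketch of the producer/consumer work described above, here is a minimal pair built with the confluent-kafka Python client. The broker address, topic name, group id, and payload are all assumptions for illustration.

```python
# Minimal producer/consumer pair with confluent-kafka.
# Broker address, topic, group id, and payload are placeholders.
import json

from confluent_kafka import Consumer, Producer

conf = {"bootstrap.servers": "localhost:9092"}

# Producer: asynchronous send with a delivery callback for error visibility.
producer = Producer(conf)

def on_delivery(err, msg):
    if err is not None:
        print(f"delivery failed: {err}")

producer.produce("metrics", json.dumps({"cpu": 0.42}).encode(), callback=on_delivery)
producer.flush()  # block until outstanding messages are delivered

# Consumer: standard poll loop; offsets auto-committed at the client default.
consumer = Consumer({**conf, "group.id": "dashboard-feed", "auto.offset.reset": "earliest"})
consumer.subscribe(["metrics"])
try:
    while True:
        msg = consumer.poll(1.0)
        if msg is None:
            continue
        if msg.error():
            print(f"consumer error: {msg.error()}")
            continue
        print(json.loads(msg.value()))
finally:
    consumer.close()
```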
Posted 2 days ago
4.0 - 8.0 years
10 - 14 Lacs
Bengaluru
Work from Office
Who we are
About Stripe
Stripe is a financial infrastructure platform for businesses. Millions of companies, from the world's largest enterprises to the most ambitious startups, use Stripe to accept payments, grow their revenue, and accelerate new business opportunities. Our mission is to increase the GDP of the internet, and we have a staggering amount of work ahead. That means you have an unprecedented opportunity to put the global economy within everyone's reach while doing the most important work of your career.
About The Team
The Reporting Platform Data Foundations group maintains and evolves the core systems that power reporting data for Stripe's users. We're responsible for Aqueduct, the data ingestion and processing platform that powers core reporting data for millions of businesses on Stripe. We integrate with the latest Data Platform tooling, such as Falcon for real-time data. Our goal is to provide a robust, scalable, and efficient data infrastructure that enables clear and timely insights for Stripe's users.
What you'll do
As a Software Engineer on the Reporting Platform Data Foundations group, you will lead efforts to improve and redesign core data ingestion and processing systems that power reporting for millions of Stripe users. You'll tackle complex challenges in data management, scalability, and system architecture.
Responsibilities
Design and implement a new backfill model for reporting data that can handle hundreds of millions of row additions and updates efficiently (an illustrative sketch follows this listing). Revamp the end-to-end experience for product teams adding or changing API-backed datasets, improving ergonomics and clarity. Enhance the Aqueduct Dependency Resolver system, responsible for determining what critical data to update for Stripe's users based on events; areas include error management, observability, and delegation of issue resolution to product teams. Lead integration with the latest Data Platform tooling, such as Falcon for real-time data, while managing deprecation of older systems. Implement and improve data warehouse management practices, ensuring data freshness and reliability. Collaborate with product teams to understand their reporting needs and data requirements. Design and implement scalable solutions for data ingestion, processing, and storage. Onboard, spin up, and mentor engineers, and set the group's technical direction and strategy.
Who you are
We're looking for someone who meets the minimum requirements to be considered for the role. If you meet these requirements, you are encouraged to apply. The preferred qualifications are a bonus, not a requirement.
Minimum Requirements
8+ years of professional experience writing high-quality, production-level code or software programs. Extensive experience in designing and implementing large-scale data processing systems. Strong background in distributed systems and data pipeline architectures. Proficiency in at least one modern programming language (e.g., Go, Java, Python, Scala). Experience with big data technologies (e.g., Hadoop, Flink, Spark, Kafka, Pinot, Trino, Iceberg). Solid understanding of data modeling and database systems. Excellent problem-solving skills and ability to tackle complex technical challenges. Strong communication skills and ability to work effectively with cross-functional teams. Experience mentoring other engineers and driving technical initiatives.
Preferred Qualifications
Experience with real-time data processing and streaming systems. Knowledge of data warehouse technologies and best practices. Experience in migrating legacy systems to modern architectures. Contributions to open-source projects or technical communities.
In-office expectations
Office-assigned Stripes in most of our locations are currently expected to spend at least 50% of the time in a given month in their local office or with users. This expectation may vary depending on role, team, and location. For example, Stripes in Stripe Delivery Center roles in Mexico City, Mexico and Bengaluru, India work 100% from the office. Also, some teams have greater in-office attendance requirements, to appropriately support our users and workflows, which the hiring manager will discuss. This approach helps strike a balance between bringing people together for in-person collaboration and learning from each other, while supporting flexibility when possible.
Pay and benefits
Stripe does not yet include pay ranges in job postings in every country. Stripe strongly values pay transparency and is working toward pay transparency globally.
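As a hedged sketch of what an idempotent, partition-scoped backfill can look like in general (not Stripe's actual Aqueduct design), here is a PySpark job that recomputes one date partition from source events and overwrites only that partition, which makes large reruns safe to retry. The paths, column names, and the date are placeholders.

```python
# Hedged sketch of an idempotent backfill step in PySpark: recompute one date
# partition and overwrite only that partition. Paths and schema are assumptions.
from pyspark.sql import SparkSession, functions as F

spark = (
    SparkSession.builder.appName("reporting_backfill")
    # Overwrite only the partitions present in the new data, not the whole table.
    .config("spark.sql.sources.partitionOverwriteMode", "dynamic")
    .getOrCreate()
)

events = spark.read.parquet("s3://example-bucket/events/")  # hypothetical path

daily = (
    events.where(F.col("event_date") == "2024-01-01")
    .groupBy("event_date", "account_id")
    .agg(F.sum("amount").alias("gross_volume"), F.count("*").alias("txn_count"))
)

# Rerunning this job for the same date yields identical partition contents,
# which is what makes backfills over many dates safe to retry.
daily.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3://example-bucket/reporting/daily_volume/"
)
```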
Posted 4 days ago
2.0 - 5.0 years
7 - 11 Lacs
Gurugram
Work from Office
About NCR Atleos
Overview
Data is at the heart of our global financial network. In fact, the ability to consume, store, analyze, and gain insight from data has become a key component of our competitive advantage. Our goal is to build and maintain a leading-edge data platform that provides highly available, consistent data of the highest quality for all users of the platform, including our customers, operations teams, and data scientists. We focus on evolving our platform to deliver exponential scale to NCR Atleos, powering our future growth. Data & AI Engineers at NCR Atleos experience working at one of the largest and most recognized financial companies in the world, while being part of a software development team responsible for next-generation technologies and solutions. Our engineers design and build large-scale data storage, computation, and distribution systems. They partner with data and AI experts to deliver high-quality AI solutions and derived data to our consumers. We are looking for Data & AI Engineers who like to innovate and seek complex problems. We recognize that strength comes from diversity and will embrace your unique skills, curiosity, drive, and passion while giving you the opportunity to grow technically and as an individual. Engineers looking to work in the areas of orchestration, data modelling, data pipelines, APIs, storage, distribution, distributed computation, consumption, and infrastructure are ideal candidates.
Responsibilities
As a Data Engineer, you will be joining a Data & AI team transforming our global financial network and improving the quality of the products and services we provide to our customers. You will be responsible for designing, implementing, and maintaining data pipelines and systems to support the organization's data needs. Your role will involve collaborating with data scientists, analysts, and other stakeholders to ensure data accuracy, reliability, and accessibility.
Key Responsibilities
Data Pipeline Development: Design, build, and maintain scalable and efficient data pipelines to collect, process, and store structured and unstructured data from various sources.
Data Integration: Integrate data from multiple sources such as databases, APIs, flat files, and streaming platforms into centralized data repositories.
Data Modeling: Develop and optimize data models and schemas to support analytical and operational requirements. Implement data transformation and aggregation processes as needed.
Data Quality Assurance: Implement data validation and quality assurance processes to ensure the accuracy, completeness, and consistency of data throughout its lifecycle (a small validation sketch follows this listing).
Performance Optimization: Monitor and optimize data processing and storage systems for performance, reliability, and cost-effectiveness. Identify and resolve bottlenecks and inefficiencies in data pipelines, and leverage automation and AI to improve overall operations.
Infrastructure Management: Manage and configure cloud-based or on-premises infrastructure components such as databases, data warehouses, compute clusters, and data processing frameworks.
Collaboration: Collaborate with cross-functional teams including data scientists, analysts, software engineers, and business stakeholders to understand data requirements and deliver solutions that meet business objectives.
Documentation and Best Practices: Document data pipelines, systems architecture, and best practices for data engineering. Share knowledge and provide guidance to colleagues on data engineering principles and techniques.
Continuous Improvement: Stay updated with the latest technologies, tools, and trends in data engineering and recommend improvements to existing processes and systems.
Qualifications and Skills:
Bachelor's degree or higher in Computer Science, Engineering, or a related field. Proven experience in data engineering or related roles, with a strong understanding of data processing concepts and technologies. Mastery of programming languages such as Python, Java, or Scala. Knowledge of database systems such as SQL, NoSQL, and data warehousing solutions. Knowledge of stream processing technologies such as Kafka or Apache Beam. Experience with distributed computing frameworks such as Apache Spark, Hadoop, or Apache Flink. Experience deploying pipelines in cloud platforms such as AWS, Azure, or Google Cloud Platform. Experience in implementing enterprise systems in production settings for AI and natural language processing. Exposure to self-supervised learning, transfer learning, and reinforcement learning is a plus. Full-stack experience to build best-fit solutions leveraging Large Language Models (LLMs) and Generative AI, with a focus on privacy, security, and fairness. Good engineering skills to design AI output with nodes and nested nodes in JSON, array, or HTML formats for as-is consumption and display on dashboards/portals. Strong problem-solving skills and attention to detail. Excellent communication and teamwork abilities. Experience with containerization and orchestration tools such as Docker and Kubernetes. Familiarity with data visualization tools such as Tableau or Power BI.
EEO Statement
NCR Atleos is an equal-opportunity employer. It is NCR Atleos' policy to hire, train, promote, and pay associates based on their job-related qualifications, ability, and performance, without regard to race, color, creed, religion, national origin, citizenship status, sex, sexual orientation, gender identity/expression, pregnancy, marital status, age, mental or physical disability, genetic information, medical condition, military or veteran status, or any other factor protected by law.
Statement to Third Party Agencies
To ALL recruitment agencies: NCR Atleos only accepts resumes from agencies on the NCR Atleos preferred supplier list. Please do not forward resumes to our applicant tracking system, NCR Atleos employees, or any NCR Atleos facility. NCR Atleos is not responsible for any fees or charges associated with unsolicited resumes.
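As an illustration of the data-validation gate described under "Data Quality Assurance", here is a minimal sketch that rejects a batch failing basic completeness and consistency checks. The column names and thresholds are assumptions for the example.

```python
# Illustrative batch-validation gate: column names and thresholds are made up.
import pandas as pd


def validate_batch(df: pd.DataFrame) -> list[str]:
    """Return human-readable failures; an empty list means the batch passes."""
    failures = []
    if df.empty:
        failures.append("batch is empty")
    if df["transaction_id"].duplicated().any():
        failures.append("duplicate transaction_id values")
    if df["amount"].isna().mean() > 0.01:  # tolerate <=1% missing amounts
        failures.append("too many missing amounts")
    if (df["amount"] < 0).any():
        failures.append("negative amounts present")
    return failures


batch = pd.DataFrame({"transaction_id": [1, 2, 3], "amount": [10.0, 5.5, 7.25]})
problems = validate_batch(batch)
if problems:
    # In a pipeline this would quarantine the batch rather than load it.
    raise ValueError(f"batch rejected: {problems}")
```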
Posted 4 days ago
7.0 - 11.0 years
0 - 0 Lacs
Gurugram
Work from Office
Key Responsibilities: Design, develop, and deploy enterprise-grade applications using Java Spring Boot on Red Hat OpenShift. Build high-throughput, real-time data pipelines using Apache Kafka and ensure efficient data flow and processing. Implement complex transformation, routing, and orchestration logic using Apache Camel and Enterprise Integration Patterns (EIPs). Develop microservices that interact with multiple protocols and data sources (HTTP, JMS, SQL/NoSQL databases). Integrate and utilize Apache Flink (or similar frameworks) for complex event processing and stream analytics. Embed observability and monitoring using tools such as Prometheus, Grafana, ELK, and OpenTelemetry to ensure system health and performance. Work closely with AI/ML teams to integrate intelligent features or enable data-driven services. Champion best practices by writing unit/integration tests, conducting code reviews, and supporting CI/CD pipelines. Analyze, troubleshoot, and optimize application and pipeline performance in production environments. Location: Gurugram. Key Skills: Java, Spring Boot, Apache Kafka, Apache Flink, Apache Camel, Kubernetes, Docker.
Posted 4 days ago
11.0 - 21.0 years
50 - 100 Lacs
Bengaluru
Hybrid
Our Engineering team is driving the future of cloud security, developing one of the world's largest, most resilient cloud-native data platforms. At Skyhigh Security, we're enabling enterprises to protect their data with deep intelligence and dynamic enforcement across hybrid and multi-cloud environments. As we continue to grow, we're looking for a Principal Data Engineer to help us scale our platform, integrate advanced AI/ML workflows, and lead the evolution of our secure data infrastructure.
Responsibilities: As a Principal Data Engineer, you will be responsible for: Leading the design and implementation of high-scale, cloud-native data pipelines for real-time and batch workloads. Collaborating with product managers, architects, and backend teams to translate business needs into secure and scalable data solutions. Integrating big data frameworks (like Spark, Kafka, Flink) with cloud-native services (AWS/GCP/Azure) to support security analytics use cases. Driving CI/CD best practices, infrastructure automation, and performance tuning across distributed environments. Evaluating and piloting the use of AI/LLM technologies in data pipelines (e.g., anomaly detection, metadata enrichment, automation). Evaluating and integrating LLM-based automation and AI-enhanced observability into engineering workflows. Ensuring data security and privacy compliance. Mentoring engineers, ensuring high engineering standards, and promoting technical excellence across teams.
What We're Looking For (Minimum Qualifications): 10+ years of experience in big data architecture and engineering, including deep proficiency with the AWS cloud platform. Expertise in distributed systems and frameworks such as Apache Spark, Scala, Kafka, Flink, and Elasticsearch, with experience building production-grade data pipelines. Strong programming skills in Java for building scalable data applications. Hands-on experience with ETL tools and orchestration systems. Solid understanding of data modeling across both relational (PostgreSQL, MySQL) and NoSQL (HBase) databases, and performance tuning.
What Will Make You Stand Out (Preferred Qualifications): Experience integrating AI/ML or LLM frameworks (e.g., LangChain, LlamaIndex) into data workflows. Experience implementing CI/CD pipelines with Kubernetes, Docker, and Terraform. Knowledge of modern data warehousing (e.g., BigQuery, Snowflake) and data governance principles (GDPR, HIPAA). Strong ability to translate business goals into technical architecture and mentor teams through delivery. Familiarity with visualization tools (Tableau, Power BI) to communicate data insights, even if not a primary responsibility.
Posted 5 days ago
1.0 - 6.0 years
7 - 11 Lacs
Mumbai
Work from Office
The Role: We are looking for a highly motivated and experienced engineer to join our team in developing the next-generation AI Agent-enhanced communications platform, capable of seamlessly integrating and expanding across various channels such as voice calls, mobile applications, texting, email, and social media posts. As a unified communication platform, it enables message delivery to customers and internal staff across several channels like Email, SMS, In-App messaging, and Social Media. This platform is utilized by applications that cover areas such as discovery, sales, orders, ownership, and service across all business sectors, including Vehicle, Energy, Insurance, and more. The platform guarantees the effective delivery of marketing campaigns and interactions between advisors and customers.
Responsibilities: Design, development, and implementation of scalable applications that involve problem solving. Must have: Leverage technologies like Golang, Apache Kafka, Postgres, and OpenSearch. Experience with integrating with LLMs and inferring responses. Nice to have: Java, Apache Flink, ClickHouse. Promote software engineering best practices via code reviews, building tools, and documentation. Leverage your existing skills while learning and implementing new, open-source technologies as Tesla grows. Work with product managers, content producers, QA engineers, and release engineers to own your solution from development to production. Define and develop unit tests and unit test libraries to ensure code development is robust and production ready. Drive software process improvements that enable progressively increased team efficiency.
Requirements: BS or MS in Computer Science or equivalent discipline. Expert experience in developing scalable Golang applications, including SQL and NoSQL databases and other open-source technologies. Design software architecture based on business requirements, strategy, and priorities. Good unit testing and integration testing practices. Experience with message queue architecture. Experience with Docker and Kubernetes. Agile/SCRUM software development process experience.
Posted 5 days ago
8.0 - 10.0 years
7 - 10 Lacs
Bengaluru
Work from Office
The Digital and eCommerce team currently operates several B2B websites and direct digital sales channels via a globally deployed cloud-based platform that is a growth engine for the organization's life science business. We provide a comprehensive catalog of all products, enabling our customers to find and purchase products as well as get detailed scientific information on those products.
ESSENTIAL JOB FUNCTIONS
Work as part of an Agile development team, taking ownership for one or more services. Provide leadership to the Agile development team, driving technical designs to support business goals. Ensure the entire team exemplifies excellence in design, code, test, and operation. A willingness to lead by example, embracing change and fostering a growth-and-learning culture on the team. Mentor team members through code reviews and design reviews. Take a lead role, working with product owners to help refine the backlog, breaking down features and epics into executable stories. Have a high-quality software mindset, making sure that the code you write works. Design and implement robust data pipelines to ingest, process, and store large volumes of data from various sources. Stay updated with the latest technologies and trends in data engineering and propose innovative solutions.
QUALIFICATIONS
Education: Bachelor's/Master's degree in computer science or equivalent.
Mandatory Skills: 8-10+ years of hands-on software engineering experience. Recent experience in Java, Kotlin, Oracle, Postgres, data handling, Spring, and Spring Boot. Experience in developing REST services. Experience in unit test frameworks. Ability to provide solutions based on business requirements. Ability to collaborate with cross-functional teams. Ability to work with global teams and a flexible work schedule. Must have excellent problem-solving skills and be customer-centric. Excellent communication skills.
Preferred Skills: Experience with microservices, CI/CD, event-oriented architectures, and distributed systems. Experience with cloud environments (e.g., Google Cloud Platform, Azure, Amazon Web Services). Experience leading product-oriented engineering development teams is a plus. Familiarity with web technologies (e.g., JavaScript, HTML, CSS), data manipulation (e.g., SQL), and version control systems (e.g., GitHub, GitLab). Familiarity with DevOps practices/principles, Agile/Scrum methodologies, CI/CD pipelines, and the product development lifecycle. Familiarity with modern web APIs and full stack frameworks. Experience with Docker, Kubernetes, Kafka, and in-memory caching is a plus. Experience with Apache Airflow, Apache Flink, and Google BigQuery is a plus. Experience developing eCommerce systems, especially B2B eCommerce, is a plus.
Posted 5 days ago
5.0 - 8.0 years
15 - 20 Lacs
Bengaluru
Work from Office
Job Description: We are looking for a highly skilled and experienced Full Stack Developer with deep expertise in microservices architecture, AWS, and modern JavaScript frameworks. The ideal candidate will have a strong background in building scalable applications and working with cloud-native technologies.
Key Responsibilities: Design and develop microservices-based solutions using best practices and design patterns. Build and consume RESTful APIs and GraphQL endpoints. Develop scalable backend services using Node.js and Next.js. Work with containerization tools like Docker and serverless principles on AWS. Implement CI/CD pipelines and automation scripts. Perform contract testing using PactFlow. Monitor applications using tools such as New Relic and Datadog. Follow agile delivery methodologies with story slicing and iterative planning. Design and work with event-driven and event-sourcing architecture. Ensure code quality through unit testing and automation practices.
Required Skills:
Architecture & Development: Microservices principles and design patterns; event-driven/event-sourcing architecture; RESTful APIs and GraphQL.
Programming Languages: JavaScript, TypeScript.
Frameworks & Tools: Node.js, Next.js, Jest for unit testing.
Cloud & DevOps: AWS (EC2, ECS, S3, SNS, SQS, Lambda, API Gateway, CloudWatch); AWS CDK and other DevOps tools; Docker and serverless architecture; Jenkins, Buildkite (CI/CD pipelines).
Database: DynamoDB, Redis.
Practices: Iterative agile delivery, story slicing, continuous delivery, contract testing (PactFlow), automation testing and scripting.
Monitoring: New Relic, Datadog, or similar observability tools.
Preferred Qualifications: Experience in a fast-paced, agile development environment. Strong problem-solving and communication skills. Ability to collaborate effectively with cross-functional teams.
Posted 6 days ago
4.0 - 9.0 years
6 - 11 Lacs
Hyderabad
Work from Office
Monday to Friday (WFO). Timings: 9 am to 6 pm.
Desired Skills & Expertise: Strong experience and mathematical understanding in one or more of Natural Language Understanding, Computer Vision, Machine Learning, and Optimization. Proven track record in effectively building and deploying ML systems using frameworks such as PyTorch, TensorFlow, Keras, scikit-learn, etc. Expertise in modular, typed, and object-oriented Python programming. Proficiency with core data science languages (such as Python, R, Scala), and familiarity and flexibility with data systems (e.g., SQL, NoSQL, knowledge graphs). Experience with financial data analysis, time series forecasting, and risk modeling. Knowledge of financial regulations and compliance requirements in the fintech industry. Familiarity with cloud platforms (AWS, GCP, or Azure) and containerization technologies (Docker, Kubernetes). Understanding of blockchain technology and its applications in fintech. Experience with real-time data processing and streaming analytics (e.g., Apache Kafka, Apache Flink). Excellent communication skills with a desire to work in multidisciplinary teams. Ability to explain complex technical concepts to non-technical stakeholders.
Posted 6 days ago
5.0 - 10.0 years
15 - 30 Lacs
Chennai
Hybrid
Job Summary: We are looking for a highly skilled Backend Data Engineer to join our growing FinTech team. In this role, you will design and implement robust data models and architectures, build scalable data ingestion pipelines, and ensure data quality across financial datasets. You will play a key role in enabling data-driven decision-making by developing efficient and secure data infrastructure tailored to the fast-paced FinTech environment.
Key Responsibilities: Design and implement scalable data models and data architecture to support financial analytics, risk modeling, and regulatory reporting. Build and maintain data ingestion pipelines using Python or Java to process high-volume, high-velocity financial data from diverse sources. Lead data migration efforts from legacy systems to modern cloud-based platforms. Develop and enforce data validation processes to ensure accuracy, consistency, and compliance with financial regulations. Create and manage task schedulers to automate data workflows and ensure timely data availability. Collaborate with product, engineering, and data science teams to deliver reliable and secure data solutions. Optimize data processing for performance, scalability, and cost-efficiency in a cloud environment.
Required Skills & Qualifications: Proficiency in Python and/or Java for backend data engineering tasks. Strong experience in data modelling, ETL/ELT pipeline development, and data architecture. Hands-on experience with data migration and transformation in financial systems. Familiarity with task scheduling tools (e.g., Apache Airflow, Cron, Luigi). Solid understanding of SQL and experience with relational and NoSQL databases. Knowledge of data validation frameworks and best practices in financial data quality. Experience with cloud platforms (AWS, GCP, or Azure), especially in data services. Understanding of data security, compliance, and regulatory requirements in FinTech.
Preferred Qualifications: Experience with big data technologies (e.g., Spark, Kafka, Hadoop). Familiarity with CI/CD pipelines, containerization (Docker), and orchestration (Kubernetes). Exposure to financial data standards (e.g., FIX, ISO 20022) and regulatory frameworks (e.g., GDPR, PCI-DSS).
Posted 1 week ago
5.0 - 10.0 years
7 - 12 Lacs
Kochi
Work from Office
Develop user-friendly web applications using Java and React.js while ensuring high performance. Design, develop, test, and deploy robust and scalable applications. Build and consume RESTful APIs. Collaborate with the design and development teams to translate UI/UX design wireframes into functional components. Optimize applications for maximum speed and scalability. Stay up-to-date with the latest Java and React.js trends, techniques, and best practices. Participate in code reviews to maintain code quality and ensure alignment with coding standards. Identify and address performance bottlenecks and other issues as they arise. Help us shape the future of event-driven technologies, including contributing to Apache Kafka, Strimzi, Apache Flink, Vert.x, and other relevant open-source projects. Collaborate within a dynamic team environment to comprehend and dissect intricate requirements for event processing solutions. Translate architectural blueprints into actualized code, employing your technical expertise to implement innovative and effective solutions. Conduct comprehensive testing of the developed solutions, ensuring their reliability, efficiency, and seamless integration. Provide ongoing support for the implemented applications, responding promptly to customer inquiries, resolving issues, and optimizing performance. Serve as a subject matter expert, sharing insights and best practices related to product development, fostering knowledge sharing within the team. Continuously monitor the evolving landscape of event-driven technologies, remaining updated on the latest trends and advancements. Collaborate closely with cross-functional teams, including product managers, designers, and developers, to ensure a holistic and harmonious product development process. Take ownership of technical challenges and lead your team to ensure successful delivery, using your problem-solving skills to overcome obstacles. Mentor and guide junior developers, nurturing their growth and development by providing guidance, knowledge transfer, and hands-on training. Engage in agile practices, contributing to backlog grooming, sprint planning, stand-ups, and retrospectives to facilitate effective project management and iteration. Foster a culture of innovation and collaboration, contributing to brainstorming sessions and offering creative ideas to push the boundaries of event processing solutions. Maintain documentation for the developed solutions, ensuring comprehensive and up-to-date records for future reference and knowledge sharing. Be involved in building and orchestrating containerized services.
Required education: Bachelor's Degree. Preferred education: Bachelor's Degree.
Required technical and professional expertise: Proven 5+ years of experience as a Full Stack developer (Java and React.js) with a strong portfolio of previous projects. Proficiency in Java, JavaScript, HTML, CSS, and related web technologies. Familiarity with RESTful APIs and their integration into applications. Knowledge of modern CI/CD pipelines and tools like Jenkins and Travis. Strong understanding of version control systems, particularly Git. Good communication skills and the ability to articulate technical concepts to both technical and non-technical team members. Familiarity with containerization and orchestration technologies like Docker and Kubernetes for deploying event processing applications. Proficiency in troubleshooting and debugging. Exceptional problem-solving and analytical abilities, with a knack for addressing technical challenges. Ability to work collaboratively in an agile and fast-paced development environment. Leadership skills to guide and mentor junior developers, fostering their growth and skill development. Strong organizational and time management skills to manage multiple tasks and priorities effectively. Adaptability to stay current with evolving event-driven technologies and industry trends. Customer-focused mindset, with a dedication to delivering solutions that meet or exceed customer expectations. Creative thinking and an innovation mindset to drive continuous improvement and explore new possibilities. Collaborative and team-oriented approach to work, valuing open communication and diverse perspectives.
Preferred technical and professional expertise
Posted 1 week ago
4.0 - 7.0 years
9 - 12 Lacs
Pune
Hybrid
So, what's the role all about?
At NiCE, a Senior Software professional specializes in designing, developing, and maintaining applications and systems using the Java programming language, playing a critical role in building scalable, robust, and high-performing applications for a variety of industries, including finance, healthcare, technology, and e-commerce.
How will you make an impact? Working knowledge of unit testing. Working knowledge of user stories or use cases. Working knowledge of design patterns or equivalent experience. Working knowledge of object-oriented software design. Team player.
Have you got what it takes? Bachelor's degree in Computer Science, Business Information Systems, or a related field, or equivalent work experience, is required. 4+ years (SE) of experience in software development. Well-established technical problem-solving skills. Experience in Java, Spring Boot, and microservices. Experience with Kafka, Kinesis, KDA, and Apache Flink. Experience with Kubernetes operators, Grafana, and Prometheus. Experience with AWS technology, including EKS, EMR, S3, Kinesis, Lambdas, Firehose, IAM, CloudWatch, etc.
You will have an advantage if you also have: Experience with Snowflake or any DWH solution. Excellent communication, problem-solving, and decision-making skills. Experience in databases. Experience in CI/CD, Git, GitHub Actions, and Jenkins-based pipeline deployments. Strong experience in SQL.
What's in it for you? Join an ever-growing, market-disrupting, global company where the teams, comprised of the best of the best, work in a fast-paced, collaborative, and creative environment! As the market leader, every day at NiCE is a chance to learn and grow, and there are endless internal career opportunities across multiple roles, disciplines, domains, and locations. If you are passionate, innovative, and excited to constantly raise the bar, you may just be our next NiCEr!
Enjoy NiCE-FLEX! At NiCE, we work according to the NiCE-FLEX hybrid model, which enables maximum flexibility: 2 days working from the office and 3 days of remote work each week. Naturally, office days focus on face-to-face meetings, where teamwork and collaborative thinking generate innovation, new ideas, and a vibrant, interactive atmosphere.
Requisition ID: 6965. Reporting into: Tech Manager. Role Type: Individual Contributor.
Posted 1 week ago
12.0 - 15.0 years
55 - 60 Lacs
Ahmedabad, Chennai, Bengaluru
Work from Office
Dear Candidate, We are hiring a Data Platform Engineer to build and maintain scalable, secure, and reliable data infrastructure for analytics and real-time processing. Key Responsibilities: Design and manage data pipelines, storage layers, and ingestion frameworks. Build platforms for batch and streaming data processing (Spark, Kafka, Flink). Optimize data systems for scalability, fault tolerance, and performance. Collaborate with data engineers, analysts, and DevOps to enable data access. Enforce data governance, access controls, and compliance standards. Required Skills & Qualifications: Proficiency with distributed data systems (Hadoop, Spark, Kafka, Airflow). Strong SQL and experience with cloud data platforms (Snowflake, BigQuery, Redshift). Knowledge of data warehousing, lakehouse, and ETL/ELT pipelines. Experience with infrastructure as code and automation. Familiarity with data quality, security, and metadata management. Soft Skills: Strong troubleshooting and problem-solving skills. Ability to work independently and in a team. Excellent communication and documentation skills. Note: If interested, please share your updated resume and preferred time for a discussion. If shortlisted, our HR team will contact you. Srinivasa Reddy Kandi Delivery Manager Integra Technologies
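As a hedged sketch of the batch-and-streaming platform work the posting above names (Spark, Kafka, Flink), here is a minimal Spark Structured Streaming job that reads from Kafka and lands raw events in a lake path. The broker, topic, and paths are placeholders, and running it also requires the spark-sql-kafka connector package.

```python
# Sketch: Spark Structured Streaming ingestion from Kafka to a parquet lake path.
# Broker, topic, and paths are placeholders; needs the spark-sql-kafka package.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("stream_ingest").getOrCreate()

raw = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")
    .option("subscribe", "clickstream")
    .load()
)

# Kafka delivers bytes; cast the payload and keep the broker timestamp.
parsed = raw.select(
    F.col("value").cast("string").alias("payload"),
    F.col("timestamp").alias("ingested_at"),
)

query = (
    parsed.writeStream.format("parquet")
    .option("path", "/tmp/lake/clickstream/")            # placeholder sink
    .option("checkpointLocation", "/tmp/chk/clickstream/")  # enables fault tolerance
    .start()
)
query.awaitTermination()
```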
Posted 1 week ago
8.0 - 13.0 years
25 - 40 Lacs
Chennai
Work from Office
Architect & Build Scalable Systems: Design and implement petabyte-scale lakehouse architectures to unify data lakes and warehouses. Real-Time Data Engineering: Develop and optimize streaming pipelines using Kafka, Pulsar, and Flink.
Required Candidate profile: Data engineering experience with large-scale systems. Expert proficiency in Java for data-intensive applications. Hands-on experience with lakehouse architectures, stream processing, and event streaming.
Posted 1 week ago
5.0 - 10.0 years
3 - 7 Lacs
Bengaluru
Work from Office
Your Role and Responsibilities: As a Technical Support Professional, you should have experience in a customer-facing leadership capacity. This role necessitates exceptional customer relationship management skills along with a solid technical grasp of the product(s) you will support. The Technical Support Professional is expected to adeptly manage conflicting priorities, thrive under pressure, and autonomously navigate tasks with minimal active guidance. The successful applicant should possess a comprehensive understanding of IBM support, development, and service processes and deliveries. Knowledge of other IBM business procedures and professional training in mediation or conflict resolution would be advantageous.
Your primary responsibilities include: Direct Problem-Solving Experience: Previous experience in addressing client issues is valuable, along with a demonstrated ability to effectively resolve problems. Strong Communication Skills: Ability to communicate clearly with both internal and external clients through spoken and written channels. Business Networking Experience: In-depth experience and understanding of the IBM and/or OEM support organizations, facilitating effective networking and collaboration. Excellent Coordination, Leadership & Organizational Skills: Exceptional coordination and organizational abilities, capable of leading diverse teams and multitasking within a team-based business network environment; proficiency in project management is beneficial. Excellence in Client Service & Client Satisfaction: Personal commitment to pursuing client satisfaction and continuous improvement in the delivery of client problem resolution. Language Skills: Proficiency in English is required, with fluency in multiple languages considered advantageous.
Required education: Bachelor's Degree. Preferred education: Master's Degree.
Required technical and professional expertise: Bachelor's degree. Experience: 5+ years. Basic knowledge of operating system administration (Windows, Linux). Basic knowledge of database administration (DB2, Oracle, MS SQL). English: fluent in speaking and writing. Analytical thinking and structured problem-solving techniques. Strong, positive customer service attitude with sensitivity to client satisfaction. Must be a self-starter, quick learner, and enjoy working in a challenging, fast-paced environment. Strong analytical and troubleshooting skills, including problem recreation, analyzing logs and traces, and debugging complex issues to determine a course of action and recommend solutions.
Preferred technical and professional experience: Master's degree in Information Technology. Knowledge of OpenShift. Knowledge of Apache Flink and Kafka. Knowledge of Kibana. Knowledge of containerization and Kubernetes. Knowledge of scripting (including Python, JavaScript). Knowledge of products in IBM's Digital Business Automation product family. Knowledge of process/data mining. Basic knowledge of LDAP. Basic knowledge of AI technologies. Fluency in speaking and writing in English. Experience in technical support is a plus.
Posted 1 week ago
3.0 - 7.0 years
17 - 20 Lacs
Bengaluru
Work from Office
Job Title: Industry & Function AI Data Engineer + S&C GN. Management Level: 09 - Consultant. Location: Primary - Bengaluru, Secondary - Gurugram.
Must-Have Skills: Data Engineering expertise; cloud platforms: AWS, Azure, GCP; proficiency in Python, SQL, PySpark, and ETL frameworks.
Good-to-Have Skills: LLM architecture; containerization tools: Docker, Kubernetes; real-time data processing tools: Kafka, Flink; certifications like AWS Certified Data Analytics Specialty, Google Professional Data Engineer, Snowflake, DBT, etc.
Job Summary: As a Data Engineer, you will play a critical role in designing, implementing, and optimizing data infrastructure to power analytics, machine learning, and enterprise decision-making. Your work will ensure high-quality, reliable data is accessible for actionable insights. This involves leveraging technical expertise, collaborating with stakeholders, and staying updated with the latest tools and technologies to deliver scalable and efficient data solutions.
Roles & Responsibilities: Build and Maintain Data Infrastructure: Design, implement, and optimize scalable data pipelines and systems for seamless ingestion, transformation, and storage of data. Collaborate with Stakeholders: Work closely with business teams, data analysts, and data scientists to understand data requirements and deliver actionable solutions. Leverage Tools and Technologies: Utilize Python, SQL, PySpark, and ETL frameworks to manage large datasets efficiently. Cloud Integration: Develop secure, scalable, and cost-efficient solutions using cloud platforms such as Azure, AWS, and GCP. Ensure Data Quality: Focus on data reliability, consistency, and quality using automation and monitoring techniques. Document and Share Best Practices: Create detailed documentation, share best practices, and mentor team members to promote a strong data culture. Continuous Learning: Stay updated with the latest tools and technologies in data engineering through professional development opportunities.
Professional & Technical Skills: Strong proficiency in programming languages such as Python, SQL, and PySpark. Experience with cloud platforms (AWS, Azure, GCP) and their data services. Familiarity with ETL frameworks and data pipeline design. Strong knowledge of traditional statistical methods and basic machine learning techniques. Knowledge of containerization tools (Docker, Kubernetes). Knowledge of LLM, RAG, and Agentic AI architectures. Certification in Data Science or related fields (e.g., AWS Certified Data Analytics Specialty, Google Professional Data Engineer).
Additional Information: The ideal candidate has a robust educational background in data engineering or a related field and a proven track record of building scalable, high-quality data solutions in the Consumer Goods sector. This position offers opportunities to design and implement cutting-edge data systems that drive business transformation, collaborate with global teams to solve complex data challenges and deliver measurable business outcomes, and enhance your expertise by working on innovative projects utilizing the latest technologies in cloud, data engineering, and AI. About Our Company | Accenture.
Qualification: Experience: Minimum 3-7 years in data engineering or related fields, with a focus on the Consumer Goods industry. Educational Qualification: Bachelor's or Master's degree in Computer Science, Information Systems, Engineering, or a related field.
Posted 1 week ago
3.0 - 8.0 years
5 - 9 Lacs
Bengaluru
Work from Office
Project Role: Application Developer. Project Role Description: Design, build, and configure applications to meet business process and application requirements. Must-have skills: Data Engineering. Good-to-have skills: NA. Minimum 3 year(s) of experience is required. Educational Qualification: 15 years full-time education.
Summary: As an Application Developer, you will be responsible for designing, building, and configuring applications to meet business process and application requirements. Your typical day will involve collaborating with team members to develop innovative solutions and ensure seamless application functionality.
Roles & Responsibilities: Expected to perform independently and become an SME. Required active participation/contribution in team discussions. Contribute to providing solutions to work-related problems. Develop and implement efficient data pipelines for data processing. Optimize data storage and retrieval processes to enhance performance.
Professional & Technical Skills: Must-Have Skills: Proficiency in Data Engineering. Strong understanding of ETL processes and data modeling. Experience with cloud platforms such as AWS or Azure. Knowledge of programming languages like Python or Java.
Additional Information: The candidate should have a minimum of 3 years of experience in Data Engineering and Flink. The candidate must have Flink knowledge. This position is based at our Bengaluru office. A 15 years full-time education is required.
Posted 1 week ago
3.0 - 8.0 years
5 - 9 Lacs
Pune
Work from Office
Project Role: Application Developer. Project Role Description: Design, build, and configure applications to meet business process and application requirements. Must-have skills: Data Engineering. Good-to-have skills: NA. Minimum 3 year(s) of experience is required. Educational Qualification: 15 years full-time education.
Summary: As an Application Developer, you will engage in the design, construction, and configuration of applications tailored to fulfill specific business processes and application requirements. Your typical day will involve collaborating with team members to understand project needs, developing innovative solutions, and ensuring that applications function seamlessly to support organizational goals. You will also participate in testing and refining applications to enhance user experience and efficiency, while staying updated on industry trends and best practices to continuously improve your contributions.
Roles & Responsibilities: Expected to perform independently and become an SME. Required active participation/contribution in team discussions. Contribute to providing solutions to work-related problems. Assist in the documentation of application processes and workflows to ensure clarity and consistency. Engage in code reviews and provide constructive feedback to peers to foster a culture of continuous improvement.
Professional & Technical Skills: Must-Have Skills: Proficiency in Data Engineering. Strong understanding of data modeling and ETL processes. Experience with cloud platforms such as AWS or Azure for data storage and processing. Familiarity with programming languages such as Python or Java for application development. Knowledge of database management systems, including SQL and NoSQL databases.
Additional Information: The candidate should have a minimum of 3 years of experience in Data Engineering and Flink. The candidate must have Flink knowledge. This position is based at our Pune office. A 15 years full-time education is required.
Posted 1 week ago
3.0 - 5.0 years
5 - 8 Lacs
Bengaluru
Work from Office
What you'll be doing: Assist in developing machine learning models based on project requirements. Work with datasets by preprocessing, selecting appropriate data representations, and ensuring data quality. Perform statistical analysis and fine-tuning using test results. Support training and retraining of ML systems as needed. Help build data pipelines for collecting and processing data efficiently. Follow coding and quality standards while developing AI/ML solutions. Contribute to frameworks that help operationalize AI models.
What we seek in you: Strong programming skills in languages such as Python and Java. Hands-on experience with at least one cloud (GCP preferred). Experience working with Docker. Environment management (e.g., venv, pip, poetry). Experience with orchestrators like Vertex AI Pipelines, Airflow, etc. Understanding of the full ML cycle end-to-end. Data engineering and feature engineering techniques. Experience with ML modelling and evaluation metrics. Experience with TensorFlow, PyTorch, or another framework. Experience with model monitoring. Advanced SQL knowledge. Awareness of streaming concepts like windowing, late arrival, and triggers (a short sketch follows this listing). Storage: Cloud SQL, Cloud Storage, Cloud Bigtable, BigQuery, Cloud Spanner, Cloud Datastore, vector databases. Ingest: Pub/Sub, Cloud Functions, App Engine, Kubernetes Engine, Kafka, microservices. Schedule: Cloud Composer, Airflow. Processing: Cloud Dataproc, Cloud Dataflow, Apache Spark, Apache Flink. CI/CD: Bitbucket + Jenkins / GitLab. Infrastructure as code: Terraform.
Life at Next: At our core, we're driven by the mission of tailoring growth for our customers by enabling them to transform their aspirations into tangible outcomes. We're dedicated to empowering them to shape their futures and achieve ambitious goals. To fulfil this commitment, we foster a culture defined by agility, innovation, and an unwavering commitment to progress. Our organizational framework is both streamlined and vibrant, characterized by a hands-on leadership style that prioritizes results and fosters growth.
Perks of working with us: Clear objectives to ensure alignment with our mission, fostering your meaningful contribution. Abundant opportunities for engagement with customers, product managers, and leadership. You'll be guided by progressive paths while receiving insightful guidance from managers through ongoing feedforward sessions. Cultivate and leverage robust connections within diverse communities of interest. Choose your mentor to navigate your current endeavors and steer your future trajectory. Embrace continuous learning and upskilling opportunities through Nexversity. Enjoy the flexibility to explore various functions, develop new skills, and adapt to emerging technologies. Embrace a hybrid work model promoting work-life balance. Access comprehensive family health insurance coverage, prioritizing the well-being of your loved ones. Embark on accelerated career paths to actualize your professional aspirations.
Who we are: We enable high-growth enterprises to build hyper-personalized solutions that transform their vision into reality. With a keen eye for detail, we apply creativity, embrace new technology, and harness the power of data and AI to co-create solutions tailor-made to meet the unique needs of our customers. Join our passionate team and tailor your growth with us!
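To make the windowing concept concrete, here is a small Apache Beam sketch (the Python SDK behind Cloud Dataflow) that assigns event timestamps and counts elements per fixed window. The element values and timestamps are made up for the example.

```python
# Beam sketch of fixed-window counting; elements and timestamps are illustrative.
import apache_beam as beam
from apache_beam.transforms.window import FixedWindows, TimestampedValue

with beam.Pipeline() as p:
    (
        p
        | beam.Create([("click", 1), ("click", 35), ("click", 65)])
        # Attach event-time timestamps (seconds) so windowing has something to act on.
        | beam.Map(lambda kv: TimestampedValue(kv[0], kv[1]))
        | beam.WindowInto(FixedWindows(60))  # 60-second fixed windows
        | beam.combiners.Count.PerElement()  # per-window counts per element
        | beam.Map(print)  # the first two clicks share a window; the third does not
    )
```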
Posted 1 week ago
5.0 - 9.0 years
12 - 18 Lacs
Hyderabad
Work from Office
Job Description: Position: Sr. Data Engineer. Experience: Minimum 7 years. Location: Hyderabad.
What You'll Do: Design and build efficient, reusable, and reliable data architecture leveraging technologies like Apache Flink, Spark, Beam, and Redis to support large-scale, real-time, and batch data processing. Participate in architecture and system design discussions, ensuring alignment with business objectives and technology strategy, and advocating for best practices in distributed data systems. Independently perform hands-on development and coding of data applications and pipelines using Java, Scala, and Python, including unit testing and code reviews. Monitor key product and data pipeline metrics, identify root causes of anomalies, and provide actionable insights to senior management on data and business health. Maintain and optimize existing datalake infrastructure, lead migrations to lakehouse architectures, and automate deployment of data pipelines and machine learning feature engineering requests. Acquire and integrate data from primary and secondary sources, maintaining robust databases and data systems to support operational and exploratory analytics. Engage with internal stakeholders (business teams, product owners, data scientists) to define priorities, refine processes, and act as a point of contact for resolving stakeholder issues. Drive continuous improvement by establishing and promoting technical standards, enhancing productivity, monitoring, tooling, and adopting industry best practices.
What You'll Bring: Bachelor's degree or higher in Computer Science, Engineering, or a quantitative discipline, or equivalent professional experience demonstrating exceptional ability. 7+ years of work experience in data engineering and platform engineering, with a proven track record in designing and building scalable data architectures. Extensive hands-on experience with modern data stacks, including datalake, lakehouse, streaming data (Flink, Spark), and AWS or equivalent cloud platforms. Cloud: AWS. Apache Flink/Spark, Redis. Database platform: Databricks. Proficiency in programming languages such as Java, Scala, and Python (good to have) for data engineering and pipeline development. Expertise in distributed data processing and caching technologies, including Apache Flink, Spark, and Redis. Experience with workflow orchestration, automation, and DevOps tools (Kubernetes, Git, Terraform, CI/CD). Ability to perform under pressure, managing competing demands and tight deadlines while maintaining high-quality deliverables. Strong passion and curiosity for data, with a commitment to data-driven decision making and continuous learning. Exceptional attention to detail and professionalism in report and dashboard creation. Excellent team player, able to collaborate across diverse functional groups and communicate complex technical concepts clearly. Outstanding verbal and written communication skills to effectively manage and articulate the health and integrity of data and systems to stakeholders.
Please feel free to contact us: 9440806850. Email ID: careers@jayamsolutions.com
Posted 1 week ago
5.0 - 10.0 years
20 - 35 Lacs
Bengaluru
Work from Office
Senior Data Engineer Our Mission SPAN is enabling electrification for all We are a mission-driven company designing, building, and deploying products that electrify the built environment, reduce carbon emissions, and slow the effects of climate change. Decarbonization is the process to reduce or remove greenhouse gas emissions, especially carbon dioxide, from entering our atmosphere. Electrification is the process of replacing fossil fuel appliances that run on gas or oil with all-electric upgrades for a cleaner way to power our lives. At SPAN, we believe in: Enabling homes and vehicles powered by clean energy Making electrification upgrades possible Building more resilient homes with reliable backup Designing a flexible and distributed electrical grid The Role As a Data Engineer you would be working to design, build, test and create infrastructure necessary for real time analytics and batch analytics pipelines. You will work with multiple teams within the org to provide analysis, insights on the data. You will also be involved in writing ETL processes that support data ingestion. You will also guide and enforce best practices for data management, governance and security. You will build infrastructure to monitor these data pipelines / ETL jobs / tasks and create tooling/infrastructure for providing visibility into these. Responsibilities We are looking for a Data Engineer with passion for building data pipelines, working with product, data science and business intelligence teams and delivering great solutions. As a part of the team you:- Acquire deep business understanding on how SPAN data flows from IoT device to cloud through the system and build scalable and optimized data solutions that impact many stakeholders. Be an advocate for data quality and excellence of our platform. Build tools that help streamline the management and operation of our data ecosystem. Ensure best practices and standards in our data ecosystem are shared across teams. Work with teams within the company to build close relationships with our partners to understand the value our platform can bring and how we can make it better. Improve data discovery by creating data exploration processes and promoting adoption of data sources across the company. Have a desire to write tools and applications to automate work rather than do everything by hand. Assist internal teams in building out data logging, alerting and monitoring for their applications Are passionate about CI/CD process. Design, develop and establish KPIs to monitor analysis and provide strategic insights to drive growth and performance. About You Required Qualifications Bachelor's Degree in a quantitative discipline: computer science, statistics, operations research, informatics, engineering, applied mathematics, economics, etc. 5+ years of relevant work experience in data engineering, business intelligence, research or related fields. Expert level production-grade, programming experience in at least one of these languages (Python, Kotlin, or other JVM based languages) Experience in writing clean, concise and well structured code in one of the above languages. Experience working with Infrastructure-as-code tools: Pulumi, Terraform, etc. Experience working with CI/CD systems: Circle-CI, Github Actions, Argo-CD, etc. Experience managing data engineering infrastructure through Docker and Kubernetes Experience working with latency data processing solutions like Flink, Prefect, AWS Kinesis, Kafka, Spark Stream processing etc. 
* Experience with SQL/relational databases and OLAP databases such as Snowflake.
* Experience working in AWS: S3, Glue, Athena, MSK, EMR, ECR, etc.

Bonus Qualifications
* Experience in the energy industry
* Experience building IoT and/or hardware products
* Understanding of electrical systems and residential loads
* Experience with data visualization using Tableau
* Experience with data-loading tools like Fivetran, as well as data-debugging tools such as Datadog

Life at SPAN
Our Bengaluru team plays a pivotal role in SPAN's continued growth and expansion. Together, we're driving engineering, product development, and operational excellence to shape the future of home energy solutions. As part of our team in India, you'll have the opportunity to collaborate closely with our teams in the US and across the globe. This international collaboration fosters innovation, learning, and growth, while helping us achieve our bold mission of electrifying homes and advancing clean energy solutions worldwide. Our in-office culture offers the chance for dynamic interactions and hands-on teamwork, making SPAN a truly collaborative environment where every team member's contribution matters.

Our climate-focused culture is driven by a team of forward-thinkers, engineers, and problem-solvers who push boundaries every day.
* Do mission-driven work: every role at SPAN directly advances clean energy adoption.
* Bring powerful ideas to life: we encourage diverse ideas and perspectives to drive stronger products.
* Nurture an innovation-first mindset: we encourage big thinking and bold action.
* Deliver exceptional customer value: we value hard work and the ability to deliver exceptional customer value.

Benefits at SPAN India
* Generous paid leave
* Comprehensive insurance & health benefits
* Centrally located office in Bengaluru with easy access to public transit, dining, and city amenities

Interested in joining our team? Apply today and we'll be in touch with the next steps!
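As an illustration of the low-latency processing stack named in the qualifications above, here is a minimal sketch of a Python Kafka consumer for device telemetry, written against the confluent-kafka client. The broker address, consumer group, and topic name are hypothetical placeholders, not details from the posting.

```python
# Minimal sketch: reading device telemetry from Kafka with confluent-kafka.
# Broker, group id, and topic are hypothetical placeholders.
from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",  # assumed local broker
    "group.id": "telemetry-readers",        # hypothetical consumer group
    "auto.offset.reset": "earliest",        # start from the oldest record
})
consumer.subscribe(["device-telemetry"])    # hypothetical topic

try:
    while True:
        msg = consumer.poll(timeout=1.0)    # wait up to 1s for a record
        if msg is None:
            continue
        if msg.error():
            print(f"consumer error: {msg.error()}")
            continue
        # A production pipeline would deserialize (e.g. Avro) and batch
        # records into S3 or Snowflake; printing stands in for that here.
        print(msg.topic(), msg.partition(), msg.offset(), msg.value())
finally:
    consumer.close()                        # commit offsets and leave the group
```

In practice the poll loop would feed a micro-batching or stream-processing step rather than print, but the consume/deserialize/commit shape stays the same.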
Posted 1 week ago
10.0 - 15.0 years
35 - 50 Lacs
Hyderabad, Bengaluru
Work from Office
Job Title: Senior Kafka Engineer
Location: Hyderabad / Bangalore
Work Mode: Work from Office | 24/7 Rotational Shifts
Type: Full-Time
Experience: 8+ Years

About the Role:
We're hiring a Senior Kafka Engineer to manage and enhance our Kafka infrastructure on AWS and the Confluent Platform. You'll lead efforts in building secure, scalable, and reliable data streaming solutions for high-impact FinTech systems.

Key Responsibilities:
* Manage and optimize Kafka and Confluent deployments on AWS
* Design and maintain Kafka producers, consumers, streams, and connectors
* Define schema, partitioning, and retention policies
* Monitor performance using Prometheus, Grafana, and Confluent tools
* Automate infrastructure using Terraform, Helm, and Kubernetes (EKS)
* Ensure high availability, security, and disaster recovery
* Collaborate with teams and share Kafka best practices

Required Skills:
* 8+ years in platform engineering, 5+ with Kafka & Confluent
* Strong Java or Python Kafka client development
* Hands-on with Schema Registry, Control Center, ksqlDB
* Kafka deployment on AWS (MSK or EC2)
* Kafka Connect, Streams, and schema tools
* Kubernetes (EKS), Terraform, Prometheus, Grafana

Nice to Have:
* FinTech or regulated industry experience
* Knowledge of TLS, SASL/OAuth, RBAC
* Experience with Flink or Spark Streaming
* Kafka governance and multi-tenancy
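To make the "Kafka client development" requirement concrete, below is a minimal sketch of an idempotent Python producer with a delivery callback, again using the confluent-kafka client; the topic, key, and payload are hypothetical.

```python
# Minimal sketch: an idempotent Kafka producer with a delivery report.
# Topic, key, and payload are hypothetical placeholders.
from confluent_kafka import Producer

producer = Producer({
    "bootstrap.servers": "localhost:9092",  # assumed local broker
    "enable.idempotence": True,             # de-duplicated, ordered writes per partition
    "acks": "all",                          # wait for all in-sync replicas
})

def on_delivery(err, msg):
    # Invoked from poll()/flush() once the broker acks (or rejects) the record.
    if err is not None:
        print(f"delivery failed: {err}")
    else:
        print(f"delivered to {msg.topic()}[{msg.partition()}] @ offset {msg.offset()}")

# Keying by account id keeps all events for one account in a single
# partition, preserving their order.
producer.produce("payments", key="acct-42",
                 value=b'{"amount": 100, "currency": "INR"}',
                 callback=on_delivery)
producer.flush()  # block until outstanding records are delivered
```

A candidate for this role would typically layer Schema Registry serialization and TLS/SASL settings on top of the same skeleton.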
Posted 2 weeks ago
8.0 - 13.0 years
13 - 17 Lacs
Bengaluru
Work from Office
We are currently seeking a Cloud Solution Delivery Lead Consultant to join our team in Bangalore, Karnataka (IN-KA), India (IN).

Data Engineer Lead
* Robust hands-on experience with industry-standard tooling and techniques, including SQL, Git, and CI/CD pipelines (mandatory)
* Management, administration, and maintenance of data streaming tools such as Kafka/Confluent Kafka and Flink
* Experience with software support for applications written in Python & SQL
* Administration, configuration, and maintenance of Snowflake & dbt
* Experience with data product environments that use tools such as Kafka Connect, Snyk, Confluent Schema Registry, Atlan, IBM MQ, SonarQube, Apache Airflow, Apache Iceberg, DynamoDB, Terraform, and GitHub
* Debugging issues, root cause analysis, and applying fixes
* Management and maintenance of ETL processes (bug fixing and batch job monitoring)

Training & Certification
* Apache Kafka Administration
* Snowflake Fundamentals/Advanced Training

Experience
* 8 years of experience in a technical role working with AWS
* At least 2 years in a leadership or management role
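Since this role covers batch job monitoring with Apache Airflow, here is a minimal sketch of a nightly ETL DAG with retries and a failure callback, assuming Airflow 2.4+; the DAG id, schedule, and load function are hypothetical placeholders.

```python
# Minimal sketch: a nightly ETL DAG with retries and a failure hook.
# DAG id, schedule, and task logic are hypothetical placeholders.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator

def load_to_warehouse(**context):
    # Placeholder body; a real task would load the day's batch via a
    # provider hook (e.g. Snowflake) keyed on the logical date.
    print("loading batch for", context["ds"])

def alert_on_failure(context):
    # Hook point for paging or chat alerts when a task fails.
    print("task failed:", context["task_instance"].task_id)

with DAG(
    dag_id="nightly_etl",                 # hypothetical
    start_date=datetime(2024, 1, 1),
    schedule="0 2 * * *",                 # run daily at 02:00
    catchup=False,
    default_args={
        "retries": 2,
        "retry_delay": timedelta(minutes=5),
        "on_failure_callback": alert_on_failure,
    },
):
    PythonOperator(task_id="load_to_warehouse",
                   python_callable=load_to_warehouse)
```

The failure callback plus retry settings are the usual starting point for the "batch job monitoring" duty named above; dashboards and alert routing build on top of them.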
Posted 2 weeks ago