Jobs
Interviews

16 Kafka Jobs

Set up a Job Alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

3.0 - 7.0 years

0 Lacs

Delhi

On-site

You will help build production-grade systems on ClickHouse: advising on schema design, planning clusters, and working on environments ranging from single-node setups to clusters with hundreds of nodes. You will also work on cloud-managed ClickHouse services and ClickHouse-related infrastructure projects, and contribute to ClickHouse itself by fixing bugs, improving documentation, creating test cases, and studying new usage patterns, ClickHouse functions, and integrations with other products.

Day-to-day tasks include installing multi-node clusters, configuration, backup and recovery, and maintenance of ClickHouse databases. Monitoring and optimizing database performance to ensure high availability and responsiveness is crucial. You will troubleshoot database issues, identify and resolve performance bottlenecks, design and implement backup and recovery strategies, and develop database security policies and procedures. Collaboration with development teams to optimize schema design and queries is essential, and you are expected to provide technical guidance and support to development and operations teams.

Experience with big data stack components such as Hadoop, Spark, Kafka, and NiFi, as well as data science and data analysis, is beneficial. Knowledge of SRE/DevOps stacks, monitoring and system-management tools such as Prometheus, Ansible, and ELK, and version control with Git is required. The role also includes handling support calls from customers using ClickHouse: diagnosing problems, designing applications, deploying and upgrading ClickHouse, and operations.
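As a hedged illustration of the schema-design work described above, here is the kind of MergeTree table definition a ClickHouse deployment might use; the table and column names are hypothetical, and the DDL is rendered as a string for illustration only:

```python
# Hypothetical example: rendering a ClickHouse MergeTree DDL for an events table.
# In MergeTree tables the ORDER BY key drives data skipping, so it should lead
# with the columns most queries filter on; PARTITION BY controls how parts are
# dropped or archived (commonly by month).

def events_table_ddl(table: str = "app.events") -> str:
    return (
        f"CREATE TABLE IF NOT EXISTS {table} (\n"
        "    event_time DateTime,\n"
        "    user_id    UInt64,\n"
        "    event_type LowCardinality(String),\n"
        "    payload    String\n"
        ") ENGINE = MergeTree\n"
        "PARTITION BY toYYYYMM(event_time)\n"
        "ORDER BY (event_type, user_id, event_time)"
    )

print(events_table_ddl())
```

In practice you would tune the ORDER BY and PARTITION BY clauses to the actual query workload rather than copy this sketch.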

Posted 4 days ago

Apply

5.0 - 10.0 years

17 - 19 Lacs

Pune

Work from Office

Required candidate profile: hands-on experience with technologies such as Hadoop, Hive, Pig, Oozie, MapReduce, Spark, Sqoop, Kafka, and Flume; working knowledge of programming languages such as Python and PySpark; and in-depth knowledge of relational databases such as PostgreSQL and MySQL.
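The MapReduce model listed among these technologies can be sketched, independent of Hadoop's actual API, as a map, shuffle, and reduce over key/value pairs in plain Python:

```python
# Toy MapReduce word count: the same map -> shuffle (group by key) -> reduce
# flow that Hadoop distributes across a cluster, run here in-process.
from collections import defaultdict

def map_phase(lines):
    # Emit (word, 1) pairs, as a Hadoop mapper would.
    for line in lines:
        for word in line.split():
            yield word.lower(), 1

def shuffle(pairs):
    # Group values by key, as the framework's shuffle/sort step does.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Sum the counts per word, as a reducer would.
    return {word: sum(counts) for word, counts in groups.items()}

counts = reduce_phase(shuffle(map_phase(["big data big", "data pipelines"])))
print(counts)  # {'big': 2, 'data': 2, 'pipelines': 1}
```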

Posted 5 days ago

Apply

4.0 - 8.0 years

4 - 8 Lacs

Hyderabad, Telangana, India

On-site

Function: Software Engineering, Backend Development (Java, J2EE, Spring)

Requirements: Bachelor's degree (STEM preferred) and a minimum of 3 years of experience in software development. Strong programming experience and background in enterprise Java, microservices, and Spring Boot. Demonstrable experience designing, building, deploying, and maintaining enterprise cloud solutions (we use Azure) on AWS, Azure, and/or GCP. Demonstrable experience with microservices-based architecture on the cloud at scale. Experience implementing event-driven architecture using Kafka or similar technologies. Experience with API management platforms and providing/consuming RESTful APIs. Experience with varied tools such as Spring Boot, REST, Hazelcast, Cassandra, NoSQL, RDBMS, Docker, Kubernetes, Kafka, and reactive programming.
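The event-driven decoupling this role asks for (with Kafka or similar) can be illustrated with a toy in-memory publish/subscribe bus; the topic and event names are invented for the example:

```python
# Minimal in-memory publish/subscribe bus illustrating the decoupling that an
# event-driven architecture gives: producers emit events to a topic without
# knowing which consumers react. Kafka adds durability, partitioning, and
# replay on top of this basic idea.
from collections import defaultdict

class EventBus:
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subscribers[topic].append(handler)

    def publish(self, topic, event):
        # Every subscriber to the topic receives the event, in order.
        for handler in self._subscribers[topic]:
            handler(event)

bus = EventBus()
audit_log = []
bus.subscribe("order.created", lambda e: audit_log.append(("audit", e["id"])))
bus.subscribe("order.created", lambda e: audit_log.append(("email", e["id"])))
bus.publish("order.created", {"id": 42})
print(audit_log)  # both consumers reacted, neither knows about the other
```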

Posted 1 week ago

Apply

4.0 - 8.0 years

15 - 30 Lacs

Noida

Work from Office

Role & responsibilities:
- Drive microservices architecture design and evolution, owning the roadmap (service boundaries, integration, tech choices) for scalability, and defining Kubernetes container sizing and resource-allocation best practices.
- Apply deep expertise in microservices architecture: designing RESTful/event-driven services, defining boundaries, and optimizing communication, with experience in refactoring/greenfield work and cloud patterns (Saga, Circuit Breaker).
- Lead platform improvements, overseeing technical enhancements for AI-driven features such as our AI Mapping Tool.
- Architect comprehensive observability, deploying metrics, tracing, and logging tools (OpenTelemetry, Prometheus, Grafana, Loki, Tempo) for real-time monitoring and high uptime.
- Define container sizing and lead Kubernetes performance benchmarking, analyzing bottlenecks to guide resource tuning and scaling for platform growth.
- Provide deployment/infrastructure expertise, guiding Helm for Kubernetes and collaborating on infrastructure needs (Terraform a plus).
- Lead tooling/automation enhancements, streamlining deployment via Helm improvements, simpler YAML, and pre-deployment validation to reduce errors.
- Lead the evolution to event-driven, distributed workflows, decoupling orchestrators with RabbitMQ and patterns like Saga/pub-sub, and integrating Redis for state/caching to improve fault tolerance and scalability.
- Collaborate across teams and stakeholders for architectural alignment, translating requirements into design and partnering for seamless implementation.
- Mentor engineers on coding, design, and architecture best practices, leading reviews and fostering engineering excellence.
- Document architecture decisions (diagrams, ADRs), clearly communicating complex technical concepts for roadmap transparency.

Preferred candidate profile:
- 5+ years in software engineering, significant experience designing distributed systems, and a proven track record of improving scalability and maintainability.
- Extensive production experience with Kubernetes and Docker: deploying, scaling, and managing apps on clusters, including cluster management on major cloud platforms.
- Proficiency in deployment automation and config management; Helm charts experience required; familiarity with CI/CD/GitOps; Terraform/IaC exposure a plus.
- Strong experience implementing observability via monitoring/logging frameworks (Prometheus, Grafana, ELK/Loki, tracing), able to instrument applications, with a proven record of optimizing distributed-system performance.
- Hands-on with message brokers (RabbitMQ/Kafka) and distributed data stores like Redis; skilled in asynchronous system design and solution selection.
- Excellent technical communication and leadership: proven ability to lead architectural discussions, build consensus, drive projects, and collaborate with Agile, cross-functional teams.
- Adept at technical documentation and diagrams, with an analytical mindset for evaluating new technologies and foreseeing design impacts on scalability, security, and maintainability.
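One of the cloud patterns named in this posting, the Circuit Breaker, can be sketched as follows. This is a simplified model: a production breaker would also "half-open" after a cool-down period to probe whether the dependency has recovered.

```python
# Minimal circuit breaker: after `threshold` consecutive failures the circuit
# "opens" and calls fail fast instead of hitting the struggling dependency.
class CircuitOpenError(Exception):
    pass

class CircuitBreaker:
    def __init__(self, threshold=3):
        self.threshold = threshold
        self.failures = 0

    def call(self, fn, *args):
        if self.failures >= self.threshold:
            raise CircuitOpenError("circuit open, failing fast")
        try:
            result = fn(*args)
        except Exception:
            self.failures += 1
            raise
        self.failures = 0  # a success resets the failure count
        return result

breaker = CircuitBreaker(threshold=2)

def flaky():
    raise RuntimeError("downstream timeout")

for _ in range(2):
    try:
        breaker.call(flaky)
    except RuntimeError:
        pass  # first two failures are passed through

try:
    breaker.call(flaky)
except CircuitOpenError as e:
    print(e)  # third call is rejected without touching the dependency
```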

Posted 1 week ago

Apply

6.0 - 10.0 years

12 - 18 Lacs

Chennai

Work from Office

Lead the design, development, and implementation of Java and Spring Boot-based solutions. Strong background in software development, system architecture, and cross-functional collaboration.

Posted 1 week ago

Apply

4.0 - 9.0 years

1 - 3 Lacs

Hyderabad

Work from Office

This is an in-person office role in Hyderabad. About the role: We're hiring a Senior Backend Engineer to design and scale the services that power video, audio, and text inputs to our multimodal LLM systems. You'll build high-performance APIs, orchestration layers, and async pipelines that can handle hundreds of thousands of concurrent users, spiky upload traffic, and compute-intensive inference workflows. You'll work closely with DevOps, ML, and frontend teams to build horizontally scalable, observability-rich, and fault-tolerant backend services that run in a cloud-native, Kubernetes-based environment. This is a foundational role with direct impact on system performance, developer velocity, and the quality of AI-driven experiences.

Key responsibilities:
- Build scalable services in Python (FastAPI preferred) with a focus on stateless APIs, idempotent operations, and graceful failure modes.
- Design systems with horizontal scaling, partition tolerance, and data-consistency trade-offs in mind.
- Build asynchronous processing pipelines for video ingestion, media preprocessing (e.g., encoding, chunking), and LLM inference triggering.
- Architect and operate messaging infrastructure (Kafka, RabbitMQ, Redis Streams, AWS SQS, Celery) for decoupled and resilient task execution.
- Optimize backend infrastructure using Kubernetes (HPA/VPA, Cluster Autoscaler), sharded databases, and replica-aware routing.
- Design, build, and maintain database schemas for high-throughput data tracking (uploads, job states, LLM output), optimizing for large-scale workloads.
- Contribute to architectural decisions involving service mesh, API gateways, and observability pipelines.
- Work with engineers across the stack, including Node.js/React services, to ensure seamless integration and data flow across the system.
- Maintain high-quality CI/CD pipelines and deployment workflows, with support for canary/blue-green strategies and rollback safety.
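The asynchronous, backpressure-aware pipelines described above can be sketched with a bounded asyncio.Queue; the job names and stages are illustrative, not part of any real system:

```python
# Sketch of an async ingestion pipeline: a bounded asyncio.Queue provides
# backpressure (the producer blocks once the queue is full) while a worker
# processes jobs concurrently.
import asyncio

async def producer(queue, jobs):
    for job in jobs:
        await queue.put(job)   # blocks when the queue is full: backpressure
    await queue.put(None)      # sentinel: no more work

async def worker(queue, results):
    while True:
        job = await queue.get()
        if job is None:
            break
        results.append(f"processed:{job}")

async def main():
    queue = asyncio.Queue(maxsize=2)   # small buffer -> upstream slows down
    results = []
    await asyncio.gather(
        producer(queue, ["clip1", "clip2", "clip3"]),
        worker(queue, results),
    )
    return results

print(asyncio.run(main()))
```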
Must-have skills:
- Expert-level Python backend development for high-throughput, low-latency systems (FastAPI, asyncio, uvicorn/gunicorn stack).
- Strong experience with message queues and async task execution using Kafka, RabbitMQ, Redis Streams, SQS, or Celery.
- Deep familiarity with Kubernetes orchestration: autoscaling, rollout strategies, pod/network tuning, and node-pool management.
- Strong command of relational (PostgreSQL) and NoSQL (MongoDB, Redis) databases: modeling, indexing, tuning, partitioning.
- Knowledge of distributed caching, rate limiting, circuit breakers, and backpressure strategies.
- Familiarity with chunked/resumable uploads, file session tracking, and large-object handling.
- Exposure to CQRS, event sourcing, message-driven design, and domain-driven architecture.
- Experience with observability stacks: Prometheus, Loki, Grafana, Tempo, and OpenTelemetry for distributed tracing, logging, and metrics.
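Of the strategies listed, rate limiting is easy to sketch as a token bucket; the injectable clock below is a testing convenience rather than a production suggestion (a real service would use time.monotonic):

```python
# Token-bucket rate limiter sketch: tokens refill at `rate` per second up to
# `capacity`; each allowed request spends one token, so short bursts pass and
# sustained overload is throttled.
class TokenBucket:
    def __init__(self, rate, capacity, clock):
        self.rate = rate            # tokens added per second
        self.capacity = capacity    # burst size
        self.tokens = capacity
        self.clock = clock
        self.last = clock()

    def allow(self):
        now = self.clock()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

t = [0.0]  # fake clock we can advance by hand
bucket = TokenBucket(rate=1, capacity=2, clock=lambda: t[0])
burst = [bucket.allow(), bucket.allow(), bucket.allow()]  # burst of 2, then refused
t[0] = 1.0                                                # one second passes
print(burst, bucket.allow())  # [True, True, False] True
```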

Posted 1 month ago

Apply

2.0 - 4.0 years

8 - 10 Lacs

Bengaluru

Work from Office

Role & responsibilities (Node + Angular):
- Develop, test, and maintain high-quality web applications using Angular.
- Design and implement responsive user interfaces, enhancing user experience.
- Create and optimize backend services using Node.js and Java for reliability, performance, and scalability.
- Collaborate with cross-functional teams to define, design, and deliver new features.
- Take ownership of projects, from initial planning to production deployment, ensuring alignment with business goals.
- Identify and resolve performance bottlenecks, fix bugs, and drive continuous improvement.
- Ensure code quality through regular code reviews, testing, and adherence to best practices.
- Stay up to date with the latest technologies and frameworks so applications remain modern and maintainable.

Qualifications:
- Experience with Kafka, Redis, WebSocket, and SQL databases such as PostgreSQL (or similar technologies).
- Bachelor's degree in Computer Science, Engineering, or a related field (or equivalent work experience).
- Proven experience with Angular (latest versions), Node.js, and Java in a production environment.
- Strong understanding of HTML, CSS, and JavaScript; experience with responsive design.
- Experience with RESTful APIs and asynchronous programming.
- Familiarity with version control (e.g., Git) and CI/CD pipelines.
- Strong problem-solving skills, with a proactive and solution-oriented approach.
- Ability to take ownership of projects and independently drive them to success.
- Excellent communication and collaboration skills.

Nice to have:
- Experience with cloud platforms (AWS, Azure, etc.).
- Knowledge of database technologies, both SQL and NoSQL.
- Familiarity with DevOps practices and containerization (Docker, Kubernetes).
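The asynchronous-programming requirement often shows up in practice as retry-with-backoff around flaky API calls; this is a minimal sketch with an injected sleep function so it stays deterministic (all names are illustrative):

```python
# Exponential-backoff retry sketch: each failed attempt doubles the wait
# before trying again, which smooths load on a struggling upstream service.
def retry(fn, attempts=4, base_delay=0.1, sleep=lambda s: None):
    delays = []
    for attempt in range(attempts):
        try:
            return fn(), delays
        except Exception:
            if attempt == attempts - 1:
                raise                              # out of attempts: give up
            delay = base_delay * (2 ** attempt)    # 0.1, 0.2, 0.4, ...
            delays.append(delay)
            sleep(delay)

calls = {"n": 0}
def flaky_api():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("503 from upstream")
    return "ok"

result, waited = retry(flaky_api)
print(result, waited)  # succeeds on the third call after backing off twice
```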

Posted 1 month ago

Apply

8.0 - 13.0 years

40 - 55 Lacs

Hyderabad, Chennai, Bengaluru

Hybrid

Staff Software Engineer (Fullstack). Experience: 8-14 years. Salary: competitive. Preferred notice period: 30 days or less. Shift: 9:30 AM to 6:30 PM IST. Opportunity type: Hybrid (Chennai). Placement type: Permanent. (Note: this is a requirement for one of Uplers' clients.) Must-have skills: React.js, Angular, Node.js, Java, or Python; AWS or Azure; and Kafka, CI/CD, or Jenkins.

FourKites (one of Uplers' clients) is looking for a Staff Software Engineer (Fullstack) who is passionate about their work, eager to learn and grow, and committed to delivering exceptional results. If you are a team player with a positive attitude and a desire to make a difference, we want to hear from you.

Job description: As a Staff Software Engineer, Fullstack (AI/ML), you will play a pivotal role in designing and developing scalable, intelligent, user-centric applications that integrate AI/ML capabilities into real-world logistics solutions. You'll work at the intersection of data science and product engineering, contributing to strategic technical decisions and mentoring teams to build robust, high-performing systems.

What you'll do:
- Design, develop, and deploy end-to-end fullstack applications: frontend (React/Angular) and backend (Java, Python, Node.js).
- Lead technical architecture and system design for complex, AI/ML-enabled product features.
- Collaborate closely with data science teams to integrate ML models into production environments using scalable MLOps practices.
- Build high-availability microservices and data pipelines with clean, maintainable code.
- Partner with Product, UX, and Engineering leadership to align on technical feasibility, scalability, and product vision.
- Ensure quality, security, and performance through code reviews, testing strategies, and CI/CD automation.
- Mentor junior engineers and contribute to a strong engineering culture.
- Stay current with emerging trends in AI/ML, cloud infrastructure, and scalable system design.

What you'll bring:
- Education: Bachelor's or Master's degree in Computer Science, Engineering, or a related technical field.
- Experience: 10+ years of software development experience with a strong foundation in fullstack engineering and distributed systems.
- Proficiency in backend technologies such as Java, Python, and Node.js, and frontend frameworks such as React or Angular.
- Proven experience working on AI/ML-driven products: understanding of model integration, versioning, inference performance, and lifecycle management.
- Experience building scalable solutions on cloud platforms (AWS, GCP, or Azure).
- Strong knowledge of RESTful APIs, microservices, and event-driven architectures (Kafka, etc.).
- Experience with DevOps practices, CI/CD, Docker, Kubernetes, and infrastructure automation.
- Ability to work in cross-functional, agile teams across global time zones.
- Strong communication, leadership, and decision-making skills.

Nice to have:
- Domain knowledge in logistics, supply chain, or transportation systems.
- Familiarity with MLOps tools such as MLflow, SageMaker, or TensorFlow Serving.
- Exposure to real-time data processing frameworks (e.g., Apache Spark, Flink).
- Contributions to open-source projects or technical blogs.

How to apply (easy 3-step process): 1. Click Apply and register or log in on our portal. 2. Upload an updated resume and complete the screening form. 3. Increase your chances of getting shortlisted and meet the client for the interview!

About our client: FourKites is the global leader in AI-driven supply chain transformation technologies, helping the world's biggest brands and businesses turn data into action like never before. About Uplers: our goal is to make hiring and getting hired reliable, simple, and fast, and to help our talents find and apply for relevant product and engineering job opportunities and progress in their careers.
(Note: There are many more opportunities apart from this on the portal.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
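The model-integration and versioning concerns this role mentions can be sketched as a tiny in-process model registry; real deployments would use an MLOps tool (MLflow, SageMaker, etc.) rather than this toy, and the "models" here are plain functions standing in for trained artifacts:

```python
# Versioned model registry sketch: predictions are served from a pinned
# default version, so deploys are reproducible and rollback is just
# re-pinning the previous version.
class ModelRegistry:
    def __init__(self):
        self._models = {}
        self.default = None

    def register(self, name, version, predict_fn, make_default=False):
        self._models[(name, version)] = predict_fn
        if make_default:
            self.default = (name, version)

    def predict(self, features, name=None, version=None):
        key = (name, version) if name else self.default
        return {"model": key, "prediction": self._models[key](features)}

registry = ModelRegistry()
# Hypothetical ETA models: distance divided by an assumed average speed.
registry.register("eta", "v1", lambda f: f["distance_km"] / 50, make_default=True)
registry.register("eta", "v2", lambda f: f["distance_km"] / 55)

print(registry.predict({"distance_km": 110}))              # served by pinned v1
print(registry.predict({"distance_km": 110}, "eta", "v2"))  # explicit version
```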

Posted 1 month ago

Apply

8.0 - 13.0 years

40 - 55 Lacs

Kolkata, Hyderabad, Chennai

Hybrid

(Role description identical to the FourKites Staff Software Engineer (Fullstack) listing above; this posting differs only in location.)

Posted 1 month ago

Apply

8.0 - 13.0 years

40 - 55 Lacs

New Delhi, Bengaluru, Mumbai (All Areas)

Hybrid

(Role description identical to the FourKites Staff Software Engineer (Fullstack) listing above; this posting differs only in location.)

Posted 1 month ago

Apply

10.0 - 16.0 years

35 - 40 Lacs

Pune

Work from Office

Job Title: Lead Engineer, VP. Location: Pune, India.

Role description: As a Lead Engineer in our Technology team, you will partner with business and technology managers to create designs, solutions, and strongly engineered applications. You'll be an integral part of the bank's technology infrastructure, guiding the feature teams through engineering practices and leading by example. You will enable the digital environment that helps our people share their knowledge, expertise, and real passion for our business. Deutsche Bank is investing heavily in technology, which means we are investing in you. Join us here, and you'll constantly be looking ahead.

Deutsche Bank's Corporate Bank division is a leading provider of cash management, trade finance, and securities finance. We complete green-field projects that deliver the best Corporate Bank - Securities Services products in the world. Our team is diverse, international, and driven by a shared focus on clean code and valued delivery. At every level, agile minds are rewarded with competitive pay, support, and opportunities to excel.

You will work as part of a cross-functional agile delivery team, bringing an innovative approach to software development and focusing on the latest technologies and practices as part of a relentless focus on business value. You will see engineering as a team activity, with a predisposition to open code, open discussion, and a supportive, collaborative environment, and you will be ready to contribute to all stages of software delivery, from initial analysis through to production support.

What we'll offer you: As part of our flexible scheme, here are just some of the benefits you'll enjoy.

Your key responsibilities:
- Accountable for L3/application support teams in Trade Finance.
- Passion to deliver state-of-the-art solutions as a hands-on technologist and high code committer.
- Lead a team of engineers and create high-performing teams.
- Provide a stable production environment by resolving production incidents.
- Lead the team of software engineers and QA members within the L3/application support function.
- Keep hygiene, risk and control, and stability at the core of every delivery.
- Be a role model for the team; look beyond and go the extra mile to add value to the technology organization beyond the functional role.
- Lead by example and keep client centricity at the core.

Your skills and experience:
- 12+ years of relevant experience.
- A hands-on technologist with expertise in Java-based technologies, Spring, microservices architecture, CI/CD pipelines, and Oracle SQL.
- Ability to solve business problems with an acute engineering mindset; strong analytical and design skills.
- Proficient communication skills and proficient English language skills (written/verbal).
- Ability to work in virtual teams and in matrixed organizations.
- Prior experience with incident management and problem management; keeps pace with technical innovation.
- Java and J2EE technologies: Spring MVC, JMS, Spring Batch, Oracle, Unix commands, CI/CD, Git/SVN.
- JUnit/Mockito (or any unit-test framework); SonarQube/Emma code-coverage and code-quality tools.
- Familiarity with build tools such as Ant, Maven, and Gradle; CI/CD experience with Jenkins and TeamCity.
- Performance-monitoring tools such as Java heap analyzers, VisualVM, and JMX Console.
- Microservices: Spring Boot, Spring Cloud, Kubernetes, API Gateway, SSL, OAuth, JWT, Spring JPA, Kafka.
- Updated relevant skill certification; a Bachelor's degree; passionate about technology and a go-getter.

How we'll support you: training and development to help you excel in your career, coaching and support from experts in your team, a culture of continuous learning to aid progression, and a range of flexible benefits that you can tailor to suit your needs.
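Because brokers such as Kafka and JMS can redeliver messages, L3/support engineers often lean on idempotent consumers so redeliveries cannot corrupt state. A minimal sketch (the message shape and IDs are invented for the example):

```python
# Idempotent-consumer sketch: the handler remembers processed message IDs and
# silently skips duplicates, so at-least-once delivery behaves like
# exactly-once from the application's point of view.
class IdempotentConsumer:
    def __init__(self):
        self.processed_ids = set()
        self.ledger = []

    def handle(self, message):
        if message["id"] in self.processed_ids:
            return False                      # duplicate delivery: ignore
        self.ledger.append(message["amount"])
        self.processed_ids.add(message["id"])
        return True

consumer = IdempotentConsumer()
deliveries = [{"id": "m1", "amount": 100},
              {"id": "m2", "amount": 50},
              {"id": "m1", "amount": 100}]    # broker redelivered m1
applied = [consumer.handle(m) for m in deliveries]
print(applied, sum(consumer.ledger))  # the redelivery changes nothing
```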

Posted 1 month ago

Apply

8.0 - 13.0 years

25 - 40 Lacs

Chennai

Work from Office

Architect & build scalable systems: design and implement petabyte-scale lakehouse architectures to unify data lakes and warehouses. Real-time data engineering: develop and optimize streaming pipelines using Kafka, Pulsar, and Flink.

Required candidate profile: data engineering experience with large-scale systems; expert proficiency in Java for data-intensive applications; hands-on experience with lakehouse architectures, stream processing, and event streaming.
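The stream-processing work described here typically revolves around windowed aggregations; a tumbling-window sum over (timestamp, value) events can be sketched in plain Python (engines like Flink or Kafka Streams do the same computation at scale, with event data here made up for the example):

```python
# Tumbling-window aggregation sketch: events are bucketed into fixed,
# non-overlapping 60-second windows keyed by the window's start time.
from collections import defaultdict

def tumbling_window_sum(events, window_seconds=60):
    windows = defaultdict(int)
    for ts, value in events:
        window_start = ts - (ts % window_seconds)   # bucket by window start
        windows[window_start] += value
    return dict(windows)

events = [(5, 2), (59, 3), (61, 10), (130, 1)]
print(tumbling_window_sum(events))  # {0: 5, 60: 10, 120: 1}
```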

Posted 1 month ago

Apply

2.0 - 5.0 years

9 - 17 Lacs

Pune

Work from Office

Job Description: Looking for an experienced Big Data Developer to develop, maintain, and optimize our big data solutions. The ideal candidate will have strong expertise in Java, Spark, and API development, and a deep understanding of data engineering and big data tools.

Key Responsibilities:
- Design, develop, and maintain big data solutions using Java and Apache Spark.
- Develop and manage APIs for data ingestion, processing, and extraction.
- Collaborate with data scientists, data analysts, and other stakeholders to understand data requirements and deliver efficient solutions.
- Optimize and tune big data workflows for performance and scalability.
- Implement data-engineering best practices to ensure data quality, integrity, and security.
- Work with big data tools and technologies such as Hadoop, HDFS, Hive, HBase, and Kafka as needed.
- Troubleshoot and resolve issues related to data processing and workflows.
- Stay updated on the latest trends and advancements in big data technologies and tools.

Required Qualifications:
- Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
- 4-8 years of experience in big data development and data engineering.
- Proficiency in Java and experience with Apache Spark.
- Experience in API development and integration.
- Strong understanding of data-engineering principles and big data concepts.
- Familiarity with big data tools such as Hadoop, HDFS, Hive, HBase, and Kafka.
- Experience with SQL and NoSQL databases.
- Excellent problem-solving and analytical skills.
- Strong communication and collaboration skills.

Preferred Qualifications:
- Experience with cloud platforms, Spark, data/delta lakes, and the modern data stack.
- Knowledge of containerization technologies like Docker and Kubernetes.
- Experience with continuous integration and continuous deployment (CI/CD) pipelines.
- Familiarity with machine-learning frameworks and libraries.
Benefits:
- Competitive salary and benefits package.
- Opportunity to work with cutting-edge technologies in a dynamic and collaborative environment.
- Professional growth and development opportunities.
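The Spark work this role describes rests on the map/reduce-by-key model. As an illustrative sketch only (not the Spark API), the classic word count can be expressed with plain `java.util.stream` so it runs without a cluster:

```java
import java.util.Arrays;
import java.util.Map;
import java.util.stream.Collectors;

public class WordCount {
    /**
     * The canonical Spark example -- split, map to pairs, reduce by key --
     * expressed with plain Java streams instead of RDD operations.
     */
    public static Map<String, Long> count(String text) {
        return Arrays.stream(text.toLowerCase().split("\\W+"))
                .filter(w -> !w.isEmpty())                               // drop empty tokens
                .collect(Collectors.groupingBy(w -> w, Collectors.counting()));
    }

    public static void main(String[] args) {
        System.out.println(count("to be or not to be"));
    }
}
```

Spark applies the same shape (`flatMap` → `mapToPair` → `reduceByKey`) but partitions the data across executors.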

Posted 1 month ago

Apply

5.0 - 8.0 years

8 - 15 Lacs

Pune

Work from Office

Core Technical Skills:
- Design and develop robust backend solutions using Java 11+/17 and Spring Boot.
- Build, test, and maintain scalable microservices in a cloud environment (AWS).
- Work with Kafka or other messaging systems for event-driven architecture.
- Write clean, maintainable code with high test coverage.

Tools & Reporting: Java 11+/17, Spring Boot, AWS, Kafka

Soft Skills: Strong communication and coordination with app teams; analytical thinking and problem-solving; ability to work independently or collaboratively.

Database Skills: Snowflake architecture & performance tuning, Oracle DB, SQL optimization, data governance, RBAC, data replication, Time Travel & cloning, dynamic data masking, OEM & AWR reports, Apps DBA experience.
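The event-driven architecture this posting asks for decouples producers from consumers via topics. A hedged, in-memory toy (the `EventBus` class is illustrative; Kafka adds durability, partitioning, and asynchronous delivery):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

public class EventBus {
    // topic -> handlers; a stand-in for a Kafka topic and its consumer groups
    private final Map<String, List<Consumer<String>>> subscribers = new HashMap<>();

    public void subscribe(String topic, Consumer<String> handler) {
        subscribers.computeIfAbsent(topic, t -> new ArrayList<>()).add(handler);
    }

    public void publish(String topic, String event) {
        for (Consumer<String> handler : subscribers.getOrDefault(topic, List.of())) {
            handler.accept(event); // Kafka would deliver asynchronously and durably
        }
    }

    public static void main(String[] args) {
        EventBus bus = new EventBus();
        bus.subscribe("orders", e -> System.out.println("audit: " + e));
        bus.subscribe("orders", e -> System.out.println("billing: " + e));
        bus.publish("orders", "order created");
    }
}
```

The design point the toy illustrates: the publisher never names its consumers, so new services can subscribe without touching producer code.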

Posted 2 months ago

Apply

9.0 - 14.0 years

50 - 85 Lacs

Noida

Work from Office

About the Role: We are looking for a Staff Engineer specialized in Master Data Management to design and develop our next-generation MDM platform. This role is ideal for engineers who have created or contributed significantly to MDM solutions. You'll lead the architecture and development of our core MDM engine, focusing on data modeling, matching algorithms, and governance workflows that enable our customers to achieve a trusted, 360-degree view of their critical business data.

A Day in the Life:
- Collaborate with data scientists, product managers, and engineering teams to define system architecture and design.
- Architect and develop scalable, fault-tolerant MDM platform components that handle various data domains.
- Design and implement sophisticated entity-matching and merging algorithms to create golden records across disparate data sources.
- Develop or integrate flexible data-modeling frameworks that can adapt to different industries and use cases.
- Create robust data-governance workflows, including approval processes, audit trails, and role-based access controls.
- Build data-quality monitoring and remediation capabilities into the MDM platform.
- Collaborate with product managers, solution architects, and customers to understand industry-specific MDM requirements.
- Develop REST APIs and integration patterns for connecting the MDM platform with various enterprise systems.
- Mentor junior engineers and promote best practices in MDM solution development.
- Lead technical design reviews and contribute to the product roadmap.

What You Need:
- 8+ years of software-engineering experience, with at least 5 years focused on developing master data management solutions or components.
- Proven experience creating or significantly contributing to commercial MDM platforms, data-integration tools, or similar enterprise data-management solutions.
- Deep understanding of MDM concepts including data modeling, matching/merging algorithms, data governance, and data-quality management.
- Strong expertise in at least one major programming language such as Java, Scala, Python, or Go.
- Experience with database technologies, both relational (Snowflake, Databricks, PostgreSQL) and NoSQL (MongoDB, Elasticsearch).
- Knowledge of data-integration patterns and ETL/ELT processes.
- Experience designing and implementing RESTful APIs and service-oriented architectures.
- Understanding of cloud-native development and deployment on AWS or Azure.
- Familiarity with containerization (Docker) and orchestration tools (Kubernetes).
- Experience with event-driven architectures and messaging systems (Kafka, RabbitMQ).
- Strong understanding of data security and privacy considerations, especially for sensitive master data.
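The matching/merging step this role centers on typically scores candidate record pairs with string-similarity measures. A deliberately simple sketch, assuming a normalized Levenshtein score (the `EntityMatcher` class is hypothetical; production MDM engines combine several scorers and blocking strategies):

```java
public class EntityMatcher {
    /**
     * Normalized Levenshtein similarity in [0, 1]; 1.0 means identical.
     * Computes edit distance with the standard dynamic-programming table,
     * then scales by the longer string's length.
     */
    public static double similarity(String a, String b) {
        a = a.toLowerCase().trim();
        b = b.toLowerCase().trim();
        int[][] d = new int[a.length() + 1][b.length() + 1];
        for (int i = 0; i <= a.length(); i++) d[i][0] = i;
        for (int j = 0; j <= b.length(); j++) d[0][j] = j;
        for (int i = 1; i <= a.length(); i++) {
            for (int j = 1; j <= b.length(); j++) {
                int cost = a.charAt(i - 1) == b.charAt(j - 1) ? 0 : 1;
                d[i][j] = Math.min(Math.min(d[i - 1][j] + 1,   // deletion
                                            d[i][j - 1] + 1),  // insertion
                                   d[i - 1][j - 1] + cost);    // substitution
            }
        }
        int max = Math.max(a.length(), b.length());
        return max == 0 ? 1.0 : 1.0 - (double) d[a.length()][b.length()] / max;
    }

    public static void main(String[] args) {
        System.out.println(similarity("Jon Smith", "John Smith")); // 0.9
    }
}
```

In practice, pairs scoring above a survivorship threshold are merged into a golden record; borderline scores are routed to a data-steward review queue.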

Posted 2 months ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies