Jobs
Interviews

13218 Kafka Jobs - Page 12

Set up a Job Alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

15.0 years

0 Lacs

India

Remote

About Us
MyRemoteTeam, Inc is a fast-growing distributed workforce enabler, helping companies scale with top global talent. We empower businesses by providing world-class software engineers, operations support, and infrastructure to help them grow faster and better.

Job Title: AWS Cloud Architect
Experience: 15+ Years
Location: Any Infosys DC

We are looking for Cloud Application Principal Engineers with a skill set spanning database development, architectural design, implementation, and performance tuning on both on-premise and cloud technologies.

Mandatory Skills
✔ 15+ years in Java Full Stack (Spring Boot, Microservices, ReactJS)
✔ Cloud Architecture: AWS EKS, Kubernetes, API Gateway (APIGEE/Tyk)
✔ Event Streaming: Kafka, RabbitMQ
✔ Database Mastery: PostgreSQL (performance tuning, scaling)
✔ DevOps: GitLab CI/CD, Terraform, Grafana/Prometheus
✔ Leadership: Technical mentoring, decision-making

About the Role
We are seeking a highly experienced AWS Cloud Architect with 15+ years of expertise in full-stack Java development, cloud-native architecture, and large-scale distributed systems. The ideal candidate will be a technical leader capable of designing, implementing, and optimizing high-performance cloud applications across on-premise and multi-cloud environments (AWS). This role requires deep hands-on skills in Java, Microservices, Kubernetes, Kafka, and observability tools, along with a strong architectural mindset to drive innovation and mentor engineering teams.

Key Responsibilities
Cloud-Native Architecture & Leadership: Lead the design, development, and deployment of scalable, fault-tolerant cloud applications (AWS EKS, Kubernetes, Serverless). Define best practices for microservices, event-driven architecture (Kafka), and API management (APIGEE/Tyk). Architect hybrid cloud solutions (on-premise + AWS/GCP) with security, cost optimization, and high availability.
Full-Stack Development: Develop backend services using Java, Spring Boot, and PostgreSQL (performance tuning, indexing, replication). Build modern frontends with ReactJS (state management, performance optimization). Design REST/gRPC APIs and event-driven systems (Kafka, SQS).

DevOps & Observability: Manage Kubernetes (EKS) clusters, Helm charts, and GitLab CI/CD pipelines. Implement Infrastructure as Code (IaC) using Terraform/CloudFormation. Set up monitoring (Grafana, Prometheus), logging (ELK), and alerting for production systems.

Database & Performance Engineering: Optimize PostgreSQL for high throughput, replication, and low-latency queries. Troubleshoot database bottlenecks, caching (Redis), and connection pooling. Design data migration strategies (on-premise → cloud).

Mentorship & Innovation: Mentor junior engineers and conduct architecture reviews. Drive POCs on emerging tech (Service Mesh, Serverless, AI/ML integrations). Collaborate with CTO/Architects on long-term technical roadmaps.
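For readers new to the event-driven architecture this role emphasizes, here is a minimal, hypothetical Python sketch of Kafka-style topic and consumer-group semantics (an in-memory toy, not the real Kafka client; all names invented): each consumer group sees every published message exactly once, independently of other groups.

```python
from collections import defaultdict

class MiniBroker:
    """Toy in-memory broker illustrating Kafka-style topics and
    consumer groups: each group replays the full log independently,
    tracking its own read offset per topic."""

    def __init__(self):
        self.topics = defaultdict(list)    # topic -> append-only message log
        self.offsets = defaultdict(int)    # (topic, group) -> next offset to read

    def publish(self, topic, message):
        self.topics[topic].append(message)

    def poll(self, topic, group):
        """Return the next unread message for this consumer group, or None."""
        offset = self.offsets[(topic, group)]
        log = self.topics[topic]
        if offset >= len(log):
            return None
        self.offsets[(topic, group)] = offset + 1
        return log[offset]

broker = MiniBroker()
broker.publish("orders", {"id": 1, "total": 99})
broker.publish("orders", {"id": 2, "total": 45})

# Two independent consumer groups each see the full stream.
first_billing = broker.poll("orders", "billing")
first_audit = broker.poll("orders", "audit")
```

The key design point the toy captures is that the broker stores an append-only log and consumers track offsets, which is what lets multiple downstream services (billing, audit) consume the same events without interfering.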

Posted 1 day ago

Apply

0 years

0 Lacs

India

On-site

About Netskope
Today, there's more data and users outside the enterprise than inside, causing the network perimeter as we know it to dissolve. We realized a new perimeter was needed, one that is built in the cloud and follows and protects data wherever it goes, so we started Netskope to redefine Cloud, Network and Data Security. Since 2012, we have built the market-leading cloud security company and an award-winning culture powered by hundreds of employees spread across offices in Santa Clara, St. Louis, Bangalore, London, Paris, Melbourne, Taipei, and Tokyo. Our core values are openness, honesty, and transparency, and we purposely developed our open desk layouts and large meeting spaces to support and promote partnerships, collaboration, and teamwork. From catered lunches and office celebrations to employee recognition events and social professional groups such as the Awesome Women of Netskope (AWON), we strive to keep work fun, supportive and interactive. Visit us at Netskope Careers. Please follow us on LinkedIn and Twitter @Netskope.

About The Role
Please note, this team is hiring across all levels, and candidates are individually assessed and appropriately leveled based upon their skills and experience. The Data Engineering team builds and optimizes systems spanning data ingestion, processing, storage optimization and more. We work closely with engineers and the product team to build highly scalable systems that tackle real-world data problems and provide our customers with accurate, real-time, fault-tolerant solutions to their ever-growing data needs. We support various OLTP and analytics environments, including our Advanced Analytics and Digital Experience Management products. We are looking for skilled engineers experienced with building and optimizing Multi-Agent & Agentic RAG workflows in production. You will work closely with other engineers and the product team to build highly scalable systems that tackle real-world data problems.
Our customers depend on us to provide accurate, real-time and fault-tolerant solutions to their ever-growing data needs. This is a hands-on, impactful role that will help build an embedded AI CoPilot across the different products at Netskope.

What's In It For You
You will be part of a growing team of renowned industry experts in the exciting space of Data and Cloud Analytics. Your contributions will have a major impact on our global customer base and across the industry through our market-leading products. You will solve complex, interesting challenges, and improve the depth and breadth of your technical and business skills.

What You Will Be Doing
Drive the end-to-end development and deployment of CoPilot, an embedded assistant powered by cutting-edge Multi-Agent Workflows. This will involve designing and implementing complex interactions between various AI agents & tools to deliver seamless, context-aware assistance across our product suite. Architect and implement scalable data pipelines for processing large-scale datasets from logs, network traffic, and cloud environments. Apply MLOps & LLMOps best practices to deploy and monitor machine learning models & agentic workflows in production. Implement comprehensive evaluation and observability strategies for the CoPilot. Lead the design, development, and deployment of AI/ML models for threat detection, anomaly detection, and predictive analytics in cloud and network security. Collaborate with cloud architects and security analysts to develop cloud-native security solutions across platforms like AWS, Azure, or GCP. Build and optimize Retrieval-Augmented Generation (RAG) systems by integrating large language models (LLMs) with vector databases for real-time, context-aware applications. Analyze network traffic, log data, and other telemetry to identify and mitigate cybersecurity threats. Ensure data quality, integrity, and compliance with GDPR, HIPAA, or SOC 2 standards.
Drive innovation by integrating the latest AI/ML techniques into security products and services. Mentor junior engineers and provide technical leadership across projects.

Required Skills And Experience
AI/ML Expertise: Has built & deployed a multi-agent or agentic RAG workflow in production. Expertise in prompt engineering patterns such as chain of thought, ReAct, zero/few shot. Experience with LangGraph/AutoGen/AWS Bedrock/Pydantic AI/Crew AI. Strong understanding of MLOps practices and tools (e.g., SageMaker/MLflow/Kubeflow/Airflow/Dagster). Experience with evaluation & observability tools like Langfuse/Arize Phoenix/LangSmith.
Data Engineering: Proficiency in working with vector databases such as PGVector, Pinecone, and Weaviate. Hands-on experience with big data technologies (e.g., Apache Spark, Kafka, Flink).
Software Engineering: Expertise in Python with experience in one other language (C++/Java/Go) for data and ML solution development. Expertise in scalable system design and performance optimization for high-throughput applications. Experience building & consuming MCP clients & servers. Experience with asynchronous programming, including WebSockets, FastAPI, and Sanic.

Good-to-Have Skills And Experience
AI/ML Expertise: Proficiency in advanced machine learning techniques, including neural networks (e.g., CNNs, Transformers) and anomaly detection. Experience with AI frameworks like PyTorch, TensorFlow and Scikit-learn.
Data Engineering: Expertise designing and optimizing ETL/ELT pipelines for large-scale data processing. Proficiency in working with relational and non-relational databases, including ClickHouse and BigQuery. Experience with cloud-native data tools like AWS Glue, BigQuery, or Snowflake. Graph database knowledge is a plus.
Cloud and Security Knowledge: Strong understanding of cloud platforms (AWS, Azure, GCP) and their services. Experience with network security concepts, extended detection and response, and threat modeling.
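The RAG systems this role centers on boil down to one core step: ranking stored document embeddings by similarity to a query embedding and feeding the top hits to an LLM as context. Here is a minimal, hypothetical Python sketch of that retrieval step using cosine similarity over invented toy vectors (a real system would use a vector database such as PGVector or Pinecone and learned embeddings):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def retrieve(query_vec, store, k=2):
    """Rank stored (text, vector) pairs by similarity to the query
    and return the top-k texts -- the 'R' in RAG."""
    ranked = sorted(store, key=lambda item: cosine(query_vec, item[1]),
                    reverse=True)
    return [text for text, _ in ranked[:k]]

# Toy corpus: pretend these 3-d vectors are document embeddings.
store = [
    ("reset your password in settings", [0.9, 0.1, 0.0]),
    ("quarterly revenue grew 12%",      [0.0, 0.2, 0.9]),
    ("change account credentials",      [0.8, 0.3, 0.1]),
]

# Query embedding for something like "how do I change my password?"
context = retrieve([1.0, 0.2, 0.0], store, k=2)
```

The retrieved `context` would then be prepended to the LLM prompt, which is what makes the generation context-aware rather than purely parametric.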
Netskope is committed to implementing equal employment opportunities for all employees and applicants for employment. Netskope does not discriminate in employment opportunities or practices based on religion, race, color, sex, marital or veteran status, age, national origin, ancestry, physical or mental disability, medical condition, sexual orientation, gender identity/expression, genetic information, pregnancy (including childbirth, lactation and related medical conditions), or any other characteristic protected by the laws or regulations of any jurisdiction in which we operate. Netskope respects your privacy and is committed to protecting the personal information you share with us; please refer to Netskope's Privacy Policy for more details.

Posted 1 day ago

Apply

3.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Title: Full Stack Developer
Job Location: Hyderabad
Notice Period: 15 Days

Role Overview:
· Play a crucial role in driving the Company's mission to simplify and innovate construction management.
· Collaborate with diverse clients worldwide, helping them transform complex workflows.
· Thrive in a fast-paced, tech-driven environment that encourages continuous learning and growth.
· Advance your career by delivering real impact on large-scale infrastructure and construction projects.

Key Responsibilities:
· We are looking for a tech enthusiast with a knack for full stack development, eager to dive into code and bring ideas to life.
· Own features from brainstorming to deployment, handling everything from database architecture to front-end performance.
· Optimize and Scale: Ensure that our platform is high-performing, scalable, and future-proof. You will be part of laying the groundwork for big, exciting growth.
· Collaborate & Conquer: Work closely with our design, product, and AI teams to integrate machine learning and automation features into our platform, pushing the boundaries of what tech can do in construction.
· Write clean, efficient, and maintainable code, with a track record that speaks for itself.

Required Qualifications:
· Bachelor's or Master's degree in Computer Science, Engineering, or related field. Equivalent practical experience may be acceptable with a strong portfolio and leadership track record.
· 3+ years of experience with either the MEAN (MongoDB, Express, Angular, Node.js) or MERN (MongoDB, Express, React, Node.js) stack.
· Hands-on experience in designing and building scalable, secure full-stack applications in a microservices or monolithic architecture.
· Strong proficiency in Angular 15+, RxJS, NgRx (or other state management libraries).
· Solid understanding of TypeScript, JavaScript, HTML5, and CSS3.
· Experience building responsive and cross-browser applications.
· Familiar with Angular CLI, lazy loading, routing, and component-based architecture.
· Proficiency in MongoDB, its query syntax, and the aggregation framework.
· Knowledge of Mongoose (ODM). Understanding of schema design, indexing, and performance tuning.

Nice-to-have:
· Experience with GraphQL, Socket.IO, or WebRTC.
· Understanding of Server-Side Rendering (SSR) using Next.js (for MERN) or Angular Universal (for MEAN).
· Knowledge of Redis, Kafka, or other message queues.
· Familiarity with multi-tenant architecture or SaaS product engineering.

What We Offer:
· Grow with purpose: Accelerate your career with hands-on learning and expert mentorship.
· Culture that empowers: Join a team where your ideas matter and diversity is celebrated.
· Perks that matter: Enjoy flexible work options and benefits designed to support your work-life balance.
· Make a real impact: Work on advanced solutions that simplify construction and help build smarter cities and communities worldwide.
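The MongoDB aggregation framework mentioned above chains pipeline stages such as `$match` (filter) and `$group` (accumulate). As a language-neutral illustration of what such a pipeline computes, here is a hypothetical Python sketch that mimics a two-stage `$match` + `$group`/`$sum` pipeline over invented documents (not the MongoDB driver API):

```python
from collections import defaultdict

def aggregate(docs, match, group_key, sum_field):
    """Toy analogue of a MongoDB pipeline roughly equivalent to:
    [{"$match": match},
     {"$group": {"_id": "$group_key", "total": {"$sum": "$sum_field"}}}]"""
    # Stage 1 ($match): keep only documents matching every criterion.
    filtered = [d for d in docs if all(d.get(k) == v for k, v in match.items())]
    # Stage 2 ($group with $sum): accumulate per group key.
    totals = defaultdict(int)
    for d in filtered:
        totals[d[group_key]] += d[sum_field]
    return dict(totals)

# Invented sample data: construction tasks per site.
tasks = [
    {"site": "A", "status": "done", "hours": 5},
    {"site": "A", "status": "open", "hours": 2},
    {"site": "B", "status": "done", "hours": 3},
]
result = aggregate(tasks, {"status": "done"}, "site", "hours")
```

In real MongoDB the same computation runs server-side, which is exactly why indexing and schema design (also listed above) matter for its performance.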

Posted 1 day ago

Apply

6.0 - 11.0 years

8 - 12 Lacs

Pune

Work from Office

What You'll Do
The Global Analytics & Insights (GAI) team is looking for a Senior Data Engineer to lead the build-out of the data infrastructure for Avalara's core data assets, empowering us with accurate data to make data-backed decisions. As a Senior Data Engineer, you will help architect, implement, and maintain our data infrastructure using Snowflake, dbt (Data Build Tool), Python, Terraform, and Airflow. You will immerse yourself in our financial, marketing, and sales data to become an expert in Avalara's domain. You will have deep SQL experience, an understanding of modern data stacks and technology, a desire to build things the right way using modern software principles, and experience with data and all things data-related.

What Your Responsibilities Will Be
You will architect repeatable, reusable solutions to keep our technology stack DRY. Conduct technical and architecture reviews with engineers, ensuring all contributions meet quality expectations. You will develop scalable, reliable, and efficient data pipelines using dbt, Python, or other ELT tools. Implement and maintain scalable data orchestration and transformation, ensuring data accuracy and consistency. Collaborate with cross-functional teams to understand complex requirements and translate them into technical solutions. Build scalable, complex dbt models. Demonstrate ownership of complex projects and calculations of core financial metrics and processes. Work with Data Engineering teams to define and maintain scalable data pipelines. Promote automation and optimization of reporting processes to improve efficiency.
You will report to a Senior Manager.

What You'll Need to be Successful
Bachelor's degree in Computer Science or Engineering, or related field
6+ years of experience in the data engineering field, with advanced SQL knowledge
4+ years working with Git, and demonstrated experience collaborating with other engineers across repositories
4+ years working with Snowflake
3+ years working with dbt (dbt Core)
3+ years working with Infrastructure as Code (Terraform)
3+ years working with CI/CD, and demonstrated ability to build and operate pipelines
Preferred: AWS certification, Terraform certification, Snowflake and dbt certifications, and experience working with complex Salesforce data
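A core idea behind the dbt models mentioned above is that each model declares its upstream dependencies (via `ref()`), and the tool derives a build order from the resulting DAG. Here is a small, hypothetical Python sketch of that scheduling step using Kahn's topological sort over invented model names (dbt itself does this internally; this is only an illustration of the concept):

```python
from collections import deque

def build_order(deps):
    """Given {model: [upstream models]}, return an order in which
    models can be built so every upstream is built first (Kahn's
    algorithm over the dependency DAG)."""
    indegree = {}
    for model, ups in deps.items():
        indegree.setdefault(model, 0)
        for u in ups:
            indegree.setdefault(u, 0)
    for model, ups in deps.items():
        indegree[model] = len(ups)

    downstream = {m: [] for m in indegree}
    for model, ups in deps.items():
        for u in ups:
            downstream[u].append(model)

    # Start from models with no upstream dependencies.
    queue = deque(sorted(m for m, d in indegree.items() if d == 0))
    order = []
    while queue:
        m = queue.popleft()
        order.append(m)
        for d in sorted(downstream[m]):
            indegree[d] -= 1
            if indegree[d] == 0:
                queue.append(d)
    return order

# Invented example: two staging models feed one fact model.
models = {
    "stg_orders": [],
    "stg_customers": [],
    "fct_revenue": ["stg_orders", "stg_customers"],
}
order = build_order(models)
```

The same DAG is also what an orchestrator like Airflow materializes as task dependencies, which is why the two tools appear together in the stack.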

Posted 1 day ago

Apply

3.0 - 7.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Organizations everywhere struggle under the crushing costs and complexities of “solutions” that promise to simplify their lives. To create a better experience for their customers and employees. To help them grow. Software is a choice that can make or break a business. Create better or worse experiences. Propel or throttle growth. Business software has become a blocker instead of a way to get work done. There’s another option. Freshworks. With a fresh vision for how the world works.

At Freshworks, we build uncomplicated service software that delivers exceptional customer and employee experiences. Our enterprise-grade solutions are powerful, yet easy to use, and quick to deliver results. Our people-first approach to AI eliminates friction, making employees more effective and organizations more productive. Over 72,000 companies, including Bridgestone, New Balance, Nucor, S&P Global, and Sony Music, trust Freshworks’ customer experience (CX) and employee experience (EX) software to fuel customer loyalty and service efficiency. And over 4,500 Freshworks employees make this possible, all around the world. Fresh vision. Real impact. Come build it with us.

Job Description
At Freshworks’ workflow-engine-as-a-service team, we are passionate about helping product teams connect their users with what matters most in milliseconds! Providing a first-class experience at scale is a challenging endeavor: users expect to find, in the blink of an eye, highly relevant items with little to no effort. The core platform’s offering relies on a rock-solid search engine, whose efficiency, configurability and stability made the fame of the workflow engine platform in Freshworks. This platform is called on to support data several orders of magnitude larger than what it currently handles. Today, the platform needs you to make it scale further, and to make it faster, smarter, more powerful.
Qualifications
3-7 years of experience in programming, with expertise in data structures and algorithms. Experience troubleshooting complex distributed systems such as Kafka, Conductor, n8n, Orkes, or other Pub/Sub services, and finding failure scenarios. Expertise in Golang and/or Java and an understanding of how each impacts the performance and reliability of applications. Deep understanding and practical experience of the Elasticsearch/Solr stack. Extensive experience designing complex solutions, data pipelines and microservice architectures in production environments. Experience working with large datasets and high traffic. Rigor in high code quality, automated testing, and other engineering best practices. Experience at our current stage and beyond (high growth, lots of change and building internal infrastructure).

Additional Information
Experience with Kubernetes; hands-on k8s experience is a big plus. Expertise in datastores - Postgres, MySQL, Elasticsearch. Sensitivity to Site Reliability Engineering. Skilled communicator with a proven record of leading work across disciplines. Solid understanding of the Software as a Service (SaaS) model.

At Freshworks, we are creating a global workplace that enables everyone to find their true potential, purpose, and passion irrespective of their background, gender, race, sexual orientation, religion and ethnicity. We are committed to providing equal opportunity for all and believe that diversity in the workplace creates a more vibrant, richer work environment that advances the goals of our employees, communities and the business.
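One classic failure scenario in the Pub/Sub systems named above: brokers typically guarantee at-least-once delivery, so a consumer that crashes after processing but before acknowledging will see the same message redelivered. The standard defense is an idempotent consumer that deduplicates by message id. A minimal, hypothetical Python sketch of the pattern (invented message shapes; a real system would persist the seen-id set):

```python
class IdempotentConsumer:
    """Handles at-least-once delivery: a redelivered message (same id)
    is processed exactly once by tracking already-seen ids."""

    def __init__(self):
        self.seen = set()       # ids already processed (durable in real life)
        self.processed = []     # side effects applied, for demonstration

    def handle(self, message):
        """Return True if processed, False if dropped as a duplicate."""
        if message["id"] in self.seen:
            return False        # duplicate delivery: drop silently
        self.seen.add(message["id"])
        self.processed.append(message["payload"])
        return True

consumer = IdempotentConsumer()
consumer.handle({"id": "m1", "payload": "order-created"})
consumer.handle({"id": "m1", "payload": "order-created"})   # broker redelivery
consumer.handle({"id": "m2", "payload": "order-updated"})
```

The design choice here is to make the consumer tolerate duplicates rather than demand exactly-once delivery from the broker, which is far harder to guarantee end to end.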

Posted 1 day ago

Apply

8.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Job Requisition ID # 23WD68308

Position Overview
This is an exciting opportunity to be part of a dynamic team at Autodesk Pune, India. We are looking for Engineers who love finding efficient, scalable and thoughtful solutions to a variety of technical and product challenges. As Principal Backend Engineer, you will work to deliver top-quality Web and Cloud software. You will work as a Principal Engineer on a full-stack scrum team, contributing directly to the maturity of Fusion Team, the data management backbone for Autodesk's Fusion 360. You will deal with problems of scale and service sustainability, and be directly part of the team that continuously evolves the Fusion Team family of microservices to cater to the platform needs of new features in Fusion Data & Fusion 360.

Responsibilities
Develop high-level designs and implement highly scalable web and cloud solutions. Collaborate with architects, product designers, product managers, and engineers to implement high-quality solutions. Provide leadership to the team in conjunction with the Development Manager, Product Owner and Architect. Proactively and continuously evaluate and address issues related to performance, stability, scalability, and extensibility of the systems. Research and prototype new ideas, technologies and patterns relating to the task. Take full ownership of an initiative and deliver the solution end to end.

Minimum Qualifications
Bachelor’s degree in any Engineering discipline from a reputed engineering college. 8+ years of strong programming skills in Web and Cloud technologies. Strong in OOAD concepts and methodologies. Hands-on experience in Java 8+, Spring, Spring Boot. Hands-on experience developing RESTful web services and integrating them with heterogeneous clients. Hands-on experience with relational and NoSQL database technologies, e.g. MySQL, DynamoDB. Hands-on experience with object caching technologies such as Redis. Hands-on experience with event streaming platforms/clusters like Kafka, RabbitMQ and ZooKeeper. Hands-on experience with build and CI/CD tools like Ant, Gradle, Docker, Jenkins. Hands-on experience with unit and automation testing tools such as JUnit, REST Assured, JMeter, etc. Hands-on experience with Amazon Web Services - ECS, Lambda, Batch, Step Functions, API Gateway, CloudFront, S3, SQS, etc. Preferred experience with log aggregation and operational event monitoring technologies like Splunk, New Relic or the ELK stack. Proficient with web servers and application servers like Tomcat and Nginx, and scripting in a Linux environment. Working knowledge of UML, software design patterns, distributed systems, resiliency and security. Familiarity with the software development process, including specification, quality assurance, and documentation. Strong communication skills with the ability to converse fluently with multiple types of audiences ranging from managers, architects, and partners. Flexibility under changing conditions and the ability to multi-task between projects.

Preferred Qualifications
Nice to have: experience with Python and Groovy; experience with search technologies like Lucene, Solr, Elasticsearch; experience with JavaScript frameworks like AngularJS, React.js, Redux. Experience with the Agile Scrum development methodology is a plus. Keen interest in new technologies and their timely adoption. Ability to work in a heterogeneous environment consisting of multiple languages and technologies. Ability to convert complex software solutions into well-written technical design documents.

At Autodesk, we're building a diverse workplace and an inclusive culture to give more people the chance to imagine, design, and make a better world.
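The object-caching requirement above (e.g., Redis) usually means a cache with per-entry time-to-live so stale data ages out. Here is a minimal, hypothetical Python sketch of the TTL-cache idea (an in-process toy with an injectable clock for determinism, not the Redis API):

```python
import time

class TTLCache:
    """Minimal Redis-like cache: each entry expires ttl seconds after
    it is set. The clock is injectable so expiry can be demonstrated
    without sleeping."""

    def __init__(self, ttl, clock=time.monotonic):
        self.ttl = ttl
        self.clock = clock
        self.store = {}            # key -> (value, expiry timestamp)

    def set(self, key, value):
        self.store[key] = (value, self.clock() + self.ttl)

    def get(self, key):
        """Return the cached value, or None if missing or expired."""
        entry = self.store.get(key)
        if entry is None:
            return None
        value, expires = entry
        if self.clock() >= expires:
            del self.store[key]    # lazy eviction on read
            return None
        return value

# Fake clock we can advance by hand.
now = [0.0]
cache = TTLCache(ttl=10, clock=lambda: now[0])
cache.set("user:42", {"name": "A"})
hit = cache.get("user:42")         # fresh: cache hit
now[0] = 11.0                      # advance past the TTL
miss = cache.get("user:42")        # expired: cache miss
```

In production the same pattern appears as `SET key value EX 10` in Redis; the sketch only illustrates why a TTL keeps cached objects from serving stale reads forever.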
Autodesk is proud to be an equal opportunity employer and considers all qualified applicants for employment without regard to race, color, religion, age, sex, sexual orientation, gender, gender identity, national origin, disability, veteran status or any other legally protected characteristic. We also consider for employment all qualified applicants regardless of criminal histories, consistent with applicable law. Are you an existing contractor or consultant with Autodesk? Please search for open jobs and apply internally (not on this external site). If you have any questions or require support, contact Autodesk Careers. Salary is one part of Autodesk’s competitive package. Offers are based on the candidate’s experience and geographic location. In addition to base salaries, we also have a significant emphasis on discretionary annual cash bonuses, commissions for sales roles, stock or long-term incentive cash grants, and a comprehensive benefits package.

Posted 1 day ago

Apply

2.0 - 5.0 years

4 - 7 Lacs

Pune, Gurugram

Work from Office

Job Profile: Work on the Amdocs OSS Product with ample opportunity to grow professionally, functionally and technically. Develop software of high complexity in Java, J2EE and PL/SQL. Excellent debugging and analytical skills. Adhere to the organization's software development standards in an agile environment. Lead tasks, development, and unit testing. Work closely at all levels with internal and external teams. Fast, energetic, diligent, highly motivated, with an open mind and a self-learner.

Technical Requirements
Must have: Excellent knowledge of Java, J2EE, JUnit, Eclipse, Web Services (REST/SOAP), XML, XSL, Oracle PL/SQL, Spring, Maven. Knowledge of Docker and Kubernetes (6+ months). Knowledge of Spring Boot (6+ months). Knowledge of PostgreSQL. Knowledge of AWS, Tomcat. Ability to work in a challenging mode. Zero-defect mindset. Experience in an Agile environment. Functional knowledge in the OSS domain. Should be able to perform the following tasks: create microservices using Spring Boot; package microservices as Docker images; create Kubernetes objects such as ingress, service, pod, deployment, etc.

Good to have: Prior development experience in one of the OSS applications (Inventory COTS products: ARM, Netcracker, Granite). Knowledge of Cassandra, Kafka. Knowledge of Cucumber, Elasticsearch/Kibana. Should be able to deploy and support applications in a Kubernetes cluster, and create and deploy Helm charts.
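The "create Kubernetes objects" task above means authoring manifests like a Deployment. As an illustration of the shape of such an object, here is a hypothetical Python sketch that builds a minimal Deployment as a dict (what you would serialize to YAML and feed to `kubectl apply`); the service name and image are invented:

```python
def deployment_manifest(name, image, replicas=2, port=8080):
    """Build a minimal Kubernetes Deployment object as a Python dict.
    Note the label wiring: spec.selector.matchLabels must match the
    pod template's labels, or the Deployment manages no pods."""
    labels = {"app": name}
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name, "labels": labels},
        "spec": {
            "replicas": replicas,
            "selector": {"matchLabels": labels},
            "template": {
                "metadata": {"labels": labels},
                "spec": {
                    "containers": [{
                        "name": name,
                        "image": image,
                        "ports": [{"containerPort": port}],
                    }],
                },
            },
        },
    }

# Hypothetical microservice packaged as a Docker image.
manifest = deployment_manifest("orders-service",
                               "registry.example.com/orders:1.0")
```

A Service object would then select the same `app` label to route traffic to these pods, which is exactly the ingress → service → pod → deployment chain the posting lists.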

Posted 1 day ago

Apply

6.0 years

0 Lacs

Pune, Maharashtra, India

Remote

About Us
MyRemoteTeam, Inc is a fast-growing distributed workforce enabler, helping companies scale with top global talent. We empower businesses by providing world-class software engineers, operations support, and infrastructure to help them grow faster and better.

Job Title: Senior Java Spring Boot Developer
Experience: 6+ Years
Location: Mysore and Pune

Job Description:
We are seeking an experienced Senior Java Spring Boot Developer with 6+ years of hands-on experience in building scalable, high-performance microservices using Java, Spring Boot, and Spring JPA. The ideal candidate will have strong expertise in designing and developing RESTful APIs, microservices architecture, and cloud-native applications. As part of our team, you will work on enterprise-grade applications, collaborate with cross-functional teams, and contribute to the full software development lifecycle.

Mandatory Skills:
✔ 6+ years of Java development (Java 8/11/17).
✔ Strong Spring Boot & Spring JPA experience.
✔ Microservices architecture (design, development, deployment).
✔ RESTful API development & integration.
✔ Database expertise (SQL/NoSQL – PostgreSQL, MySQL, MongoDB).
✔ Testing frameworks (JUnit, Mockito).
✔ Agile methodologies & CI/CD pipelines.

Key Responsibilities:
Design & Development: Develop high-performance, scalable microservices using Spring Boot. Design and implement RESTful APIs following best practices. Use Spring JPA/Hibernate for database interactions (SQL/NoSQL). Implement caching mechanisms (Redis, Ehcache) for performance optimization.
Microservices Architecture: Build and maintain cloud-native microservices (Docker, Kubernetes). Integrate with message brokers (Kafka, RabbitMQ) for event-driven systems. Ensure fault tolerance, resilience, and scalability (Circuit Breaker, Retry Mechanisms).
Database & Performance: Optimize database queries (PostgreSQL, MySQL, MongoDB). Implement connection pooling, indexing, and caching strategies.
Monitor and improve application performance (JVM tuning, profiling).
Testing & Quality Assurance: Write unit & integration tests (JUnit, Mockito, Testcontainers). Follow TDD/BDD practices for robust code quality. Perform code reviews and ensure adherence to best practices.
DevOps & CI/CD: Work with Docker, Kubernetes, and cloud platforms (AWS/Azure). Set up and maintain CI/CD pipelines (Jenkins, GitHub Actions). Automate deployments and monitoring (Prometheus, Grafana).
Collaboration & Agile: Work in Agile/Scrum teams with daily standups, sprint planning, and retrospectives. Collaborate with frontend, QA, and DevOps teams for seamless delivery.
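The Circuit Breaker pattern named in the responsibilities protects a service from hammering a failing downstream dependency: after enough consecutive failures the breaker "opens" and calls fail fast instead of waiting on timeouts. A minimal, hypothetical Python sketch of the open/closed states (real Java stacks would typically use a library such as Resilience4j; the flaky function here is invented):

```python
class CircuitBreaker:
    """Opens after `threshold` consecutive failures; while open,
    calls raise immediately instead of invoking the downstream.
    (A production breaker would also add a half-open recovery state.)"""

    def __init__(self, threshold=3):
        self.threshold = threshold
        self.failures = 0
        self.open = False

    def call(self, fn, *args):
        if self.open:
            raise RuntimeError("circuit open: failing fast")
        try:
            result = fn(*args)
        except Exception:
            self.failures += 1
            if self.failures >= self.threshold:
                self.open = True   # trip the breaker
            raise
        self.failures = 0          # any success resets the count
        return result

breaker = CircuitBreaker(threshold=2)

def flaky():
    """Stand-in for a downstream call that keeps timing out."""
    raise IOError("downstream timeout")

for _ in range(2):                 # two failures trip the breaker
    try:
        breaker.call(flaky)
    except IOError:
        pass
```

Paired with a bounded retry (the other mechanism the posting names), the breaker is what keeps one slow dependency from cascading into a whole-system outage.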

Posted 1 day ago

Apply

3.0 - 7.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Organizations everywhere struggle under the crushing costs and complexities of “solutions” that promise to simplify their lives. To create a better experience for their customers and employees. To help them grow. Software is a choice that can make or break a business. Create better or worse experiences. Propel or throttle growth. Business software has become a blocker instead of a way to get work done. There’s another option. Freshworks. With a fresh vision for how the world works.

At Freshworks, we build uncomplicated service software that delivers exceptional customer and employee experiences. Our enterprise-grade solutions are powerful, yet easy to use, and quick to deliver results. Our people-first approach to AI eliminates friction, making employees more effective and organizations more productive. Over 72,000 companies, including Bridgestone, New Balance, Nucor, S&P Global, and Sony Music, trust Freshworks’ customer experience (CX) and employee experience (EX) software to fuel customer loyalty and service efficiency. And over 4,500 Freshworks employees make this possible, all around the world. Fresh vision. Real impact. Come build it with us.

Job Description
At Freshworks’ workflow-engine-as-a-service team, we are passionate about helping product teams connect their users with what matters most in milliseconds! Providing a first-class experience at scale is a challenging endeavor: users expect to find, in the blink of an eye, highly relevant items with little to no effort. The core platform’s offering relies on a rock-solid search engine, whose efficiency, configurability and stability made the fame of the workflow engine platform in Freshworks. This platform is called on to support data several orders of magnitude larger than what it currently handles. Today, the platform needs you to make it scale further, and to make it faster, smarter, more powerful.
Qualifications
3-7 years of experience in programming, with expertise in data structures and algorithms. Experience troubleshooting complex distributed systems such as Kafka, Conductor, n8n, Orkes, or other Pub/Sub services, and finding failure scenarios. Expertise in Golang and/or Java and an understanding of how each impacts the performance and reliability of applications. Deep understanding and practical experience of the Elasticsearch/Solr stack. Extensive experience designing complex solutions, data pipelines and microservice architectures in production environments. Experience working with large datasets and high traffic. Rigor in high code quality, automated testing, and other engineering best practices. Experience at our current stage and beyond (high growth, lots of change and building internal infrastructure).

Additional Information
Experience with Kubernetes; hands-on k8s experience is a big plus. Expertise in datastores - Postgres, MySQL, Elasticsearch. Sensitivity to Site Reliability Engineering. Skilled communicator with a proven record of leading work across disciplines. Solid understanding of the Software as a Service (SaaS) model.

At Freshworks, we are creating a global workplace that enables everyone to find their true potential, purpose, and passion irrespective of their background, gender, race, sexual orientation, religion and ethnicity. We are committed to providing equal opportunity for all and believe that diversity in the workplace creates a more vibrant, richer work environment that advances the goals of our employees, communities and the business.

Posted 1 day ago

Apply

6.0 years

0 Lacs

Gurugram, Haryana, India

On-site

🚨 Urgent Hiring: Java Spring Boot Developer – 6+ Years Experience | Gurgaon (On-site)
📍 Location: Gurgaon, India
Type: Full-Time (preferred)

Job Summary:
We are seeking a highly skilled and motivated Java Spring Boot Developer to join our engineering team. This role focuses on developing and deploying scalable, event-driven applications on OpenShift, with data ingestion from Apache Kafka and transformation logic written in Apache Camel. The ideal candidate should possess a strong understanding of enterprise integration patterns, stream processing, and protocols, and have experience with observability tools and concepts in AI-enhanced applications.

Key Responsibilities:
Design, develop, and deploy Java Spring Boot (must) applications on OpenShift (ready to learn Red Hat OpenShift, or already have Kubernetes experience). Build robust data pipelines with Apache Kafka (must) for high-throughput ingestion and real-time processing. Implement transformation and routing logic using Apache Camel (basic knowledge, ready to learn) and Enterprise Integration Patterns (EIPs). Develop components that interface with various protocols including HTTP, JMS, and database systems (SQL/NoSQL). Utilize Apache Flink or similar tools for complex event and stream processing where necessary. Integrate observability solutions (e.g., Prometheus, Grafana, ELK, OpenTelemetry) to ensure monitoring, logging, and alerting. Collaborate with AI/ML teams to integrate or enable AI-driven capabilities within applications. Write unit and integration tests, participate in code reviews, and support CI/CD practices.
Troubleshoot and optimize application performance and data flows in production environments.
Required Skills & Qualifications
5+ years of hands-on experience in Java development with strong proficiency in Spring Boot.
Solid experience with Apache Kafka (consumer/producer patterns, schema registry; Kafka Streams is a plus).
Experience with stream processing technologies such as Apache Flink, Kafka Streams, or Spark Streaming.
Proficiency in Apache Camel and understanding of EIPs (routing, transformation, aggregation, etc.).
Strong grasp of various protocols (HTTP, JMS, TCP) and messaging paradigms.
In-depth understanding of database concepts, both relational and NoSQL.
Knowledge of observability tools and techniques: logging, metrics, tracing.
Exposure to AI concepts (basic understanding of ML model integration, AI-driven decisions, etc.).
⚠️ Important Notes
Only candidates with a notice period of 20 days or less will be considered.
A PF account is a must for full-time joining.
If you have already applied for this job with us, please do not submit a duplicate application.
Budget is limited; maximum CTC is based on years of experience and expertise.
📬 How to Apply
Email your resume to career@strive4x.net with the subject line: Java Spring Boot Developer - Gurgaon
Please include the following details:
Full Name
Mobile Number
Current Location
Total Experience (in years)
Current Company
Current CTC
Expected CTC
Notice Period
Are you open to relocating to Gurgaon (Yes/No)?
Do you have a PF account (Yes/No)?
Do you prefer full-time, contract, or both?
👉 Know someone who fits the role? Tag or share this with them.
#JavaJobs #SpringBoot #GurgaonJobs #Kafka #ApacheCamel #OpenShift #HiringNow #SoftwareJobs #SeniorDeveloper #Microservices #Strive4X
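The Apache Camel work in this role centers on Enterprise Integration Patterns such as the content-based router. A hedged, framework-free sketch of that pattern in Python (topic names and the message shape are invented for illustration): each route pairs a predicate with a handler, and the first match wins.

```python
def route(message, routes, default=None):
    """Content-based router (EIP): dispatch to the first matching handler."""
    for predicate, handler in routes:
        if predicate(message):
            return handler(message)
    if default is not None:
        return default(message)
    raise ValueError("no route matched")

# Hypothetical routing table: order events to one topic, refunds to another.
routes = [
    (lambda m: m["type"] == "order",  lambda m: ("orders-topic", m["id"])),
    (lambda m: m["type"] == "refund", lambda m: ("refunds-topic", m["id"])),
]

print(route({"type": "order", "id": 42}, routes))  # → ('orders-topic', 42)
```

In Camel the same shape is expressed declaratively with `choice()`/`when()` clauses in a route definition; the predicates and endpoints are the moving parts either way.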

Posted 1 day ago

Apply

6.0 years

0 Lacs

Pune, Maharashtra, India

Remote

About Us
MyRemoteTeam, Inc is a fast-growing distributed workforce enabler, helping companies scale with top global talent. We empower businesses by providing world-class software engineers, operations support, and infrastructure to help them grow faster and better.
Job Title: Senior Java Spring Boot Developer
Experience: 6+ Years
Location: Mysore and Pune
Job Description:
We are seeking an experienced Senior Java Spring Boot Developer with 6+ years of hands-on experience in building scalable, high-performance microservices using Java, Spring Boot, and Spring JPA. The ideal candidate will have strong expertise in designing and developing RESTful APIs, microservices architecture, and cloud-native applications. As part of our team, you will work on enterprise-grade applications, collaborate with cross-functional teams, and contribute to the full software development lifecycle.
Mandatory Skills:
✔ 6+ years of Java development (Java 8/11/17).
✔ Strong Spring Boot & Spring JPA experience.
✔ Microservices architecture (design, development, deployment).
✔ RESTful API development & integration.
✔ Database expertise (SQL/NoSQL – PostgreSQL, MySQL, MongoDB).
✔ Testing frameworks (JUnit, Mockito).
✔ Agile methodologies & CI/CD pipelines.
Key Responsibilities:
Design & Development:
Develop high-performance, scalable microservices using Spring Boot.
Design and implement RESTful APIs following best practices.
Use Spring JPA/Hibernate for database interactions (SQL/NoSQL).
Implement caching mechanisms (Redis, Ehcache) for performance optimization.
Microservices Architecture:
Build and maintain cloud-native microservices (Docker, Kubernetes).
Integrate with message brokers (Kafka, RabbitMQ) for event-driven systems.
Ensure fault tolerance, resilience, and scalability (circuit breakers, retry mechanisms).
Database & Performance:
Optimize database queries (PostgreSQL, MySQL, MongoDB).
Implement connection pooling, indexing, and caching strategies.
Monitor and improve application performance (JVM tuning, profiling).
Testing & Quality Assurance:
Write unit & integration tests (JUnit, Mockito, Testcontainers).
Follow TDD/BDD practices for robust code quality.
Perform code reviews and ensure adherence to best practices.
DevOps & CI/CD:
Work with Docker, Kubernetes, and cloud platforms (AWS/Azure).
Set up and maintain CI/CD pipelines (Jenkins, GitHub Actions).
Automate deployments and monitoring (Prometheus, Grafana).
Collaboration & Agile:
Work in Agile/Scrum teams with daily standups, sprint planning, and retrospectives.
Collaborate with frontend, QA, and DevOps teams for seamless delivery.
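The resilience requirement above (circuit breakers, retry mechanisms) is usually met in Spring services with a library such as Resilience4j, but the underlying state machine is small enough to sketch directly. A minimal, illustrative Python version (class and parameter names are invented, not a real library's API): the breaker opens after a run of consecutive failures and rejects calls outright until a cooldown elapses.

```python
import time

class CircuitBreaker:
    """After `threshold` consecutive failures the breaker opens and rejects
    calls; once `reset_after` seconds pass it lets one trial call through."""
    def __init__(self, threshold=3, reset_after=30.0, clock=time.monotonic):
        self.threshold, self.reset_after, self.clock = threshold, reset_after, clock
        self.failures, self.opened_at = 0, None

    def call(self, fn, *args):
        if self.opened_at is not None:
            if self.clock() - self.opened_at < self.reset_after:
                raise RuntimeError("circuit open")
            self.opened_at = None  # half-open: allow one trial call
        try:
            result = fn(*args)
        except Exception:
            self.failures += 1
            if self.failures >= self.threshold:
                self.opened_at = self.clock()
            raise
        self.failures = 0  # any success closes the breaker
        return result

def flaky():
    raise IOError("downstream timeout")

breaker = CircuitBreaker(threshold=2, reset_after=60)
for _ in range(2):
    try:
        breaker.call(flaky)
    except IOError:
        pass
try:
    breaker.call(flaky)  # rejected without touching the downstream service
except RuntimeError as err:
    print(err)  # → circuit open
```

The injectable clock keeps the cooldown testable without sleeping; production libraries add half-open call budgets and sliding failure windows on top of this skeleton.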

Posted 1 day ago

Apply

6.0 - 11.0 years

11 - 16 Lacs

Bengaluru

Work from Office

Job Title: Solution Architect
Experience: 10-18 Years
Location: Bangalore (WFO)
Job Description:
Engage with the customer architecture team to understand the customer's product ecosystem. Provide solution architecture/technical design to achieve project goals. Expertise in architecture patterns and microservices architecture.
Key Technical Skills:
Front-end UI: Elm
Back-end API: Scala Play 2 framework, Typelevel ecosystem (fs2, Cats Effect, http4s)
Database: MongoDB
Messaging: Kafka + Hermes
Security/IdP: Okta
Cloud/Deployment: AWS (primarily EC2), Kubernetes, Docker
DevOps/CI-CD: JIRA, GitHub, GitHub Actions, Jenkins, ArgoCD
Ensures alignment of individual software projects with overall business strategy and technology standards. Collaborates closely with development teams to oversee solution implementation. Has a passion for software development, likes to solve complex problems, and has a strong work ethic. Contributes innovative ideas, technologies, and patterns. Stays updated with industry trends regarding best practices in front-end and back-end technologies.
Minimum Requirements:
6+ years of experience in web development.
Experience across the SDLC: designing, developing, testing, implementing, deploying, and maintaining software applications.
Experience with a JS framework with state management (React, Angular, Vue, etc.) or the Elm language.
Proficiency in modern web techniques and standards of HTML, CSS, JavaScript, and design principles.
Experience with Java/Scala.
Experience with RESTful applications.
Recent 3+ years of Scala functional programming in an enterprise software environment.
Experience with frameworks such as Play or Akka is a plus.
Experience with GraphQL is a plus.
Proficiency in database management systems (SQL & NoSQL) such as MongoDB.
Experience with microservices architecture.
Experience with source control (Git), creating pull requests, and feature branching strategies.

Posted 1 day ago

Apply

8.0 - 13.0 years

13 - 18 Lacs

Hyderabad

Work from Office

Overview
We are seeking a highly skilled Senior Cloud Engineer to join our team, with a focus on designing, deploying, and managing cloud infrastructure and Java-based applications on AWS and Azure. The ideal candidate will have extensive experience with:
Cloud Services: AWS (EC2, S3, ALB, Route 53, RDS, EKS, ECS, Fargate)
Containerization: Docker, Kubernetes
Databases: MySQL, Cassandra, MongoDB, DocumentDB
You will be responsible for:
Implementing end-to-end encryption for secure data transmission
Optimizing CI/CD pipelines using Terraform, Git, and ArgoCD
Ensuring systems are secure, scalable, and highly available
This role involves close collaboration with cross-functional teams to support business-critical applications and infrastructure.
Responsibilities
Cloud Infrastructure Management:
Design, deploy, and manage AWS infrastructure using services like EC2, S3, Application Load Balancer (ALB), Target Groups, Route 53, and Private Hosted Zones for DNS resolution within VPCs.
Implement and maintain end-to-end encryption for data in transit, including SSL offloading at the ALB and re-encryption to backend targets (e.g., EC2, ECS, EKS) using AWS Certificate Manager (ACM) for SSL/TLS certificates.
Configure secure SSL ciphers and protocols (e.g., TLS 1.2/1.3) to meet compliance requirements (e.g., PCI DSS, HIPAA).
Set up and manage RDS (MySQL, SQL Server) for relational databases, including database backups and restores using automated snapshots and point-in-time recovery.
Deploy and manage NoSQL databases like Cassandra, MongoDB, and AWS DocumentDB for high-performance, scalable data storage.
Containerization and Orchestration:
Deploy and manage containerized Java applications using Amazon EKS (Kubernetes) and Amazon ECS (with Fargate for serverless container management).
Develop and maintain Docker images for Java-based microservices, ensuring secure and optimized container configurations.
Integrate Kafka for event-driven architectures, configuring producers, consumers, and topics for real-time data streaming.
Ensure secure communication for Kafka clusters using SSL/TLS within AWS environments.
Infrastructure as Code (IaC) and Automation:
Write and maintain Terraform scripts to provision and manage AWS resources (e.g., VPCs, ALB, EC2, RDS, EKS, ECS, Route 53, S3).
Implement CI/CD pipelines using Git, AWS CodePipeline, AWS CodeBuild, and ArgoCD for automated deployment of Java applications to EKS/ECS clusters.
Develop reusable pipeline templates for consistent deployments across development, staging, and production environments.
Automate infrastructure provisioning and application deployments using GitOps principles with ArgoCD.
Security and Compliance:
Implement end-to-end encryption for internal and external traffic, ensuring HTTPS for all communications within the AWS environment (e.g., ALB to EC2/ECS/EKS, database connections).
Configure ACM for managing SSL/TLS certificates and integrate with ALB, CloudFront, and Route 53 for secure domain routing.
Secure Cassandra, MongoDB, and DocumentDB with SSL/TLS for client-to-node and internode communication.
Work with security teams to enforce encryption standards and validate compliance using AWS tools like CloudTrail and CloudWatch.
Application Deployment and Optimization:
Deploy and optimize Java-based applications (e.g., Spring Boot, Java EE) on EC2, ECS, or EKS, ensuring high availability and scalability.
Configure the ALB with Target Groups to route traffic to Java applications, implementing health checks and load-balancing strategies.
Optimize performance for Kafka, Cassandra, and MongoDB in production environments, including schema design and cluster configuration.
Monitoring and Troubleshooting:
Monitor infrastructure and applications using AWS CloudWatch, setting up alarms and dashboards for EC2, RDS, EKS, and ECS.
Troubleshoot issues related to SSL/TLS configurations, database performance, and application deployment failures.
Ensure robust backup and restore strategies for RDS, Cassandra, MongoDB, and DocumentDB to minimize data loss and downtime.
Collaboration and Communication:
Collaborate with platform, infra, and business teams to align on project requirements, cost estimates, and timelines (e.g., as discussed in LATAM and SUK market setups).
Provide technical guidance to stakeholders, ensuring clear communication on infrastructure changes and security implementations.
Document processes, configurations, and pipeline templates for knowledge sharing and compliance.
Qualifications
Technical Expertise:
8+ years of experience with AWS services: EC2, S3, ALB, Target Groups, Route 53, Private Hosted Zones, RDS (MySQL, SQL Server), EKS, ECS, Fargate, ACM.
Strong knowledge of SSL/TLS, including SSL offloading and SSL ciphers for securing data in transit.
Hands-on experience with Docker and Kubernetes (EKS) for containerized deployments.
Proficiency in Terraform for Infrastructure as Code and Git for version control.
Experience building CI/CD pipelines using AWS CodePipeline, CodeBuild, or ArgoCD.
Expertise in Java-based application deployment (e.g., Spring Boot, Java EE) on AWS.
In-depth knowledge of Kafka for event streaming, and of Cassandra, MongoDB, and AWS DocumentDB for NoSQL database management.
Familiarity with RDS backup and restore processes, including automated snapshots and point-in-time recovery.
Additional Requirements:
Experience configuring end-to-end encryption for internal network traffic (e.g., ALB to backend, database connections) using HTTPS and SSL/TLS.
Ability to optimize Cassandra and MongoDB clusters for performance and scalability, including schema design and SSL configuration.
Knowledge of GitOps workflows using ArgoCD for Kubernetes-based deployments.
Strong understanding of VPC networking, security groups, and private subnets for secure AWS architectures.
Soft Skills:
Excellent problem-solving skills and attention to detail.
Strong communication skills to coordinate with cross-functional teams (e.g., platform, infra, security, and business teams).
Ability to work in a fast-paced environment and manage multiple priorities.
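The TLS hardening this role calls for (refusing anything below TLS 1.2) has a direct expression in most stacks. As one illustrative example, Python's standard `ssl` module sets the protocol floor with a one-line policy on the client context; this is shown purely to make the setting concrete, not as a deployment recipe for ALB or ACM.

```python
import ssl

# Client context with certificate verification on by default; then pin the
# protocol floor so connections below TLS 1.2 are refused at handshake time.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2

assert context.verify_mode == ssl.CERT_REQUIRED
assert context.minimum_version == ssl.TLSVersion.TLSv1_2
```

On the AWS side, the equivalent knob is the ALB/CloudFront security policy (e.g., a TLS 1.2-minimum policy name), chosen in Terraform or the console rather than in application code.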

Posted 1 day ago

Apply

3.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Job Summary:
As a Software Engineer II, you'll build best-in-class SaaS solutions that enable efficiency, transparency, and accountability within government agencies. You'll be a key member of our engineering team, delivering new features and improvements in a fast-paced, agile environment using modern technologies, including ReactJS, JavaScript, GraphQL, and NodeJS. A typical day in this role provides opportunities to design, implement, and optimize complex applications for speed and scalability, and to resolve unique technical challenges. Strong collaboration with product, UX, and platform engineers, coupled with the ability to execute quickly, will enable your success.
Responsibilities:
Develop, test, and deploy high-quality software with minimal supervision.
Write clean, efficient, and well-documented code following industry best practices.
Troubleshoot, debug, and resolve complex technical issues.
Contribute to continuous improvement and operational excellence.
Participate in code reviews to ensure code quality and adherence to coding standards.
Stay up to date with emerging technologies and industry trends to drive innovation within the team.
Contribute to the firm's engineering culture of innovation, leading-edge technology adoption, and quality.
Requirements and Preferred Experience:
BA/BS in a computer-science-related discipline or equivalent experience required.
3+ years of professional experience developing cloud-native applications.
Proficiency in Java, C++, C#, or an equivalent programming language is required.
Proficiency in Python, JavaScript, ReactJS, and NodeJS is required.
Proficiency in data structures, database concepts, algorithms, and observability is required.
Working knowledge of event-driven architecture and Kafka preferred.
Working knowledge of Elasticsearch/OpenSearch preferred.
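Event-driven architecture, listed as preferred above, decouples producers from consumers through a publish/subscribe channel. A broker-free sketch in Python of the core shape (the topic name and payload are invented for illustration); Kafka layers durability, partitioning, and replay on top of this same idea.

```python
from collections import defaultdict

class EventBus:
    """Tiny in-process publish/subscribe bus: the decoupling pattern behind
    Kafka-style event-driven systems, minus the broker and persistence."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self.subscribers[topic].append(handler)

    def publish(self, topic, event):
        for handler in self.subscribers[topic]:
            handler(event)

bus = EventBus()
audit_log = []
bus.subscribe("record.created", audit_log.append)
bus.publish("record.created", {"id": 7, "agency": "public-works"})
bus.publish("record.deleted", {"id": 7})  # no subscriber: silently dropped
print(audit_log)  # → [{'id': 7, 'agency': 'public-works'}]
```

The producer never learns who consumes the event, which is what lets new subscribers (audit, search indexing, notifications) be added without touching the publishing code.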

Posted 1 day ago

Apply

6.0 - 11.0 years

16 - 31 Lacs

Hyderabad

Hybrid

Location: Hyderabad only
Experience: 6 years
Work mode: Hybrid
Skills required: Java 8+, Spring, Spring Boot, Kafka or any JMS, basic knowledge of AWS, strong SQL knowledge

Posted 1 day ago

Apply

3.0 - 5.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

Position Title: Full Stack Engineer (Golang)
Experience: 3-5 years
Location: Ahmedabad
About Motadata:
Motadata is a renowned IT monitoring and management software company that has been transforming how businesses manage their ITOps since its inception. Our vision is to revolutionize the way organizations extract valuable insights from their IT networks. Bootstrapped since inception, Motadata has built up a formidable product suite comprising cutting-edge solutions, empowering enterprises to make informed decisions and optimize their IT infrastructure.
As a market leader, we take pride in our ability to collect and analyze data from various sources, in any format, providing a unified view of IT monitoring data. We are customer-centric and continuously strive for excellence, enabling organizations to tackle complex IT challenges with ease. Follow us on LinkedIn and be part of an innovative team. We're looking for folks who want to join us in unlocking the true potential of IT together!
Position Overview:
We are looking for a highly skilled Fullstack Developer (Golang) who excels in both backend and frontend development. The ideal candidate will have strong hands-on experience with Golang, event/message-driven architecture, and proficiency in modern frontend technologies such as React.js, Vue.js, or Next.js. A background in network programming is a plus.
Role & Responsibility:
• Develop and maintain backend services using Golang
• Design microservices with a focus on performance, scalability, and clean architecture
• Build RESTful APIs and ensure their smooth integration with frontend components
• Utilize concurrency and parallelism in Golang using goroutines, channels, etc.
• Implement and maintain event-driven/message-driven workflows using Kafka, RabbitMQ, or ActiveMQ
• Work with SQL/NoSQL databases such as PostgreSQL, MongoDB, or Redis
• Build responsive, interactive user interfaces using React.js, Vue.js, or Next.js
• Integrate frontend components with backend services to ensure seamless functionality
• Follow UI/UX best practices and collaborate with design teams for implementation
• Ensure application performance, responsiveness, and cross-browser compatibility
• Participate in Agile practices like sprint planning, daily stand-ups, and retrospectives
• Collaborate with QA, product managers, and fellow engineers to deliver high-quality features
• Write clean, maintainable, and testable code
• Continuously learn and apply modern development practices and tools
The above is a summary of the expected role and responsibilities and must not be considered an exhaustive list of duties.
Skills and Qualifications:
• Bachelor's or Master's degree in Computer Science, Engineering, or a related field
• 3-5 years of fullstack development experience
• Minimum 2 years of hands-on experience with Golang
• Proficiency in React.js, Vue.js, or Next.js
• Strong understanding of event-driven/message-driven architecture
• Experience with REST APIs, microservices, and backend scalability
• Knowledge of network programming is a plus
• Familiarity with databases: PostgreSQL, MongoDB, or Redis
• Strong understanding of version control (Git) and Agile methodologies
• Excellent analytical, problem-solving, and communication skills
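The goroutines-and-channels bullet above describes the classic fan-out/fan-in worker pattern. A hedged sketch of the same shape, written here in Python with threads and a queue for consistency with the other examples on this page (all names are illustrative); in Go, the queue would be a channel and each worker a goroutine.

```python
import queue
import threading

def worker_pool(jobs, handler, workers=4):
    """Fan jobs out to a fixed pool of workers over a shared queue: the same
    fan-out/fan-in shape Go expresses with goroutines and channels."""
    q = queue.Queue()
    for job in jobs:
        q.put(job)
    results, lock = [], threading.Lock()

    def worker():
        while True:
            try:
                job = q.get_nowait()
            except queue.Empty:
                return  # queue drained, worker exits
            out = handler(job)
            with lock:
                results.append(out)

    threads = [threading.Thread(target=worker) for _ in range(workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results

print(sorted(worker_pool(range(5), lambda n: n * n)))  # → [0, 1, 4, 9, 16]
```

The pool size caps concurrency regardless of how many jobs arrive, which is the property that keeps throughput high without exhausting downstream connections.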

Posted 1 day ago

Apply

6.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Position Summary... What you'll do...
Job Description Summary:
Responsible for coding, unit testing, and building high-performance, scalable applications that meet the needs of millions of Walmart International customers in the areas of supply chain management and customer experience.
About Team:
Our team collaborates with Walmart International, which has over 5,900 retail units operating outside of the United States under 55 banners in 26 countries, including Africa, Argentina, Canada, Central America, Chile, China, India, Japan, and Mexico, to name a few.
What you'll do:
Design, build, test, and deploy cutting-edge solutions at scale, impacting millions of customers worldwide, and drive value from the products we build at Walmart scale.
Work in a high-performing agile team environment, sharing innovative ideas and working collaboratively across teams.
Work with talented engineers and product visionaries to contribute to the vision and design of our web and mobile products.
Be a product-oriented Full Stack Developer, creating and experimenting with new ideas that will engage and excite our customers.
Own and lead the delivery of products, working along with a team of junior developers.
Interact with Walmart engineering teams across geographies to leverage expertise and contribute to the tech community.
Perform root cause analysis on applications to answer specific business questions and identify opportunities for improvement.
Utilize industry research to improve Sam's technology environment.
What you'll bring:
Bachelor's or Master's degree with 6+ years of experience in Computer Science or a related field.
Deep knowledge of Service-Oriented Architecture and experience implementing RESTful web services.
Strong Java programming skills and knowledge of JVM internals (concurrency, multi-threading).
Solid design and coding skills in Java and/or the Spring framework.
Extensive hands-on experience building services using these technologies (Java, J2EE, Spring Boot, Hibernate, JAX).
Strong computer science knowledge in algorithms, data structures, database concepts, and SQL technologies.
Experience with storage technologies such as Cosmos DB, Elasticsearch, Hive, Cassandra, Hadoop, and Kafka is good to have.
Cloud development experience.
Good-to-have experience in HTML5, JavaScript, CSS3, AJAX, GraphQL, React Native, React, Redux, Webpack, and Node.
Experience building scalable, highly available distributed systems in production.
Understanding of stream processing, with knowledge of Kafka.
Knowledge of software engineering best practices, with experience implementing CI/CD and log aggregation/monitoring/alerting for production systems.
Very good expertise in production-support activities (issue identification and resolution).
About Walmart Global Tech
Imagine working in an environment where one line of code can make life easier for hundreds of millions of people. That's what we do at Walmart Global Tech. We're a team of software engineers, data scientists, cybersecurity experts, and service professionals within the world's leading retailer who make an epic impact and are at the forefront of the next retail disruption. People are why we innovate, and people power our innovations. We are people-led and tech-empowered. We train our team in the skillsets of the future and bring in experts like you to help us grow. We have roles for those chasing their first opportunity as well as those looking for the opportunity that will define their career. Here, you can kickstart a great career in tech, gain new skills and experience for virtually every industry, or leverage your expertise to innovate at scale, impact millions, and reimagine the future of retail.
Flexible, hybrid work
We use a hybrid way of working, with primary in-office presence coupled with an optimal mix of virtual presence. We use our campuses to collaborate and be together in person, as business needs require and for development and networking opportunities. This approach helps us make quicker decisions, remove location barriers across our global team, and be more flexible in our personal lives.
Benefits
Beyond our great compensation package, you can receive incentive awards for your performance. Other great perks include a host of best-in-class benefits: maternity and parental leave, PTO, health benefits, and much more.
Belonging
We aim to create a culture where every associate feels valued for who they are, rooted in respect for the individual. Our goal is to foster a sense of belonging, to create opportunities for all our associates, customers, and suppliers, and to be a Walmart for everyone. At Walmart, our vision is "everyone included." By fostering a workplace culture where everyone is, and feels, included, everyone wins. Our associates and customers reflect the makeup of all 19 countries where we operate. By making Walmart a welcoming place where all people feel like they belong, we're able to engage associates, strengthen our business, improve our ability to serve customers, and support the communities where we operate.
Minimum Qualifications...
Outlined below are the required minimum qualifications for this position. If none are listed, there are no minimum qualifications.
Minimum Qualifications:
Option 1: Bachelor's degree in computer science, information technology, engineering, information systems, cybersecurity, or a related area, and 3 years' experience in software engineering or a related area at a technology, retail, or data-driven company.
Option 2: 5 years' experience in software engineering or a related area at a technology, retail, or data-driven company.
Preferred Qualifications...
Outlined below are the optional preferred qualifications for this position. If none are listed, there are no preferred qualifications.
Certification in Security+, GISF, CISSP, CCSP, or GSEC; Master's degree in computer science, information technology, engineering, information systems, cybersecurity, or a related area; and 1 year's experience leading information security or cybersecurity projects.
Information Technology - CISCO Certification - Certification
Primary Location...
BLOCK-1, PRESTIGE TECH PACIFIC PARK, SY NO. 38/1, OUTER RING ROAD KADUBEESANAHALLI, India
R-2219378

Posted 1 day ago

Apply

6.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Position Summary... What you'll do...
Job Description Summary:
Responsible for coding, unit testing, and building high-performance, scalable applications that meet the needs of millions of Walmart International customers in the areas of supply chain management and customer experience.
About Team:
Our team collaborates with Walmart International, which has over 5,900 retail units operating outside of the United States under 55 banners in 26 countries, including Africa, Argentina, Canada, Central America, Chile, China, India, Japan, and Mexico, to name a few.
What you'll do:
Design, build, test, and deploy cutting-edge solutions at scale, impacting millions of customers worldwide, and drive value from the products we build at Walmart scale.
Work in a high-performing agile team environment, sharing innovative ideas and working collaboratively across teams.
Work with talented engineers and product visionaries to contribute to the vision and design of our web and mobile products.
Be a product-oriented Full Stack Developer, creating and experimenting with new ideas that will engage and excite our customers.
Own and lead the delivery of products, working along with a team of junior developers.
Interact with Walmart engineering teams across geographies to leverage expertise and contribute to the tech community.
Perform root cause analysis on applications to answer specific business questions and identify opportunities for improvement.
Utilize industry research to improve Sam's technology environment.
What you'll bring:
Bachelor's or Master's degree with 6+ years of experience in Computer Science or a related field.
Deep knowledge of Service-Oriented Architecture and experience implementing RESTful web services.
Strong Java programming skills and knowledge of JVM internals (concurrency, multi-threading).
Solid design and coding skills in Java and/or the Spring framework.
Extensive hands-on experience building services using these technologies (Java, J2EE, Spring Boot, Hibernate, JAX).
Strong computer science knowledge in algorithms, data structures, database concepts, and SQL technologies.
Experience with storage technologies such as Cosmos DB, Elasticsearch, Hive, Cassandra, Hadoop, and Kafka is good to have.
Cloud development experience.
Good-to-have experience in HTML5, JavaScript, CSS3, AJAX, GraphQL, React Native, React, Redux, Webpack, and Node.
Experience building scalable, highly available distributed systems in production.
Understanding of stream processing, with knowledge of Kafka.
Knowledge of software engineering best practices, with experience implementing CI/CD and log aggregation/monitoring/alerting for production systems.
Very good expertise in production-support activities (issue identification and resolution).
About Walmart Global Tech
Imagine working in an environment where one line of code can make life easier for hundreds of millions of people. That's what we do at Walmart Global Tech. We're a team of software engineers, data scientists, cybersecurity experts, and service professionals within the world's leading retailer who make an epic impact and are at the forefront of the next retail disruption. People are why we innovate, and people power our innovations. We are people-led and tech-empowered. We train our team in the skillsets of the future and bring in experts like you to help us grow. We have roles for those chasing their first opportunity as well as those looking for the opportunity that will define their career. Here, you can kickstart a great career in tech, gain new skills and experience for virtually every industry, or leverage your expertise to innovate at scale, impact millions, and reimagine the future of retail.
Flexible, hybrid work
We use a hybrid way of working, with primary in-office presence coupled with an optimal mix of virtual presence. We use our campuses to collaborate and be together in person, as business needs require and for development and networking opportunities. This approach helps us make quicker decisions, remove location barriers across our global team, and be more flexible in our personal lives.
Benefits
Beyond our great compensation package, you can receive incentive awards for your performance. Other great perks include a host of best-in-class benefits: maternity and parental leave, PTO, health benefits, and much more.
Belonging
We aim to create a culture where every associate feels valued for who they are, rooted in respect for the individual. Our goal is to foster a sense of belonging, to create opportunities for all our associates, customers, and suppliers, and to be a Walmart for everyone. At Walmart, our vision is "everyone included." By fostering a workplace culture where everyone is, and feels, included, everyone wins. Our associates and customers reflect the makeup of all 19 countries where we operate. By making Walmart a welcoming place where all people feel like they belong, we're able to engage associates, strengthen our business, improve our ability to serve customers, and support the communities where we operate.
Minimum Qualifications...
Outlined below are the required minimum qualifications for this position. If none are listed, there are no minimum qualifications.
Minimum Qualifications:
Option 1: Bachelor's degree in computer science, information technology, engineering, information systems, cybersecurity, or a related area, and 3 years' experience in software engineering or a related area at a technology, retail, or data-driven company.
Option 2: 5 years' experience in software engineering or a related area at a technology, retail, or data-driven company.
Preferred Qualifications...
Outlined below are the optional preferred qualifications for this position. If none are listed, there are no preferred qualifications.
Certification in Security+, GISF, CISSP, CCSP, or GSEC; Master's degree in computer science, information technology, engineering, information systems, cybersecurity, or a related area; and 1 year's experience leading information security or cybersecurity projects.
Information Technology - CISCO Certification - Certification
Primary Location...
Pardhanani Wilshire II, Cessna Business Park, Kadubeesanahalli Village, Varthur Hobli, India
R-2180398

Posted 1 day ago

Apply

0 years

0 Lacs

Trivandrum, Kerala, India

On-site

We are seeking a skilled and experienced Golang Developer to join our AI product engineering team. You will be responsible for building scalable backend systems and APIs that power intelligent applications and services. This is a high-impact role where you'll work closely with data scientists, AI engineers, and frontend developers to integrate AI models into production-ready software.
Key Responsibilities:
Design, develop, and maintain high-performance backend services in Go.
Develop RESTful APIs and microservices for AI-driven applications.
Integrate with machine learning models, inference pipelines, and data processing tools.
Optimize system performance and ensure low-latency, high-throughput APIs.
Collaborate with DevOps teams to containerize and deploy applications (Docker, Kubernetes).
Work closely with AI/ML teams to convert research prototypes into scalable services.
Ensure code quality through unit testing, code reviews, and best practices.
Contribute to architecture decisions and scalability planning.
Required Skills & Qualifications:
Strong proficiency in Go (Golang) and experience building backend services.
Hands-on experience with REST APIs, microservices architecture, and distributed systems.
Familiarity with AI/ML model integration (e.g., via gRPC, REST, TensorFlow Serving, or ONNX).
Experience with PostgreSQL, MongoDB, Redis, or other databases.
Understanding of containerization (Docker) and orchestration (Kubernetes).
Familiarity with message brokers like Kafka, RabbitMQ, or NATS.
Experience working with Git, CI/CD pipelines, and cloud platforms (AWS, GCP, Azure).
Good understanding of concurrency patterns, performance tuning, and memory management in Go.
What We Offer:
Opportunity to work on cutting-edge AI solutions.
Flexible and collaborative work environment.
Competitive salary and benefits.
Career growth and continuous learning opportunities.
This is a work-from-office position.
Only candidates who are willing to relocate to Trivandrum should apply.
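The concurrency-pattern requirement above (worker pools, fan-out/fan-in) can be sketched as follows. Python is used here for brevity; in the role itself this would be Go goroutines and channels. The function names and pool size are illustrative assumptions, not part of the posting.

```python
# Minimal worker-pool sketch of a common backend concurrency pattern.
# A fixed number of workers drains a shared queue; "poison pills" (None)
# signal shutdown. In Go this shape maps to goroutines reading a channel.
import queue
import threading

def run_worker_pool(jobs, handler, workers=4):
    """Fan a collection of jobs out to a fixed pool of workers, collect results."""
    in_q = queue.Queue()
    results = []
    lock = threading.Lock()

    def worker():
        while True:
            item = in_q.get()
            if item is None:          # poison pill: shut this worker down
                break
            out = handler(item)
            with lock:                # results list is shared across workers
                results.append(out)

    threads = [threading.Thread(target=worker) for _ in range(workers)]
    for t in threads:
        t.start()
    for job in jobs:
        in_q.put(job)
    for _ in threads:
        in_q.put(None)                # one pill per worker
    for t in threads:
        t.join()
    return results

if __name__ == "__main__":
    squares = run_worker_pool(range(10), lambda n: n * n)
    print(sorted(squares))            # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
```

Results arrive in nondeterministic order (hence the sort); a production version would also need per-job error handling rather than letting one exception kill a worker.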

Posted 1 day ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

Job Description:
- Strong MERN (MongoDB + Mongoose, Express, React, Node.js) full-stack experience using TypeScript.
- Experience leading code reviews.
- Demonstrated understanding of SOLID principles.
- Demonstrated understanding of TDD or BDD; good knowledge and experience of writing solid unit tests.
- Strong implementation design using cloud.
- Microservices architecture, GraphQL, RESTful web services, NoSQL & SQL.
- Docker & containerization, CI/CD pipeline design & build.
- Event-driven architecture, Kafka.
- Use of tools such as JIRA, Git, GitHub.
- Strong communication skills; ability to lead discussions and solve problems for the dev team.
- Ability to manage expectations of Product/Business teams in terms of prioritizing quality and aligning to the strategic vision.
- Provide guidance on inline technical documentation.

Location: DGS India - Pune - Baner M- Agile
Brand: Merkle
Time Type: Full time
Contract Type: Permanent
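The event-driven architecture item above can be illustrated with a tiny in-process publish/subscribe bus. This Python sketch is only a stand-in for the decoupling idea; in the posting's stack, Kafka topics (with durability and partitioning) play this role, and all names here are hypothetical.

```python
# In-process pub/sub sketch: producers publish events to a topic without
# knowing who consumes them, and subscribers react independently. Kafka
# provides the durable, distributed version of this same decoupling.
from collections import defaultdict

class EventBus:
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        """Register a handler to be called for every event on `topic`."""
        self._subscribers[topic].append(handler)

    def publish(self, topic, event):
        """Deliver `event` to every handler subscribed to `topic`."""
        for handler in self._subscribers[topic]:
            handler(event)

bus = EventBus()
audit_log = []
bus.subscribe("order.created", lambda e: audit_log.append(e["id"]))
bus.publish("order.created", {"id": 42})
print(audit_log)  # [42]
```

Adding a second subscriber (say, an email notifier) requires no change to the publisher, which is the core benefit the pattern buys.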

Posted 1 day ago

Apply

6.0 - 10.0 years

13 - 18 Lacs

Mumbai

Work from Office

Optum is a global organization that delivers care, aided by technology, to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.

We are looking for a skilled Data Engineer to design, build, and maintain scalable, secure, and high-performance data solutions. This role spans the full data engineering lifecycle – from research and architecture to deployment and support – within cloud-native environments, with a strong focus on AWS and Kubernetes (EKS).

Primary Responsibilities:
- Data Engineering Lifecycle: Lead research, proof of concept, architecture, development, testing, deployment, and ongoing maintenance of data solutions
- Data Solutions: Design and implement modular, flexible, secure, and reliable data systems that scale with business needs
- Instrumentation and Monitoring: Integrate pipeline observability to detect and resolve issues proactively
- Troubleshooting and Optimization: Develop tools and processes to debug, optimize, and maintain production systems
- Tech Debt Reduction: Identify and address legacy inefficiencies to improve performance and maintainability
- Debugging and Troubleshooting: Quickly diagnose and resolve unknown issues across complex systems
- Documentation and Governance: Maintain clear documentation of data models, transformations, and pipelines to ensure security and governance compliance
- Cloud Expertise: Leverage advanced skills in AWS and EKS to build, deploy, and scale cloud-native data platforms
- Cross-Functional Support: Collaborate with analytics, application development, and business teams to enable data-driven solutions
- Team Leadership: Lead and mentor engineering teams to ensure operational efficiency and innovation
- Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so

Required Qualifications:
- Bachelor’s degree in Computer Science or a related field
- 5+ years of experience in data engineering or related roles
- Proven experience designing and deploying scalable, secure, high-quality data solutions
- Solid expertise across the full data engineering lifecycle (research to maintenance)
- Advanced AWS and EKS knowledge
- Proficiency in CI/CD, IaC, and addressing tech debt
- Proven skill in monitoring and instrumentation of data pipelines
- Proven advanced troubleshooting and performance optimization abilities
- Proven ownership mindset with the ability to manage multiple components
- Proven effective cross-functional collaborator (DS, SMEs, and external teams)
- Proven exceptional debugging and problem-solving skills
- Proven solid individual contributor with a team-first approach

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone – of every race, gender, sexuality, age, location and income – deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes – an enterprise priority reflected in our mission.

#njp External Candidate Application Internal Employee Application
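The "Instrumentation and Monitoring" responsibility above can be sketched as a small stage-level metrics decorator. This is an illustrative Python sketch, not Optum's actual stack: the metric names and in-memory counter store are assumptions, and a real pipeline would export these counters to a monitoring backend such as Prometheus or CloudWatch.

```python
# Pipeline-observability sketch: wrap each pipeline stage so run counts,
# failure counts, and cumulative duration are recorded automatically,
# letting operators spot failing or slowing stages proactively.
import time
from collections import defaultdict
from functools import wraps

metrics = defaultdict(lambda: {"runs": 0, "failures": 0, "total_seconds": 0.0})

def instrumented(stage_name):
    def decorate(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.monotonic()
            metrics[stage_name]["runs"] += 1
            try:
                return fn(*args, **kwargs)
            except Exception:
                metrics[stage_name]["failures"] += 1
                raise                 # re-raise so callers still see the error
            finally:
                metrics[stage_name]["total_seconds"] += time.monotonic() - start
        return wrapper
    return decorate

@instrumented("validate")
def validate(rows):
    """Toy stage: drop rows missing an id."""
    return [r for r in rows if r.get("id") is not None]

clean = validate([{"id": 1}, {"id": None}, {"id": 3}])
print(len(clean), metrics["validate"]["runs"])  # 2 1
```

The same decorator can wrap every stage of an ETL job, so alerting rules ("failures > 0 in the last hour", "duration doubled") fall out of one shared mechanism.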

Posted 1 day ago

Apply

5.0 - 9.0 years

13 - 18 Lacs

Mumbai

Work from Office

Optum is a global organization that delivers care, aided by technology, to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.

Optum’s Applied AI team is seeking an experienced and pragmatic Lead Data Scientist to drive the end-to-end ML lifecycle, from problem definition to robust, scalable model deployment and continuous improvement. This role demands deep expertise in machine learning, particularly with advanced transformer models and Large Language Models (LLMs), applied to complex domains such as clinical document understanding and semantic search.

Primary Responsibilities:
- Own the end-to-end data science lifecycle – from problem definition and experimentation to deployment, monitoring, and continuous improvement
- Design and deploy robust, explainable, and scalable ML models for clinical document understanding, named entity recognition, context disambiguation, and semantic search across prospective and retrospective use cases
- Lead model development with a focus on production-readiness, incorporating solid MLOps, reproducibility, and experimentation practices
- Diagnose and optimize model performance, mitigate bias, and ensure analytical integrity, accuracy, and operational efficiency
- Work hands-on with multi-modal transformer models for tasks like NER, handwriting and form understanding, and document classification
- Leverage LLMs and SLMs for clinical reasoning, automated annotation, data generation, and downstream distillation
- Collaborate with cross-functional teams – including ML engineers, annotators, and clinical domain experts – to translate business challenges into deployable AI solutions
- Implement automated data labeling pipelines using techniques like active learning, weak supervision, and human-in-the-loop systems
- Ensure reproducibility and operational excellence through Git, DVC, CI/CD pipelines, and orchestration tools (e.g., Airflow, Kafka)
- Mentor and guide junior scientists and engineers, lead technical design reviews, and set best practices for model architecture and evaluation
- Continuously identify and close gaps in the ML platform, proposing and implementing innovative solutions to improve performance, scalability, and reliability
- Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so

Required Qualifications:
- Bachelor’s degree in Computer Science or an adjacent field
- Advanced degree in a field that emphasizes the use of data science/statistics techniques (e.g., Computer Science, Applied Mathematics, or a field with direct NLP application)
- 5+ years of experience in Data Science with a focus on Machine Learning and Natural Language Processing
- Solid understanding of machine learning algorithms, NLP principles, and data modeling principles
- Proficiency in Python, R, and SQL; experience with NLP libraries such as NLTK and SpaCy, and transformer models such as BERT
- Proven excellent communication skills
- Proven flexibility to provide support during critical business periods
- Proven ability to interpret and present complex data in various formats
- Proven solid leadership skills, ability to meet deadlines, and work independently
- An analytical mindset for addressing complex business needs
- Proven positive team player with a drive to learn and contribute to achieving results
- Willingness to work in varying shifts

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone – of every race, gender, sexuality, age, location and income – deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes – an enterprise priority reflected in our mission.

#NJP External Candidate Application Internal Employee Application
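Entity-level evaluation comes up repeatedly in the NER work this posting describes. Below is a minimal sketch of exact-match precision, recall, and F1 over predicted entity spans; the span format and example entities are illustrative assumptions, and real clinical NER evaluation often also reports partial matches and per-label breakdowns.

```python
# Entity-level NER scoring sketch. Entities are (start, end, label) tuples;
# a prediction counts as correct only if the span AND label both match a
# gold entity exactly (the strictest common scoring convention).
def ner_scores(gold, predicted):
    gold_set, pred_set = set(gold), set(predicted)
    tp = len(gold_set & pred_set)                  # exact-match true positives
    precision = tp / len(pred_set) if pred_set else 0.0
    recall = tp / len(gold_set) if gold_set else 0.0
    denom = precision + recall
    f1 = (2 * precision * recall / denom) if denom else 0.0
    return precision, recall, f1

# Hypothetical clinical example: one span matched, one missed, one spurious.
gold = [(0, 7, "DRUG"), (12, 20, "DOSE")]
pred = [(0, 7, "DRUG"), (25, 30, "DOSE")]
p, r, f1 = ner_scores(gold, pred)
print(round(p, 2), round(r, 2), round(f1, 2))  # 0.5 0.5 0.5
```

Exact-match scoring penalizes off-by-one span boundaries harshly, which is why teams working on handwriting and form understanding often track a relaxed overlap metric alongside it.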

Posted 1 day ago

Apply

6.0 years

0 Lacs

Pune, Maharashtra, India

Remote

Position: Sr Data Operations
Years of Experience: 6-8 years
Job Location: S.B. Road, Pune; remote for other locations

The Position
We are seeking a seasoned engineer with a passion for changing the way millions of people save energy. You’ll work within the Deliver and Operate team to build and improve our platforms, deliver flexible and creative solutions to our utility partners and end users, and help us achieve our ambitious goals for our business and the planet. We are seeking a highly skilled and detail-oriented Software Engineer II for the Data Operations team to maintain our data infrastructure, pipelines, and workflows. You will play a key role in ensuring the smooth ingestion, transformation, validation, and delivery of data across systems. This role is ideal for someone with a strong understanding of data engineering and operational best practices who thrives in high-availability environments.

Responsibilities
- Monitor and maintain data pipelines and ETL processes to ensure reliability and performance.
- Automate routine data operations tasks and optimize workflows for scalability and efficiency.
- Troubleshoot and resolve data-related issues, ensuring data quality and integrity.
- Collaborate with data engineering, analytics, and DevOps teams to support data infrastructure.
- Implement monitoring, alerting, and logging systems for data pipelines.
- Maintain and improve data governance, access controls, and compliance with data policies.
- Support deployment and configuration of data tools, services, and platforms.
- Participate in on-call rotation and incident response related to data system outages or failures.

Required Skills
- 5+ years of experience in data operations, data engineering, or a related role.
- Strong SQL skills and experience with relational databases (e.g., PostgreSQL, MySQL).
- Proficiency with data pipeline tools (e.g., Apache Airflow).
- Experience with cloud platforms (AWS, GCP) and cloud-based data services (e.g., Redshift, BigQuery).
- Familiarity with scripting languages such as Python, Bash, or Shell.
- Knowledge of version control (e.g., Git) and CI/CD workflows.

Qualifications
- Bachelor's degree in Computer Science, Engineering, Data Science, or a related field.
- Experience with data observability tools (e.g., Splunk, DataDog).
- Background in DevOps or SRE with a focus on data systems.
- Exposure to infrastructure-as-code (e.g., Terraform, CloudFormation).
- Knowledge of streaming data platforms (e.g., Kafka, Spark Streaming).
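A routine data-quality check of the kind this role automates might look like the following sketch. It uses the stdlib sqlite3 driver as a stand-in for PostgreSQL/MySQL; the table name, columns, and thresholds are illustrative assumptions, not anything from the posting.

```python
# Data-quality gate sketch: assert minimum row counts and a maximum null
# rate on a critical column, returning human-readable problems for alerting.
import sqlite3

def check_table(conn, table, not_null_col, min_rows=1, max_null_rate=0.0):
    """Return a list of data-quality problem descriptions (empty = healthy)."""
    total = conn.execute(f"SELECT COUNT(*) FROM {table}").fetchone()[0]
    nulls = conn.execute(
        f"SELECT COUNT(*) FROM {table} WHERE {not_null_col} IS NULL"
    ).fetchone()[0]
    problems = []
    if total < min_rows:
        problems.append(f"{table}: only {total} rows (expected >= {min_rows})")
    if total and nulls / total > max_null_rate:
        problems.append(f"{table}.{not_null_col}: {nulls}/{total} NULLs")
    return problems

# Hypothetical energy-domain table with one bad row.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE meter_readings (id INTEGER, kwh REAL)")
conn.executemany("INSERT INTO meter_readings VALUES (?, ?)",
                 [(1, 3.2), (2, None), (3, 1.8)])
print(check_table(conn, "meter_readings", "kwh"))
```

In practice a check like this would run as a task after each load (e.g., as an Airflow task) and page on-call when the returned list is non-empty.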

Posted 1 day ago

Apply

10.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Technical Architect

About the role
We’re looking for a hands-on Technical Architect who can shape end-to-end architecture for highly scalable, low-latency applications (think tens of thousands of TPS), guide engineering teams on design best practices, and champion modern cloud-native patterns. You’ll partner with engineering managers, fellow architects, and leadership to translate business ideas into resilient, observable, and secure systems.

Key responsibilities (% focus / what you’ll do)

60%:
- Own high-level and component-level architecture for new and existing backend services (Node.js/Express, Socket.IO), balancing performance, cost, and maintainability.
- Provide architectural governance: lead design reviews, set coding & unit-testing standards, review pull requests, and establish reusable libraries/packages.
- Evaluate emerging open-source projects and cloud offerings; pilot proofs-of-concept, publish findings, and influence tech-stack evolution.

20%:
- Guide migration and green-field build-outs of microservices, serverless functions, and containerized workloads (EKS, Cloud Functions, Lambda, etc.).
- Drive engineering excellence: define CI/CD pipelines, configuration management, automated quality gates, and blue-green or canary deployments.

20%:
- Mentor senior and mid-level engineers; foster a culture of clean code, DDD, secure coding, and performance first.

Required qualifications
- Experience: 8–10 years in software engineering with 3+ years in an architect or staff-engineer role, designing high-TPS systems.
- Education: Bachelor’s / Master’s in Computer Science, Engineering, or an equivalent hands-on track record.
- Core stack: Deep Node.js & TypeScript proficiency, Express, Socket.IO.
- Cloud: Hands-on AWS architecture (serverless, containers, VPC design, IAM, cost optimisation).
- Architectural patterns: SOA, microservices, DDD, event-driven & CQRS, high-availability / DR topologies, secure multi-tenant design.
- Data: Solid grounding in RDBMS + NoSQL (e.g., MySQL, MongoDB), caching, and messaging (Kafka, SQS).
- Quality mindset: Unit/contract testing, CI/CD, static analysis, configuration-as-code, and observability (logs, metrics, tracing).
- Collaboration: Proven influence across cross-functional teams, excellent written & verbal communication, ability to sell architectural trade-offs to both engineers and execs.

Nice-to-have
- Experience publishing internal NPM modules or design-system-style component libraries.
- Exposure to WebSockets at scale, edge compute (Edge Workers).
- Prior work in gaming / fintech / real-time data domains where latency < 100 ms matters.
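One deployment practice the role names, canary releases, can be sketched as deterministic traffic splitting: hash each user's id so a fixed share of users consistently lands on the canary build. This is an illustrative Python sketch (the role's own stack is Node.js), and the percentage and bucketing scheme are assumptions.

```python
# Canary-routing sketch: hashing the user id into 100 buckets gives each
# user a stable assignment, so the same user always sees the same version
# while roughly `canary_percent` of all users exercise the new build.
import hashlib

def route(user_id, canary_percent=10):
    digest = hashlib.sha256(str(user_id).encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return "canary" if bucket < canary_percent else "stable"

# Over many users the canary share converges near the configured percentage.
share = sum(route(uid) == "canary" for uid in range(10_000)) / 10_000
print(0.05 < share < 0.15)  # True
```

Stable per-user assignment matters more than an exact percentage: it keeps sessions consistent and makes canary metrics comparable to the stable cohort.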

Posted 1 day ago

Apply

0 years

0 Lacs

Indore, Madhya Pradesh, India

On-site

POSITION OVERVIEW: Senior Developer

Job Description: Senior Application Developer with Google Cloud Platform experience in BigQuery, SQL, and Cloud Run. We need a Senior Application Developer with a GCP skillset for a project involving the re-design and re-platform of a legacy Revenue Allocation system.

Mandatory Skills: GCP BigQuery, SQL, Cloud Run
Desired Skills: Linux shell scripting is a huge plus. Nice to have: Kafka, MQ Series, Oracle PL/SQL

Posted 1 day ago

Apply