
281 OpenTelemetry Jobs

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

10.0 - 15.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site


Location: Noida, India

Thales people architect identity management and data protection solutions at the heart of digital security. Businesses and governments rely on us to bring trust to the billions of digital interactions they have with people. Our technologies and services help banks exchange funds, people cross borders, energy become smarter and much more. More than 30,000 organizations already rely on us to verify the identities of people and things, grant access to digital services, analyze vast quantities of information and encrypt data to make the connected world more secure. Present in India since 1953, Thales is headquartered in Noida, Uttar Pradesh, and has operational offices and sites spread across Bengaluru, Delhi, Gurugram, Hyderabad, Mumbai and Pune, among others. Over 1,800 employees work with Thales and its joint ventures in India. Since the beginning, Thales has played an essential role in India's growth story by sharing its technologies and expertise in the Defence, Transport, Aerospace and Digital Identity and Security markets.

Job Profile: Part of the SRE (Site Reliability Engineering) team. Deployment and proactive improvement of monitoring for complex Kubernetes/microservice-based applications on any cloud provider (Azure/AWS/GCP).

Required Skills: Ability to demonstrate solid skills in Azure/AWS/GCP, Kubernetes, and the Unix/Linux platform. Knowledge of cluster and cloud/VM-based solution deployment and management, including networking, servers and storage. Experience in DevOps with Kubernetes. Strong knowledge of CI/CD tools (Jenkins, Bamboo, etc.). Experience with cloud platforms and infrastructure automation. Experience with Java/Python/Go to support workloads implemented in these languages. Experience with Tomcat- and Spring Boot-based Java workloads. Experience with at least one observability platform such as Prometheus, Splunk, Datadog or Honeycomb. Expertise in Python/Go or a similar scripting language to develop the tools required for SRE work. Experience conducting or participating in Root Cause Analysis sessions. Must have completed at least one end-to-end project in a technical DevOps role, preferably in a global organization. Practical understanding of Ansible and Docker, and implementation of solutions based on these tools, is preferred. Ability to handle escalations. Proven ability to learn and apply new skills and processes quickly and to train others on the team. Demonstrated experience as an individual contributor with customer focus and service orientation, with solid leadership and coaching skills. Ability to communicate courteously and effectively with customers, third-party vendors and partners. Proficiency with collaboration and tracking software such as JIRA and Confluence. Excellent written and verbal communication skills in English.

Desired Skills: Exposure to Datadog, Kafka, Keycloak or similar solutions. Good exposure to Terraform/Terragrunt. OpenTelemetry experience.

Experience Required: 10-15 years of total experience, mainly in a DevOps role.

Note: This is a hands-on development and design role. At Thales we provide careers, not only jobs. With Thales employing 80,000 employees in 68 countries, our mobility policy enables thousands of employees each year to develop their careers at home and abroad, in their existing areas of expertise or by branching out into new fields. Together we believe that embracing flexibility is a smarter way of working. Great journeys start here, apply now!
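The role asks for Python/Go scripting to build SRE tooling. Purely as an illustration (not part of the listing), a minimal Python sketch of such a tool, computing an error rate from hypothetical access-log lines, might look like this:

```python
import re
from collections import Counter

# Hypothetical SRE helper: count responses by status code and compute the
# fraction of 5xx errors from log lines of the form
# '<timestamp> <method> <path> <status>'. The log format is invented.
LOG_LINE = re.compile(r"\S+ \S+ \S+ (?P<status>\d{3})$")

def error_rate(lines):
    """Return (counts_by_status, fraction of 5xx responses)."""
    counts = Counter()
    for line in lines:
        m = LOG_LINE.match(line.strip())
        if m:
            counts[m.group("status")] += 1
    total = sum(counts.values())
    errors = sum(n for code, n in counts.items() if code.startswith("5"))
    return counts, (errors / total if total else 0.0)

sample = [
    "2024-01-01T00:00:00Z GET /health 200",
    "2024-01-01T00:00:01Z GET /api/v1/users 500",
    "2024-01-01T00:00:02Z POST /api/v1/users 201",
    "2024-01-01T00:00:03Z GET /api/v1/users 503",
]
counts, rate = error_rate(sample)
print(rate)  # 0.5
```

In practice such a script would feed a dashboard or alert rather than print, but the parsing-and-aggregation shape is the same.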

Posted 6 hours ago

Apply

3.0 years

0 Lacs

Pune, Maharashtra, India

Remote


Sarvaha would like to welcome a skilled Observability Engineer with a minimum of 3 years of experience to contribute to designing, deploying, and scaling our monitoring and logging infrastructure on Kubernetes. In this role, you will play a key part in enabling end-to-end visibility across cloud environments by processing petabyte-scale data, helping teams enhance reliability, detect anomalies early, and drive operational excellence. Sarvaha is a niche software development company that works with some of the best-funded startups and established companies across the globe. Please visit our website at https://www.sarvaha.com to know more about us.

What You'll Do: Configure and manage observability agents across AWS, Azure and GCP. Use IaC techniques and tools such as Terraform, Helm and GitOps to automate deployment of the observability stack. Work with different language stacks such as Java, Ruby, Python and Go. Instrument services using OpenTelemetry and integrate telemetry pipelines. Optimize telemetry and metrics storage using time-series databases such as Mimir and NoSQL DBs. Create dashboards, set up alerts, and track SLIs/SLOs. Enable RCA and incident response using observability data. Secure the observability pipeline.

You Bring: BE/BTech/MTech (CS/IT or MCA), with an emphasis on software engineering. Strong skills in reading and interpreting logs, metrics, and traces. Proficiency with the LGTM stack (Loki, Grafana, Tempo, Mimir) or similar tools such as Jaeger, Datadog, Zipkin or InfluxDB. Familiarity with log frameworks such as log4j, lograge, Zerolog or loguru. Knowledge of OpenTelemetry, IaC, and security best practices. Clear documentation of observability processes, logging standards and instrumentation guidelines. Ability to proactively identify, debug, and resolve issues using observability data. Focus on maintaining data quality and integrity across the observability pipeline.

Why Join Sarvaha? Top-notch remuneration and excellent growth opportunities. An excellent, no-nonsense work environment with the very best people to work with. Highly challenging software implementation and deployment problems. Hybrid mode: we offered complete work from home even before the pandemic.
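To make the instrumentation requirement concrete: OpenTelemetry models work as nested, timed spans. The stdlib-only sketch below imitates that idea with invented names; the real API lives in the `opentelemetry-sdk` package and differs in detail.

```python
import time
import uuid
from contextlib import contextmanager

# Simplified imitation of tracing spans (NOT the real OpenTelemetry API):
# each span records a name, its own id, its parent's id, and a duration.
SPANS = []

@contextmanager
def span(name, parent_id=None):
    record = {
        "name": name,
        "span_id": uuid.uuid4().hex[:16],
        "parent_id": parent_id,
        "start": time.monotonic(),
    }
    try:
        yield record
    finally:
        record["duration_s"] = time.monotonic() - record["start"]
        SPANS.append(record)

with span("handle_request") as parent:
    with span("query_db", parent_id=parent["span_id"]):
        time.sleep(0.01)  # simulated work

# Inner spans finish first, so they are exported first.
print([s["name"] for s in SPANS])  # ['query_db', 'handle_request']
```

A real telemetry pipeline would export these records to a collector instead of appending to a list, but the parent/child structure is what backends like Tempo or Jaeger reconstruct into traces.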

Posted 1 day ago

Apply

7.0 years

0 Lacs

Delhi

On-site


Seeking a highly skilled and experienced Senior .NET Developer to work closely with customer teams. This role is pivotal in designing, developing, and, crucially, mentoring others in the adoption of modern Cloud Native Development practices. If you're passionate about pairing, fostering technical growth, and building robust microservices-based solutions with .NET and Podman, we want to hear from you.

Job Nature - Contractual for 2 months
Job Location - Delhi/NCR
Job Mode - WFO (Mon to Fri)

Key Responsibilities: Lead the design, development, and implementation of high-quality, scalable, and secure microservices using C# and the .NET (Core) ecosystem. Drive the adoption and implementation of Continuous Delivery (CD) pipelines, ensuring efficient and reliable software releases for microservices. Apply Test-Driven Development (TDD) practices, writing comprehensive unit, integration, and end-to-end tests to ensure code quality and maintainability within a microservices architecture. Design, develop, and deploy .NET microservices within containers, leveraging inner-loop practices. Utilize Podman/Docker Compose (or similar multi-container tooling compatible with Podman) for local development environments and multi-service microservices application setups. Implement robust API testing strategies, including automated tests for RESTful APIs across microservices. Integrate and utilize observability tools and practices (e.g., logging, metrics, tracing) to monitor application health and performance and to troubleshoot issues effectively in a containerized microservices environment. Collaborate closely with product owners, architects, and other developers to translate business requirements into technical solutions, specifically focusing on microservices design. Play a key mentoring role, actively participating in pairing sessions, providing technical guidance, and fostering the development of junior and mid-level engineers in microservices development. Contribute to code reviews with an eye for quality, maintainability, and knowledge transfer within a microservices context. Actively participate in architectural discussions and contribute to technical decision-making, particularly concerning microservices design patterns, containerization strategies with Podman, and overall system architecture. Stay up-to-date with emerging technologies and industry best practices in .NET, microservices, and containerization, advocating for their adoption where appropriate. Troubleshoot and debug complex issues across various environments, including Podman containers and distributed microservices.

Required Skills and Experience: 7+ years of professional experience in software development with a strong focus on the Microsoft .NET (Core) ecosystem (ideally .NET 6+ or .NET 8+). Expertise in C# and building modern applications with .NET Core. Demonstrable experience designing, developing, and deploying microservices architectures. Demonstrable experience with Continuous Delivery (CD) principles and tools (e.g., Azure DevOps, GitLab CI/CD, Jenkins). Proven track record of applying Test-Driven Development (TDD) methodologies. Strong practical experience with Podman, including building and running .NET applications in Podman containers, and an understanding of its daemonless/rootless architecture benefits. Proficiency in using Podman Compose (or similar approaches) for managing multi-container .NET applications locally. Extensive experience with API testing frameworks and strategies (e.g., Postman, Newman, SpecFlow, Playwright, xUnit/NUnit for integration tests). Deep understanding and practical experience with observability principles and tools (e.g., Application Insights, Prometheus, Grafana, OpenTelemetry, ELK Stack, Splunk). Solid understanding of RESTful API design and development. Experience with relational databases (e.g., SQL Server, PostgreSQL) and ORMs (e.g., Entity Framework Core).
Excellent mentorship and communication skills, with a passion for knowledge sharing and team development. Excellent problem-solving, analytical, and communication skills. Ability to work independently and as part of a collaborative team.
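The observability requirement above centers on structured logging: emitting machine-parseable records instead of free text. Sketched here in Python for brevity (the posting's stack is .NET, where a library such as Serilog plays a similar role); the field names are illustrative.

```python
import json
import logging

# Minimal structured-logging sketch: render each log record as one JSON
# object so a log pipeline (Loki, ELK, Splunk, ...) can index its fields.
class JsonFormatter(logging.Formatter):
    def format(self, record):
        payload = {
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        }
        return json.dumps(payload)

handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logger = logging.getLogger("orders")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("order created id=%s", 42)
# emits: {"level": "INFO", "logger": "orders", "message": "order created id=42"}
```

A production formatter would also carry timestamps, trace ids, and exception info; the point is that every record becomes a queryable object.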

Posted 1 day ago

Apply


0 years

6 - 9 Lacs

Delhi, India

On-site


A fast-growing player in the enterprise SaaS and cloud-native solutions space, we build high-availability, high-throughput platforms that power digital services for Fortune 500 clients across finance, retail, and healthcare. Our engineering culture revolves around clean code, test automation, and DevOps-driven delivery to ship scalable back-end services that keep mission-critical applications running 24×7.

Role & Responsibilities: Design, develop, and optimize server-side features in Java/Python, ensuring sub-second latency and 99.99% uptime. Implement RESTful and gRPC APIs, integrating with microservices, event streams, and third-party systems. Own the end-to-end module lifecycle—architecture, coding, unit/integration testing, containerization, and deployment. Collaborate with Product, Front-End, and DevOps teams to translate business requirements into production-ready services. Profile, refactor, and tune databases (SQL & NoSQL) to improve query performance and data integrity. Establish engineering best practices—code reviews, automated pipelines, observability dashboards—and mentor junior developers.

Skills & Qualifications

Must-Have: 3–6 yrs hands-on building back-end services with Java (Spring Boot) or Python (FastAPI/Django). Deep understanding of RESTful design, JSON, and API versioning; exposure to gRPC or GraphQL a plus. Proficient in relational databases (PostgreSQL/MySQL) and at least one NoSQL store (MongoDB/Redis). Experience with Docker, Kubernetes, and CI/CD pipelines (Jenkins/GitHub Actions) for automated delivery. Solid grasp of data structures, algorithms, and multithreading; able to debug complex production issues. Comfortable working onsite in an Agile, test-driven environment with daily stand-ups and sprint rituals.

Preferred: Knowledge of message brokers (Kafka/RabbitMQ) and event-driven microservice architectures. Hands-on with cloud services—AWS ECS/EKS, Azure AKS, or GCP GKE—plus infrastructure-as-code (Terraform). Exposure to monitoring stacks (Prometheus, Grafana) and distributed tracing (Jaeger, OpenTelemetry).

Benefits & Culture Highlights: Engineer-centric culture: hackathons, architecture guilds, and dedicated learning budgets. Cutting-edge tech stack and green-field projects that influence millions of users. Performance-linked bonuses, comprehensive medical cover, and on-site wellness facilities.

Skills: gRPC, OpenTelemetry, AWS EKS, data structures, AWS, RabbitMQ, Azure AKS, Git, NoSQL, Jenkins, GitHub Actions, Django, CI/CD, Kubernetes, algorithms, Terraform, Java, PostgreSQL, Docker, DevOps, Agile, AWS ECS, Prometheus, Python, SQL, cloud, FastAPI, MongoDB, RESTful APIs, Kafka, GCP GKE, GraphQL, Jaeger, Spring Boot, MySQL, Grafana, Redis, multithreading, microservices
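Latency targets like the sub-second goal above are usually tracked as percentile SLIs rather than averages. A small self-contained sketch (function name and fields invented) of computing p50/p99 from latency samples:

```python
import statistics

# Illustrative SLI helper: derive median and tail latency from a batch of
# per-request latency samples, using the stdlib's percentile machinery.
def latency_percentiles(samples_ms):
    """Return p50 and p99 of the given latency samples (milliseconds)."""
    qs = statistics.quantiles(samples_ms, n=100, method="inclusive")
    return {"p50": qs[49], "p99": qs[98]}

samples = [12, 15, 11, 14, 250, 13, 16, 12, 11, 15]
print(latency_percentiles(samples))
```

The single 250 ms outlier barely moves the median but dominates p99, which is why tail percentiles, not means, back latency SLOs.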

Posted 1 day ago

Apply

0 years

6 - 9 Lacs

Gurugram, Haryana, India

On-site


A fast-growing player in the enterprise SaaS and cloud-native solutions space, we build high-availability, high-throughput platforms that power digital services for Fortune 500 clients across finance, retail, and healthcare. Our engineering culture revolves around clean code, test automation, and DevOps-driven delivery to ship scalable back-end services that keep mission-critical applications running 24×7.

Role & Responsibilities: Design, develop, and optimize server-side features in Java/Python, ensuring sub-second latency and 99.99% uptime. Implement RESTful and gRPC APIs, integrating with microservices, event streams, and third-party systems. Own the end-to-end module lifecycle—architecture, coding, unit/integration testing, containerization, and deployment. Collaborate with Product, Front-End, and DevOps teams to translate business requirements into production-ready services. Profile, refactor, and tune databases (SQL & NoSQL) to improve query performance and data integrity. Establish engineering best practices—code reviews, automated pipelines, observability dashboards—and mentor junior developers.

Skills & Qualifications

Must-Have: 3–6 yrs hands-on building back-end services with Java (Spring Boot) or Python (FastAPI/Django). Deep understanding of RESTful design, JSON, and API versioning; exposure to gRPC or GraphQL a plus. Proficient in relational databases (PostgreSQL/MySQL) and at least one NoSQL store (MongoDB/Redis). Experience with Docker, Kubernetes, and CI/CD pipelines (Jenkins/GitHub Actions) for automated delivery. Solid grasp of data structures, algorithms, and multithreading; able to debug complex production issues. Comfortable working onsite in an Agile, test-driven environment with daily stand-ups and sprint rituals.

Preferred: Knowledge of message brokers (Kafka/RabbitMQ) and event-driven microservice architectures. Hands-on with cloud services—AWS ECS/EKS, Azure AKS, or GCP GKE—plus infrastructure-as-code (Terraform). Exposure to monitoring stacks (Prometheus, Grafana) and distributed tracing (Jaeger, OpenTelemetry).

Benefits & Culture Highlights: Engineer-centric culture: hackathons, architecture guilds, and dedicated learning budgets. Cutting-edge tech stack and green-field projects that influence millions of users. Performance-linked bonuses, comprehensive medical cover, and on-site wellness facilities.

Skills: gRPC, OpenTelemetry, AWS EKS, data structures, AWS, RabbitMQ, Azure AKS, Git, NoSQL, Jenkins, GitHub Actions, Django, CI/CD, Kubernetes, algorithms, Terraform, Java, PostgreSQL, Docker, DevOps, Agile, AWS ECS, Prometheus, Python, SQL, cloud, FastAPI, MongoDB, RESTful APIs, Kafka, GCP GKE, GraphQL, Jaeger, Spring Boot, MySQL, Grafana, Redis, multithreading, microservices

Posted 1 day ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Job Title: Backend Engineer
Location: Hybrid, Hyderabad
Duration: Full Time

Job Description: We are looking for a backend engineer who can fully own the backend architecture of our platform, from the API layer to database modeling and cloud deployment. You will be responsible for maintaining and evolving all backend services and cloud infrastructure, including replacing the current limited .NET Core implementation with a modern, maintainable stack. Currently, we expose a few REST endpoints, but over time we plan to introduce a GraphQL API layer, which you will help design and implement. This is a hands-on, solo contributor role to start. We're looking to build a flexible, modern backend stack, ideally using Node.js (TypeScript) or Golang. You will work closely with a dedicated frontend team and report directly to the technical leadership.

Responsibilities: Migrate existing REST API endpoints from .NET Core to Node.js (NestJS) or Golang backend microservices. Architect modular, scalable backend services using clean architecture or domain-driven design principles. Implement backend APIs that power a multi-tenant SaaS product, including multi-tenant logic for data isolation, scoped permissions, and tenant-aware behaviours; authentication and authorization middleware; and role-based access control for 3 system roles. Design and implement a GraphQL API layer (schema-first, data loaders, modular resolvers). Implement observability patterns (structured logging, tracing, error monitoring) using tools like OpenTelemetry, Sentry, etc. Manage the KrakenD API Gateway, including creating and updating route configurations and integrating new services via request transformation. Build and publish Docker containers to AWS ECR; deploy to ECS (Fargate or EC2). Configure and maintain GitHub Actions for CI/CD pipelines. Design and manage cloud infrastructure networking: Application Load Balancer (ALB) setup, private networking for service-to-service communication, and planning for future service mesh adoption. Design and evolve PostgreSQL database schemas. Manage schema changes using code-based migration tools (not GUI tools). Collaborate with frontend and platform engineers to ensure seamless integration.

Qualifications: 5+ years of backend development experience with strong architectural decision-making. Proficient in TypeScript/Node.js (NestJS preferred) or Golang. Strong knowledge of GraphQL, including unified schema design, resolver architecture, and DataLoader patterns and performance tuning. Deep experience building and maintaining REST APIs. Hands-on experience with the KrakenD API Gateway; AWS services (ECR, ECS, ALB, and VPC networking); Docker and container orchestration; and GitHub Actions CI/CD automation. Fluent in PostgreSQL: schema design, indexing, migrations, and tuning. Familiar with secure service-to-service networking and early-stage service mesh planning. Comfortable working independently and proactively in a distributed team environment. Clear communicator, highly accountable, and detail-oriented.

Nice to Have: Experience replacing legacy backends or leading replatforming efforts. Familiarity with GraphQL federation patterns. Exposure to Infrastructure-as-Code tools (e.g., Terraform, AWS CDK). Prior experience in a startup or high-ownership engineering environment.
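As an illustration of the tenant isolation and role-based access control described above (the role names and permission rules here are invented, not the product's actual ones):

```python
# Hypothetical sketch of tenant-scoped RBAC with three system roles.
# Real middleware would derive the user from a verified token; here the
# user is a plain dict for illustration.
ROLE_PERMISSIONS = {
    "admin": {"read", "write", "manage_users"},
    "editor": {"read", "write"},
    "viewer": {"read"},
}

def authorize(user, action, resource_tenant_id):
    """Allow only actions permitted by the user's role, scoped to their tenant."""
    if user["tenant_id"] != resource_tenant_id:
        return False  # tenant isolation: never allow cross-tenant access
    return action in ROLE_PERMISSIONS.get(user["role"], set())

alice = {"tenant_id": "t1", "role": "editor"}
print(authorize(alice, "write", "t1"))         # True
print(authorize(alice, "write", "t2"))         # False: different tenant
print(authorize(alice, "manage_users", "t1"))  # False: editor lacks permission
```

Doing the tenant check before the permission check keeps isolation failures fail-closed even if a role table is misconfigured.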

Posted 1 day ago

Apply

10.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Company: Qualcomm India Private Limited
Job Area: Engineering Group > Software Engineering

General Summary: Qualcomm is seeking a seasoned Staff Engineer, DevOps to join our central software engineering team. In this role, you will lead the design, development, and deployment of scalable cloud-native and hybrid infrastructure solutions, modernize legacy systems, and drive DevOps best practices across products. This is a hands-on architectural role ideal for someone who thrives in a fast-paced, innovation-driven environment and is passionate about building resilient, secure, and efficient platforms.

Key Responsibilities: Architect and implement enterprise-grade AWS cloud solutions for Qualcomm's software platforms. Design and implement CI/CD pipelines using Jenkins, GitHub Actions, and Terraform to enable rapid and reliable software delivery. Develop reusable Terraform modules and automation scripts to support scalable infrastructure provisioning. Drive observability initiatives using Prometheus, Grafana, Fluentd, OpenTelemetry, and Splunk to ensure system reliability and performance. Collaborate with software development teams to embed DevOps practices into the SDLC and ensure seamless deployment and operations. Provide mentorship and technical leadership to junior engineers and cross-functional teams. Manage hybrid environments, including on-prem infrastructure and Kubernetes workloads supporting both Linux and Windows. Lead incident response, root cause analysis, and continuous improvement of SLIs for mission-critical systems. Drive toil reduction and automation using scripting or programming languages such as PowerShell, Bash, Python, or Go. Independently drive and implement DevOps/cloud initiatives in collaboration with key stakeholders. Understand software development designs and compilation/deployment flows for .NET, Angular, and Java-based applications to align infrastructure and CI/CD strategies with application architecture.

Required Qualifications: 10+ years of experience in IT or software development, with at least 5 years in cloud architecture and DevOps roles. Strong foundational knowledge of infrastructure components such as networking, servers, operating systems, DNS, Active Directory, and LDAP. Deep expertise in AWS services including EKS, RDS, MSK, CloudFront, S3, and OpenSearch. Hands-on experience with Kubernetes, Docker, containerd, and microservices orchestration. Proficiency in Infrastructure as Code using Terraform and configuration management tools like Ansible and Chef. Experience with observability tools and telemetry pipelines (Grafana, Prometheus, Fluentd, OpenTelemetry, Splunk). Experience with agent-based monitoring tools such as SCOM and Datadog. Solid scripting skills in Python, Bash, and PowerShell. Familiarity with enterprise-grade web servers (IIS, Apache, Nginx) and load-balancing solutions. Excellent communication and leadership skills with experience mentoring and collaborating across teams.

Preferred Qualifications: Experience with API gateway solutions for API security and management. Knowledge of RDBMS, preferably MSSQL/PostgreSQL, is good to have. Proficiency in SRE principles including SLIs, SLOs, SLAs, error budgets, chaos engineering, and toil reduction. Experience in core software development (e.g., Java, .NET). Exposure to Azure cloud and hybrid cloud strategies. Bachelor's degree in Computer Science or a related field.

Minimum Qualifications: Bachelor's degree in Engineering, Information Systems, Computer Science, or related field and 4+ years of Software Engineering or related work experience; OR Master's degree in Engineering, Information Systems, Computer Science, or related field and 3+ years of Software Engineering or related work experience; OR PhD in Engineering, Information Systems, Computer Science, or related field and 2+ years of Software Engineering or related work experience. 2+ years of work experience with a programming language such as C, C++, Java, Python, etc.

Applicants: Qualcomm is an equal opportunity employer. If you are an individual with a disability and need an accommodation during the application/hiring process, rest assured that Qualcomm is committed to providing an accessible process. You may e-mail disability-accomodations@qualcomm.com or call Qualcomm's toll-free number found here. Upon request, Qualcomm will provide reasonable accommodations to support individuals with disabilities to be able to participate in the hiring process. Qualcomm is also committed to making our workplace accessible for individuals with disabilities. (Keep in mind that this email address is used to provide reasonable accommodations for individuals with disabilities. We will not respond here to requests for updates on applications or resume inquiries.)

Qualcomm expects its employees to abide by all applicable policies and procedures, including but not limited to security and other requirements regarding protection of Company confidential information and other confidential and/or proprietary information, to the extent those requirements are permissible under applicable law.

To all Staffing and Recruiting Agencies: Our Careers Site is only for individuals seeking a job at Qualcomm. Staffing and recruiting agencies and individuals being represented by an agency are not authorized to use this site or to submit profiles, applications or resumes, and any such submissions will be considered unsolicited. Qualcomm does not accept unsolicited resumes or applications from agencies. Please do not forward resumes to our jobs alias, Qualcomm employees or any other company location. Qualcomm is not responsible for any fees related to unsolicited resumes/applications. If you would like more information about this role, please contact Qualcomm Careers.

3076919
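The SRE principles this posting lists (SLOs, error budgets) come down to simple arithmetic: an availability target implies a fixed allowance of downtime per window. A hypothetical sketch:

```python
# Illustrative error-budget arithmetic; the SLO values and 30-day window
# are example numbers, not Qualcomm's actual targets.
def error_budget_minutes(slo, window_days=30):
    """Allowed downtime (minutes) per window for an availability SLO, e.g. 0.999."""
    total_minutes = window_days * 24 * 60
    return total_minutes * (1 - slo)

print(round(error_budget_minutes(0.999), 1))   # 43.2 minutes per 30 days
print(round(error_budget_minutes(0.9999), 2))  # 4.32 minutes per 30 days
```

The budget is what incident response and toil-reduction work spend against: each added nine divides the allowance by ten.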

Posted 1 day ago

Apply

8.0 - 13.0 years

85 - 90 Lacs

Noida

Work from Office


About the Role: We are looking for a Staff Engineer, Real-time Data Processing to design and develop highly scalable, low-latency data streaming platforms and processing engines. This role is ideal for engineers who enjoy building core systems and infrastructure that enable mission-critical analytics at scale. You'll work on solving some of the toughest data engineering challenges in healthcare.

A Day in the Life: Architect, build, and maintain a large-scale real-time data processing platform. Collaborate with data scientists, product managers, and engineering teams to define system architecture and design. Optimize systems for scalability, reliability, and low-latency performance. Implement robust monitoring, alerting, and failover mechanisms to ensure high availability. Evaluate and integrate open-source and third-party streaming frameworks. Contribute to the overall engineering strategy and promote best practices for stream and event processing. Mentor junior engineers and lead technical initiatives.

What You Need: 8+ years of experience in backend or data engineering roles, with a strong focus on building real-time systems or platforms. Hands-on experience with stream processing frameworks like Apache Flink, Apache Kafka Streams, or Apache Spark Streaming. Proficiency in Java, Scala, Python or Go for building high-performance services. Strong understanding of distributed systems, event-driven architecture, and microservices. Experience with Kafka, Pulsar, or other distributed messaging systems. Working knowledge of containerization tools like Docker and orchestration tools like Kubernetes. Proficiency in observability tools such as Prometheus, Grafana, and OpenTelemetry. Experience with cloud-native architectures and services (AWS, GCP, or Azure). Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
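The stream processing frameworks named above are built around windowed aggregation. A stdlib-only sketch of a tumbling-window count (Flink and Kafka Streams add state stores, event-time semantics, and fault tolerance on top of this core idea):

```python
from collections import defaultdict

# Toy tumbling-window aggregation: bucket each event into a fixed-size
# window by timestamp and count occurrences per (window, key).
def tumbling_window_counts(events, window_s):
    """events: iterable of (timestamp_s, key); returns {(window_start, key): count}."""
    counts = defaultdict(int)
    for ts, key in events:
        window_start = (ts // window_s) * window_s
        counts[(window_start, key)] += 1
    return dict(counts)

events = [(1, "click"), (3, "click"), (7, "view"), (12, "click")]
print(tumbling_window_counts(events, window_s=5))
# {(0, 'click'): 2, (5, 'view'): 1, (10, 'click'): 1}
```

A real engine computes this incrementally over an unbounded stream and must decide when a window is complete (watermarks, late data), which is where most of the framework complexity lives.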

Posted 2 days ago

Apply

8.0 - 13.0 years

85 - 90 Lacs

Noida

Work from Office


About the Role We are seeking a highly skilled Staff Engineer to lead the architecture, development, and scaling of our Marketplace platform - including portals & core services such as Identity & Access Management (IAM), Audit, and Tenant Management services. This is a hands-on technical leadership role where you will drive engineering excellence, mentor teams, and ensure our platforms are secure, compliant, and built for scale. A Day in the Life Design and implement scalable, high-performance backend systems for all the platform capabilities Lead the development and integration of IAM, audit logging, and compliance frameworks, ensuring secure access, traceability, and regulatory adherence. Champion best practices for reliability, availability, and performance across all marketplace and core service components. Mentor engineers, conduct code/design reviews, and establish engineering standards and best practices. Work closely with product, security, compliance, and platform teams to translate business and regulatory requirements into technical solutions. Evaluate and integrate new technologies, tools, and processes to enhance platform efficiency, developer experience, and compliance posture. Take end-to-end responsibility for the full software development lifecycle, from requirements and design through deployment, monitoring, and operational health. What You Need 8+ years of experience in backend or infrastructure engineering, with a focus on distributed systems, cloud platforms, and security. Proven expertise in building and scaling marketplace platforms and developer/admin/API portals. Deep hands-on experience with IAM, audit logging, and compliance tooling. Strong programming skills in languages such as Python or Go. Experience with cloud infrastructure (AWS, Azure), containerization (Docker, Kubernetes), and service mesh architectures. Understanding of security protocols (OAuth, SAML, TLS), authentication/authorization, and regulatory compliance. 
Demonstrated ability to lead technical projects and mentor engineering teams, along with excellent problem-solving, communication, and collaboration skills. Proficiency in observability tools such as Prometheus, Grafana, and OpenTelemetry. Prior experience with marketplaces and portals. Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
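Roles like this pair authorization checks with audit trails: every IAM decision is both enforced and recorded for compliance. A minimal Python sketch of that pairing, using hypothetical roles and permissions rather than this platform's actual IAM model:

```python
import json
import time

# Hypothetical role-to-permission mapping for a marketplace platform.
ROLE_PERMISSIONS = {
    "admin": {"tenant:read", "tenant:write", "audit:read"},
    "auditor": {"audit:read"},
    "member": {"tenant:read"},
}

AUDIT_LOG = []  # in practice: an append-only, tamper-evident store

def check_access(user, role, permission):
    """Return True if `role` grants `permission`, recording an audit entry either way."""
    allowed = permission in ROLE_PERMISSIONS.get(role, set())
    AUDIT_LOG.append(json.dumps({
        "ts": time.time(),
        "user": user,
        "role": role,
        "permission": permission,
        "allowed": allowed,
    }))
    return allowed
```

Logging denials as well as grants is what makes the trail useful for regulatory review, since probing attempts show up alongside legitimate access.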

Posted 2 days ago


8.0 years

7 - 7 Lacs

Bengaluru

On-site


Who We Are At Kyndryl, we design, build, manage and modernize the mission-critical technology systems that the world depends on every day. So why work at Kyndryl? We are always moving forward – always pushing ourselves to go further in our efforts to build a more equitable, inclusive world for our employees, our customers and our communities. The Role Are you ready to join the team of software engineering experts at Kyndryl? We are seeking a talented Technical Lead to contribute to our software engineering space and provide critical skills required for the development of cutting-edge products. We are looking for a highly skilled Technical Lead – Tools and Automation with hands-on experience in implementing and managing ZOHO tools, DevSecOps, and Observability Solutions within a banking environment. The ideal candidate will bring deep technical expertise, banking industry experience, and a strong focus on automation, service management, performance monitoring, and DevOps practices Key Responsibilities: Lead the design, implementation, and administration of ZOHO tools (primarily ZOHO Desk, ZOHO Analytics, and ZOHO FSM) for ITSM and APM processes. Manage the end-to-end lifecycle of automation projects, from requirements gathering to rollout and support. Collaborate with cross-functional teams (IT Infrastructure, DevOps, Compliance, Risk) to develop automation for incident management, problem management, change requests, and asset tracking. Monitor and analyze application performance using ZOHO’s APM capabilities, Observability Solution; define SLAs and thresholds. Lead the implementation of observability solutions across infrastructure, application, and network layers to improve monitoring, alerting, and root cause analysis. Support and drive DevSecOps practices including CI/CD pipeline for automated deployments. Drive integration of these tools with existing banking systems (e.g., Core Banking, CRM, IAM). 
Define and implement automation scripts/workflows to reduce manual effort in recurring IT operations. Create dashboards and reports for IT service delivery and performance metrics using ZOHO Analytics. Mentor junior engineers and serve as a technical escalation point for tools and automation issues. Ensure compliance with banking regulatory requirements in all automation and observability initiatives. Work closely with InfoSec teams to implement role-based access and data protection policies across all these tools and related systems. Your Future at Kyndryl Every position at Kyndryl offers a way forward to grow your career. We have opportunities that you won’t find anywhere else, including hands-on experience, learning opportunities, and the chance to certify in all four major platforms. Whether you want to broaden your knowledge base or narrow your scope and specialize in a specific sector, you can find your opportunity here. Who You Are You’re good at what you do and possess the required experience to prove it. However, equally as important – you have a growth mindset; keen to drive your own personal and professional development. You are customer-focused – someone who prioritizes customer success in their work. And finally, you’re open and borderless – naturally inclusive in how you work with others. Required Technical and Professional Expertise 8+ years of IT experience with 3+ years in a technical leadership role. Proven expertise in ZOHO tools – especially ZOHO Desk, Creator, FSM, and Analytics. Strong background in IT Service Management (ITSM) frameworks (ITIL v3/v4 preferred). Experience implementing or managing APM solutions and performance tuning dashboards. Hands-on experience with observability tools such as ELK, Prometheus, Grafana, or OpenTelemetry. Familiarity with DevOps principles and tools. Hands-on scripting knowledge (e.g., JavaScript, Deluge, Python). Good knowledge of REST APIs, system integration, and automation workflows. 
Prior experience in the banking or financial services domain is preferred. Familiarity with compliance frameworks like PCI-DSS, SOX, or RBI Guidelines is a plus. Excellent communication, stakeholder management, and documentation skills. Automation tools and frameworks: Knowledge of tools like Selenium, Jenkins, Ansible, Terraform, or others (Preferred). Preferred Qualifications: Bachelor's degree in Computer Science, Information Technology, or related field. ITIL Foundation or Practitioner Certification. ZOHO Certified Developer or Administrator (Preferred). Why Join Us: Be part of a fast-growing digital banking transformation team. Opportunity to lead innovation in enterprise automation, DevSecOps, and observability. Work with cutting-edge tools in a regulated and impact-driven industry. Flexible work culture and structured career path. Being You Diversity is a whole lot more than what we look like or where we come from, it’s how we think and who we are. We welcome people of all cultures, backgrounds, and experiences. But we’re not doing it single-handedly: Our Kyndryl Inclusion Networks are only one of many ways we create a workplace where all Kyndryls can find and provide support and advice. This dedication to welcoming everyone into our company means that Kyndryl gives you – and everyone next to you – the ability to bring your whole self to work, individually and collectively, and support the activation of our equitable culture. That’s the Kyndryl Way. What You Can Expect With state-of-the-art resources and Fortune 100 clients, every day is an opportunity to innovate, build new capabilities, new relationships, new processes, and new value. Kyndryl cares about your well-being and prides itself on offering benefits that give you choice, reflect the diversity of our employees and support you and your family through the moments that matter – wherever you are in your life journey. 
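The APM duties above revolve around defining SLAs and thresholds and alerting when they are breached. As a rough illustration (the threshold and budget values are invented, not tied to any ZOHO product), a latency-SLA breach check over a window of samples might look like:

```python
def breaches(samples_ms, threshold_ms, max_breach_ratio=0.01):
    """Return True if the share of latency samples above `threshold_ms`
    exceeds the allowed breach ratio (e.g. more than 1% of requests slow)."""
    over = sum(1 for s in samples_ms if s > threshold_ms)
    return over / len(samples_ms) > max_breach_ratio
```

An alerting pipeline would evaluate this per service and window, then page only when the budget is actually exceeded rather than on every slow request.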
Our employee learning programs give you access to the best learning in the industry to receive certifications, including Microsoft, Google, Amazon, Skillsoft, and many more. Through our company-wide volunteering and giving platform, you can donate, start fundraisers, volunteer, and search over 2 million non-profit organizations. At Kyndryl, we invest heavily in you, we want you to succeed so that together, we will all succeed. Get Referred! If you know someone that works at Kyndryl, when asked ‘How Did You Hear About Us’ during the application process, select ‘Employee Referral’ and enter your contact's Kyndryl email address.

Posted 2 days ago


3.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


Job Description Be at the Forefront of Mobility's Future: Join Ford as a Site Reliability Engineer! Enterprise Technology is the engine driving the future of transportation, and we're looking for a talented Site Reliability Engineer (SRE) to help us redefine mobility. In this role, you'll leverage cutting-edge technology to enhance customer experiences, improve lives, and create vehicles as smart as you are. As an SRE at Ford, you'll be instrumental in developing, enhancing, and expanding our global monitoring and observability platform. You'll blend software and systems engineering to ensure the uptime, scalability, and maintainability of our critical cloud services. You'll be at the intersection of SRE and Software Development, building and driving the adoption of our global monitoring capabilities. If you're passionate about using your IT expertise and analytical skills to shape the future of transportation, this is your opportunity to make a real impact. Join us and be part of a team that's building the future of mobility! Responsibilities Write, configure, and deploy code that improves service reliability for existing or new systems; set standard for others with respect to code quality. Provide helpful and actionable feedback and review for code or production changes. Drive repair/optimization of complex systems with consideration towards a wide range of contributing factors. Lead debugging, troubleshooting, and analysis of service architecture and design. Participate in on-call rotation. Write documentation: design, system analysis, runbooks, playbooks. Provide design feedback and uplevel design skills of others. Implement and manage SRE monitoring application backends using Golang, Postgres, and OpenTelemetry. Develop tooling using Terraform and other IaC tools to ensure visibility and proactive issue detection across our platforms. Work within GCP infrastructure, optimizing performance, and cost, and scaling resources to meet demand. 
Collaborate with development teams to enhance system reliability and performance, applying a platform engineering mindset to system administration tasks. Develop and maintain automated solutions for operational aspects such as on-call monitoring, performance tuning, and disaster recovery. Troubleshoot and resolve issues in our dev, test, and production environments. Participate in postmortem analysis and create preventative measures for future incidents. Implement and maintain security best practices across our infrastructure, ensuring compliance with industry standards and internal policies. Participate in security audits and vulnerability assessments. Participate in capacity planning and forecasting efforts to ensure our systems can handle future growth and demand. Analyze trends and make recommendations for resource allocation. Identify and address performance bottlenecks through code profiling, system analysis, and configuration tuning. Implement and monitor performance metrics to proactively identify and resolve issues. Develop, maintain, and test disaster recovery plans and procedures to ensure business continuity in the event of a major outage or disaster. Participate in regular disaster recovery exercises. Contribute to internal knowledge bases and documentation. Qualifications Bachelor’s degree in Computer Science, Engineering, Mathematics or equivalent experience. 3+ years of experience as an SRE, DevOps Engineer, Software Engineer or similar role. Strong experience with Golang development and desired familiarity with Terraform Provider development. Proficient with monitoring and observability tools, particularly OpenTelemetry or other tools. Proficient with cloud services, with a strong preference for Kubernetes and Google Cloud Platform (GCP) experience. Solid programming skills in Golang and scripting languages, with a good understanding of software development best practices. Experience with relational and document databases. 
Ability to debug, optimize code, and automate routine tasks. Strong problem-solving skills and the ability to work under pressure in a fast-paced environment. Excellent verbal and written communication skills.
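Since this role centers on OpenTelemetry, trace context propagation is a likely topic. A small stdlib-only sketch of parsing a W3C `traceparent` header, which is the format OpenTelemetry propagates between services (shown in Python for brevity, though the team above works in Golang):

```python
def parse_traceparent(header):
    """Parse a W3C `traceparent` header: version-traceid-parentid-flags,
    all lowercase hex with fixed field widths."""
    parts = header.split("-")
    if len(parts) != 4:
        raise ValueError("malformed traceparent")
    version, trace_id, parent_id, flags = parts
    if len(trace_id) != 32 or len(parent_id) != 16:
        raise ValueError("bad field width")
    int(trace_id, 16)   # raise if not valid hex
    int(parent_id, 16)
    return {
        "version": version,
        "trace_id": trace_id,
        "parent_id": parent_id,
        "sampled": int(flags, 16) & 1 == 1,  # bit 0 of trace-flags
    }
```

Real services would use the OpenTelemetry SDK's propagators rather than hand-rolling this, but the header layout is worth knowing when debugging missing traces.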

Posted 2 days ago


0 years

0 Lacs

Kochi, Kerala, India

On-site


Role Description We are seeking a highly skilled Business Analyst with strong experience in monitoring and observability tools and practices. The ideal candidate will have knowledge of Dynatrace, Elastic Cloud, and OpenTelemetry, and be proficient in defining requirements, collaborating with cross-functional teams, and translating business needs into actionable user stories. This role will play a critical part in improving the performance, reliability, and visibility of our applications by bridging the gap between business stakeholders, developers, and SRE/DevOps teams. Key Responsibilities Work closely with stakeholders to gather and document monitoring and observability requirements. Create and maintain user stories, epics, and acceptance criteria in Jira. Collaborate with different teams to ensure effective implementation of APM solutions. Leverage tools like Dynatrace, Elastic Cloud, and OpenTelemetry to define monitoring strategies. Analyze logs, metrics, and traces to provide insights and drive root cause analysis. Develop and maintain dashboards, alerts, and reports for performance and health monitoring. Document knowledge and processes in Confluence. Conduct impact analysis and prioritize work in an Agile environment. Define KPIs and SLAs aligned with business goals for system performance and reliability. Required Skills & Experience Proven experience as a Business Analyst in a technical environment with a monitoring focus. Strong knowledge and hands-on experience with Dynatrace, Elastic Cloud, and APM tools. Solid understanding of logging, metrics, traces, and observability concepts. Good to have: knowledge of OpenTelemetry standards and integration approaches. Experience working with Jira and Confluence in an Agile setting. Ability to translate complex technical monitoring requirements into user stories and visual reports. Strong analytical, communication, and documentation skills. Experience in dashboarding (Dynatrace, Kibana, Grafana, etc.) 
and alerting/reporting best practices. Skills: Dynatrace, Elastic Cloud, OpenTelemetry
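Defining KPIs and SLAs, as this role requires, often starts from an error budget: how much downtime an availability SLO actually permits. A one-line sketch (assuming an availability-style SLO measured over a 30-day window):

```python
def error_budget_minutes(slo_percent, period_days=30):
    """Minutes of allowed downtime per period for a given availability SLO.
    e.g. a 99.9% SLO over 30 days permits about 43.2 minutes of downtime."""
    return (1 - slo_percent / 100) * period_days * 24 * 60
```

Framing reliability targets as a concrete minutes-per-month budget makes SLA conversations with business stakeholders far easier than quoting raw percentages.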

Posted 2 days ago


5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Job Title: Backend Engineer Location: Hyderabad - Hybrid Type: Full-time Job Description: We are looking for a backend engineer who can fully own the backend architecture of our platform—from the API layer to database, data modeling, and cloud deployment. You will be responsible for maintaining and evolving all backend services and cloud infrastructure, including replacing the current limited .NET Core implementation with a modern, maintainable stack. Currently, we expose a few REST endpoints, but over time we plan to introduce a GraphQL API layer, which you will help design and implement. This is a hands-on, solo contributor role to start. We're looking to build a flexible, modern backend stack, ideally using Node.js (TypeScript) or Golang. You will work closely with a dedicated frontend team and report directly to the technical leadership. Responsibilities Migrate existing REST API endpoints from .NET Core to Node.js (NestJS) or Golang backend microservices Architect modular, scalable backend services using clean architecture or domain-driven design principles Implement backend APIs that power a multi-tenant SaaS product, including: Multi-tenant logic for data isolation, scoped permissions, and tenant-aware behaviors Authentication and authorization middleware Role-based access control for system roles Design and implement a GraphQL API layer (schema-first, data loaders, modular resolvers) Implement observability patterns (structured logging, tracing, error monitoring) using tools like OpenTelemetry, Sentry, etc. 
Manage the API Gateway, including: Creating and updating route configurations Integrating new services via request transformation Build and publish Docker containers to AWS ECR; deploy to ECS (Fargate or EC2) Configure and maintain GitHub Actions for CI/CD pipelines Design and manage cloud infrastructure networking: Application Load Balancer (ALB) setup Private networking for service-to-service communication Planning for future service mesh adoption Design and evolve PostgreSQL database schemas Manage schema changes using code-based migration tools (not GUI tools) Collaborate with frontend and platform engineers to ensure seamless integration Qualifications 5+ years of backend development experience with strong architectural decision-making Proficient in TypeScript/Node.js (NestJS preferred) or Golang Strong knowledge of GraphQL, including: Unified schema design Resolver architecture DataLoader patterns and performance tuning Deep experience building and maintaining REST APIs Hands-on experience with: API Gateway configuration AWS services: ECR, ECS, ALB, and VPC networking Docker and container orchestration GitHub Actions CI/CD automation Fluent in PostgreSQL: schema design, indexing, migrations, and tuning Familiar with secure service-to-service networking and early-stage service mesh planning Comfortable working independently and proactively in a distributed team environment Clear communicator, highly accountable, and detail-oriented Nice to Have Experience replacing legacy backends or leading replatforming efforts Familiarity with GraphQL federation patterns Exposure to Infrastructure-as-Code tools (e.g., Terraform, AWS CDK) Prior experience in a startup or high-ownership engineering environment
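The GraphQL qualifications call out DataLoader patterns, which solve the N+1 query problem by batching many individual lookups into one backend call. A simplified, synchronous Python sketch of the idea, with `batch_fn` as a stand-in for a real data source (production GraphQL servers use the async `dataloader` libraries instead):

```python
class DataLoader:
    """Batch many individual `load` requests into one call to `batch_fn`.

    `batch_fn` takes a list of unique keys and must return values in the
    same order (e.g. one SQL `WHERE id IN (...)` instead of N queries).
    """
    def __init__(self, batch_fn):
        self.batch_fn = batch_fn
        self.cache = {}
        self.pending = []

    def load(self, key):
        # Defer: remember the key; values arrive after dispatch().
        if key not in self.cache and key not in self.pending:
            self.pending.append(key)

    def dispatch(self):
        # One batched fetch for everything queued since the last dispatch.
        if self.pending:
            values = self.batch_fn(self.pending)
            self.cache.update(zip(self.pending, values))
            self.pending = []
        return self.cache
```

Per-request caching also means duplicate keys requested by different resolvers are fetched exactly once.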

Posted 2 days ago


3.0 years

8 - 24 Lacs

Chennai, Tamil Nadu, India

On-site


Industry: Information Technology & Cloud Services Sector: Full-Stack Web Development & Managed AWS Solutions Operating at the intersection of modern web engineering and enterprise cloud migrations, we build and maintain high-traffic, mission-critical platforms for FinTech, Retail, and Media clients worldwide. Join our on-site engineering pod in India and help design resilient, low-latency back-end services that scale on AWS. Role & Responsibilities Design, develop, and deploy Node.js microservices on AWS (Lambda, ECS, EKS) following 12-factor principles. Build RESTful and event-driven APIs integrating DynamoDB, RDS, and third-party services with robust authentication and logging. Optimize code, queries, and infrastructure for performance, reliability, and cost efficiency using CloudWatch metrics. Implement automated testing, CI/CD pipelines, and Infrastructure-as-Code (CloudFormation/CDK) to ensure rapid, safe releases. Collaborate with front-end, DevOps, and product teams to refine requirements, conduct code reviews, and uphold engineering standards. Troubleshoot production incidents, execute root-cause analysis, and deliver preventive fixes within defined SLAs. Skills & Qualifications Must-Have 3+ years professional Node.js back-end development experience. Hands-on with AWS core services: Lambda, API Gateway, IAM, DynamoDB, S3. Proficiency in modern JavaScript/TypeScript, async patterns, and test frameworks (Jest/Mocha). Experience crafting REST and JSON-based microservices with secure authentication (JWT/OAuth2). Strong Git workflow knowledge and CI/CD skills (GitHub Actions, CodePipeline, or similar). Preferred Exposure to serverless event architectures (SNS, SQS, Kinesis) and container orchestration (ECS/EKS). Familiarity with Infrastructure-as-Code via CDK, Terraform, or CloudFormation. Knowledge of performance tuning, caching (Redis/ElastiCache), and observability stacks (OpenTelemetry, X-Ray). 
Benefits & Culture Highlights Greenfield projects leveraging the latest Node LTS and AWS serverless tooling. Dedicated hack days, in-house labs, and fully sponsored AWS certification tracks. Collaborative, no-ego engineering culture that rewards innovation and continuous learning. Skills: oauth2,cloudformation,jwt,restful apis,mocha,jest,rds,ci/cd,microservices,javascript,typescript,dynamodb,node.js,aws,git
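This posting asks for secure authentication with JWT. A hedged, stdlib-only sketch of how HS256 signing and verification work under the hood; a real service would use a vetted library (e.g. `jsonwebtoken` in Node) and also validate `exp`/`aud` claims, which this omits:

```python
import base64
import hashlib
import hmac
import json

def _b64url(data: bytes) -> str:
    # JWT uses unpadded base64url encoding for all three segments.
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_hs256(payload: dict, secret: bytes) -> str:
    """Produce a compact JWT signed with HMAC-SHA256."""
    header = _b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = _b64url(json.dumps(payload).encode())
    sig = _b64url(hmac.new(secret, f"{header}.{body}".encode(),
                           hashlib.sha256).digest())
    return f"{header}.{body}.{sig}"

def verify_hs256(token: str, secret: bytes):
    """Return the payload if the signature checks out, else None."""
    header, body, sig = token.split(".")
    expected = _b64url(hmac.new(secret, f"{header}.{body}".encode(),
                                hashlib.sha256).digest())
    if not hmac.compare_digest(sig, expected):
        return None
    return json.loads(base64.urlsafe_b64decode(body + "=" * (-len(body) % 4)))
```

Note the constant-time `hmac.compare_digest` for the signature check; a plain `==` comparison can leak timing information.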

Posted 2 days ago


3.0 years

8 - 24 Lacs

Bengaluru, Karnataka, India

On-site


Industry: Information Technology & Cloud Services Sector: Full-Stack Web Development & Managed AWS Solutions Operating at the intersection of modern web engineering and enterprise cloud migrations, we build and maintain high-traffic, mission-critical platforms for FinTech, Retail, and Media clients worldwide. Join our on-site engineering pod in India and help design resilient, low-latency back-end services that scale on AWS. Role & Responsibilities Design, develop, and deploy Node.js microservices on AWS (Lambda, ECS, EKS) following 12-factor principles. Build RESTful and event-driven APIs integrating DynamoDB, RDS, and third-party services with robust authentication and logging. Optimize code, queries, and infrastructure for performance, reliability, and cost efficiency using CloudWatch metrics. Implement automated testing, CI/CD pipelines, and Infrastructure-as-Code (CloudFormation/CDK) to ensure rapid, safe releases. Collaborate with front-end, DevOps, and product teams to refine requirements, conduct code reviews, and uphold engineering standards. Troubleshoot production incidents, execute root-cause analysis, and deliver preventive fixes within defined SLAs. Skills & Qualifications Must-Have 3+ years professional Node.js back-end development experience. Hands-on with AWS core services: Lambda, API Gateway, IAM, DynamoDB, S3. Proficiency in modern JavaScript/TypeScript, async patterns, and test frameworks (Jest/Mocha). Experience crafting REST and JSON-based microservices with secure authentication (JWT/OAuth2). Strong Git workflow knowledge and CI/CD skills (GitHub Actions, CodePipeline, or similar). Preferred Exposure to serverless event architectures (SNS, SQS, Kinesis) and container orchestration (ECS/EKS). Familiarity with Infrastructure-as-Code via CDK, Terraform, or CloudFormation. Knowledge of performance tuning, caching (Redis/ElastiCache), and observability stacks (OpenTelemetry, X-Ray). 
Benefits & Culture Highlights Greenfield projects leveraging the latest Node LTS and AWS serverless tooling. Dedicated hack days, in-house labs, and fully sponsored AWS certification tracks. Collaborative, no-ego engineering culture that rewards innovation and continuous learning. Skills: oauth2,cloudformation,jwt,restful apis,mocha,jest,rds,ci/cd,microservices,javascript,typescript,dynamodb,node.js,aws,git

Posted 2 days ago


3.0 years

8 - 24 Lacs

Pune, Maharashtra, India

On-site


Industry: Information Technology & Cloud Services Sector: Full-Stack Web Development & Managed AWS Solutions Operating at the intersection of modern web engineering and enterprise cloud migrations, we build and maintain high-traffic, mission-critical platforms for FinTech, Retail, and Media clients worldwide. Join our on-site engineering pod in India and help design resilient, low-latency back-end services that scale on AWS. Role & Responsibilities Design, develop, and deploy Node.js microservices on AWS (Lambda, ECS, EKS) following 12-factor principles. Build RESTful and event-driven APIs integrating DynamoDB, RDS, and third-party services with robust authentication and logging. Optimize code, queries, and infrastructure for performance, reliability, and cost efficiency using CloudWatch metrics. Implement automated testing, CI/CD pipelines, and Infrastructure-as-Code (CloudFormation/CDK) to ensure rapid, safe releases. Collaborate with front-end, DevOps, and product teams to refine requirements, conduct code reviews, and uphold engineering standards. Troubleshoot production incidents, execute root-cause analysis, and deliver preventive fixes within defined SLAs. Skills & Qualifications Must-Have 3+ years professional Node.js back-end development experience. Hands-on with AWS core services: Lambda, API Gateway, IAM, DynamoDB, S3. Proficiency in modern JavaScript/TypeScript, async patterns, and test frameworks (Jest/Mocha). Experience crafting REST and JSON-based microservices with secure authentication (JWT/OAuth2). Strong Git workflow knowledge and CI/CD skills (GitHub Actions, CodePipeline, or similar). Preferred Exposure to serverless event architectures (SNS, SQS, Kinesis) and container orchestration (ECS/EKS). Familiarity with Infrastructure-as-Code via CDK, Terraform, or CloudFormation. Knowledge of performance tuning, caching (Redis/ElastiCache), and observability stacks (OpenTelemetry, X-Ray). 
Benefits & Culture Highlights Greenfield projects leveraging the latest Node LTS and AWS serverless tooling. Dedicated hack days, in-house labs, and fully sponsored AWS certification tracks. Collaborative, no-ego engineering culture that rewards innovation and continuous learning. Skills: oauth2,cloudformation,jwt,restful apis,mocha,jest,rds,ci/cd,microservices,javascript,typescript,dynamodb,node.js,aws,git

Posted 2 days ago


7.0 years

10 - 20 Lacs

Pune, Maharashtra, India

On-site


Lead Python Engineer Industry: Information Technology & Digital Solutions We are a fast-growing provider of enterprise-grade software engineering and digital transformation services, delivering secure, data-driven platforms for global finance, healthcare, and e-commerce customers from our on-site engineering hub in India. Role & Responsibilities Lead a squad of 4-8 Python engineers through the full SDLC—requirements, design, coding, code review, testing, deployment, and support. Architect scalable, low-latency microservices using Django/FastAPI, RESTful APIs, and asynchronous task queues such as Celery/RabbitMQ. Implement cloud-native solutions on AWS, leveraging Lambda, ECS/EKS, S3, RDS, and Terraform for infrastructure as code. Drive engineering best practices—TDD, CI/CD pipelines with GitLab/Jenkins, automated static analysis, and performance profiling. Collaborate with Product, DevOps, and QA to translate business problems into technical deliverables and ensure on-time releases. Mentor developers, conduct technical workshops, and contribute to hiring to build a high-performing Python guild. Skills & Qualifications Must-Have 7+ years hands-on Python backend development with Django, Flask, or FastAPI. Proven experience designing microservices and REST APIs serving high-concurrency workloads. Deep understanding of relational databases (PostgreSQL/MySQL) and NoSQL stores (MongoDB/Redis). Production exposure to AWS, containerisation with Docker, and orchestration via Kubernetes or ECS. Strong grasp of Git workflows, automated testing, and Agile/Scrum ceremonies. Excellent communication and people-leader skills enabling cross-functional influence. Preferred Exposure to event-driven patterns with Kafka or Kinesis. Experience implementing GraphQL, gRPC, or WebSocket streaming services. Knowledge of security, compliance, and observability (OpenTelemetry, Prometheus, ELK). 
Benefits & Culture Highlights Collaborative on-site culture with hackathons, brown-bag sessions, and a dedicated innovation lab. Fast-track leadership path and fully-funded certifications in AWS, Kubernetes, and data engineering. Comprehensive health coverage for family, annual performance bonus up to 20%, and generous leave policies. Skills: python,kubernetes,graphql,celery,sql,grpc,s3,mysql,eks,elk,jenkins,scrum,kafka,ecs,rest apis,prometheus,mongodb,opentelemetry,docker,redis,websocket,aws,terraform,postgresql,rabbitmq,rds,fastapi,tdd,django,lambda,kinesis,gitlab,flask,agile,python software foundation,ci/cd,microservices
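Asynchronous task queues such as Celery, named in the responsibilities above, retry failed jobs with exponential backoff so transient failures don't hammer downstream services. A tiny sketch of computing such a schedule (the parameter values are illustrative defaults, not Celery's):

```python
def backoff_schedule(base=1.0, factor=2.0, retries=5, cap=30.0):
    """Exponential backoff delays (seconds) for a retry policy,
    doubling each attempt and capped at `cap` seconds."""
    return [min(cap, base * factor ** n) for n in range(retries)]
```

Production systems usually add random jitter to each delay so that many failed tasks don't all retry at the same instant.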

Posted 3 days ago


1.0 years

0 Lacs

Thoothukudi, Tamil Nadu, India

On-site


Job Title: Analytics Engineer (Splunk / OpenTelemetry / Python) Location: Thoothukudi Job Type: Full-time Experience: 1 year Notice Period: Immediate joiners preferred Job Summary: We are looking for a dedicated and technically skilled Analytics Engineer with 1 year of experience to join our team in Thoothukudi. The ideal candidate should have working knowledge of Splunk, OpenTelemetry, and Python, and be passionate about data-driven monitoring, performance analysis, and system optimization. Key Responsibilities: Support the development and maintenance of observability and analytics solutions using Splunk and OpenTelemetry. Assist in writing Python scripts to automate data processing and reporting tasks. Monitor system and application logs to identify anomalies and performance issues. Help create dashboards and alerts to provide real-time visibility into system health. Collaborate with development and infrastructure teams to improve monitoring coverage. Contribute to incident investigations and root cause analysis through log and telemetry data. Maintain clear and accurate documentation of analytics configurations and processes. Key Skills Required: Basic to intermediate experience in Splunk for data analysis and visualization. Exposure to OpenTelemetry for telemetry data capture and integration. Good understanding of Python scripting and automation. Familiarity with observability and monitoring concepts. Strong attention to detail and analytical thinking. Willingness to learn and adapt in a fast-paced environment. Good communication and teamwork skills.
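Spotting anomalies in log volume, as described above, can start with a simple z-score over per-interval event counts. A rough Python sketch (production Splunk alerting would use its own anomaly-detection commands rather than this):

```python
from statistics import mean, stdev

def anomalies(counts, z_threshold=3.0):
    """Indices of per-interval event counts whose z-score exceeds the threshold."""
    mu, sigma = mean(counts), stdev(counts)
    if sigma == 0:
        return []  # perfectly flat series: nothing stands out
    return [i for i, c in enumerate(counts) if abs(c - mu) / sigma > z_threshold]
```

A z-score baseline works for roughly stationary traffic; daily or weekly seasonality needs a windowed or decomposed baseline instead.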

Posted 3 days ago


5.0 years

10 - 16 Lacs

India

On-site


As the Senior DevOps Engineer focused on Observability, you will set observability standards, lead automation efforts, and mentor engineers, ensuring all monitoring and Datadog configuration changes are implemented as Infrastructure-as-Code (IaC). You will lead the design and management of a code-driven Datadog observability platform, providing end-to-end visibility into Java applications, Kubernetes workloads and containerized infrastructure. This role emphasizes cost-effective observability at scale, requiring deep expertise in Datadog monitoring, logging, tracing and optimization techniques. You'll collaborate closely with SRE, DevOps and Software Engineering teams to standardize monitoring and logging practices to deliver scalable, reliable and cost-efficient observability solutions. This is a hands-on engineering role focused on observability-as-code. All monitoring, logging, alerting, and Datadog configurations are defined and managed through Terraform, APIs and CI/CD workflows — not manual configuration in the Datadog UI. 
PRIMARY RESPONSIBILITIES: Own and define observability standards for Java applications, Kubernetes workloads and cloud infrastructure Configure and manage the Datadog platform using Terraform and Infrastructure-as-Code (IaC) best practices Drive adoption of structured JSON logging, distributed tracing and custom metrics across Java and Python services Optimize Datadog usage through cost governance, log filtering, sampling strategies and automated reporting Collaborate closely with Java developers and platform engineers to standardize instrumentation and alerting Troubleshoot and resolve issues with missing or misconfigured logs, metrics and traces, working with developers to ensure proper instrumentation and data flow into Datadog Participate in incident response efforts, using Datadog insights for actionable alerting, root cause analysis (RCA) and reliability improvements Serve as the primary point of contact for Datadog-related requests, supporting internal teams with onboarding, integration and usage questions Continuously audit and tune monitors for alert quality, reducing false positives and improving actionable signal detection Maintain clear internal documentation on Datadog usage, standards, integrations and IaC workflows Evaluate and propose improvements to the observability stack, including new Datadog features, OpenTelemetry adoption and future architecture changes Mentor engineers and develop internal training programs on Datadog, observability-as-code and modern log pipeline architecture QUALIFICATIONS: Bachelor’s degree in Computer Science, Engineering, Mathematics, Physics or a related technical field 5+ years of experience in DevOps, Site Reliability Engineering, or related roles with a strong focus on observability and infrastructure as code Hands-on experience managing and scaling Datadog programmatically using code-based workflows (e.g. 
Terraform, APIs, CI/CD) Deep expertise in Datadog including APM, logs, metrics, tracing, dashboards and audit trails Proven experience integrating Datadog observability into CI/CD pipelines (e.g. GitLab CI, AWS CodePipeline, GitHub Actions) Solid understanding of AWS services and best practices for monitoring services on Kubernetes infrastructure Strong background in Java application development is preferred Job Types: Full-time, Permanent, Contractual / Temporary Contract length: 12 months Pay: ₹1,000,000.00 - ₹1,600,000.00 per year Benefits: Paid sick time Schedule: Monday to Friday Night shift US shift Ability to commute/relocate: Musheerabad, Hyderabad, Telangana: Reliably commute or planning to relocate before starting work (Preferred) Education: Bachelor's (Preferred) Experience: DevOps: 5 years (Required) Language: English (Required) Location: Musheerabad, Hyderabad, Telangana (Preferred) Shift availability: Night Shift (Required) Work Location: In person Expected Start Date: 01/07/2025
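Cost governance through log sampling, mentioned in the responsibilities above, is usually done deterministically so that all logs belonging to one trace share the same keep/drop decision and sampled traces stay complete. A hedged sketch keyed on a trace id (the mechanism, not any vendor's actual sampler):

```python
import hashlib

def keep_log(trace_id: str, sample_rate: float) -> bool:
    """Deterministically keep roughly `sample_rate` of logs, keyed by trace id.

    Hashing the trace id into a uniform 32-bit bucket means every log line
    from the same trace gets the same decision on every host.
    """
    bucket = int(hashlib.sha256(trace_id.encode()).hexdigest()[:8], 16)
    return bucket < sample_rate * 0x100000000
```

Because the decision is a pure function of the trace id, no coordination between collectors is needed, and raising the rate later only adds traces without changing existing decisions.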

Posted 4 days ago

Apply

7.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Linkedin logo

About The Role We are seeking an experienced Senior Backend Developer with strong expertise in Java, Spring framework, and high availability service design. This role will be pivotal in designing, developing, and optimizing robust backend systems that power our index and product generation platforms while providing technical leadership within the team. You'll be joining a dynamic team focused on solving complex challenges in delivering near real-time financial data with high throughput and resiliency requirements. About The Team This is an excellent opportunity to join the Index IT team, as part of a delivery-focused IT group responsible for designing, developing and supporting internal, client and public-facing distribution solutions. If selected, you will work as part of a delivery focused and talented software development team responsible for designing, developing and supporting the index and product generation platforms. You will use cutting edge software development techniques and technologies, following the best practices of the industry. Our team solves challenging problems around delivering near real-time financial data, working with large flexible schemas and building database systems that provide exceptional throughput and resiliency. We leverage the latest technologies including Kubernetes, continuous integration/deployment pipelines, and build highly observable applications. MSCI provides a very attractive compensation package, an exciting work environment and opportunities for continuous self-development and career advancement for the right candidates. 
Key Responsibilities

  • Design, develop, and maintain scalable, high-performance backend applications using Java and the Spring framework
  • Lead the architecture and implementation of complex API services that interact with high availability database systems
  • Develop solutions for processing and delivering near real-time financial data streams
  • Design flexible schemas that can accommodate evolving financial data requirements
  • Collaborate closely with product managers, business analysts, and other developers to translate business requirements into technical solutions
  • Design and optimize OLAP database interactions for analytical performance and high availability
  • Implement observable applications with comprehensive monitoring and logging
  • Design and develop RESTful APIs following industry best practices
  • Lead code reviews and mentor junior developers on team best practices
  • Participate in the full software development lifecycle from requirements analysis through deployment
  • Troubleshoot and resolve complex production issues in high-throughput systems
  • Evaluate and recommend new technologies and approaches to improve system performance and developer productivity
  • Contribute to technical documentation and system design specifications

Preferred Qualifications

  • Master's degree in Computer Science, Software Engineering, or related field
  • Experience with Kubernetes and containerized application deployment
  • Experience with observability frameworks such as OpenTelemetry (OTEL)
  • Proficiency with continuous integration and deployment methodologies (CI/CD)
  • Knowledge of cloud platforms (AWS, Azure, or GCP)
  • Experience with microservices architecture
  • Experience with containerization technologies (Docker)
  • Understanding of DevOps practices
  • Experience with message brokers (Kafka, RabbitMQ)
  • Background in agile development methodologies
  • Experience with test-driven development and automated testing frameworks
  • Familiarity with financial data models and structures
  • Background in financial services or experience with financial data

Required Qualifications

  • Bachelor's degree in Computer Science, Information Technology, or related field
  • 7+ years of professional experience in backend software development
  • 5+ years of experience with Java programming and core Java concepts
  • 3+ years of experience with the Spring framework (Spring Boot, Spring MVC, Spring Data)
  • Familiarity with OLAP concepts and high availability database design principles
  • Experience building systems that handle large data volumes with high throughput requirements
  • Proficiency in SQL and database optimization techniques
  • Experience with RESTful API design and implementation
  • Solid understanding of design patterns and object-oriented programming
  • Experience with version control systems (Git)
  • Strong problem-solving skills and attention to detail
  • Excellent communication skills to collaborate effectively across teams and explain technical concepts to non-technical stakeholders

What We Offer You

  • Transparent compensation schemes and comprehensive employee benefits, tailored to your location, ensuring your financial security, health, and overall wellbeing.
  • Flexible working arrangements, advanced technology, and collaborative workspaces.
  • A culture of high performance and innovation where we experiment with new ideas and take responsibility for achieving results.
  • A global network of talented colleagues, who inspire, support, and share their expertise to innovate and deliver for our clients.
  • Global Orientation program to kickstart your journey, followed by access to our Learning@MSCI platform, LinkedIn Learning Pro and tailored learning opportunities for ongoing skills development.
  • Multi-directional career paths that offer professional growth and development through new challenges, internal mobility and expanded roles.
  • We actively nurture an environment that builds a sense of inclusion, belonging and connection, including eight Employee Resource Groups:
All Abilities, Asian Support Network, Black Leadership Network, Climate Action Network, Hola! MSCI, Pride & Allies, Women in Tech, and Women’s Leadership Forum. At MSCI we are passionate about what we do, and we are inspired by our purpose - to power better investment decisions. You’ll be part of an industry-leading network of creative, curious, and entrepreneurial pioneers. This is a space where you can challenge yourself, set new standards and perform beyond expectations for yourself, our clients, and our industry. MSCI is a leading provider of critical decision support tools and services for the global investment community. With over 50 years of expertise in research, data, and technology, we power better investment decisions by enabling clients to understand and analyze key drivers of risk and return and confidently build more effective portfolios. We create industry-leading research-enhanced solutions that clients use to gain insight into and improve transparency across the investment process. MSCI Inc. is an equal opportunity employer. It is the policy of the firm to ensure equal employment opportunity without discrimination or harassment on the basis of race, color, religion, creed, age, sex, gender, gender identity, sexual orientation, national origin, citizenship, disability, marital and civil partnership/union status, pregnancy (including unlawful discrimination on the basis of a legally protected parental leave), veteran status, or any other characteristic protected by law. MSCI is also committed to working with and providing reasonable accommodations to individuals with disabilities. If you are an individual with a disability and would like to request a reasonable accommodation for any part of the application process, please email Disability.Assistance@msci.com and indicate the specifics of the assistance needed. 
Please note, this e-mail is intended only for individuals who are requesting a reasonable workplace accommodation; it is not intended for other inquiries. To all recruitment agencies MSCI does not accept unsolicited CVs/Resumes. Please do not forward CVs/Resumes to any MSCI employee, location, or website. MSCI is not responsible for any fees related to unsolicited CVs/Resumes. Note on recruitment scams We are aware of recruitment scams where fraudsters impersonating MSCI personnel may try and elicit personal information from job seekers. Read our full note on careers.msci.com

Posted 4 days ago

Apply

5.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Linkedin logo

Senior GenAI Engineer

Location: Chennai
Mode: Hybrid

Core Generative AI & LLM Skills:

  • 5+ years in Software Engineering, 1+ year in Generative AI.
  • Strong understanding of LLMs, prompt engineering, and RAG.
  • Experience with multi-agent system design (planning, delegation, feedback).
  • Hands-on with LangChain (tools, memory, callbacks) and LangGraph (multi-agent orchestration).
  • Proficient in using vector DBs (OpenSearch, Pinecone, FAISS, Weaviate).
  • Skilled in Amazon Bedrock and integrating LLMs like Claude, Titan, Llama.
  • Strong Python (LangChain, LangGraph, FastAPI, boto3).
  • Experience building MCP servers/tools.
  • Designed robust APIs and integrated external tools with agents.
  • AWS proficiency: Lambda, API Gateway, DynamoDB, S3, Neptune, Bedrock Agents.
  • Knowledge of data privacy, output filtering, audit logging.
  • Familiarity with AWS IAM, VPCs, and KMS encryption.

Desired Skills:

  • Integration with Confluence, CRMs, knowledge bases, etc.
  • Observability with Langfuse, OpenTelemetry, Prompt Catalog.
  • Understanding of model alignment & bias mitigation.
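For candidates brushing up on the RAG and vector-DB skills this posting lists, the retrieval step can be illustrated with a stdlib-only toy: rank documents by cosine similarity between embedding vectors. The document names and vectors below are hand-made stand-ins; a real system would use an embedding model and a vector DB such as OpenSearch or FAISS.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Hand-made "embeddings" standing in for real model output.
docs = {
    "kafka runbook": [0.9, 0.1, 0.0],
    "billing faq":   [0.1, 0.9, 0.2],
    "tracing guide": [0.2, 0.1, 0.9],
}

def retrieve(query_vec, k=2):
    """Return the top-k document names by similarity to the query vector."""
    ranked = sorted(docs, key=lambda name: cosine(query_vec, docs[name]),
                    reverse=True)
    return ranked[:k]

print(retrieve([0.85, 0.15, 0.05]))  # query vector close to the kafka runbook
```

The retrieved snippets would then be stuffed into the LLM prompt, which is the "augmented generation" half of RAG.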

Posted 4 days ago

Apply

0.0 - 5.0 years

10 - 16 Lacs

Musheerabad, Hyderabad, Telangana

On-site

Indeed logo

As the Senior DevOps Engineer focused on Observability, you will set observability standards, lead automation efforts and mentor engineers, ensuring all monitoring and Datadog configuration changes are implemented as Infrastructure-as-Code (IaC). You will lead the design and management of a code-driven Datadog observability platform, providing end-to-end visibility into Java applications, Kubernetes workloads and containerized infrastructure. This role emphasizes cost-effective observability at scale, requiring deep expertise in Datadog monitoring, logging, tracing and optimization techniques. You'll collaborate closely with SRE, DevOps and Software Engineering teams to standardize monitoring and logging practices and deliver scalable, reliable and cost-efficient observability solutions. This is a hands-on engineering role focused on observability-as-code: all monitoring, logging, alerting and Datadog configurations are defined and managed through Terraform, APIs and CI/CD workflows, not manual configuration in the Datadog UI.
PRIMARY RESPONSIBILITIES:

  • Own and define observability standards for Java applications, Kubernetes workloads and cloud infrastructure
  • Configure and manage the Datadog platform using Terraform and Infrastructure-as-Code (IaC) best practices
  • Drive adoption of structured JSON logging, distributed tracing and custom metrics across Java and Python services
  • Optimize Datadog usage through cost governance, log filtering, sampling strategies and automated reporting
  • Collaborate closely with Java developers and platform engineers to standardize instrumentation and alerting
  • Troubleshoot and resolve issues with missing or misconfigured logs, metrics and traces, working with developers to ensure proper instrumentation and data flow into Datadog
  • Contribute to incident response efforts, using Datadog insights for actionable alerting, root cause analysis (RCA) and reliability improvements
  • Serve as the primary point of contact for Datadog-related requests, supporting internal teams with onboarding, integration and usage questions
  • Continuously audit and tune monitors for alert quality, reducing false positives and improving actionable signal detection
  • Maintain clear internal documentation on Datadog usage, standards, integrations and IaC workflows
  • Evaluate and propose improvements to the observability stack, including new Datadog features, OpenTelemetry adoption and future architecture changes
  • Mentor engineers and develop internal training programs on Datadog, observability-as-code and modern log pipeline architecture

QUALIFICATIONS:

  • Bachelor's degree in Computer Science, Engineering, Mathematics, Physics or a related technical field
  • 5+ years of experience in DevOps, Site Reliability Engineering, or related roles with a strong focus on observability and infrastructure as code
  • Hands-on experience managing and scaling Datadog programmatically using code-based workflows (e.g. Terraform, APIs, CI/CD)
  • Deep expertise in Datadog including APM, logs, metrics, tracing, dashboards and audit trails
  • Proven experience integrating Datadog observability into CI/CD pipelines (e.g. GitLab CI, AWS CodePipeline, GitHub Actions)
  • Solid understanding of AWS services and best practices for monitoring services on Kubernetes infrastructure
  • Strong background in Java application development (preferred)

Job Types: Full-time, Permanent, Contractual / Temporary
Contract length: 12 months
Pay: ₹1,000,000.00 - ₹1,600,000.00 per year
Benefits: Paid sick time
Schedule: Monday to Friday, night shift (US shift)
Ability to commute/relocate: Musheerabad, Hyderabad, Telangana: Reliably commute or planning to relocate before starting work (Preferred)
Education: Bachelor's (Preferred)
Experience: DevOps: 5 years (Required)
Language: English (Required)
Location: Musheerabad, Hyderabad, Telangana (Preferred)
Shift availability: Night Shift (Required)
Work Location: In person
Expected Start Date: 01/07/2025
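The observability-as-code workflow this posting describes (monitors defined in Terraform rather than the Datadog UI) can be sketched with the official Datadog Terraform provider's `datadog_monitor` resource. The service name, metric, query and thresholds below are illustrative assumptions, not this employer's actual configuration:

```hcl
# Hypothetical Datadog monitor managed as code instead of via the UI.
# Metric name, tags and thresholds are placeholders for illustration.
resource "datadog_monitor" "jvm_heap" {
  name    = "High JVM heap usage on payments service"
  type    = "metric alert"
  message = "Heap usage above threshold. Notify @slack-payments-oncall"

  # Average heap utilization over the last 5 minutes for one service.
  query = "avg(last_5m):avg:jvm.heap_memory_use{service:payments} > 0.9"

  monitor_thresholds {
    warning  = 0.8
    critical = 0.9
  }

  tags = ["team:payments", "managed-by:terraform"]
}
```

Reviewing a change like this in a pull request, then applying it from CI/CD, is what keeps monitor definitions auditable and reproducible.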

Posted 4 days ago

Apply

7.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Linkedin logo

🚨 Urgent Hiring: Java Spring Boot Developer – 7+ Years Experience | Gurgaon (On-site) 🚨

📍 Location: Gurgaon, India
Type: Full-Time (preferred)

Job Summary: We are seeking a highly skilled and motivated Java Spring Boot Developer to join our engineering team. This role focuses on developing and deploying scalable, event-driven applications on OpenShift, with data ingestion from Apache Kafka and transformation logic written in Apache Camel. The ideal candidate should possess a strong understanding of enterprise integration patterns, stream processing, and protocols, and have experience with observability tools and concepts in AI-enhanced applications.

Key Responsibilities:

  • Design, develop, and deploy Java Spring Boot applications on Red Hat OpenShift.
  • Build robust data pipelines with Apache Kafka for high-throughput ingestion and real-time processing.
  • Implement transformation and routing logic using Apache Camel and Enterprise Integration Patterns (EIPs).
  • Develop components that interface with various protocols including HTTP, JMS, and database systems (SQL/NoSQL).
  • Utilize Apache Flink or similar tools for complex event and stream processing where necessary.
  • Integrate observability solutions (e.g., Prometheus, Grafana, ELK, OpenTelemetry) to ensure monitoring, logging, and alerting.
  • Collaborate with AI/ML teams to integrate or enable AI-driven capabilities within applications.
  • Write unit and integration tests, participate in code reviews, and support CI/CD practices.
  • Troubleshoot and optimize application performance and data flows in production environments.

Required Skills & Qualifications:

  • 5+ years of hands-on experience in Java development with strong proficiency in Spring Boot.
  • Solid experience with Apache Kafka (consumer/producer patterns, schema registry; Kafka Streams is a plus).
  • Proficient in Apache Camel and understanding of EIPs (routing, transformation, aggregation, etc.).
  • Strong grasp of various protocols (HTTP, JMS, TCP) and messaging paradigms.
  • In-depth understanding of database concepts, both relational and NoSQL.
  • Experience with stream processing technologies such as Apache Flink, Kafka Streams, or Spark Streaming.
  • Familiarity with OpenShift or similar container platforms (Kubernetes, Docker).
  • Knowledge of observability tools and techniques: logging, metrics, tracing.
  • Exposure to AI concepts (basic understanding of ML model integration, AI-driven decisions, etc.).

⚠️ Important Notes: Only candidates with a notice period of 30 days or less will be considered. A PF account is a must for joining full time.

📬 How to Apply: Email your resume to career@strive4x.net with the subject line: Java Spring Boot Developer - Gurgaon. Please include the following details:

  • Full Name
  • Mobile Number
  • Current Location
  • Total Experience (in years)
  • Current Company
  • Current CTC
  • Expected CTC
  • Notice Period
  • Are you open to relocating to Gurgaon? (Yes/No)

👉 Know someone who fits the role? Tag or share this with them!

#JavaJobs #SpringBoot #GurgaonJobs #Kafka #ApacheCamel #OpenShift #HiringNow #SoftwareJobs #SeniorDeveloper #Microservices #Strive4X

Posted 4 days ago

Apply

3.0 - 5.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Foundit logo

Job Requisition ID # 25WD89378

Position Overview

Join Autodesk as an Observability Engineer driving the architecture, scale, and evolution of our global observability platform engineering team. You will lead technical strategy, platform innovation, and cross-functional collaboration to elevate telemetry across the engineering org.

Responsibilities

  • Manage Splunk for centralized logging; ensure system health, performance, and uptime
  • Configure OpenTelemetry agents across AWS environments
  • Onboard log sources from applications, cloud services, and network devices into Splunk
  • Optimize log formats and develop field extractions in collaboration with engineering teams
  • Build and maintain Splunk dashboards, alerts, reports, and knowledge objects
  • Write and refine SPL queries for troubleshooting and operational insight
  • Maintain Splunk indexes for optimal storage and performance
  • Troubleshoot issues across the logging pipeline
  • Monitor AWS logging infrastructure; contribute to capacity planning
  • Automate tasks using Python, Bash, or AWS CLI
  • Participate in logging platform on-call rotation
  • Align with senior engineers on observability standards and best practices
  • Support SSO/AD integration with Splunk
  • Document configurations and SOPs

Minimum Qualifications

  • Bachelor's in CS, Engineering, or related field
  • 3-5 years in DevOps, SRE, or Observability with a logging focus
  • Strong Linux command-line and scripting (Python/Bash) skills
  • Solid understanding of OpenTelemetry and log pipelines
  • Proven experience with Splunk (SPL, dashboards, admin)
  • Familiar with AWS (EC2, ECS, Lambda, S3, CloudWatch)

Preferred Qualifications

  • Splunk Core Certified User/Power User
  • Experience deploying OTEL agents in production
  • Familiarity with IaC (Terraform, CloudFormation, Ansible)
  • Logging experience with Docker/Kubernetes
  • Understanding of AWS observability tools (CloudWatch, X-Ray)
  • Solid grasp of networking/security in log management

#LI-MR2

Learn More About Autodesk

Welcome to Autodesk! Amazing things are created every day with our software - from the greenest buildings and cleanest cars to the smartest factories and biggest hit movies. We help innovators turn their ideas into reality, transforming not only how things are made, but what can be made. We take great pride in our culture here at Autodesk - our Culture Code is at the core of everything we do. Our values and ways of working help our people thrive and realize their potential, which leads to even better outcomes for our customers. When you're an Autodesker, you can be your whole, authentic self and do meaningful work that helps build a better future for all. Ready to shape the world and your future? Join us!

Salary transparency

Salary is one part of Autodesk's competitive compensation package. Offers are based on the candidate's experience and geographic location. In addition to base salaries, we also have a significant emphasis on discretionary annual cash bonuses, commissions for sales roles, stock or long-term incentive cash grants, and a comprehensive benefits package.

Diversity & Belonging

We take pride in cultivating a culture of belonging and an equitable workplace where everyone can thrive. Learn more here: Are you an existing contractor or consultant with Autodesk? Please search for open jobs and apply internally (not on this external site).
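Agent and log-pipeline work like the Autodesk role above typically centers on an OpenTelemetry Collector pipeline: receivers ingest telemetry, processors shape it, and exporters forward it to a backend. A minimal sketch of a Collector config using the contrib Splunk HEC exporter; the endpoint, token variable and index name are placeholders, not a real deployment:

```yaml
# Minimal OpenTelemetry Collector pipeline: receive OTLP logs,
# batch them, and forward to Splunk via the HTTP Event Collector.
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317

processors:
  batch: {}

exporters:
  splunk_hec:
    token: "${env:SPLUNK_HEC_TOKEN}"
    endpoint: "https://splunk.example.com:8088/services/collector"
    index: "app_logs"

service:
  pipelines:
    logs:
      receivers: [otlp]
      processors: [batch]
      exporters: [splunk_hec]
```

Onboarding a new log source is then mostly a matter of pointing its OTLP exporter at the Collector rather than teaching every application about Splunk directly.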

Posted 4 days ago

Apply

Exploring OpenTelemetry Jobs in India

The job market for OpenTelemetry professionals in India is growing rapidly, with many companies adopting the technology to improve their observability and monitoring capabilities. If you are a job seeker interested in OpenTelemetry roles, there are plenty of opportunities waiting for you in India.

Top Hiring Locations in India

  1. Bangalore
  2. Hyderabad
  3. Pune
  4. Chennai
  5. Mumbai

Average Salary Range

The average salary range for OpenTelemetry professionals in India varies by experience level:

  • Entry-level: INR 5-8 lakhs per annum
  • Mid-level: INR 10-15 lakhs per annum
  • Experienced: INR 18-25 lakhs per annum

Career Path

A typical career path in OpenTelemetry may progress as follows:

  1. Junior Developer
  2. Developer
  3. Senior Developer
  4. Tech Lead

Related Skills

In addition to proficiency in OpenTelemetry, employers often look for candidates with the following skills:

  • Proficiency in cloud platforms like AWS, GCP, or Azure
  • Knowledge of monitoring and observability tools
  • Strong programming skills in languages like Java, Python, or Go

Interview Questions

  • What is OpenTelemetry and how does it differ from other monitoring tools? (basic)
  • How would you set up OpenTelemetry in a microservices architecture? (medium)
  • Can you explain the benefits of distributed tracing in OpenTelemetry? (medium)
  • Describe how sampling works in OpenTelemetry. (medium)
  • How would you troubleshoot performance issues using OpenTelemetry data? (advanced)
  • Explain the role of exporters in OpenTelemetry. (basic)
  • What are the key components of an OpenTelemetry instrumentation library? (medium)
  • How does OpenTelemetry handle context propagation between services? (medium)
  • Can you explain the concept of spans and traces in OpenTelemetry? (basic)
  • How would you integrate OpenTelemetry with a logging framework? (medium)
  • Describe the process of creating custom metrics in OpenTelemetry. (advanced)
  • What are the common challenges faced when implementing OpenTelemetry in a large-scale system? (advanced)
  • How does OpenTelemetry handle data collection in a multi-tenant environment? (advanced)
  • What are the best practices for securing OpenTelemetry data transmissions? (advanced)
  • Can you explain the role of the OpenTelemetry Collector in data processing? (medium)
  • How would you monitor the performance of OpenTelemetry itself? (advanced)
  • Describe a scenario where OpenTelemetry helped improve the performance of a system. (advanced)
  • How does OpenTelemetry handle sampling in a distributed system? (medium)
  • What are the key differences between OpenTelemetry and other APM tools? (medium)
  • How can OpenTelemetry be integrated with containerized applications? (medium)
  • Explain the concept of baggage in OpenTelemetry context propagation. (medium)
  • How would you handle log correlation with OpenTelemetry traces? (advanced)
  • Can you share your experience with migrating from a different monitoring tool to OpenTelemetry? (advanced)
  • What are the key considerations for scaling OpenTelemetry in a growing infrastructure? (advanced)
  • How would you contribute to the OpenTelemetry open-source project? (advanced)
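Several of the questions above (spans, traces, context propagation) come down to the W3C Trace Context `traceparent` header, which OpenTelemetry's default propagator uses to carry trace identity between services. A stdlib-only sketch of building and parsing that header; real services would use the OpenTelemetry SDK's propagators rather than hand-rolling this:

```python
import re
import secrets

def make_traceparent(trace_id=None, span_id=None, sampled=True):
    """Build a W3C Trace Context traceparent header:
    version-traceid-spanid-flags (00-<32 hex>-<16 hex>-<2 hex>)."""
    trace_id = trace_id or secrets.token_hex(16)  # 16 bytes -> 32 hex chars
    span_id = span_id or secrets.token_hex(8)     # 8 bytes  -> 16 hex chars
    flags = "01" if sampled else "00"
    return f"00-{trace_id}-{span_id}-{flags}"

def parse_traceparent(header):
    """Split a traceparent header into its fields, or return None if malformed."""
    m = re.fullmatch(
        r"([0-9a-f]{2})-([0-9a-f]{32})-([0-9a-f]{16})-([0-9a-f]{2})", header)
    if not m:
        return None
    version, trace_id, span_id, flags = m.groups()
    return {"version": version, "trace_id": trace_id,
            "span_id": span_id, "sampled": flags == "01"}

# Propagation in a nutshell: a downstream service keeps the trace_id
# from the incoming request but starts a new span of its own.
incoming = make_traceparent()
ctx = parse_traceparent(incoming)
outgoing = make_traceparent(trace_id=ctx["trace_id"])  # same trace, new span
```

This is the mechanism behind "context propagation": every hop shares one trace_id, while each span_id identifies a single unit of work within that trace.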

Conclusion

As you prepare for OpenTelemetry job interviews in India, brush up on your technical knowledge, practice coding exercises, and familiarize yourself with common interview questions. With the right skills and preparation, you can confidently pursue a rewarding career in this exciting field. Good luck!


Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot


Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.
