
538 Prometheus Jobs - Page 2

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

4.0 - 9.0 years

12 - 16 Lacs

Bengaluru

Work from Office


Education: Bachelor of Engineering. Service Line: Quality. Responsibilities/Keywords: Observability, Observability Architect, Dynatrace, Monitoring, Logging and Alerting, End-to-End Visibility. Key Tools and Platforms: Dynatrace, Splunk, New Relic, Prometheus, and Grafana (the immediate requirement is for Dynatrace and Splunk). Preferred Skills: Technology - Architecture - Architecture - ALL.
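For context on the Prometheus and Grafana portion of this stack, a minimal, hedged sketch of how a service typically exposes metrics for Prometheus to scrape, using the Python prometheus_client library; the metric names and port are illustrative, not taken from the posting.

from prometheus_client import Counter, Histogram, start_http_server
import random, time

# Illustrative metric names; a real service would choose names matching its domain.
REQUESTS = Counter("app_requests_total", "Total requests handled", ["status"])
LATENCY = Histogram("app_request_duration_seconds", "Request latency in seconds")

def handle_request():
    with LATENCY.time():                       # observe how long the work takes
        time.sleep(random.uniform(0.01, 0.1))  # stand-in for real work
    REQUESTS.labels(status="ok").inc()

if __name__ == "__main__":
    start_http_server(8000)  # Prometheus scrapes http://<host>:8000/metrics
    while True:
        handle_request()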

Posted 1 day ago

Apply

6.0 - 11.0 years

4 - 8 Lacs

Bengaluru

Work from Office


Lead the solution design and implementation of core platform features, including API design and implementation. Provide operational support by building platform monitoring tools/dashboards and ad hoc reports; handle defect fixes, performance testing, and endurance testing. Willingness to work in a second shift from offshore in order to overlap with the onsite team.
Qualifications: Overall 8+ years of experience developing internet-scale solutions, primarily using Java, Spring Boot, and NoSQL databases. Must have demonstrated proficiency and experience in the following tools and technologies: Java 11 (lambdas, streams, CompletableFuture, Optional, generics); Spring Boot (WebFlux, Reactor 3), Spring Data, REST; Java functional and reactive programming; test-driven development; asynchronous reactive microservices using Vert.x; REST APIs using Spring Boot 2.0 (reactive) and skill in the OpenAPI (Swagger) specification; designing database schemas, index design, and optimizations for query tuning; working knowledge of cloud technologies (e.g., Docker, Kubernetes, Jaeger, Prometheus); modern software engineering tools: Git workflows, Gradle, load-testing tools, mock frameworks; good knowledge of messaging systems like Kafka and MQ. Takes pride in writing clean code and performing peer code reviews and architecture reviews. A bachelor's degree in engineering or a related field; Java certification is a plus.
Candidate soft skills: demonstrated ability to learn new skills; demonstrated evidence of going above and beyond to make projects successful; good communication is necessary.
Skills: Core Java, Spring Boot, REST API, Vert.x, NoSQL, Kafka, MQ.

Posted 1 day ago

Apply

5.0 - 10.0 years

4 - 8 Lacs

Bengaluru

Work from Office


Lead the solution design and implementation of core platform features, including API design and implementation. Provide operational support by building platform monitoring tools/dashboards and ad hoc reports; handle defect fixes, performance testing, and endurance testing. Willingness to work in a second shift from offshore in order to overlap with the onsite team.
Qualifications: Overall 4+ years of experience developing internet-scale solutions, primarily using Java, Spring Boot, and NoSQL databases. Must have demonstrated proficiency and experience in the following tools and technologies: Java 11 (lambdas, streams, CompletableFuture, Optional, generics); Spring Boot (WebFlux, Reactor 3), Spring Data, REST; Java functional and reactive programming; test-driven development; asynchronous reactive microservices using Vert.x; REST APIs using Spring Boot 2.0 (reactive) and skill in the OpenAPI (Swagger) specification; designing database schemas, index design, and optimizations for query tuning; working knowledge of cloud technologies (e.g., Docker, Kubernetes, Jaeger, Prometheus); modern software engineering tools: Git workflows, Gradle, load-testing tools, mock frameworks; good knowledge of messaging systems like Kafka and MQ. Takes pride in writing clean code and performing peer code reviews and architecture reviews. A bachelor's degree in engineering or a related field; Java certification is a plus.
Candidate soft skills: demonstrated ability to learn new skills; demonstrated evidence of going above and beyond to make projects successful; good communication is necessary.
Skills: Core Java, Spring Boot, REST API, Vert.x, NoSQL, Kafka, MQ.

Posted 1 day ago

Apply

8.0 - 13.0 years

7 - 11 Lacs

Hyderabad

Work from Office


Primary Skills: 4-6 years of experience with Azure Cloud and Azure services. Experience in infra design, estimations, and impact assessment for complex requirements. Experience automating processes using Helm for managing Kubernetes deployments. Good understanding of the software development lifecycle and DevOps culture. IaC with Terraform, alongside automation with Ansible. Strong CI/CD knowledge, with hands-on work across Azure DevOps. Prior work with tools such as native Azure DevOps deployment, TeamCity, Octopus Deploy, etc. Experience with observability tools like Splunk, Prometheus, and Dynatrace. History of scripting (PowerShell, Node.js, Python, etc.). Building and setting up new development tools and infrastructure. Understanding the needs of stakeholders and conveying this to developers. Working on ways to automate and improve development and release processes. Testing and examining code written by others and analysing results. Ensuring that systems are safe and secure against cybersecurity threats. Identifying technical problems and developing software updates and fixes. Working with software developers and engineers to ensure that development follows established processes and works as intended. Planning out projects and being involved in project management decisions. Previous experience in the banking domain is not mandatory, but preferable.
Develops and maintains mission-critical information extraction, analysis, and management systems. Implements streaming analysis algorithms to generate question-focused data sets (QFDs). Provides direct and responsive support for urgent analytic needs. Participates in architecture and software development activities. Translates loosely defined requirements into solutions. Uses open-source technologies and tools to accomplish specific use cases encountered within the project. Uses coding languages or scripting methodologies to solve a problem with a custom workflow. Collaborates with others on the project to brainstorm about the best way to tackle a complex technological infrastructure, security, or development problem. Performs incremental testing actions on code, processes, and deployments to identify ways to streamline execution and minimize errors encountered.
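As a hedged illustration of the observability-tooling requirement above, a minimal Python sketch that queries a Prometheus server's HTTP API for a CPU expression; the server URL and PromQL query are assumptions for the example, not details from the posting.

import requests

PROM_URL = "http://prometheus.example.internal:9090"                     # assumed address
QUERY = 'avg(rate(node_cpu_seconds_total{mode!="idle"}[5m]))'            # illustrative PromQL

def query_prometheus(expr: str):
    # The /api/v1/query endpoint evaluates an instant PromQL query.
    resp = requests.get(f"{PROM_URL}/api/v1/query", params={"query": expr}, timeout=10)
    resp.raise_for_status()
    data = resp.json()
    if data.get("status") != "success":
        raise RuntimeError(f"query failed: {data}")
    return data["data"]["result"]

if __name__ == "__main__":
    for sample in query_prometheus(QUERY):
        print(sample["metric"], sample["value"])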

Posted 1 day ago

Apply

3.0 - 6.0 years

4 - 8 Lacs

Bengaluru

Work from Office


Strong proficiency in Java (8 or higher) and the Spring Boot framework. Basic foundation in AWS services such as EC2, Lambda, API Gateway, S3, CloudFormation, DynamoDB, and RDS. Experience developing microservices and RESTful APIs. Understanding of cloud architecture and deployment strategies. Familiarity with CI/CD pipelines and tools such as Jenkins, GitHub Actions, or AWS CodePipeline. Experience with monitoring/logging tools like CloudWatch, the ELK Stack, or Prometheus is desirable. Familiarity with security best practices for cloud-native apps (IAM roles, encryption, etc.).
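Several of these AWS-plus-monitoring requirements mention CloudWatch; as a rough illustration (not this employer's code), the boto3 sketch below publishes a custom metric. The namespace, metric name, and region are invented for the example.

import boto3

cloudwatch = boto3.client("cloudwatch", region_name="ap-south-1")  # region is an assumption

def record_order_latency(millis: float) -> None:
    # Publishes one data point to a hypothetical custom namespace.
    cloudwatch.put_metric_data(
        Namespace="DemoApp/Checkout",
        MetricData=[{
            "MetricName": "OrderLatency",
            "Value": millis,
            "Unit": "Milliseconds",
        }],
    )

if __name__ == "__main__":
    record_order_latency(123.4)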

Posted 1 day ago

Apply

8.0 - 13.0 years

10 - 15 Lacs

Mumbai

Work from Office


Skill: Java, AWS. Experience: 6-9 years. Role: T2. Responsibilities: Strong proficiency in Java (8 or higher) and the Spring Boot framework. Hands-on experience with AWS services such as EC2, Lambda, API Gateway, S3, CloudFormation, DynamoDB, and RDS. Experience developing microservices and RESTful APIs. Understanding of cloud architecture and deployment strategies. Familiarity with CI/CD pipelines and tools such as Jenkins, GitHub Actions, or AWS CodePipeline. Knowledge of containerization (Docker) and orchestration tools (ECS/Kubernetes) is a plus. Experience with monitoring/logging tools like CloudWatch, the ELK Stack, or Prometheus is desirable. Familiarity with security best practices for cloud-native apps (IAM roles, encryption, etc.). Develop and maintain robust backend services and RESTful APIs using Java and Spring Boot. Design and implement microservices that are scalable, maintainable, and deployable in AWS. Integrate backend systems with AWS services including but not limited to Lambda, S3, DynamoDB, RDS, SNS/SQS, and CloudFormation. Collaborate with product managers, architects, and other developers to deliver end-to-end features. Participate in code reviews, design discussions, and agile development processes.

Posted 1 day ago

Apply

1.0 - 3.0 years

8 - 12 Lacs

Bengaluru

Work from Office


As a DevOps + Site Reliability Engineer, you will work in an agile, collaborative environment to build, deploy, configure, and support services in the IBM Cloud. Your responsibilities will encompass the design and implementation of innovative features and automation, fine-tuning and sustaining existing code for optimal performance, uncovering efficiencies, supporting adopters globally, and driving delivery of a highly available cloud offering within IBM Cloud Security Services. In this role, you will be implementing and consuming APIs in the IBM Cloud infrastructure environment while configuring and integrating services. You will be a motivated self-starter who loves to solve challenging problems and feels comfortable managing multiple and changing priorities and meeting deadlines in an entrepreneurial environment.
Your primary responsibilities include: contributing to new features and improving existing capabilities or processes while relentlessly troubleshooting problems to deliver; practicing secure development principles supporting continuous integration and delivery, leveraging tools such as Tekton, Ansible, and Terraform; orchestrating and maintaining Kubernetes/OpenShift clusters to ensure high availability and resilience; collaborating across teams in activities including code reviews, testing, audit support, and mitigating issues; continuously improving code, automation, testing, monitoring, and alerting processes to ensure proactive identification and resolution of potential issues; participating in the on-call rotation and leading or contributing to the problem resolution process for our clients, from analysis and troubleshooting to deploying workarounds or fixes.
Required education: Bachelor's degree. Preferred education: Master's degree. Required technical and professional expertise: 1-3 years of experience delivering code and debugging problems; 1-3 years of experience in an SRE, DevOps, or similar role; a strong preference for collaborative teamwork; a rigorous approach to problem-solving; experience with cloud computing technologies; programming skills (scripting, Go, Python, or similar); hands-on experience with container technologies: Kubernetes (IKS), Red Hat OpenShift, Docker, Rancher, Podman; proficiency with automation tools and CI/CD systems.
Preferred technical and professional experience: experience working with production Kubernetes/OpenShift environments (strongly preferred); excellent Git skills (merges, rebase, branching, forking, submodules); experience with Tekton, Ansible, Terraform, Jenkins; experience with Rust, C/C++, or Java; experience using, configuring, and troubleshooting CI/CD systems; an excellent record of improving solutions through automation; experience with monitoring and alerting tools (e.g., Prometheus, Grafana, Kibana, Sysdig, LogDNA); SQL or PostgreSQL experience.
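As a hedged companion to the Kubernetes/OpenShift operations work described above, a minimal Python sketch using the official kubernetes client to list pods that are not in the Running phase; the kubeconfig-based auth and default namespace are assumptions for illustration.

from kubernetes import client, config

def pods_not_running(namespace: str = "default"):
    # Loads credentials from ~/.kube/config; inside a cluster you would call
    # config.load_incluster_config() instead.
    config.load_kube_config()
    v1 = client.CoreV1Api()
    problems = []
    for pod in v1.list_namespaced_pod(namespace).items:
        if pod.status.phase != "Running":
            problems.append((pod.metadata.name, pod.status.phase))
    return problems

if __name__ == "__main__":
    for name, phase in pods_not_running():
        print(f"{name}: {phase}")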

Posted 1 day ago

Apply

6.0 - 11.0 years

8 - 13 Lacs

Hyderabad

Work from Office


Responsibilities: Design, implement, and manage scalable, secure, and highly available infrastructure on GCP. Automate infrastructure provisioning using tools like Terraform or Deployment Manager. Build and manage CI/CD pipelines using Jenkins, GitLab CI, or similar tools. Manage containerized applications using Kubernetes (GKE) and Docker. Monitor system performance and troubleshoot infrastructure issues using tools like Stackdriver, Prometheus, or Grafana. Implement security best practices across cloud infrastructure and deployments. Collaborate with development and operations teams to streamline release processes. Ensure high availability, disaster recovery, and backup strategies are in place. Participate in performance tuning and cost optimization of GCP resources.
Requirements: Strong hands-on experience with Google Cloud Platform (GCP) services; Harness is an optional skill. Proficiency in Infrastructure as Code tools like Terraform or Google Deployment Manager. Experience with Kubernetes (especially GKE) and Docker. Knowledge of CI/CD tools such as Jenkins, GitHub Actions, GitLab CI, or CircleCI. Familiarity with scripting languages (e.g., Bash, Python). Experience with logging and monitoring tools (e.g., Stackdriver, Prometheus, ELK, Grafana). Understanding of networking, security, and IAM in a cloud environment. Strong problem-solving and communication skills. Experience in Agile environments and DevOps culture. GCP Associate or Professional Cloud DevOps Engineer certification. Experience with Helm, ArgoCD, or other GitOps tools. Familiarity with other cloud platforms (AWS, Azure) is a plus. Knowledge of application performance tuning and cost management on GCP.

Posted 1 day ago

Apply

14.0 - 19.0 years

8 - 12 Lacs

Hyderabad

Work from Office


Strong proficiency in Java (8 or higher) and Spring Boot framework. Hands-on experience with AWS services such as EC2, Lambda, API Gateway, S3, CloudFormation, DynamoDB, RDS. Experience developing microservices and RESTful APIs. Understanding of cloud architecture and deployment strategies. Familiarity with CI/CD pipelines and tools such as Jenkins, GitHub Actions, or AWS CodePipeline. Knowledge of containerization (Docker) and orchestration tools (ECS/Kubernetes) is a plus. Experience with monitoring/logging tools like CloudWatch, ELK Stack, or Prometheus is desirable. Familiarity with security best practices for cloud-native apps (IAM roles, encryption, etc.).

Posted 1 day ago

Apply

10.0 - 15.0 years

12 - 17 Lacs

Hyderabad

Work from Office


8 years of hands-on experience in AWS, Kubernetes, Prometheus, CloudWatch, Splunk, Datadog, Terraform, scripting (Python/Go), and incident management. Architect and manage enterprise-level databases with 24/7 availability. Lead efforts on optimization, backup, and disaster recovery planning. Design and manage scalable CI/CD pipelines for cloud-native apps. Automate infrastructure using Terraform/CloudFormation. Implement container orchestration using Kubernetes and ECS. Ensure cloud security, compliance, and cost optimization. Monitor performance and implement high-availability setups. Collaborate with dev, QA, and security teams; drive architecture decisions. Troubleshoot and resolve issues in a timely manner, ensuring optimal system performance. Keep up to date with industry trends and advancements, incorporating best practices into our development processes. Bachelor's or Master's degree in Computer Science or a related field. Solid understanding of AWS services, including but not limited to EC2, Lambda, S3, and RDS. Experience with Kafka for building event-driven architectures. Strong database skills, including SQL and NoSQL databases. Familiarity with containerization and orchestration tools (Docker, Kubernetes). Excellent problem-solving and troubleshooting skills. Good to have: TM Vault core banking knowledge. Strong communication and collaboration skills.

Posted 1 day ago

Apply

9.0 - 14.0 years

11 - 16 Lacs

Hyderabad

Work from Office


We are seeking a skilled and proactive DevOps Engineer with deep expertise in Google Cloud Platform (GCP), Google Kubernetes Engine (GKE), and on-premises Kubernetes platforms like OpenShift. The ideal candidate will have a strong foundation in Infrastructure as Code (IaC) using Terraform, and a solid understanding of cloud-native networking, service meshes (e.g., Istio), and CI/CD pipelines. Experience with DevSecOps practices and security tools is highly desirable.
Key Responsibilities: Design, implement, and manage scalable infrastructure on GCP (especially the GKE Google Kubernetes environment) and on-prem Kubernetes (OpenShift). Develop and maintain Terraform modules for infrastructure provisioning and configuration. Troubleshoot and resolve complex issues related to networking, Istio, and Kubernetes clusters. Build and maintain CI/CD pipelines using tools such as Jenkins, Codefresh, or GitHub Actions. Integrate and manage DevSecOps tools such as Black Duck, Checkmarx, Twistlock, and Dependabot to ensure secure software delivery. Collaborate with development and security teams to enforce security best practices across the SDLC. Support and configure WAFs and on-prem load balancers as needed.
Required Skills & Qualifications: 5+ years of experience in a DevOps or Site Reliability Engineering role. Proficiency in GCP and GKE, with hands-on experience in OpenShift or similar on-prem Kubernetes platforms. Strong experience with Terraform and managing cloud infrastructure as code. Solid understanding of Kubernetes networking, Istio, and service mesh architectures. Experience with at least one CI/CD tool: Jenkins, Codefresh, or GitHub Actions. Familiarity with DevSecOps tools such as Black Duck, Checkmarx, Twistlock, and Dependabot. Strong Linux administration and scripting skills.
Nice to Have: Experience with WAFs and on-prem load balancers. Familiarity with monitoring and logging tools (e.g., Prometheus, ELK stack, Dynatrace, Splunk). Knowledge of container security and vulnerability scanning best practices. Familiarity with GenAI and Google Vertex AI on Google Cloud.

Posted 1 day ago

Apply

8.0 - 13.0 years

10 - 15 Lacs

Bengaluru

Work from Office


Strong proficiency in Java (8 or higher) and the Spring Boot framework. Hands-on experience with AWS services such as EC2, Lambda, API Gateway, S3, CloudFormation, DynamoDB, and RDS. Experience developing microservices and RESTful APIs. Understanding of cloud architecture and deployment strategies. Familiarity with CI/CD pipelines and tools such as Jenkins, GitHub Actions, or AWS CodePipeline. Knowledge of containerization (Docker) and orchestration tools (ECS/Kubernetes) is a plus. Experience with monitoring/logging tools like CloudWatch, the ELK Stack, or Prometheus is desirable. Familiarity with security best practices for cloud-native apps (IAM roles, encryption, etc.). Develop and maintain robust backend services and RESTful APIs using Java and Spring Boot. Design and implement microservices that are scalable, maintainable, and deployable in AWS. Integrate backend systems with AWS services including but not limited to Lambda, S3, DynamoDB, RDS, SNS/SQS, and CloudFormation. Collaborate with product managers, architects, and other developers to deliver end-to-end features. Participate in code reviews, design discussions, and agile development processes.

Posted 1 day ago

Apply

8.0 - 13.0 years

15 - 19 Lacs

Noida

Work from Office


About the Role: We are looking for a Staff Engineer - Real-time Data Processing to design and develop highly scalable, low-latency data streaming platforms and processing engines. This role is ideal for engineers who enjoy building core systems and infrastructure that enable mission-critical analytics at scale. You'll work on solving some of the toughest data engineering challenges in healthcare.
A Day in the Life: Architect, build, and maintain a large-scale real-time data processing platform. Collaborate with data scientists, product managers, and engineering teams to define system architecture and design. Optimize systems for scalability, reliability, and low-latency performance. Implement robust monitoring, alerting, and failover mechanisms to ensure high availability. Evaluate and integrate open-source and third-party streaming frameworks. Contribute to the overall engineering strategy and promote best practices for stream and event processing. Mentor junior engineers and lead technical initiatives.
What You Need: 8+ years of experience in backend or data engineering roles, with a strong focus on building real-time systems or platforms. Hands-on experience with stream processing frameworks like Apache Flink, Apache Kafka Streams, or Apache Spark Streaming. Proficiency in Java, Scala, Python, or Go for building high-performance services. Strong understanding of distributed systems, event-driven architecture, and microservices. Experience with Kafka, Pulsar, or other distributed messaging systems. Working knowledge of containerization tools like Docker and orchestration tools like Kubernetes. Proficiency in observability tools such as Prometheus, Grafana, and OpenTelemetry. Experience with cloud-native architectures and services (AWS, GCP, or Azure). Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
Here's What We Offer: Generous leave benefits of up to 40 days. Parental leave: experience one of the industry's best parental leave policies to spend time with your new addition. Sabbatical leave policy: want to focus on skill development, pursue an academic career, or just take a break? We've got you covered. Health insurance: we offer health benefits and insurance to you and your family for medically related expenses related to illness, disease, or injury. Pet-friendly office*: spend more time with your treasured friends, even when you're away from home. Bring your furry friends with you to the office and let your colleagues become their friends, too. (*Noida office only.) Creche facility for children*: say goodbye to worries and hello to a convenient and reliable creche facility that puts your child's well-being first. (*India offices.)
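To make the stream-processing requirements concrete, here is a small, hedged Python sketch of a Kafka consume loop using the kafka-python client; the broker address, topic, and group id are placeholders, and a platform like the one described would more likely use Kafka Streams, Flink, or Spark rather than a hand-rolled loop.

import json
from kafka import KafkaConsumer

# Placeholder connection details for illustration only.
consumer = KafkaConsumer(
    "clinical-events",                       # hypothetical topic name
    bootstrap_servers=["localhost:9092"],
    group_id="demo-processor",
    auto_offset_reset="earliest",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
)

for message in consumer:
    event = message.value
    # Stand-in for real stream processing: count, enrich, or route the event.
    print(f"partition={message.partition} offset={message.offset} type={event.get('type')}")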

Posted 1 day ago

Apply

8.0 - 13.0 years

13 - 18 Lacs

Noida

Work from Office


Engineering at Innovaccer: With every line of code, we accelerate our customers' success, turning complex challenges into innovative solutions. Collaboratively, we transform each data point we gather into valuable insights for our customers. Join us and be part of a team that's turning dreams of better healthcare into reality, one line of code at a time. Together, we're shaping the future and making a meaningful impact on the world.
About the Role: We are seeking a highly skilled Staff Engineer to lead the architecture, development, and scaling of our Marketplace platform, including portals and core services such as Identity & Access Management (IAM), Audit, and Tenant Management. This is a hands-on technical leadership role where you will drive engineering excellence, mentor teams, and ensure our platforms are secure, compliant, and built for scale.
A Day in the Life: Design and implement scalable, high-performance backend systems for all the platform capabilities. Lead the development and integration of IAM, audit logging, and compliance frameworks, ensuring secure access, traceability, and regulatory adherence. Champion best practices for reliability, availability, and performance across all marketplace and core service components. Mentor engineers, conduct code/design reviews, and establish engineering standards and best practices. Work closely with product, security, compliance, and platform teams to translate business and regulatory requirements into technical solutions. Evaluate and integrate new technologies, tools, and processes to enhance platform efficiency, developer experience, and compliance posture. Take end-to-end responsibility for the full software development lifecycle, from requirements and design through deployment, monitoring, and operational health.
What You Need: 8+ years of experience in backend or infrastructure engineering, with a focus on distributed systems, cloud platforms, and security. Proven expertise in building and scaling marketplace platforms and developer/admin/API portals. Deep hands-on experience with IAM, audit logging, and compliance tooling. Strong programming skills in languages such as Python or Go. Experience with cloud infrastructure (AWS, Azure), containerization (Docker, Kubernetes), and service mesh architectures. Understanding of security protocols (OAuth, SAML, TLS), authentication/authorization, and regulatory compliance. Demonstrated ability to lead technical projects and mentor engineering teams, plus excellent problem-solving, communication, and collaboration skills. Proficiency in observability tools such as Prometheus, Grafana, and OpenTelemetry. Prior experience with marketplaces and portals. Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
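Because the role centres on IAM and authorization, here is a hedged Python sketch of validating an OAuth-style JWT access token with the PyJWT library; the issuer, audience, and key handling are simplified assumptions, not the platform's actual scheme.

import jwt  # PyJWT

# Illustrative values; a real IAM service would fetch keys from a JWKS endpoint
# and handle rotation, caching, and revocation.
PUBLIC_KEY = open("issuer_public_key.pem").read()
EXPECTED_AUDIENCE = "marketplace-api"
EXPECTED_ISSUER = "https://auth.example.com/"

def validate_token(token: str) -> dict:
    # Raises jwt.InvalidTokenError (or a subclass) if the signature,
    # expiry, audience, or issuer check fails.
    claims = jwt.decode(
        token,
        PUBLIC_KEY,
        algorithms=["RS256"],
        audience=EXPECTED_AUDIENCE,
        issuer=EXPECTED_ISSUER,
    )
    return claims  # e.g. subject, scopes, tenant id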

Posted 1 day ago

Apply

4.0 - 9.0 years

11 - 12 Lacs

Hyderabad

Work from Office


We are seeking a highly skilled DevOps Engineer to join our dynamic development team. In this role, you will be responsible for designing, developing, and maintaining both frontend and backend components of our applications using DevOps practices and associated technologies. You will collaborate with cross-functional teams to deliver robust, scalable, and high-performing software solutions that meet our business needs. The ideal candidate will have a strong background in DevOps, experience with modern frontend frameworks, and a passion for full-stack development.
Requirements: Bachelor's degree in Computer Science, Engineering, or a related field. 4 to 9+ years of experience in full-stack development, with a strong focus on DevOps.
DevOps with AWS Data Engineer - Roles & Responsibilities: Use AWS services like EC2, VPC, S3, IAM, RDS, and Route 53. Automate infrastructure using Infrastructure as Code (IaC) tools like Terraform or AWS CloudFormation. Build and maintain CI/CD pipelines using tools such as AWS CodePipeline, Jenkins, and GitLab CI/CD. Cross-functional collaboration. Automate build, test, and deployment processes for Java applications. Use Ansible, Chef, or AWS Systems Manager for managing configurations across environments. Containerize Java apps using Docker. Deploy and manage containers using Amazon ECS, EKS (Kubernetes), or Fargate. Monitoring and logging using Amazon CloudWatch, Prometheus + Grafana, the ELK Stack (Elasticsearch, Logstash, Kibana), and AWS X-Ray for distributed tracing. Manage access with IAM roles/policies. Use AWS Secrets Manager / Parameter Store for managing credentials. Enforce security best practices, encryption, and audits. Automate backups for databases and services using AWS Backup, RDS snapshots, and S3 lifecycle rules. Implement disaster recovery (DR) strategies. Work closely with development teams to integrate DevOps practices. Document pipelines, architecture, and troubleshooting runbooks. Monitor and optimize AWS resource usage using AWS Cost Explorer, Budgets, and Savings Plans.
Must-Have Skills: Experience working on Linux-based infrastructure. Excellent understanding of Ruby, Python, Perl, and Java. Configuration and management of databases such as MySQL and MongoDB. Excellent troubleshooting skills. Selecting and deploying appropriate CI/CD tools. Working knowledge of various tools, open-source technologies, and cloud services. Awareness of critical concepts in DevOps and Agile principles. Managing stakeholders and external interfaces. Setting up tools and required infrastructure. Defining and setting development, testing, release, update, and support processes for DevOps operation. The technical skills to review, verify, and validate the software code developed in the project.
Interview mode: face-to-face for candidates residing in Hyderabad; Zoom for other states. Location: 43/A, MLA Colony, Road No. 12, Banjara Hills, 500034. Time: 2-4 pm.
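One concrete slice of the backup-automation duties above, sketched in Python with boto3: create a timestamped manual RDS snapshot. The DB instance identifier, region, and naming convention are assumptions; a production job would add retention cleanup, error handling, and scheduling (for example via EventBridge and Lambda).

import boto3
from datetime import datetime, timezone

rds = boto3.client("rds", region_name="ap-south-1")  # region is an assumption

def snapshot_database(db_instance_id: str = "demo-orders-db") -> str:
    # Creates a manual snapshot named with a UTC timestamp,
    # e.g. demo-orders-db-20250101-0300.
    stamp = datetime.now(timezone.utc).strftime("%Y%m%d-%H%M")
    snapshot_id = f"{db_instance_id}-{stamp}"
    rds.create_db_snapshot(
        DBSnapshotIdentifier=snapshot_id,
        DBInstanceIdentifier=db_instance_id,
    )
    return snapshot_id

if __name__ == "__main__":
    print("created snapshot:", snapshot_database())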

Posted 1 day ago

Apply

6.0 - 11.0 years

10 - 14 Lacs

Mumbai

Hybrid


Greetings from #IDESLABS. We have an immediate opening for an SRE.
JD - SRE Client Platform: 7+ years of relevant experience as an SRE/DevOps Engineer. A background in either systems administration or software engineering. Strong experience with major public cloud providers (ideally GCP, but this is not a must-have). Strong experience with Docker and Kubernetes. Strong experience with IaC (Terraform). Strong understanding of GitOps concepts and tools (ideally Flux). Excellent knowledge of technical architecture and modern design patterns, including microservices, serverless functions, NoSQL, RESTful APIs, etc. Ability to set up and support CI/CD pipelines and tooling using GitLab. Proficiency in a high-level programming language such as Python, Ruby, or Go. Experience with monitoring, log aggregation, and alerting tooling (GCP Logging, Prometheus, Grafana).
Additional job description - SRE Data Platform: Linux administration skills and a deep understanding of networking and TCP/IP. Experience with the major cloud providers and Terraform. Knowledge of technical architecture and modern-day design patterns, including microservices, serverless functions, NoSQL, RESTful APIs, etc. Demonstrable skills in a configuration management tool like Ansible. Experience in setting up and supporting CI/CD pipelines and tooling such as GitHub or GitLab CI. Proficiency in a high-level programming language such as Python or Go. Experience with monitoring, log aggregation, and alerting tooling (ELK, Prometheus, Grafana, etc.). Experience with Docker and Kubernetes. Experience with secret management tools like HashiCorp Vault is a plus. Proficient in applying SRE core tenets, including SLI/SLO/SLA measurement, toil elimination, and reliability modeling for optimizing system performance and resilience. Experience with cloud-native tools like Cluster API, service mesh, KEDA, OPA, and Kubernetes Operators. Experience with big data technologies such as NoSQL/RDBMS (PostgreSQL, Oracle, MongoDB), Redis, Spark, RabbitMQ, Kafka, etc. Experience in troubleshooting and monitoring large-scale distributed systems.
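The SRE core tenets mentioned above (SLI/SLO/SLA measurement and error budgets) can be made concrete with a short Python sketch: given an availability SLI already measured over a 30-day window, it reports how much of the error budget a chosen SLO has left. The numbers are illustrative only.

def error_budget_report(slo: float, measured_availability: float, window_days: int = 30):
    # Error budget = unavailability allowed by the SLO over the window.
    budget = 1.0 - slo
    burned = 1.0 - measured_availability
    remaining = budget - burned
    window_minutes = window_days * 24 * 60
    return {
        "budget_minutes": budget * window_minutes,
        "burned_minutes": burned * window_minutes,
        "remaining_minutes": remaining * window_minutes,
        "budget_remaining_pct": (remaining / budget) * 100 if budget else 0.0,
    }

if __name__ == "__main__":
    # Example: a 99.9% SLO with 99.95% measured availability over 30 days
    # leaves half of the 43.2-minute error budget (21.6 minutes) unspent.
    print(error_budget_report(slo=0.999, measured_availability=0.9995))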

Posted 1 day ago

Apply

6.0 - 11.0 years

15 - 25 Lacs

Gurugram

Work from Office


Looking for joiners available to start within 30 days. Preferred candidate profile: 5+ years of DevOps engineering experience. Expertise in Ansible for configuration management and automation. Strong Python scripting for automation and tooling. Hands-on experience with OpenShift or Kubernetes. Proven experience in building and maintaining CI/CD pipelines. Version control with Git. Proficiency in using Prometheus and Grafana for monitoring and alerting. Containerization with Docker. Experience with Jenkins job configuration and pipeline scripting.

Posted 1 day ago

Apply

7.0 - 9.0 years

9 - 13 Lacs

Hyderabad, Pune

Work from Office


Key Responsibilities:
1. Cloud Infrastructure Management: Design, deploy, and manage scalable and secure infrastructure on Google Cloud Platform (GCP). Implement best practices for GCP IAM, VPCs, Cloud Storage, ClickHouse and Apache Superset tool onboarding, and other GCP services.
2. Kubernetes and Containerization: Manage and optimize Google Kubernetes Engine (GKE) clusters for containerized applications. Implement Kubernetes best practices, including pod scaling, resource allocation, and security policies.
3. CI/CD Pipelines: Build and maintain CI/CD pipelines using tools like Cloud Build, Stratus, GitLab CI/CD, or ArgoCD. Automate deployment workflows for containerized and serverless applications.
4. Security and Compliance: Ensure adherence to security best practices for GCP, including IAM policies, network security, and data encryption. Conduct regular audits to ensure compliance with organizational and regulatory standards.
5. Collaboration and Support: Work closely with development teams to containerize applications and ensure smooth deployment on GCP. Provide support for troubleshooting and resolving infrastructure-related issues.
6. Cost Optimization: Monitor and optimize GCP resource usage to ensure cost efficiency. Implement strategies to reduce cloud spend without compromising performance.
Required Skills and Qualifications:
1. Certifications: Must hold a Google Cloud Professional DevOps Engineer certification or Google Cloud Professional Cloud Architect certification.
2. Cloud Expertise: Strong hands-on experience with Google Cloud Platform (GCP) services, including GKE, Cloud Functions, Cloud Storage, BigQuery, and Cloud Pub/Sub.
3. DevOps Tools: Proficiency in DevOps tools like Terraform, Ansible, Stratus, GitLab CI/CD, or Cloud Build. Experience with containerization tools like Docker.
4. Kubernetes Expertise: In-depth knowledge of Kubernetes concepts such as pods, deployments, services, ingress, config maps, and secrets. Familiarity with Kubernetes tools like kubectl, Helm, and Kustomize.
5. Programming and Scripting: Strong scripting skills in Python, Bash, or Go. Familiarity with YAML and JSON for configuration management.
6. Monitoring and Logging: Experience with monitoring tools like Prometheus, Grafana, or Google Cloud Operations Suite.
7. Networking: Understanding of cloud networking concepts, including VPCs, subnets, firewalls, and load balancers.
8. Soft Skills: Strong problem-solving and troubleshooting skills. Excellent communication and collaboration abilities. Ability to work in an agile, fast-paced environment.
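To ground the Python scripting and Cloud Storage items above, a minimal, hedged sketch using the google-cloud-storage client to push an ops report into a bucket; the bucket name and object path are invented for illustration, and credentials come from the ambient environment.

from google.cloud import storage

def upload_report(bucket_name: str = "demo-ops-reports", source: str = "report.csv") -> str:
    # Credentials are taken from GOOGLE_APPLICATION_CREDENTIALS or the
    # attached service account when running on GCP.
    client = storage.Client()
    bucket = client.bucket(bucket_name)
    blob = bucket.blob(f"daily/{source}")
    blob.upload_from_filename(source)
    return f"gs://{bucket_name}/daily/{source}"

if __name__ == "__main__":
    print("uploaded to", upload_report())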

Posted 1 day ago

Apply

8.0 - 12.0 years

5 - 9 Lacs

Hyderabad

Work from Office


Design, build, and maintain our containerization and orchestration solutions using Docker and Kubernetes. Automate deployment, monitoring, and management of applications using Ansible and Python. Collaborate with development teams to ensure seamless integration and deployment. Implement and manage CI/CD pipelines to streamline software delivery. Monitor system performance and troubleshoot issues to ensure high availability and reliability. Ensure security best practices for containerized environments. Provide support and guidance for development and operations teams.
Required Skills and Qualifications: Bachelor's degree in Computer Science, Information Technology, or a related field. Proven experience as a DevOps Engineer or in a similar role. Extensive experience with Docker and Kubernetes. Strong proficiency in Python and Ansible. Solid understanding of CI/CD principles and tools. Familiarity with cloud platforms such as AWS, Azure, or Google Cloud. Excellent problem-solving and troubleshooting skills. Strong communication and teamwork skills.
Preferred Qualifications: Experience with infrastructure-as-code tools like Terraform. Knowledge of monitoring and logging tools (e.g., Prometheus, Grafana, ELK stack). Familiarity with Agile development methodologies. Experience with containerization technologies like Docker and Kubernetes.

Posted 1 day ago

Apply

4.0 - 9.0 years

9 - 13 Lacs

Hyderabad

Work from Office


Key Responsibilities:
1. Cloud Infrastructure Management: Design, deploy, and manage scalable and secure infrastructure on Google Cloud Platform (GCP). Implement best practices for GCP IAM, VPCs, Cloud Storage, ClickHouse and Apache Superset tool onboarding, and other GCP services.
2. Kubernetes and Containerization: Manage and optimize Google Kubernetes Engine (GKE) clusters for containerized applications. Implement Kubernetes best practices, including pod scaling, resource allocation, and security policies.
3. CI/CD Pipelines: Build and maintain CI/CD pipelines using tools like Cloud Build, Stratus, GitLab CI/CD, or ArgoCD. Automate deployment workflows for containerized and serverless applications.
4. Security and Compliance: Ensure adherence to security best practices for GCP, including IAM policies, network security, and data encryption. Conduct regular audits to ensure compliance with organizational and regulatory standards.
5. Collaboration and Support: Work closely with development teams to containerize applications and ensure smooth deployment on GCP. Provide support for troubleshooting and resolving infrastructure-related issues.
6. Cost Optimization: Monitor and optimize GCP resource usage to ensure cost efficiency. Implement strategies to reduce cloud spend without compromising performance.
Required Skills and Qualifications:
1. Certifications: Must hold a Google Cloud Professional DevOps Engineer certification or Google Cloud Professional Cloud Architect certification.
2. Cloud Expertise: Strong hands-on experience with Google Cloud Platform (GCP) services, including GKE, Cloud Functions, Cloud Storage, BigQuery, and Cloud Pub/Sub.
3. DevOps Tools: Proficiency in DevOps tools like Terraform, Ansible, Stratus, GitLab CI/CD, or Cloud Build. Experience with containerization tools like Docker.
4. Kubernetes Expertise: In-depth knowledge of Kubernetes concepts such as pods, deployments, services, ingress, config maps, and secrets. Familiarity with Kubernetes tools like kubectl, Helm, and Kustomize.
5. Programming and Scripting: Strong scripting skills in Python, Bash, or Go. Familiarity with YAML and JSON for configuration management.
6. Monitoring and Logging: Experience with monitoring tools like Prometheus, Grafana, or Google Cloud Operations Suite.
7. Networking: Understanding of cloud networking concepts, including VPCs, subnets, firewalls, and load balancers.
8. Soft Skills: Strong problem-solving and troubleshooting skills. Excellent communication and collaboration abilities. Ability to work in an agile, fast-paced environment.

Posted 1 day ago

Apply

6.0 - 9.0 years

8 - 11 Lacs

Telangana

Work from Office


Job Summary We are looking for a skilled DevOps Engineer with strong experience in Python, Ansible, Docker, and Kubernetes. The ideal candidate will have a proven track record of automating and optimizing processes, deploying and managing containerized applications, and ensuring system reliability and scalability. Key Responsibilities Design, build, and maintain our containerization and orchestration solutions using Docker and Kubernetes. Automate deployment, monitoring, and management of applications using Ansible and Python. Collaborate with development teams to ensure seamless integration and deployment. Implement and manage CI/CD pipelines to streamline software delivery. Monitor system performance and troubleshoot issues to ensure high availability and reliability. Ensure security best practices for containerized environments. Provide support and guidance for development and operations teams. Required Skills and Qualifications Bachelor's degree in Computer Science, Information Technology, or a related field. Proven experience as a DevOps Engineer or in a similar role. Extensive experience with Docker and Kubernetes. Strong proficiency in Python and Ansible. Solid understanding of CI/CD principles and tools. Familiarity with cloud platforms such as AWS, Azure, or Google Cloud. Excellent problem-solving and troubleshooting skills. Strong communication and teamwork skills. Preferred Qualifications Experience with infrastructure-as-code tools like Terraform. Knowledge of monitoring and logging tools (e.g., Prometheus, Grafana, ELK stack). Familiarity with Agile development methodologies. Experience with containerization technologies like Docker and Kubernetes.
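The Ansible-plus-Python automation described in this summary can be sketched, under assumptions, with the ansible-runner library, which drives a playbook from Python; the playbook name, inventory path, and extra variable below are placeholders, not this team's actual setup.

import ansible_runner

def deploy(playbook: str = "deploy_app.yml", inventory: str = "inventories/staging"):
    # Runs the playbook in-process and returns ansible-runner's per-host stats.
    result = ansible_runner.run(
        private_data_dir=".",                 # directory holding playbook and inventory
        playbook=playbook,
        inventory=inventory,
        extravars={"app_version": "1.4.2"},   # illustrative variable
    )
    if result.rc != 0:
        raise RuntimeError(f"playbook failed: status={result.status} rc={result.rc}")
    return result.stats

if __name__ == "__main__":
    print(deploy())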

Posted 1 day ago

Apply

5.0 - 10.0 years

15 - 30 Lacs

Jaipur

Work from Office


Overview of Job Role: We are looking for a skilled and motivated DevOps Engineer to join our growing team. The ideal candidate will have expertise in AWS, CI/CD pipelines, and Terraform, with a passion for building and optimizing scalable, reliable, and secure infrastructure. This role involves close collaboration with development, QA, and operations teams to streamline deployment processes and enhance system performance.
Roles & Responsibilities:
Leadership & Strategy: Lead and mentor a team of DevOps engineers, fostering a culture of automation, innovation, and continuous improvement. Define and implement DevOps strategies aligned with business objectives to enhance scalability, security, and reliability. Collaborate with cross-functional teams, including software engineering, security, MLOps, and infrastructure teams, to drive DevOps best practices. Establish KPIs and performance metrics for DevOps operations, ensuring optimal system performance, cost efficiency, and high availability. Advocate for CPU throttling, auto-scaling, and workload optimization strategies to improve system efficiency and reduce costs. Drive MLOps adoption, integrating machine learning workflows into CI/CD pipelines and cloud infrastructure. Ensure compliance with ISO 27001 standards, implementing security controls and risk management measures.
Infrastructure & Automation: Oversee the design, implementation, and management of scalable, secure, and resilient infrastructure on AWS. Lead the adoption of Infrastructure as Code (IaC) using Terraform, CloudFormation, and configuration management tools like Ansible or Chef. Spearhead automation efforts for infrastructure provisioning, deployment, and monitoring to reduce manual overhead and improve efficiency. Ensure high availability and disaster recovery strategies, leveraging multi-region architectures and failover mechanisms. Manage Kubernetes (or AWS ECS/EKS) clusters, optimizing container orchestration for large-scale applications. Drive cost optimization initiatives, implementing intelligent cloud resource allocation strategies.
CI/CD & Observability: Architect and oversee CI/CD pipelines, ensuring seamless automation of application builds, testing, and deployments. Enhance observability and monitoring by implementing tools like CloudWatch, Prometheus, Grafana, the ELK Stack, or Datadog. Develop robust logging, alerting, and anomaly detection mechanisms to ensure proactive issue resolution.
Security & Compliance (ISO 27001 Implementation): Lead the implementation and enforcement of ISO 27001 security standards, ensuring compliance with information security policies and regulatory requirements. Develop and maintain an Information Security Management System (ISMS) to align with ISO 27001 guidelines. Implement secure access controls, encryption, IAM policies, and network security measures to safeguard infrastructure. Conduct risk assessments, vulnerability management, and security audits to identify and mitigate threats. Ensure security best practices are embedded into all DevOps workflows, following DevSecOps principles. Work closely with auditors and compliance teams to maintain SOC 2, GDPR, and other regulatory frameworks.
Required Skills and Qualifications: 5+ years of experience in DevOps, cloud infrastructure, and automation, with at least 3+ years in a managerial or leadership role. Proven experience managing AWS cloud infrastructure at scale, including EC2, S3, RDS, Lambda, VPC, IAM, and CloudFormation. Expertise in Terraform and Infrastructure as Code (IaC) principles. Strong background in CI/CD pipeline automation with tools like Jenkins, GitHub Actions, GitLab CI, or CircleCI. Hands-on experience with Docker and Kubernetes (or AWS ECS/EKS) for container orchestration. Experience in CPU throttling, auto-scaling, and performance optimization for cloud-based applications. Strong knowledge of Linux/Unix systems, shell scripting, and network configurations. Proven experience with ISO 27001 implementation, ISMS development, and security risk management. Familiarity with MLOps frameworks like Kubeflow, MLflow, or SageMaker, and integrating ML pipelines into DevOps workflows. Deep understanding of observability tools such as the ELK Stack, Grafana, Prometheus, or Datadog. Strong stakeholder management and communication skills, and the ability to collaborate across teams. Experience in regulatory compliance, including SOC 2, ISO 27001, and GDPR.
Professional Attributes: Strong interpersonal and communication skills; an effective team player able to work with individuals at all levels within the organization and build remote relationships. Excellent prioritization skills, the ability to work well under pressure, and the ability to multi-task.

Posted 1 day ago

Apply

5.0 - 8.0 years

15 - 30 Lacs

Gurugram

Work from Office


We are looking for a talented Software Engineer with hands-on experience in Quarkus and Red Hat Fuse to design, develop, and maintain integration solutions. The ideal candidate will have strong proficiency in Java, experience with Kafka-based event streaming, RESTful APIs, relational databases, and CI/CD pipelines deployed on OpenShift Container Platform (OCP). This role requires a developer who is passionate about building robust microservices and integration systems in a cloud-native environment.
Key Responsibilities: Design and develop scalable microservices using the Quarkus framework. Build and maintain integration flows and APIs leveraging Red Hat Fuse (Apache Camel) for enterprise integration patterns. Develop and consume RESTful web services and APIs. Design, implement, and optimize Kafka producers and consumers for real-time data streaming and event-driven architecture. Write efficient, well-documented, and testable Java code adhering to best practices. Work with relational databases (e.g., PostgreSQL, MySQL, Oracle), including schema design, queries, and performance tuning. Collaborate with DevOps teams to build and maintain CI/CD pipelines for automated build, test, and deployment workflows. Deploy and manage applications on OpenShift Container Platform (OCP), including containerization best practices (Docker). Participate in code reviews, design discussions, and agile ceremonies. Troubleshoot and resolve production issues with a focus on stability and performance. Keep up to date with emerging technologies and recommend improvements.
Required Skills & Experience: Strong experience with Java (Java 8 or above) and the Quarkus framework. Expertise in Red Hat Fuse (or Apache Camel) for integration development. Proficiency in designing and consuming REST APIs. Experience with Kafka for event-driven and streaming solutions. Solid understanding of relational databases and SQL. Experience in building and maintaining CI/CD pipelines (e.g., Jenkins, GitLab CI) and automated deployment. Hands-on experience deploying applications to OpenShift Container Platform (OCP). Working knowledge of containerization tools like Docker. Familiarity with microservices architecture, cloud-native development, and agile methodologies. Strong problem-solving skills and the ability to work independently as well as in a team environment. Good communication and documentation skills.

Posted 2 days ago

Apply

8.0 - 13.0 years

15 - 18 Lacs

Bengaluru

Work from Office


Job Description. Position: Senior Analyst / Expert - Network Operations Center. Location: Bengaluru. We are looking for an experienced Senior Network Engineer with a background in network operations. The selected candidate needs to have excellent communication and organizational skills and a willingness to grow with us.
Who are we? Krones Digital Solutions India (KDSI) is a subsidiary of the Krones Group and is a part of the Krones.Digital community. The Krones Group, headquartered in Neutraubling, Germany, plans, develops, and manufactures machines and complete lines for the fields of process technology, bottling, and packaging, plus intralogistics and recycling. Every day, millions of bottles, cans, and containers are "processed" in Krones lines - in the alcoholic and non-alcoholic beverage industries, the dairy and liquid food industry, as well as in the chemical, pharmaceutical, and home & personal care industries. It is quite likely that the bottle of water, cola, or juice in your hand was manufactured on one of the Krones lines! Krones Digital Solutions India is the Technology Competence Centre for Krones, focusing on developing software solutions for the internal organization as well as for the customers of Krones Global.
What is in it for you? You are responsible for configuring, troubleshooting, and implementing various network devices and services (e.g., routers, switches, firewalls, VPN). Your tasks include performing network maintenance and system upgrades, including service packs, patches, hotfixes, and security configurations. You are responsible for configuring and monitoring customer OT on-prem and cloud devices/solutions, ensuring data availability and reliability. You are co-responsible for building a central CMDB/asset management database. You ensure - together with the responsible central departments - the topics of security, case management, and license management. You have hands-on skills in modelling processes to support their implementation in the organization, as well as providing technical guidance. You are a central sparring partner for continuous deployment/delivery and automation. Your tasks also include supporting the technical creation and organizational implementation of the operational concept. Your working style shows independence as well as result-oriented and solution-oriented work. You are comfortable working in a 24x7 environment.
What are we looking for? Must-have skills/experience: a degree in IT or a comparable education; at least 8-15 years of experience in network/OT operations; technical certification equivalent to CCNP or above (multi-vendor certifications highly valued). Tools and technologies: Cisco Prime Infrastructure, FMC, Palo Alto; TLS, IPsec, FlexVPN; PLC, SCADA, OPC UA. Operating systems: Linux, Windows. Incident management: ServiceNow, Salesforce. Added advantage: scripting (Python); infrastructure and IaC; containerization technologies; VMware ESXi hypervisor; DNS, IPAM; monitoring (Prometheus, Grafana).

Posted 2 days ago

Apply

5.0 - 10.0 years

10 - 15 Lacs

Gurugram

Work from Office


Production experience on AWS (IAM, ECS, EC2, VPC, ELB, RDS, Auto Scaling, cost optimisation, Trusted Advisor, GuardDuty, security, etc.). Must have monitoring experience with tools like Nagios, Prometheus, Grafana, Datadog, New Relic, etc. Required candidate profile: must have experience in Linux administration.

Posted 2 days ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot


Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.
