5.0 - 10.0 years
7 - 11 Lacs
Mumbai
Work from Office
We are looking for an experienced Senior Java Developer with a strong background in observability and telemetry to join our talented team. In this role, you will be responsible for designing, implementing, and maintaining robust and scalable solutions that enable us to gain deep insights into the performance, reliability, and health of our systems and applications.
WHAT'S IN IT FOR YOU:
- A pivotal role in the project, with incentives tied to your contribution to its success.
- Work on optimizing the performance of a platform handling data volumes in the range of 5-8 petabytes.
- An opportunity to collaborate with engineers from Google, AWS, and Elastic.
- A path to a future leadership role, building up your own team as you grow with the customer during the project engagement.
- Opportunity for advancement within the company, with clear paths for career progression based on performance and demonstrated capabilities.
- Be part of a company that values innovation and encourages experimentation, where your ideas are heard and your contributions are recognized and rewarded.
- A zero-micromanagement culture where you enjoy accountability and ownership of your tasks.
RESPONSIBILITIES:
- Design, develop, and maintain Java-based microservices and applications with a focus on observability and telemetry.
- Implement best practices for instrumenting, collecting, analyzing, and visualizing telemetry data (metrics, logs, traces) to monitor and troubleshoot system behavior and performance.
- Collaborate with cross-functional teams to integrate observability solutions into the software development lifecycle, including CI/CD pipelines and automated testing frameworks.
- Drive improvements in system reliability, scalability, and performance through data-driven insights and continuous feedback loops.
- Stay up to date with emerging technologies and industry trends in observability, telemetry, and distributed systems to keep our systems at the forefront of innovation.
- Mentor junior developers and provide technical guidance and expertise in observability and telemetry practices.
REQUIREMENTS:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 5+ years of professional experience in software development with a strong focus on Java programming.
- Expertise in observability and telemetry tools and practices, including but not limited to Prometheus, Grafana, Jaeger, the ELK stack (Elasticsearch, Logstash, Kibana), and distributed tracing.
- Solid understanding of microservices architecture, containerization (Docker, Kubernetes), and cloud-native technologies (AWS, Azure, GCP).
- Proficiency in designing and implementing scalable, high-performance, and fault-tolerant systems.
- Strong analytical and problem-solving skills with a passion for troubleshooting complex issues.
- Excellent communication and collaboration skills, with the ability to work effectively in a fast-paced, agile environment.
- Experience with Agile methodologies and DevOps practices is a plus.
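The posting targets Java instrumentation (e.g., Micrometer or OpenTelemetry on the JVM), but the core idea behind metric instrumentation, recording request latencies and deriving percentiles and error rates, can be sketched in a few lines of stdlib Python. The class and method names here are illustrative, not from any specific library.

```python
import math

class RequestRecorder:
    """Minimal in-process metrics sketch: counts requests and errors and
    keeps raw latency samples so percentiles can be derived later."""
    def __init__(self):
        self.count = 0
        self.errors = 0
        self.latencies_ms = []

    def record(self, latency_ms, ok=True):
        self.count += 1
        if not ok:
            self.errors += 1
        self.latencies_ms.append(latency_ms)

    def percentile(self, p):
        # Nearest-rank percentile over the recorded samples.
        ranked = sorted(self.latencies_ms)
        idx = max(0, math.ceil(p / 100 * len(ranked)) - 1)
        return ranked[idx]

    def error_rate(self):
        return self.errors / self.count if self.count else 0.0

rec = RequestRecorder()
for ms in [12, 15, 11, 250, 14, 13, 16, 12, 11, 900]:
    rec.record(ms, ok=ms < 500)

print(rec.percentile(95))   # tail latency dominated by the slowest samples
print(rec.error_rate())
```

Real agents export these numbers to a backend such as Prometheus rather than printing them; the arithmetic is the same.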
Posted 1 month ago
5.0 - 10.0 years
7 - 11 Lacs
Ahmedabad
Work from Office
We are looking for an experienced Senior Java Developer with a strong background in observability and telemetry to join our talented team. In this role, you will be responsible for designing, implementing, and maintaining robust and scalable solutions that enable us to gain deep insights into the performance, reliability, and health of our systems and applications.
WHAT'S IN IT FOR YOU:
- A pivotal role in the project, with incentives tied to your contribution to its success.
- Work on optimizing the performance of a platform handling data volumes in the range of 5-8 petabytes.
- An opportunity to collaborate with engineers from Google, AWS, and Elastic.
- A path to a future leadership role, building up your own team as you grow with the customer during the project engagement.
- Opportunity for advancement within the company, with clear paths for career progression based on performance and demonstrated capabilities.
- Be part of a company that values innovation and encourages experimentation, where your ideas are heard and your contributions are recognized and rewarded.
- A zero-micromanagement culture where you enjoy accountability and ownership of your tasks.
RESPONSIBILITIES:
- Design, develop, and maintain Java-based microservices and applications with a focus on observability and telemetry.
- Implement best practices for instrumenting, collecting, analyzing, and visualizing telemetry data (metrics, logs, traces) to monitor and troubleshoot system behavior and performance.
- Collaborate with cross-functional teams to integrate observability solutions into the software development lifecycle, including CI/CD pipelines and automated testing frameworks.
- Drive improvements in system reliability, scalability, and performance through data-driven insights and continuous feedback loops.
- Stay up to date with emerging technologies and industry trends in observability, telemetry, and distributed systems to keep our systems at the forefront of innovation.
- Mentor junior developers and provide technical guidance and expertise in observability and telemetry practices.
REQUIREMENTS:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 5+ years of professional experience in software development with a strong focus on Java programming.
- Expertise in observability and telemetry tools and practices, including but not limited to Prometheus, Grafana, Jaeger, the ELK stack (Elasticsearch, Logstash, Kibana), and distributed tracing.
- Solid understanding of microservices architecture, containerization (Docker, Kubernetes), and cloud-native technologies (AWS, Azure, GCP).
- Proficiency in designing and implementing scalable, high-performance, and fault-tolerant systems.
- Strong analytical and problem-solving skills with a passion for troubleshooting complex issues.
- Excellent communication and collaboration skills, with the ability to work effectively in a fast-paced, agile environment.
- Experience with Agile methodologies and DevOps practices is a plus.
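Distributed tracing (Jaeger, mentioned above) relies on propagating a trace context between services, standardized by the W3C Trace Context `traceparent` header. A minimal generator for that header format can be sketched with the stdlib; in practice a tracing SDK manages these IDs for you.

```python
import re
import secrets

def make_traceparent(sampled=True):
    """Build a W3C Trace Context 'traceparent' header value:
    version-traceid-spanid-flags, all lowercase hex."""
    trace_id = secrets.token_hex(16)   # 16 random bytes -> 32 hex chars
    span_id = secrets.token_hex(8)     # 8 random bytes  -> 16 hex chars
    flags = "01" if sampled else "00"  # trailing flag bit marks 'sampled'
    return f"00-{trace_id}-{span_id}-{flags}"

header = make_traceparent()
print(header)
# A downstream service parses the same shape back out of the request.
assert re.fullmatch(r"00-[0-9a-f]{32}-[0-9a-f]{16}-0[01]", header)
```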
Posted 1 month ago
4.0 - 9.0 years
6 - 11 Lacs
Hyderabad
Work from Office
ABOUT AMGEN
Amgen harnesses the best of biology and technology to fight the world's toughest diseases and make people's lives easier, fuller, and longer. We discover, develop, manufacture, and deliver innovative medicines to help millions of patients. Amgen helped establish the biotechnology industry more than 45 years ago and remains on the cutting edge of innovation, using technology and human genetic data to push beyond what's known today.
ABOUT THE ROLE
Role Description
We are seeking a detail-oriented and highly skilled Data Engineering Test Automation Engineer with deep expertise in the R&D domain of life sciences to ensure the quality, reliability, and performance of our data pipelines and platforms. The ideal candidate will have a strong background in data testing, ETL validation, and test automation frameworks. You will work closely with data engineers, analysts, and DevOps teams to build robust test suites for large-scale data solutions. This role combines deep technical execution with a solid foundation in QA best practices, including test planning, defect tracking, and test lifecycle management. You will be responsible for designing and executing manual and automated test strategies for complex real-time and batch data pipelines, contributing to the design of automation frameworks, and ensuring high-quality data delivery across our AWS and Databricks-based analytics platforms. The role is highly technical and hands-on, with a strong focus on automation; on data accuracy, completeness, and consistency; and on ensuring data governance practices are seamlessly integrated into development pipelines.
Roles & Responsibilities
- Design, develop, and maintain automated test scripts for data pipelines, ETL jobs, and data integrations.
- Validate data accuracy, completeness, transformations, and integrity across multiple systems.
- Collaborate with data engineers to define test cases and establish data quality metrics.
- Develop reusable test automation frameworks and CI/CD integrations (e.g., Jenkins, GitHub Actions).
- Perform performance and load testing for data systems.
- Maintain test data management and data mocking strategies.
- Identify and track data quality issues, ensuring timely resolution.
- Perform root cause analysis and drive corrective actions.
- Contribute to QA ceremonies (standups, planning, retrospectives) and drive continuous improvement in QA processes and culture.
Must-Have Skills
- Experience in QA roles, with strong exposure to data pipeline validation and ETL testing.
- Domain knowledge of R&D in life sciences.
- Ability to validate data accuracy, transformations, schema compliance, and completeness across systems using PySpark and SQL.
- Strong hands-on experience with Python, and optionally PySpark, for developing automated data validation scripts.
- Proven experience in validating ETL workflows, with a solid understanding of data transformation logic, schema comparison, and source-to-target mapping.
- Experience working with data integration and processing platforms such as Databricks/Snowflake, AWS EMR, Redshift, etc.
- Experience in manual and automated testing of both batch and real-time data pipeline executions.
- Ability to perform performance testing of large-scale, complex data engineering pipelines.
- Ability to troubleshoot data issues independently and collaborate with engineering teams on root cause analysis.
- Strong understanding of QA methodologies, test planning, test case design, and defect lifecycle management.
- Hands-on experience with API testing using Postman, pytest, or custom automation scripts.
- Experience integrating automated tests into CI/CD pipelines using tools like Jenkins, GitHub Actions, or similar.
- Knowledge of cloud platforms such as AWS, Azure, and GCP.
Good-to-Have Skills
- Certifications in Databricks, AWS, Azure, or data QA (e.g., ISTQB).
- Understanding of data privacy, compliance, and governance frameworks.
- Knowledge of UI test automation frameworks such as Selenium, JUnit, and TestNG.
- Familiarity with monitoring/observability tools such as Datadog, Prometheus, or CloudWatch.
Education and Professional Certifications
- Master's degree and 3 to 7 years of Computer Science, IT, or related field experience, OR
- Bachelor's degree and 4 to 9 years of Computer Science, IT, or related field experience.
Soft Skills
- Excellent analytical and troubleshooting skills.
- Strong verbal and written communication skills.
- Ability to work effectively with global, virtual teams.
- High degree of initiative and self-motivation.
- Ability to manage multiple priorities successfully.
- Team-oriented, with a focus on achieving team goals.
- Strong presentation and public speaking skills.
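The source-to-target checks described above (completeness, schema compliance, value integrity) would normally run in PySpark or SQL on Databricks; the logic itself is simple enough to sketch in plain Python. The column names, schema, and helper name below are hypothetical, chosen only for illustration.

```python
def validate_load(source_rows, target_rows, schema):
    """Source-to-target checks an ETL test might automate:
    completeness (row counts), schema compliance (expected columns
    and types), and value integrity (a keyed field comparison)."""
    issues = []
    if len(source_rows) != len(target_rows):
        issues.append(f"row count mismatch: {len(source_rows)} vs {len(target_rows)}")
    for row in target_rows:
        for col, typ in schema.items():
            if col not in row:
                issues.append(f"missing column {col!r}")
            elif not isinstance(row[col], typ):
                issues.append(f"{col!r} has type {type(row[col]).__name__}")
    # Keyed comparison: the same id must carry the same amount through the load.
    src_by_key = {r["id"]: r for r in source_rows}
    for row in target_rows:
        src = src_by_key.get(row["id"])
        if src and src["amount"] != row["amount"]:
            issues.append(f"value drift for id {row['id']}")
    return issues

schema = {"id": int, "amount": float}
source = [{"id": 1, "amount": 10.0}, {"id": 2, "amount": 5.5}]
target = [{"id": 1, "amount": 10.0}, {"id": 2, "amount": 9.9}]
print(validate_load(source, target, schema))   # flags the drifted row for id 2
```

In a real suite each returned issue would become a failed assertion or a defect-tracker entry rather than a printed list.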
Posted 1 month ago
3.0 - 6.0 years
4 - 8 Lacs
Bengaluru
Work from Office
We are looking for a Kibana Subject Matter Expert (SME) to support our Network Operations Center (NOC) by designing, developing, and maintaining real-time dashboards and alerting mechanisms. The ideal candidate will have strong experience working with Elasticsearch and Kibana to visualize key performance indicators (KPIs), system health, and alerts related to NOC-managed infrastructure.
Key Responsibilities:
- Design and develop dynamic and interactive Kibana dashboards tailored for NOC monitoring.
- Integrate various NOC elements such as network devices, servers, applications, and services into Elasticsearch/Kibana.
- Create real-time visualizations and trend reports for system health, uptime, traffic, errors, and performance metrics.
- Configure alerts and anomaly detection mechanisms for critical infrastructure issues using Kibana or related tools (e.g., ElastAlert, Watcher).
- Collaborate with NOC engineers, infrastructure teams, and DevOps to understand monitoring requirements and deliver customized dashboards.
- Optimize Elasticsearch queries and index mappings for performance and data integrity.
- Provide expert guidance on best practices for log ingestion, parsing, and data retention strategies.
- Support troubleshooting and incident response efforts by providing actionable insights through Kibana visualizations.
Primary Skills
- Proven experience as a Kibana SME or in a similar role with a focus on dashboards and alerting.
- Strong hands-on experience with Elasticsearch and Kibana (7.x or higher).
- Experience working with log ingestion tools (e.g., Logstash, Beats, Fluentd).
- Solid understanding of NOC operations and common infrastructure elements (routers, switches, firewalls, servers, etc.).
- Proficiency in JSON, Elasticsearch Query DSL, and Kibana scripting for advanced visualizations.
- Familiarity with alerting frameworks such as ElastAlert, Kibana Alerting, or Watcher.
- Good understanding of Linux-based systems and networking fundamentals.
- Strong problem-solving skills and attention to detail.
- Excellent communication and collaboration skills.
Preferred Qualifications:
- Experience working within telecom, ISP, or large-scale IT operations environments.
- Exposure to Grafana, Prometheus, or other monitoring and visualization tools.
- Knowledge of scripting languages such as Python or Shell for automation.
- Familiarity with SIEM or security monitoring solutions.
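The Elasticsearch Query DSL proficiency asked for above boils down to building JSON request bodies. As one sketch, a NOC alert might search ERROR-level log lines for a service over a trailing window and bucket them per minute; the field names (`service.keyword`, `level.keyword`, `@timestamp`) are assumptions that depend on the actual index mapping.

```python
import json

def error_spike_query(service, minutes=15):
    """Elasticsearch bool query: ERROR-level log lines for one service
    within a trailing window ('now-15m' date math), bucketed per minute
    with a date_histogram aggregation for trend visualization."""
    return {
        "query": {
            "bool": {
                "filter": [
                    {"term": {"service.keyword": service}},
                    {"term": {"level.keyword": "ERROR"}},
                    {"range": {"@timestamp": {"gte": f"now-{minutes}m"}}},
                ]
            }
        },
        "aggs": {
            "per_minute": {
                "date_histogram": {"field": "@timestamp", "fixed_interval": "1m"}
            }
        },
    }

# The same dict would be POSTed to an index's _search endpoint.
print(json.dumps(error_spike_query("edge-router"), indent=2))
```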
Posted 1 month ago
4.0 - 9.0 years
9 - 14 Lacs
Bengaluru
Work from Office
Primary Skills
- Strong hands-on experience with observability tools such as AppDynamics, Dynatrace, Prometheus, Grafana, and the ELK Stack.
- Proficient in AppDynamics setup, including installation, configuration, monitor creation, and integration with ServiceNow, email, and Teams.
- Ability to design and implement monitoring solutions for logs, traces, telemetry, and KPIs.
- Skilled in creating dashboards and alerts for application and infrastructure monitoring.
- Experience with AppDynamics features such as NPM, RUM, and synthetic monitoring.
- Familiarity with AWS and Kubernetes, especially in the context of observability.
- Scripting knowledge in Python or Bash for automation and tool integration.
- Understanding of ITIL processes and APM support activities.
- Good grasp of non-functional requirements such as performance, capacity, and security.
Secondary Skills
- AppDynamics Performance Analyst or Implementation Professional certification.
- Experience with other APM tools such as New Relic, Datadog, or Splunk.
- Exposure to CI/CD pipelines and integration of monitoring into DevOps workflows.
- Familiarity with infrastructure-as-code tools such as Terraform or Ansible.
- Understanding of network protocols and troubleshooting techniques.
- Experience in performance tuning and capacity planning.
- Knowledge of compliance and audit requirements related to monitoring and logging.
- Ability to work in Agile/Scrum environments and contribute to sprint planning from an observability perspective.
Posted 1 month ago
5.0 - 10.0 years
7 - 12 Lacs
Bengaluru
Hybrid
Position Overview:
We are seeking a Senior Software Engineer to help drive our build, release, and testing infrastructure to the next level. You will focus on scaling and optimizing our systems for large-scale, high-performance deployments, reducing build times from days to mere minutes while maintaining high-quality releases. As part of our collaborative, fast-paced engineering team, you will play a pivotal role in delivering tools and processes that support continuous delivery, test-driven development, and agile methodologies.
Key Responsibilities:
- Automation & Tooling Development: Build, maintain, and improve our automated build, release, and testing infrastructure. Your focus will be on developing tools and scripts that automate our deployment pipeline, enabling a seamless and efficient continuous delivery process.
- Cross-functional Collaboration: Collaborate closely with development, QA, and SRE teams to ensure our build infrastructure meets the needs of all teams. Work with teams across the organization to create new tools, processes, and technologies that will streamline and enhance our delivery pipeline.
- Innovative Technology Integration: Stay on top of the latest advancements in cloud technology, automation, and infrastructure tools. You'll have the opportunity to experiment with and recommend new technologies, including AWS services, to enhance our CI/CD system.
- Scaling Infrastructure: Work on scaling our infrastructure to meet the demands of running thousands of automated tests for every commit. Help us reduce compute time from days to minutes, addressing scalability and performance challenges as we grow.
- Continuous Improvement & Feedback Loops: Be a champion for continuous improvement by collecting feedback from internal customers, monitoring the adoption of new tools, and fine-tuning processes to maximize efficiency, stability, and overall satisfaction.
- Process & Project Ownership: Lead the rollout of new tools and processes, from initial development through to full implementation. You'll be responsible for ensuring smooth adoption and delivering value to internal teams.
Required Qualifications:
- 5+ years of experience in software development with strong proficiency in at least one of the following languages: Python, Go, Java, or JavaScript.
- Deep understanding of application development, microservices architecture, and the elements that drive a successful multi-service ecosystem. Familiarity with building and deploying scalable services is essential.
- Strong automation skills: experience scripting and building tools for automation in the context of continuous integration and deployment pipelines.
- Cloud infrastructure expertise: hands-on experience with AWS services (e.g., EC2, S3, Lambda, RDS) and Kubernetes or containerized environments.
- Familiarity with containerization: strong understanding of Docker and container orchestration, with a particular focus on cloud-native technologies.
- Problem-solving mindset: ability to identify, troubleshoot, and resolve technical challenges, particularly in large-scale systems.
- Agile experience: familiarity with Agile methodologies and the ability to collaborate effectively within cross-functional teams to deliver on time and with high quality.
- Collaboration skills: ability to communicate complex technical concepts to both technical and non-technical stakeholders. Strong team-oriented mindset with a focus on delivering value through collaboration.
- Bachelor's degree in Computer Science or a related field, or equivalent professional experience.
Preferred Qualifications:
- Experience with Kubernetes (K8s): in-depth knowledge of Kubernetes architecture and operational experience managing Kubernetes clusters at scale.
- CI/CD expertise: solid experience working with CI/CD pipelines and tools (e.g., Terraform, Ansible, Spinnaker).
- Infrastructure-as-code experience: familiarity with Terraform, CloudFormation, or similar tools for automating cloud infrastructure deployments.
- Container orchestration & scaling: experience with Karpenter or other auto-scaling tools for Kubernetes.
- Monitoring & Logging: familiarity with tools such as Prometheus, Grafana, and CloudWatch for tracking infrastructure performance and debugging production issues.
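One common lever for "thousands of tests per commit" is sharding the suite across parallel CI workers. A deterministic round-robin partition, sketched below with hypothetical test names, keeps shard sizes within one test of each other so no worker becomes the long pole.

```python
def shard_tests(tests, num_shards):
    """Deterministically partition a test list across CI workers.
    Sorting first makes the assignment stable across runs; round-robin
    keeps shard sizes within one test of each other."""
    shards = [[] for _ in range(num_shards)]
    for i, test in enumerate(sorted(tests)):
        shards[i % num_shards].append(test)
    return shards

# Hypothetical suite: each CI worker n would run only shards[n].
tests = [f"test_{i:03d}" for i in range(10)]
for n, shard in enumerate(shard_tests(tests, 4)):
    print(n, shard)
```

Production systems often shard by historical test duration rather than count, but the worker-side contract (same inputs produce the same shard) is identical.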
Posted 1 month ago
9.0 - 10.0 years
11 - 12 Lacs
Hyderabad
Work from Office
We are seeking a highly skilled DevOps Engineer to join our dynamic development team. In this role, you will be responsible for designing, developing, and maintaining both frontend and backend components of our applications using DevOps and associated technologies. You will collaborate with cross-functional teams to deliver robust, scalable, and high-performing software solutions that meet our business needs. The ideal candidate will have a strong background in DevOps, experience with modern frontend frameworks, and a passion for full-stack development.
Requirements:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- 9 to 10+ years of experience in full-stack development, with a strong focus on DevOps.
DevOps with AWS Data Engineer - Roles & Responsibilities:
- Use AWS services such as EC2, VPC, S3, IAM, RDS, and Route 53.
- Automate infrastructure using Infrastructure as Code (IaC) tools such as Terraform or AWS CloudFormation.
- Build and maintain CI/CD pipelines using tools such as AWS CodePipeline, Jenkins, and GitLab CI/CD.
- Automate build, test, and deployment processes for Java applications.
- Use Ansible, Chef, or AWS Systems Manager for managing configurations across environments.
- Containerize Java apps using Docker; deploy and manage containers using Amazon ECS, EKS (Kubernetes), or Fargate.
- Set up monitoring and logging using Amazon CloudWatch, Prometheus + Grafana, the ELK Stack (Elasticsearch, Logstash, Kibana), and AWS X-Ray for distributed tracing.
- Manage access with IAM roles/policies; use AWS Secrets Manager / Parameter Store for managing credentials.
- Enforce security best practices, encryption, and audits.
- Automate backups for databases and services using AWS Backup, RDS Snapshots, and S3 lifecycle rules; implement Disaster Recovery (DR) strategies.
- Work closely with development teams to integrate DevOps practices.
- Document pipelines, architecture, and troubleshooting runbooks.
- Monitor and optimize AWS resource usage.
- Use AWS Cost Explorer, Budgets, and Savings Plans.
Must-Have Skills:
- Experience working on Linux-based infrastructure.
- Excellent understanding of Ruby, Python, Perl, and Java.
- Configuring and managing databases such as MySQL and MongoDB.
- Excellent troubleshooting skills.
- Selecting and deploying appropriate CI/CD tools.
- Working knowledge of various tools, open-source technologies, and cloud services.
- Awareness of critical concepts in DevOps and Agile principles.
- Managing stakeholders and external interfaces.
- Setting up tools and required infrastructure.
- Defining and setting development, testing, release, update, and support processes for DevOps operation.
- Technical skills to review, verify, and validate the software code developed in the project.
Interview Mode: face-to-face for candidates residing in Hyderabad; Zoom for other states.
Location: 43/A, MLA Colony, Road No. 12, Banjara Hills, 500034
Time: 2 - 4 pm
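The backup-automation duty above ultimately reduces to a retention policy: which snapshots to keep, which to expire. A grandfather-father-son style rule ("keep the last N dailies plus one per week") can be sketched in plain Python; the retention windows are illustrative, not from any AWS Backup default.

```python
import datetime as dt

def snapshots_to_keep(dates, keep_daily=7, keep_weekly=4):
    """Backup-retention sketch: keep the newest `keep_daily` daily
    snapshots, plus the newest snapshot of each ISO week for the most
    recent `keep_weekly` weeks. Everything else is eligible to expire."""
    newest_first = sorted(dates, reverse=True)
    keep = set(newest_first[:keep_daily])
    weeks_seen = []
    for d in newest_first:
        wk = d.isocalendar()[:2]          # (ISO year, ISO week)
        if wk not in weeks_seen:
            weeks_seen.append(wk)
            if len(weeks_seen) <= keep_weekly:
                keep.add(d)
    return sorted(keep)

# 30 consecutive daily snapshots collapse to 7 dailies + older weeklies.
days = [dt.date(2024, 1, 1) + dt.timedelta(days=i) for i in range(30)]
print(snapshots_to_keep(days))
```

In AWS the same intent is usually expressed declaratively (S3 lifecycle rules, AWS Backup plans); a script like this is useful for services those tools do not cover.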
Posted 1 month ago
6.0 - 8.0 years
13 - 18 Lacs
Gurugram
Work from Office
Responsibilities:
- Define and enforce SLOs, SLIs, and error budgets across microservices
- Architect an observability stack (metrics, logs, traces) and drive operational insights
- Automate toil and manual ops with robust tooling and runbooks
- Own the incident response lifecycle: detection, triage, RCA, and postmortems
- Collaborate with product teams to build fault-tolerant systems
- Champion performance tuning, capacity planning, and scalability testing
- Optimise costs while maintaining the reliability of cloud infrastructure
Must-have Skills:
- 6+ years in SRE/infrastructure/backend-related roles using cloud-native technologies
- 2+ years in an SRE-specific capacity
- Strong experience with monitoring/observability tools (Datadog, Prometheus, Grafana, ELK, etc.)
- Experience with infrastructure-as-code (Terraform/Ansible)
- Proficiency in Kubernetes, service mesh (Istio/Linkerd), and container orchestration
- Deep understanding of distributed systems, networking, and failure domains
- Expertise in automation with Python, Bash, or Go
- Proficient in incident management, SLAs/SLOs, and system tuning
- Hands-on experience with GCP (preferred)/AWS/Azure and cloud cost optimisation
- Participation in on-call rotations and running large-scale production systems
Nice-to-have Skills:
- Familiarity with chaos engineering practices and tools (Gremlin, Litmus)
- Background in performance testing and load simulation (Gatling, Locust, k6, JMeter)
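The error-budget enforcement mentioned above rests on two small pieces of arithmetic: how much budget remains, and how fast it is being spent (the burn rate). Both follow directly from the SLO target.

```python
def error_budget_remaining(slo, total_requests, failed_requests):
    """Error-budget math: with a 99.9% availability SLO, the budget is
    the 0.1% of requests allowed to fail; what remains is that
    allowance minus the failures observed so far."""
    allowed = (1 - slo) * total_requests
    return allowed - failed_requests

def burn_rate(slo, window_error_rate):
    """Burn rate = observed error rate / budgeted error rate.
    1.0 spends exactly the budget over the SLO window; >1 burns faster
    and is the usual trigger for multi-window burn-rate alerts."""
    return window_error_rate / (1 - slo)

print(error_budget_remaining(0.999, 1_000_000, 400))  # ~600 failures left
print(burn_rate(0.999, 0.002))                         # ~2x: budget gone in half the window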
Posted 1 month ago
6.0 - 10.0 years
15 - 25 Lacs
Gurugram, Bengaluru
Hybrid
What you will be doing
The Site Reliability Engineer (SRE) operates and maintains production systems in the cloud. The primary goal is to make sure the systems are up and running and deliver the expected performance. This involves daily operations tasks of monitoring, deployment, and incident management, as well as strategic tasks like capacity planning, provisioning, and continuous improvement of processes. A major part of the role is design for reliability, scalability, and efficiency, and the automation of everyday system operations tasks. SREs work closely with technical support teams, application developers, and DevOps engineers, both on incident resolution and on the long-term evolution of systems. You will primarily work on creating Terraform, Shell, and Ansible scripts and will take part in application deployments using Azure Kubernetes Service, working with a cybersecurity client/company.
- Monitor production systems' health, usage, and performance using dashboards and monitoring tools.
- Track provisioned resources, infrastructure, and their configuration.
- Perform regular maintenance activities on databases, services, and infrastructure.
- Respond to alerts and incidents: investigate, resolve, or dispatch according to SLAs.
- Respond to emergencies: recover systems and restore services with minimal downtime.
- Coordinate with customer success and engineering teams on incident resolution.
- Perform postmortems after major incidents.
- Change management: perform rollouts, rollbacks, patching, and configuration changes.
- Drive demand forecasting and capacity planning with engineering and customer success teams, considering projected growth and demand spikes.
- Provision production resources according to capacity demands.
- Work with the engineering teams on the design and testing for reliability, scalability, performance, efficiency, and security.
- Track resource utilization and cost-efficiency of production services.
What we're looking for
- BSc/MSc or B.Tech degree in STEM, with 6+ years of relevant industry experience.
- Technical skills: Terraform, Docker Swarm/K8s, Python, Unix/Linux shell scripting, DevOps, GitHub Actions, Azure Active Directory, Azure Monitor & Log Analytics.
- Experience integrating Grafana with Prometheus is an added advantage.
- Strong verbal and written communication skills.
- Ability to perform on-call duties.
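The capacity-planning duty above, factoring in projected growth, often starts with a back-of-the-envelope compounding projection: at the current growth rate, when does usage outrun provisioned capacity? The figures below are illustrative.

```python
def months_until_exhaustion(current_usage, capacity, monthly_growth):
    """Capacity-planning sketch: with compounding monthly growth, count
    whole months until usage exceeds provisioned capacity. Returns None
    if growth is too low to ever exhaust capacity (guarded iteration)."""
    months = 0
    usage = current_usage
    while usage <= capacity:
        usage *= (1 + monthly_growth)
        months += 1
        if months > 600:   # guard against zero or negative growth
            return None
    return months

# A cluster at 70% utilisation growing 8% month over month.
print(months_until_exhaustion(70, 100, 0.08))
```

Real forecasts would also model demand spikes and lead time for provisioning, but this gives the planning horizon a first number.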
Posted 1 month ago
0.0 - 3.0 years
3 - 6 Lacs
Hyderabad
Work from Office
The ideal candidate will have a deep understanding of automation, configuration management, and infrastructure-as-code principles, with a strong focus on Ansible. You will work closely with developers, system administrators, and other collaborators to automate infrastructure-related processes, improve deployment pipelines, and ensure consistent configurations across multiple environments. The Infrastructure Automation Engineer will be responsible for developing innovative self-service solutions for our global workforce and further enhancing our self-service automation built using Ansible. As part of a scaled Agile product delivery team, the Developer works closely with product feature owners, project collaborators, operational support teams, and peer developers and testers to develop solutions that enhance self-service capabilities and solve business problems by identifying requirements and conducting feasibility analysis, proofs of concept, and design sessions. The Developer serves as a subject matter expert on the design, integration, and operability of solutions that support innovation initiatives with business partners and shared-services technology teams. Please note, this is an onsite role based in Hyderabad.
Key Responsibilities:
- Automating repetitive IT tasks: collaborate with multi-functional teams to gather requirements and build automation solutions for infrastructure provisioning, configuration management, and software deployment.
- Configuration management: design, implement, and maintain code, including Ansible playbooks, roles, and inventories, for automating system configurations and deployments and ensuring consistency.
- Ensure the scalability, reliability, and security of automated solutions.
- Troubleshoot and resolve issues related to automation scripts, infrastructure, and deployments.
- Perform infrastructure automation assessments and implementations, providing solutions that increase efficiency, repeatability, and consistency.
- DevOps: facilitate continuous integration and deployment (CI/CD).
- Orchestration: coordinate multiple automated tasks across systems.
- Develop and maintain clear, reusable, and version-controlled playbooks and scripts.
- Manage and optimize cloud infrastructure using Ansible and Terraform automation (AWS, Azure, GCP, etc.).
- Continuously improve automation workflows and practices to enhance speed, quality, and reliability.
- Ensure that infrastructure automation adheres to best practices, security standards, and regulatory requirements.
- Document and maintain processes, configurations, and changes in the automation infrastructure.
- Participate in design reviews, client requirements sessions, and development teams to deliver features and capabilities supporting automation initiatives.
- Collaborate with product owners, collaborators, testers, and other developers to understand, estimate, prioritize, and implement solutions.
- Design, code, debug, document, deploy, and maintain solutions in a highly efficient and effective manner.
- Participate in problem analysis, code review, and system design.
- Remain current on new technology and apply innovation to improve functionality.
- Collaborate closely with collaborators and team members to configure, improve, and maintain current applications.
- Work directly with users to resolve support issues within product team responsibilities.
- Monitor the health, performance, and usage of developed solutions.
What we expect of you
We are all different, yet we all use our unique contributions to serve patients.
Basic Qualifications:
- Bachelor's degree and 0 to 3 years of computer science, IT, or related field experience, OR Diploma and 4 to 7 years of computer science, IT, or related field experience.
- Deep hands-on experience with Ansible, including playbooks, roles, and modules.
- Proven experience as an Ansible engineer or in a similar automation role.
- Scripting skills in Python, Bash, or other programming languages.
- Proficiency in Terraform and CloudFormation for AWS infrastructure automation.
- Experience with other configuration management tools (e.g., Puppet, Chef).
- Experience with Linux administration, scripting (Python, Bash), and CI/CD tools (GitHub Actions, CodePipeline, etc.).
- Familiarity with monitoring tools (e.g., Dynatrace, Prometheus, Nagios).
- Experience working in an Agile (SAFe, Scrum, Kanban) environment.
Preferred Qualifications:
- Red Hat Certified Specialist in Developing with Ansible Automation Platform
- Red Hat Certified Specialist in Managing Automation with Ansible Automation Platform
- Red Hat Certified System Administrator
- AWS Certified Solutions Architect Associate or Professional
- AWS Certified DevOps Engineer Professional
- Terraform Associate Certification
Good-to-Have Skills:
- Experience with Kubernetes (EKS) and service mesh architectures.
- Knowledge of AWS Lambda and event-driven architectures.
- Familiarity with AWS CDK, Ansible, or Packer for cloud automation.
- Exposure to multi-cloud environments (Azure, GCP).
- Experience operating within a validated-systems environment (FDA, European Agency for the Evaluation of Medicinal Products, Ministry of Health, etc.).
Soft Skills:
- Strong analytical and problem-solving skills.
- Effective communication and collaboration with multi-functional teams.
- Ability to work in a fast-paced, cloud-first environment.
Shift Information: This position is an onsite role and may require working during later hours to align with business hours.
Candidates must be willing and able to work outside of standard hours as required to meet business needs.
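Ansible playbooks and roles are YAML, but the inventories mentioned above can also be produced by an executable "dynamic inventory" script that prints JSON: group names mapping to host lists, with per-host variables under `_meta.hostvars`. The group and host names below are made up for illustration.

```python
import json

def build_inventory():
    """Shape of an Ansible dynamic-inventory payload: groups map to
    host lists (optionally with group vars), and per-host connection
    variables live under _meta.hostvars."""
    return {
        "webservers": {"hosts": ["web01", "web02"],
                       "vars": {"http_port": 8080}},
        "dbservers": {"hosts": ["db01"]},
        "_meta": {
            "hostvars": {
                "web01": {"ansible_host": "10.0.0.11"},
                "web02": {"ansible_host": "10.0.0.12"},
                "db01": {"ansible_host": "10.0.0.21"},
            }
        },
    }

if __name__ == "__main__":
    # Ansible invokes the executable with --list and reads stdout,
    # e.g.: ansible-playbook -i ./inventory.py site.yml
    print(json.dumps(build_inventory()))
```

In practice such a script would query a cloud API or CMDB instead of returning a literal, which is what keeps self-service automation consistent with the real environment.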
Posted 1 month ago
6.0 - 10.0 years
15 - 25 Lacs
Gurugram, Bengaluru
Hybrid
What you will be doing
The Site Reliability Engineer (SRE) operates and maintains production systems in the cloud. Their primary goal is to make sure the systems are up and running and provide the expected performance. This involves daily operations tasks of monitoring, deployment and incident management, as well as strategic tasks like capacity planning, provisioning and continuous improvement of processes. A major part of the role is also the design for reliability, scalability and efficiency, and the automation of everyday system operations tasks. SREs work closely with technical support teams, application developers and DevOps engineers, both on incident resolution and on long-term evolution of systems. Employees will primarily work on creating Terraform, Shell and Ansible scripts and will be part of application deployments using Azure Kubernetes Service. Employees will work with a cybersecurity client/company.
- Monitor production systems' health, usage, and performance using dashboards and monitoring tools.
- Track provisioned resources, infrastructure, and their configuration.
- Perform regular maintenance activities on databases, services, and infrastructure.
- Respond to alerts and incidents: investigate, resolve, or dispatch according to SLAs.
- Respond to emergencies: recover systems and restore services with minimal downtime.
- Coordinate with customer success and engineering teams on incident resolution.
- Perform postmortems after major incidents.
- Change management: perform rollouts, rollbacks, patching and configuration changes.
- Drive demand forecasting and capacity planning with engineering and customer success teams, considering projected growth and demand spikes.
- Provision production resources according to capacity demands.
- Work with the engineering teams on the design and testing for reliability, scalability, performance, efficiency, and security.
- Track resource utilization and cost-efficiency of production services.
What we're looking for
BSc/MSc/B.Tech degree in STEM and 3+ years of relevant industry experience. Technical skills: Terraform, Docker Swarm/K8s, Python, Unix/Linux shell scripting, DevOps, GitHub Actions, Azure Active Directory, Azure Monitor & Log Analytics. Experience in integrating Grafana with Prometheus is an added advantage. Strong verbal and written communication skills. Ability to perform on-call duties.
Regards, Kajal Khatri (Kajal@beanhr.com)
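The monitoring and alerting duties in this SRE listing can be illustrated with a minimal, stdlib-only health-check sketch; the endpoint probing, failure window, and thresholds below are illustrative assumptions, not anything specified by the posting.

```python
import urllib.request
import urllib.error

def check_health(url: str, timeout: float = 2.0) -> bool:
    """Return True if the endpoint answers with an HTTP 2xx within the timeout."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 300
    except (urllib.error.URLError, OSError):
        return False

def evaluate_checks(results: list[bool], max_failures: int = 2) -> str:
    """Classify a window of recent probe results into an alert state."""
    failures = results.count(False)
    if failures == 0:
        return "healthy"
    if failures <= max_failures:
        return "degraded"
    return "alert"

if __name__ == "__main__":
    # A hypothetical window of recent probe outcomes; 3 failures exceed the limit.
    window = [True, True, False, True, False, False]
    print(evaluate_checks(window))  # "alert"
```

In practice the probe loop would feed a dashboard or pager integration; the point is only that "respond to alerts according to SLAs" presumes some such classification of raw probe data.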
Posted 1 month ago
3.0 - 6.0 years
6 - 11 Lacs
Pune
Work from Office
Job ID: 200078 | Required Travel: Minimal | Managerial: No | Location: India - Pune (Amdocs Site)
Who are we
Amdocs helps those who build the future to make it amazing. With our market-leading portfolio of software products and services, we unlock our customers' innovative potential, empowering them to provide next-generation communication and media experiences for both the individual end user and enterprise customers. Our employees around the globe are here to accelerate service providers' migration to the cloud, enable them to differentiate in the 5G era, and digitalize and automate their operations. Listed on the NASDAQ Global Select Market, Amdocs had revenue of $5.00 billion in fiscal 2024. For more information, visit www.amdocs.com
In one sentence
Immerse yourself in the design, development, modification, debugging and maintenance of our client's software systems! Engage with specific modules, applications or technologies, and look after sophisticated assignments during the software development process.
What will your job look like
Key responsibilities:
- Design, implement, and maintain CI/CD pipelines to automate the software development lifecycle.
- Collaborate with development, QA, and operations teams to ensure smooth deployment and operation of applications.
- Monitor system performance, troubleshoot issues, and optimize infrastructure for scalability and reliability.
- Implement and manage infrastructure as code (IaC) using tools like Terraform, Ansible, or CloudFormation.
- Ensure security best practices are followed in all aspects of the development and deployment process.
- Manage cloud infrastructure (AWS, Azure, GCP) and on-premises servers.
- Develop and maintain scripts for automation of routine tasks.
- Participate in on-call rotations to provide 24/7 support for critical systems.
All you need is... Proven experience in a DevOps Engineer role. Strong knowledge of CI/CD tools (Jenkins, Bitbucket, GitLab CI, CircleCI, etc.).
Experience with containerization and orchestration tools (Docker, Kubernetes). Proficiency in scripting languages (Python, Bash, etc.). Familiarity with monitoring and logging tools (Prometheus, Grafana, ELK stack). Excellent problem-solving skills and attention to detail. Strong communication and collaboration skills.
Good to have: Experience with microservices architecture. Knowledge of configuration management tools (Chef, Puppet). Certification in cloud platforms (AWS Certified DevOps Engineer, Azure DevOps Engineer Expert).
Behavioral skills: Eagerness and hunger to learn. Good problem-solving and decision-making skills. Good communication skills within the team, the site and with the customer. Ability to stretch working hours, when necessary, to support business needs. Ability to work independently and drive issues to closure. Consult, when necessary, with relevant parties and raise risks in a timely manner.
Why you will love this job: You will be responsible for the integration between a major product infrastructure system and the Amdocs infrastructure system, driving automation that helps teams work smarter and faster. Be a key member of an international, highly skilled and encouraging team with various possibilities for personal and professional development! You will have the opportunity to work in a multinational environment for the global market leader in its field. We are a dynamic, multi-cultural organization that constantly innovates and empowers our employees to grow. Our people are passionate, daring, and phenomenal teammates who stand by each other with a dedication to creating a diverse, inclusive workplace! We offer a wide range of stellar benefits including health, dental, vision, and life insurance as well as paid time off, sick time, and parental leave! Amdocs is an equal opportunity employer. We welcome applicants from all backgrounds and are committed to fostering a diverse and inclusive workforce.
Posted 1 month ago
3.0 - 5.0 years
10 - 15 Lacs
Bengaluru
Work from Office
Job Title: Project & Change Execution Manager, VP | Location: Bangalore, India
Role Description
Vice President Core Engineering (Technical Leadership Role). We are seeking a highly skilled and experienced Vice President of Engineering to lead the design, development, and maintenance of our core software systems and infrastructure. This is a purely technical leadership role, ideal for someone who thrives on solving complex engineering challenges, stays ahead of modern technology trends, and is passionate about software craftsmanship. You will play a pivotal role in shaping our architecture, contributing directly to the codebase, and mentoring engineers across the organization. This role does not involve people management responsibilities, but requires strong collaboration and technical influence. Deutsche Bank's Corporate Bank division is a leading provider of cash management, trade finance and securities finance. We complete green-field projects that deliver the best Corporate Bank - Securities Services products in the world. Our team is diverse, international, and driven by a shared focus on clean code and valued delivery. At every level, agile minds are rewarded with competitive pay, support, and opportunities to excel. You will work as part of a cross-functional agile delivery team. You will bring an innovative approach to software development, focusing on using the latest technologies and practices, as part of a relentless focus on business value. You will be someone who sees engineering as a team activity, with a predisposition to open code, open discussion and creating a supportive, collaborative environment. You will be ready to contribute to all stages of software delivery, from initial analysis right through to production support.
What we'll offer you
As part of our flexible scheme, here are just some of the benefits that you'll enjoy:
- Best in class leave policy
- Gender neutral parental leaves
- 100% reimbursement under childcare assistance benefit (gender neutral)
- Sponsorship for industry-relevant certifications and education
- Employee Assistance Program for you and your family members
- Comprehensive hospitalization insurance for you and your dependents
- Accident and term life insurance
- Complimentary health screening for 35 yrs. and above
Your key responsibilities
- System Design & Development: Architect, develop, and maintain high-performance, scalable software systems using Java.
- Code Contribution: Actively contribute to the codebase, ensuring high standards of quality, performance, and reliability.
- Database Engineering: Design and optimize data-intensive applications using MongoDB, including indexing and query optimization.
- Microservices & Cloud: Implement microservices architecture following established guidelines, deployed on Google Kubernetes Engine (GKE).
- Security & Compliance: Ensure systems comply with security regulations and internal policies.
- Infrastructure Oversight: Review and update policies related to internal systems and equipment.
- Mentorship: Guide and mentor engineers, setting a high bar for technical excellence and best practices.
- Cross-functional Collaboration: Work closely with product managers, architects, and other stakeholders to translate business requirements into scalable technical solutions, including HLD and LLD documentation.
- Process Improvement: Drive best practices in software development, deployment, and operations.
Your skills and experience
Deep expertise in software architecture, cloud infrastructure, and modern development practices. Strong coding skills and a passion for hands-on development. Excellent communication and leadership abilities. 10+ years of professional software development experience, with deep expertise in Java.
Strong experience with MongoDB and building data-intensive applications. Proficiency in Kubernetes and deploying systems at scale in cloud environments, preferably Google Cloud Platform (GCP). Hands-on experience with CI/CD pipelines and with monitoring, logging, and alerting tools (e.g., Prometheus, Grafana, ELK). Solid understanding of reactive or event-driven architectures. Familiarity with Infrastructure as Code (IaC) tools such as Terraform. Experience with modern software engineering practices, including TDD, CI/CD, and Agile methodologies. Front-end knowledge is a plus.
How we'll support you
Training and development to help you excel in your career. Coaching and support from experts in your team. A culture of continuous learning to aid progression. A range of flexible benefits that you can tailor to suit your needs.
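Reliability work of the kind this role describes (deploying at scale, CI/CD, alerting) commonly leans on retry-with-exponential-backoff around transient failures. Below is a generic, stdlib-only sketch; the function names, attempt count, and delays are illustrative assumptions, not anything specified by the posting.

```python
import time
from typing import Callable, TypeVar

T = TypeVar("T")

def retry_with_backoff(fn: Callable[[], T], attempts: int = 4,
                       base_delay: float = 0.01) -> T:
    """Call fn, retrying on exception with exponentially growing pauses."""
    for i in range(attempts):
        try:
            return fn()
        except Exception:
            if i == attempts - 1:
                raise  # budget exhausted: surface the last failure
            time.sleep(base_delay * (2 ** i))  # 0.01s, 0.02s, 0.04s, ...
    raise RuntimeError("unreachable")

# Usage: a call that fails twice, then succeeds on the third attempt.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient failure")
    return "ok"

print(retry_with_backoff(flaky))  # "ok" after two retried failures
```

Production variants usually add jitter to the delay and cap the total wait, so that many clients retrying at once do not synchronize into a thundering herd.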
Posted 1 month ago
3.0 - 5.0 years
35 - 40 Lacs
Pune
Work from Office
Job Title: Lead Engineer, VP | Location: Pune, India
Role Description
Vice President Core Engineering (Technical Leadership Role). We are seeking a highly skilled and experienced Vice President of Engineering to lead the design, development, and maintenance of our core software systems and infrastructure. This is a purely technical leadership role, ideal for someone who thrives on solving complex engineering challenges, stays ahead of modern technology trends, and is passionate about software craftsmanship. You will play a pivotal role in shaping our architecture, contributing directly to the codebase, and mentoring engineers across the organization. This role does not involve people management responsibilities, but requires strong collaboration and technical influence. Deutsche Bank's Corporate Bank division is a leading provider of cash management, trade finance and securities finance. We complete green-field projects that deliver the best Corporate Bank - Securities Services products in the world. Our team is diverse, international, and driven by a shared focus on clean code and valued delivery. At every level, agile minds are rewarded with competitive pay, support, and opportunities to excel. You will work as part of a cross-functional agile delivery team. You will bring an innovative approach to software development, focusing on using the latest technologies and practices, as part of a relentless focus on business value. You will be someone who sees engineering as a team activity, with a predisposition to open code, open discussion and creating a supportive, collaborative environment. You will be ready to contribute to all stages of software delivery, from initial analysis right through to production support.
What we'll offer you
As part of our flexible scheme, here are just some of the benefits that you'll enjoy:
- Best in class leave policy
- Gender neutral parental leaves
- 100% reimbursement under childcare assistance benefit (gender neutral)
- Sponsorship for industry-relevant certifications and education
- Employee Assistance Program for you and your family members
- Comprehensive hospitalization insurance for you and your dependents
- Accident and term life insurance
- Complimentary health screening for 35 yrs. and above
Your key responsibilities
- System Design & Development: Architect, develop, and maintain high-performance, scalable software systems using Java.
- Code Contribution: Actively contribute to the codebase, ensuring high standards of quality, performance, and reliability.
- Database Engineering: Design and optimize data-intensive applications using MongoDB, including indexing and query optimization.
- Microservices & Cloud: Implement microservices architecture following established guidelines, deployed on Google Kubernetes Engine (GKE).
- Security & Compliance: Ensure systems comply with security regulations and internal policies.
- Infrastructure Oversight: Review and update policies related to internal systems and equipment.
- Mentorship: Guide and mentor engineers, setting a high bar for technical excellence and best practices.
- Cross-functional Collaboration: Work closely with product managers, architects, and other stakeholders to translate business requirements into scalable technical solutions, including HLD and LLD documentation.
- Process Improvement: Drive best practices in software development, deployment, and operations.
Your skills and experience
Deep expertise in software architecture, cloud infrastructure, and modern development practices. Strong coding skills and a passion for hands-on development. Excellent communication and leadership abilities. 10+ years of professional software development experience, with deep expertise in Java.
Strong experience with MongoDB and building data-intensive applications. Proficiency in Kubernetes and deploying systems at scale in cloud environments, preferably Google Cloud Platform (GCP). Hands-on experience with CI/CD pipelines and with monitoring, logging, and alerting tools (e.g., Prometheus, Grafana, ELK). Solid understanding of reactive or event-driven architectures. Familiarity with Infrastructure as Code (IaC) tools such as Terraform. Experience with modern software engineering practices, including TDD, CI/CD, and Agile methodologies. Front-end knowledge is a plus.
How we'll support you
Training and development to help you excel in your career. Coaching and support from experts in your team. A culture of continuous learning to aid progression. A range of flexible benefits that you can tailor to suit your needs.
About us and our teams
Please visit our company website for further information: https://www.db.com/company/company.htm
We strive for a culture in which we are empowered to excel together every day. This includes acting responsibly, thinking commercially, taking initiative and working collaboratively. Together we share and celebrate the successes of our people. Together we are Deutsche Bank Group. We welcome applications from all people and promote a positive, fair and inclusive work environment.
Posted 1 month ago
5.0 - 8.0 years
2 - 6 Lacs
Bengaluru
Work from Office
Job Information: Job Opening ID: ZR_2463_JOB | Date Opened: 03/05/2025 | Industry: Other | Work Experience: 5-8 years | Job Title: Node JS Developer | City: Bangalore | Province: Karnataka | Country: India | Postal Code: 560066 | Number of Positions: 1
- 5 - 6 years of experience in Node.js development
- Strong knowledge of JavaScript (ES6+); TypeScript preferred
- Experience with Express.js, Nest.js, or similar frameworks
- Proficiency in working with NoSQL and SQL databases
- Knowledge of microservices architecture and API development
- Familiarity with Docker, Kubernetes, and CI/CD pipelines
- Experience with unit testing and debugging
- Understanding of version control systems like Git/GitHub/GitLab
- Good understanding of security best practices in web applications
- Excellent problem-solving and communication skills
Nice to Have: Experience with GraphQL. Knowledge of message brokers (RabbitMQ, Kafka). Exposure to DevOps and monitoring tools (Prometheus, ELK stack). Prior experience working in Agile/Scrum environments.
Posted 1 month ago
5.0 - 8.0 years
4 - 7 Lacs
Bengaluru
Work from Office
Job Information: Job Opening ID: ZR_2064_JOB | Date Opened: 24/11/2023 | Industry: Technology | Work Experience: 5-8 years | Job Title: Sr Software Engineer | City: Bangalore North | Province: Karnataka | Country: India | Postal Code: 560002 | Number of Positions: 4
Qualifications: BS in Computer Science or equivalent work experience; 5+ years of relevant development experience.
Primary skills:
- Experienced in Java
- Experience in NoSQL, Docker, Kubernetes, Prometheus, Consul, ElasticSearch, Kibana, and other CNCF technologies
- Knowledge of Linux and Bash
- Skilled in crafting distributed systems
- Understanding of concepts such as concurrency, parallelism, and event-driven architecture
- Knowledge of web technologies including REST and gRPC
- Intermediate knowledge of version control tools such as Git and GitLab
Keywords: Java, NoSQL, Docker, Kubernetes, Prometheus, Consul, Elasticsearch, Kibana. Python nice to have.
Posted 1 month ago
5.0 - 8.0 years
13 - 17 Lacs
Pune
Work from Office
Job Information: Job Opening ID: ZR_1862_JOB | Date Opened: 13/04/2023 | Industry: Technology | Work Experience: 5-8 years | Job Title: DevOps Architect / Consultant | City: Pune | Province: Maharashtra | Country: India | Postal Code: 411038 | Number of Positions: 4
- Design containerized and cloud-native microservices architecture
- Plan and deploy modern application platforms and cloud-native platforms
- Good understanding of Agile process and methodology
- Plan and implement solutions and best practices for process automation, security, alerting and monitoring, and availability
- Good understanding of infrastructure-as-code deployments
- Plan and design CI/CD pipelines across multiple environments
- Support and work alongside a cross-functional engineering team on the latest technologies
- Iterate on best practices to increase the quality and velocity of deployments
- Sustain and improve the process of knowledge sharing throughout the engineering team
- Keep updated on modern technologies and trends, and advocate their benefits
- Good team management skills
- Ability to drive goals/milestones while valuing and maintaining strong attention to detail
- Excellent judgement, analytical and problem-solving skills
- Excellent communication skills
- Experience maintaining and deploying highly available, fault-tolerant systems at scale
- Practical experience with containerization and clustering (Kubernetes/OpenShift/Rancher/Tanzu/GKE/AKS/EKS, etc.)
- Version control system experience (e.g. Git, SVN)
- Experience implementing CI/CD (e.g. Jenkins, TravisCI)
- Experience with configuration management tools (e.g. Ansible, Chef)
- Experience with infrastructure-as-code (e.g. Terraform, CloudFormation)
- Expertise with AWS (e.g. IAM, EC2, VPC, ELB, ALB, Autoscaling, Lambda)
- Container registry solutions (Harbor, JFrog, Quay, etc.)
- Operational experience (e.g. HA/backups)
- NoSQL experience (e.g. Cassandra, MongoDB, Redis)
- Good understanding of Kubernetes networking and security best practices
- Monitoring tools like DataDog, or open-source tools like Prometheus, Nagios
- Load balancer knowledge (AVI Networks, NGINX)
Location: Pune / Mumbai [Work from Office]
Posted 1 month ago
3.0 - 5.0 years
4 - 8 Lacs
Mumbai
Work from Office
Job Information: Job Opening ID: ZR_1876_JOB | Date Opened: 14/04/2023 | Industry: Technology | Work Experience: 3-5 years | Job Title: Sr DevOps Engineer | City: Mumbai | Province: Maharashtra | Country: India | Postal Code: 400008 | Number of Positions: 10
- Practical experience with containerization and clustering (Kubernetes/OpenShift/Rancher/Tanzu/GKE/AKS/EKS, etc.)
- Version control system experience (e.g. Git, SVN)
- Experience implementing CI/CD (e.g. Jenkins)
- Experience with configuration management tools (e.g. Ansible, Chef)
- Container registry solutions (Harbor, JFrog, Quay, etc.)
- Good understanding of Kubernetes networking and security best practices
- Monitoring tools like DataDog, or open-source tools like Prometheus, Nagios, ELK
Mandatory skills: Hands-on experience with Kubernetes and Kubernetes networking.
Posted 1 month ago
7.0 - 10.0 years
40 - 45 Lacs
Hyderabad, Bengaluru
Hybrid
We're looking for a Lead Python Developer with deep expertise in FastAPI, microservices architecture, and containerization (Docker/Kubernetes). You will lead backend development and ensure system scalability and security.
Posted 1 month ago
8.0 - 10.0 years
35 - 40 Lacs
Bengaluru
Work from Office
Job Responsibilities:
- Collaborates with Product and Engineering stakeholders to design and build platform services that meet key product and infrastructure requirements
- Produces detailed designs for platform-level services
- Must be able to evaluate software and products against business requirements and turn business requirements into robust technical solutions fitting into corporate standards and strategy
- Designs and implements microservices with thoughtfully defined APIs
- Should be conversant with frameworks and architectures: Spring Boot, Spring Cloud, Spring Batch, messaging frameworks (like Kafka), microservice architecture
- Works with other areas of the technology team to realize end-to-end solutions and estimation for delivery proposals
- Sound understanding of Java concepts and of the technologies in the various architecture tiers (presentation, middleware, data access and integration) to propose solutions using Java/open-source technologies
- Designs modules that are scalable, reusable, modular and secure
- Clearly communicates design decisions, roadblocks and timelines to key stakeholders
- Adheres to all industry best practices and standards for Agile/Scrum frameworks adopted by the organization, including but not limited to daily stand-ups, grooming, planning, retrospectives, sprint reviews, demos, and analytics via systems (JIRA) administration, to directly support initiatives set by Product Management and the organization at large
- Actively participates in production stabilization and leads system software improvements along with team members
Technical Skills: Candidates should have 8+ years of total experience in IT software development/design architecture, including 3+ years as an Architect building distributed, highly available and scalable, microservice-based cloud-native architecture. Experience in one or more open-source Java frameworks such as Spring Boot, Spring Batch, Quartz, Spring Cloud, Spring Security, BPM, etc.
Experience in a single-page web application framework like Angular. Experience with at least one messaging system (Apache Kafka required; RabbitMQ). Experience with at least one RDBMS (MySQL, PostgreSQL, Oracle). Experience with at least one document-oriented DB (MongoDB, preferably Couchbase). Experience with a NoSQL DB like Elasticsearch. Proficient in creating design documents (LLD documents with UML). Good exposure to design patterns, microservices architecture design patterns and 12-factor applications. Experience working with observability/monitoring frameworks (Prometheus/Grafana, ELK) along with any APM tool. Ability to conceptualize end-to-end system components across a wide range of technologies and translate them into architectural design patterns for implementation. Knowledge of security systems like OAuth 2, Keycloak and SAML. Familiarity with source code version control systems like Git/SVN. Experience using, designing, and building REST/gRPC/GraphQL/web service APIs. Production experience with container orchestration (Docker, Kubernetes, CI/CD) and maintaining production environments. Good understanding of public clouds (GCP, AWS, etc.). Good exposure to API gateways and config servers. Familiar with OWASP. Experience in Telecom BSS (Business Support System) for CRM components is an added advantage. Immediate joiner / 30 days.
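The observability requirement in this listing (Prometheus/Grafana plus an APM tool) ultimately means instrumenting code with metrics. As a rough, stdlib-only sketch in the spirit of a Prometheus-style labeled counter (the class, metric and label names here are invented for illustration, not the real client API):

```python
import threading
from collections import defaultdict

class Counter:
    """A minimal in-process, thread-safe labeled counter (illustrative only)."""
    def __init__(self) -> None:
        self._lock = threading.Lock()
        self._values: dict[tuple, float] = defaultdict(float)

    def inc(self, **labels: str) -> None:
        # Sort labels so {"a": "1", "b": "2"} and {"b": "2", "a": "1"} share a key.
        key = tuple(sorted(labels.items()))
        with self._lock:
            self._values[key] += 1.0

    def value(self, **labels: str) -> float:
        return self._values[tuple(sorted(labels.items()))]

# Usage: count HTTP requests by method and status.
requests_total = Counter()
requests_total.inc(method="GET", status="200")
requests_total.inc(method="GET", status="200")
requests_total.inc(method="POST", status="500")
print(requests_total.value(method="GET", status="200"))  # 2.0
```

A real deployment would expose such counters over a scrape endpoint for Prometheus and graph them in Grafana; the sketch only shows the instrumentation idea.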
Posted 1 month ago
3.0 - 8.0 years
12 - 15 Lacs
Pune
Work from Office
Hiring: Alert Monitoring & Tech Support Analyst (health domain preferred). Location: Pune | Experience: 3-10 years | Key skills: Linux, PostgreSQL, troubleshooting, PACS, HL7, DICOM
Posted 1 month ago
4.0 - 8.0 years
6 - 10 Lacs
Hyderabad
Work from Office
Design, build, and maintain our containerization and orchestration solutions using Docker and Kubernetes. Automate deployment, monitoring, and management of applications using Ansible and Python. Collaborate with development teams to ensure seamless integration and deployment. Implement and manage CI/CD pipelines to streamline software delivery. Monitor system performance and troubleshoot issues to ensure high availability and reliability. Ensure security best practices for containerized environments. Provide support and guidance for development and operations teams.
Required Skills and Qualifications: Bachelor's degree in Computer Science, Information Technology, or a related field. Proven experience as a DevOps Engineer or in a similar role. Extensive experience with Docker and Kubernetes. Strong proficiency in Python and Ansible. Solid understanding of CI/CD principles and tools. Familiarity with cloud platforms such as AWS, Azure, or Google Cloud. Excellent problem-solving and troubleshooting skills. Strong communication and teamwork skills.
Preferred Qualifications: Experience with infrastructure-as-code tools like Terraform. Knowledge of monitoring and logging tools (e.g., Prometheus, Grafana, ELK stack). Familiarity with Agile development methodologies.
Posted 1 month ago
7.0 - 12.0 years
9 - 14 Lacs
Mumbai
Work from Office
Skill Profile: SRE Client Platform
7+ years of relevant experience as an SRE/DevOps Engineer. Background in either systems administration or software engineering. Strong experience with major public cloud providers (ideally GCP, but this is not a must-have). Strong experience with Docker and Kubernetes. Strong experience with IaC (Terraform). Strong understanding of GitOps concepts and tools (ideally Flux). Excellent knowledge of technical architecture and modern design patterns, including microservices, serverless functions, NoSQL, RESTful APIs, etc. Ability to set up and support CI/CD pipelines and tooling using GitLab. Proficiency in a high-level programming language such as Python, Ruby or Go. Experience with monitoring, log aggregation and alerting tooling (GCP Logging, Prometheus, Grafana).
Additional: SRE Data Platform
Linux administration skills and a deep understanding of networking and TCP/IP. Experience with the major cloud providers and Terraform. Knowledge of technical architecture and modern-day design patterns, including microservices, serverless functions, NoSQL, RESTful APIs, etc. Demonstrable skills in a configuration management tool like Ansible. Experience in setting up and supporting CI/CD pipelines and tooling such as GitHub or GitLab CI. Proficiency in a high-level programming language such as Python or Go. Experience with monitoring, log aggregation, and alerting tooling (ELK, Prometheus, Grafana, etc.). Experience with Docker and Kubernetes. Experience with secret management tools like HashiCorp Vault is a plus. Proficient in applying SRE core tenets, including SLI/SLO/SLA measurement, toil elimination, and reliability modeling for optimizing system performance and resilience. Experience with cloud-native tools like Cluster API, service mesh, KEDA, OPA, Kubernetes Operators. Experience with big data technologies such as NoSQL/RDBMS (PostgreSQL, Oracle, MongoDB), Redis, Spark, RabbitMQ, Kafka, etc.
Experience in troubleshooting and monitoring large-scale distributed systems
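The SRE core tenets named in this profile (SLI/SLO/SLA measurement) come down to simple error-budget arithmetic. A stdlib-only sketch follows; the 99.9% target, 30-day window, and request counts are made-up figures for illustration.

```python
def error_budget_minutes(slo: float, period_minutes: float = 30 * 24 * 60) -> float:
    """Allowed downtime for a given availability SLO over a period (default 30 days)."""
    return (1.0 - slo) * period_minutes

def budget_burned(total_requests: int, failed_requests: int, slo: float) -> float:
    """Fraction of the error budget consumed by observed failures."""
    allowed_failures = (1.0 - slo) * total_requests
    return failed_requests / allowed_failures if allowed_failures else float("inf")

if __name__ == "__main__":
    # A 99.9% SLO permits roughly 43.2 minutes of downtime per 30 days.
    print(round(error_budget_minutes(0.999), 1))            # 43.2
    # 300 failed requests out of 1M burn 30% of a 99.9% budget.
    print(round(budget_burned(1_000_000, 300, 0.999), 3))   # 0.3
```

Teams typically alert on the *burn rate* (budget consumed per unit time) rather than raw failure counts, so a fast-burning incident pages immediately while slow background errors do not.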
Posted 1 month ago
10.0 - 20.0 years
45 - 50 Lacs
Noida
Work from Office
The Team: We are looking for a highly self-motivated, hands-on platform engineer to lead our DevOps team, focusing on our infrastructure estate and DevOps engineering.
The Impact: This is an excellent opportunity to join us as we transform and harmonize our infrastructure into a unified place, while also developing your skills and furthering your career as we plan to power the markets of the future.
What's in it for you: This is the place to hone your existing infrastructure, DevOps and leadership skills while having the chance to become exposed to fresh and divergent technologies.
Responsibilities:
- Have a strong understanding of large-scale cloud computing solutions, including setting up and configuring a container platform.
- Have experience working with Azure DevOps, Docker and Kubernetes or related cloud technologies.
- Have excellent communication and troubleshooting skills.
- Be able to present solutions to complex problems to technical and non-technical audiences.
- Have a passion to learn new technologies and grow with the team.
- Set up, configure and monitor CI/CD pipelines and the container platform; conduct routine maintenance work for smooth operation with guaranteed uptime.
- Onboard applications onto the container platform as demands come.
- Assist various DEV and QA teams during their development and testing, following the guidelines provided.
- Work closely with other dev leads and the manager in day-to-day operational activities.
- Conduct regular capacity analysis and POCs.
- Develop and maintain the platform automation tools using Terraform, dashboards and utilities (Java, .NET C#, shell scripting, Python, etc.).
- Experience with setting up infrastructure via Infrastructure as Code.
- Lead the team, providing hands-on guidance and a roadmap.
What We're Looking For: Bachelor's degree in computer science, engineering or an equivalent discipline is required. 10+ years of relevant work experience managing platform and/or infrastructure.
Professional-level hands-on experience with Terraform, Git, CI/CD, Docker and containerization. Proficient with modern DevOps tools, including GitLab and GitHub based CI/CD pipelines. Strong experience in application deployment and monitoring. Experience with logging frameworks and strategies; DataDog, Prometheus, Splunk, ELK or similar tools is preferable. Good hands-on experience with Linux/Unix and Windows OS. Hands-on experience with AWS services (IAM, CloudWatch, S3, EC2, Lambda, SQS, SNS, Step Functions and others).
Preferred Qualifications: Excellent communication (written and verbal) and collaboration skills. Excellent presentation skills to senior leadership. Detail-oriented and a great team player. Willing to work providing support coverage for extended hours and leading by example. Willing to learn new technology and acquire new skills.
Posted 1 month ago
10.0 - 15.0 years
35 - 40 Lacs
Hyderabad
Work from Office
The Team: Service Management is a global team that provides specialized technical support across the suite of trade processing and workflow solutions that support all participants in the Data & Research group. The Service Management team works collaboratively, both internally and across our customer base, operating in a sharing and learning culture with a view to building continuous improvement into our processes.

Impact: We are seeking an experienced Service Management professional with a minimum of 10 years' work experience to join the team in India. The role encompasses 2nd-line technical application support and cloud infrastructure management for our Issuer Solutions platforms within the Data & Research group of Market Intelligence. This person will report directly to the Global Manager responsible for application support and will work closely with the global team, contributing to the quality of our support.

Key Management Responsibilities:
- Partner with functional areas within Technology, such as Architecture and Engineering, Business Systems, and Service Delivery (1st and 2nd line), to ensure Global Technology provides efficient and effective IT services and support to our clients.
- Build a culture of collaboration, repeatable quality processes with cost efficiency, and dedication to improving the quality of services delivered through strong working relationships with various stakeholders.
- Drive major incidents from fault logging to resolution, and follow up with root cause analysis.
- Take accountability for service reviews with business and other technology partners, looking for areas where services can be improved.
- Take responsibility for all aspects of the team's training, management, appraisals, and recruitment.
- Implement and enhance robust observability frameworks to monitor system health, performance metrics, and logging across multiple platforms, ensuring high availability and proactive issue detection.
- Manage disaster recovery strategies and incident response plans, conducting regular drills to ensure team readiness and system resilience.
- Provide mentorship and technical leadership to junior SREs and other engineering teams, sharing knowledge and promoting SRE best practices across the organization.

Duties & Accountabilities:
- Handle all support requests, including incident, problem, and change management and business continuity activities, to ensure flawless, quality delivery of services to end users. This is a critical role requiring a highly dedicated individual who can take ownership and provide procedural and technical support to various teams and internal/external stakeholders.
- Provide second-line, client-facing technical support for issues escalated by first-line support teams.
- Apply strong technical skills and good business knowledge, together with investigative techniques and problem-solving skills, to identify and resolve issues efficiently and in a timely manner.
- Work collaboratively with the development team as required for third-line escalation.
- Coordinate with product and delivery teams to ensure the Service Management team is ready for new releases and engaged in the early design of new enhancements.
- Work on initiatives and continuous improvement processes around proactive application health monitoring, reporting, and technical support.

Key Areas of the Team's Responsibilities:
- Proactive monitoring and management of business-critical 24x7 real-time services, rectifying issues in a timely fashion to restore application functionality where required.
- Ensure incidents are correctly processed, assessing business and technical impact and severity.
- Take ownership of application incidents and ensure they are resolved, including retaining ownership of incidents that require 3rd-line or IT change activity to resolve, and ensuring that communication to the business community remains active.
- Application responsibilities will cover application infrastructure, data fixes, user queries, user education, and incident investigation.
- Monitor application event alerts, job schedules, capacity monitors, and performance KPIs.
- Create and own change requests raised to address any of the above issues.
- Work with the functional and technical teams to understand future application deliverables.
- Proactively share knowledge with the team and update the knowledge base with support documentation (Confluence).
- Work to provide services to agreed Service Level Targets and Operating Level Agreements.

Education and Hands-On Experience Required:
- University graduate with a Computer Science or Engineering degree.
- 8-13 years of direct experience in Site Reliability Engineering or DevOps roles, with experience implementing disaster recovery, high availability, and incident response in AWS, Azure, or GCP.
- Minimum of 5 years of direct managerial experience, preferably of global teams across multiple time zones.
- Proficiency with cloud computing environments (AWS/GCP/Azure).
- Good understanding of application support processes.
- Ideally familiar with monitoring tools such as Splunk, CloudWatch, Dotcom, and Monolith.
- Expertise in SQL Server/PostgreSQL: proficiency in advanced SQL techniques, query optimization, and experience with complex database systems.
- Experience with advanced observability tools (e.g., Prometheus, Grafana, Splunk, DataDog) for monitoring, logging, and tracing.
- Experience leading post-mortem analyses and implementing preventative measures to avoid recurrence of incidents.
- Excellent problem-solving skills and the capacity to lead effectively under pressure during incident response and outage management.
- Must understand operating systems, especially Windows and Linux.
- Good scripting experience (preferably including Python) is an advantage.
- Must be knowledgeable in programming languages and the SDLC, with experience raising development bugs, including priority assessment, high-quality analysis, and detailed investigation.
- Understanding of agile methodology is an advantage.
- Ideally would have experience working in the finance industry and/or experience with S&P Global products.
Posted 1 month ago