
127 Helm Charts Jobs - Page 5

Set up a Job Alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

6.0 - 11.0 years

8 - 12 Lacs

Pune, Bengaluru

Work from Office

Location: Pune / Bangalore (Onsite Only)
Experience: 6+ Years (4+ years relevant in Camunda)
Note: No Remote Option | Subcon Role | Must be ready to work as a Subcon
Screening: Strict profile screening before submission

Primary Skills & Qualifications:
Hands-on experience with Camunda v8 (design, coding, debugging)
Ability to translate business requirements into Camunda workflows
Proficient in Java, Spring Boot, and Microservices architecture
REST / JSON API integration experience
Exposure to QA, Automation, and CI/CD pipelines
Familiar with DevOps tools: Kubernetes, Terraform, Helm charts, EKS

Good to have: Frontend experience with ReactJS or Angular

Soft Skills:
Strong communication and stakeholder management
Effective collaboration with cross-functional teams
Problem-solving and debugging capabilities

Posted 1 month ago

Apply

12.0 - 17.0 years

16 - 20 Lacs

Bengaluru

Work from Office

Location: Bangalore, India | Posted 9 days ago | Job requisition ID: 30604

FICO (NYSE: FICO) is a leading global analytics software company, helping businesses in 100+ countries make better decisions. Join our world-class team today and fulfill your career potential!

The Opportunity
"We are seeking an experienced DevOps Engineer to join our development team to assist in the continuing evolution of our Platform Orchestration product. You will be able to demonstrate the required potential and technical curiosity to work on software that utilizes a range of leading-edge technologies and integration frameworks. Staff training, investment and career growth form an important part of our team ethos. Consequently, you will gain exposure to different software validation techniques supported by industry-standard engineering processes that will help to grow your skills and experience." - VP, Software Engineering.

What You'll Contribute
Build and maintain CI/CD pipelines for multi-tenant deployments using Jenkins and GitOps practices.
Manage Kubernetes infrastructure (AWS EKS), Helm charts, and service mesh configurations (Istio).
Use kubectl, Lens, or other dashboards for real-time workload inspection and troubleshooting.
Evaluate security, stability, compatibility, scalability, interoperability, monitorability, resilience, and performance of our software.
Support development and QA teams with code merge, build, install, and deployment environments.
Ensure continuous improvement of the software automation pipeline to increase build and integration efficiency.
Oversee and maintain the health of software repositories and build tools, ensuring successful and continuous software builds.
Verify final software release configurations, ensuring integrity against specifications, architecture, and documentation.
Perform fulfillment and release activities, ensuring timely and reliable deployments.

What We're Seeking
A Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
8-12 years of hands-on experience in DevOps or SRE roles for cloud-native Java-based platforms.
Deep knowledge of AWS Cloud Services (EKS, IAM, CloudWatch, S3, Secrets Manager), including networking and security components.
Strong experience with Kubernetes, Helm, ConfigMaps, Secrets, and Kustomize.
Expertise in authoring and maintaining Jenkins pipelines integrated with security and quality scanning tools.
Hands-on experience with infrastructure provisioning tools such as Docker and CloudFormation.
Familiarity with CI/CD pipeline tools and build systems including Jenkins and Maven.
Experience administering software repositories such as Git or Bitbucket.
Proficient in scripting/programming languages such as Ruby, Groovy, and Java.
Proven ability to analyze and resolve issues related to performance, scalability, and reliability.
Solid understanding of DNS, Load Balancing, SSL, TCP/IP, and general networking and security best practices.

Our Offer to You
An inclusive culture strongly reflecting our core values: Act Like an Owner, Delight Our Customers and Earn the Respect of Others.
The opportunity to make an impact and develop professionally by leveraging your unique strengths and participating in valuable learning experiences.
Highly competitive compensation, benefits and rewards programs that encourage you to bring your best every day and be recognized for doing so.
An engaging, people-first work environment offering work/life balance, employee resource groups, and social events to promote interaction and camaraderie.

Why Make a Move to FICO
At FICO, you can develop your career with a leading organization in one of the fastest-growing fields in technology today: Big Data analytics. You'll play a part in our commitment to help businesses use data to improve every choice they make, using advances in artificial intelligence, machine learning, optimization, and much more. FICO makes a real difference in the way businesses operate worldwide:
Credit Scoring: FICO Scores are used by 90 of the top 100 US lenders.
Fraud Detection and Security: 4 billion payment cards globally are protected by FICO fraud systems.
Lending: 3/4 of US mortgages are approved using the FICO Score.
Global trends toward digital transformation have created tremendous demand for FICO's solutions, placing us among the world's top 100 software companies by revenue. We help many of the world's largest banks, insurers, retailers, telecommunications providers and other firms reach a new level of success. Our success is dependent on really talented people just like you who thrive on the collaboration and innovation that's nurtured by a diverse and inclusive environment. We'll provide the support you need, while ensuring you have the freedom to develop your skills and grow your career. Join FICO and help change the way business thinks! Learn more about how you can fulfil your potential at FICO.
FICO promotes a culture of inclusion and seeks to attract a diverse set of candidates for each job opportunity. We are an equal employment opportunity employer and we're proud to offer employment and advancement opportunities to all candidates without regard to race, color, ancestry, religion, sex, national origin, pregnancy, sexual orientation, age, citizenship, marital status, disability, gender identity or Veteran status. Research has shown that women and candidates from underrepresented communities may not apply for an opportunity if they don't meet all stated qualifications. While our qualifications are clearly related to role success, each candidate's profile is unique and strengths in certain skill and/or experience areas can be equally effective. If you believe you have many, but not necessarily all, of the stated qualifications we encourage you to apply. Information submitted with your application is subject to the FICO Privacy Policy.

Posted 1 month ago

Apply

6.0 - 11.0 years

10 - 15 Lacs

Pune

Work from Office

Capgemini Invent
Capgemini Invent is the digital innovation, consulting and transformation brand of the Capgemini Group, a global business line that combines market-leading expertise in strategy, technology, data science and creative design, to help CxOs envision and build what's next for their businesses.

Your Role
Cloud Platforms: Proficiency in AWS for managing cloud infrastructure.
Containerization: Expertise in Kubernetes and Docker for container management.
CI/CD Tools: Experience with Jenkins for building and deploying pipelines.
Deployment: Experience with Helm Charts or similar technology for managing Kubernetes applications.
Scripting Languages: Proficiency in scripting languages such as Groovy/Python and PowerShell/Bash for automation tasks.

Your Profile
Experienced DevOps Platform Engineer in designing, implementing, and optimizing DevOps platforms with a strong emphasis on automation, CI/CD pipelines, containerization, and infrastructure as code. The Engineer will work closely with the platform team to analyse the existing DevOps platform, identify gaps, and provide recommendations to enhance efficiency and reliability. Ensuring the implementation of these recommendations aligns with industry best practices is key.

What you'll love about working here
We recognize the significance of flexible work arrangements to provide support. Be it remote work or flexible work hours, you will get an environment to maintain a healthy work-life balance. At the heart of our mission is your career growth. Our array of career growth programs and diverse professions are crafted to support you in exploring a world of opportunities. Equip yourself with valuable certifications in the latest technologies such as Generative AI.

About Capgemini
Capgemini is a global business and technology transformation partner, helping organizations to accelerate their dual transition to a digital and sustainable world, while creating tangible impact for enterprises and society. It is a responsible and diverse group of 340,000 team members in more than 50 countries. With its strong over 55-year heritage, Capgemini is trusted by its clients to unlock the value of technology to address the entire breadth of their business needs. It delivers end-to-end services and solutions leveraging strengths from strategy and design to engineering, all fueled by its market-leading capabilities in AI, cloud and data, combined with its deep industry expertise and partner ecosystem. The Group reported 2023 global revenues of €22.5 billion.

Posted 1 month ago

Apply

4.0 - 9.0 years

0 - 3 Lacs

Visakhapatnam, Hyderabad

Work from Office

Key Responsibilities:
Cloud Platform: GCP
Infrastructure Automation: Design, implement, and manage infrastructure as code using Terraform to provision and manage GCP resources.
Container Orchestration: Deploy and manage Kubernetes clusters, ensuring efficient operation of containerized applications.
Continuous Integration/Continuous Deployment (CI/CD): Develop and maintain CI/CD pipelines using Jenkins to automate application build, test, and deployment processes.
Containerization: Collaborate with development teams to containerize applications using Docker and manage deployments with Helm Charts.
Code Quality Assurance: Integrate and manage SonarQube to ensure code quality and security standards are met.
Monitoring and Logging: Implement and manage monitoring solutions using Datadog to ensure system health, performance, and security.
Collaboration: Work closely with cross-functional teams, including developers, QA, and operations, to streamline processes and improve productivity.

Requirements:
Experience: 5+ years in DevOps or cloud engineering roles, with at least 3 years of relevant experience in the specified technologies.
Technical Proficiency:
Hands-on experience with GCP services and architecture.
Proficiency in Terraform for infrastructure as code implementations.
Strong understanding and experience with Kubernetes and Docker.
Experience in setting up and managing CI/CD pipelines using Jenkins.
Familiarity with Helm Charts for application deployment.
Experience with SonarQube for code quality analysis.
Proficiency in monitoring and logging tools, particularly Datadog.
Scripting Skills: Proficiency in scripting languages such as Bash or Python is an added advantage.
Strong problem-solving abilities and analytical thinking.
Excellent communication skills, both verbal and written.
Ability to work collaboratively in a team environment.
Strong organizational and time management skills.

Posted 1 month ago

Apply

7.0 - 9.0 years

27 - 42 Lacs

Pune

Work from Office

Primary & Mandatory Skill: Kubernetes Administrator and Helm Chart
Certification Mandatory: CKA (Certified Kubernetes Administrator) OR CKAD (Certified Kubernetes Application Developer)
Level: SA/M
Client Round (Yes/No): Yes
Location Constraint if any: PAN India
Shift timing: General shift

JD:
Should have a very good understanding of the components of various types of Kubernetes clusters (Community/AKS/GKE/OpenShift)
Should have provisioning experience with various types of Kubernetes clusters (Community/AKS/GKE/OpenShift)
Should have upgrade and monitoring experience with various types of Kubernetes clusters (Community/AKS/GKE/OpenShift)
Should have good experience in sizing Kubernetes clusters
Should have very good experience with Container Security & Container Storage
Should have hands-on development experience in Go, JavaScript, or Java
Should have very good experience with CI/CD workflows (preferably Azure DevOps, Ansible and Jenkins)
Should have good experience/knowledge of cloud platforms, preferably Azure / Google / OpenStack
Should have a good understanding of application life cycle management on container platforms
Should have a very good understanding of container registries
Should have a very good understanding of Helm and Helm Charts
Should have a very good understanding of container monitoring tools like Prometheus, Grafana and ELK
Should have very good experience with the Linux operating system
Should have a basic understanding of enterprise networks and container networks
Should be able to handle Severity#1 and Severity#2 incidents
Very good communication skills
Should have analytical and problem-solving capabilities, ability to work with teams
Good to have knowledge of ITIL Process
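For context on what day-to-day cluster-administration automation of this kind can look like, the following is a minimal Go sketch (not part of the posting) that uses client-go to list a cluster's nodes and print each node's kubelet version and allocatable CPU/memory — the raw inputs for the sizing and upgrade-planning work described above. The kubeconfig path is an assumption (the standard `~/.kube/config` default) and should be adjusted for your environment.

```go
package main

import (
	"context"
	"fmt"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	// Load the local kubeconfig (the same credentials kubectl uses).
	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
	config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// List every node and print its kubelet version and allocatable resources,
	// a quick input for cluster sizing and upgrade planning.
	nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Allocatable.Cpu()
		mem := n.Status.Allocatable.Memory()
		fmt.Printf("%-30s kubelet=%s cpu=%s memory=%s\n",
			n.Name, n.Status.NodeInfo.KubeletVersion, cpu.String(), mem.String())
	}
}
```

The same listing works against Community, AKS, GKE, or OpenShift clusters, since it only relies on the core Kubernetes API.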

Posted 1 month ago

Apply

2.0 - 5.0 years

7 - 12 Lacs

Gurugram

Work from Office

Red Hat OpenShift Engineer with 3+ years of hands-on experience in Red Hat OpenShift. The ideal candidate will be responsible for managing, configuring, and maintaining container orchestration and cloud infrastructure environments to support enterprise-grade applications and services.

Key Responsibilities:
Deploy, configure, and maintain OpenShift clusters in production and development environments.
Monitor system performance, availability, and capacity planning.
Automate infrastructure provisioning and application deployment using CI/CD pipelines.
Troubleshoot and resolve issues related to container orchestration, cloud networking, and virtualized environments.
Implement security best practices for containerized and cloud-native applications.
Collaborate with development, QA, and operations teams to ensure seamless delivery pipelines.
Create and maintain documentation related to architecture, processes, and troubleshooting.

Required Skills:
Strong hands-on experience with Red Hat OpenShift (v4.x preferred).
Experience with Kubernetes concepts, Helm charts, and Operators.
Familiarity with Linux system administration (RHEL/CentOS).
Proficiency in scripting languages like Bash, Python, or Ansible.
Understanding of CI/CD pipelines and tools like Jenkins, GitLab CI, or Tekton.
Knowledge of cloud networking, load balancers, firewalls, and DNS.

Preferred Qualifications:
RHCSA/RHCE or OpenShift certification (EX280/EX180).
Exposure to monitoring tools such as Prometheus, Grafana, or the ELK stack.
Experience with GitOps workflows (e.g., ArgoCD or Flux).
Basic understanding of ITIL processes and DevOps culture.

Education: Bachelor's degree in Computer Science, Information Technology, or a related field.

Posted 1 month ago

Apply

6.0 - 10.0 years

8 - 12 Lacs

Pune, Bengaluru

Work from Office

Experience: 6+ Years (4+ years relevant in Camunda)
Note: No Remote Option | Subcon Role | Must be ready to work as a Subcon
Screening: Strict profile screening before submission

Primary Skills & Qualifications:
Hands-on experience with Camunda v8 (design, coding, debugging)
Ability to translate business requirements into Camunda workflows
Proficient in Java, Spring Boot, and Microservices architecture
REST / JSON API integration experience
Exposure to QA, Automation, and CI/CD pipelines
Familiar with DevOps tools: Kubernetes, Terraform, Helm charts, EKS

Good to have: Frontend experience with ReactJS or Angular

Soft Skills:
Strong communication and stakeholder management
Effective collaboration with cross-functional teams
Problem-solving and debugging capabilities

Total Experience:
Relevant Camunda Experience:
Current CTC:
Expected CTC:
Preferred Location (Pune/Bangalore):
Notice Period:
Willing to join as Subcon (Yes/No):

Email to: navaneetha@suzva.com
Contact: 9032956160

Posted 2 months ago

Apply

6.0 - 10.0 years

8 - 12 Lacs

Pune, Bengaluru

Work from Office

Note: No Remote Option | Subcon Role | Must be ready to work as a Subcon
Screening: Strict profile screening before submission

Primary Skills & Qualifications:
Hands-on experience with Camunda v8 (design, coding, debugging)
Ability to translate business requirements into Camunda workflows
Proficient in Java, Spring Boot, and Microservices architecture
REST / JSON API integration experience
Exposure to QA, Automation, and CI/CD pipelines
Familiar with DevOps tools: Kubernetes, Terraform, Helm charts, EKS

Good to have: Frontend experience with ReactJS or Angular

Soft Skills:
Strong communication and stakeholder management
Effective collaboration with cross-functional teams
Problem-solving and debugging capabilities

Posted 2 months ago

Apply

5.0 - 10.0 years

15 - 22 Lacs

Hyderabad, Ahmedabad

Hybrid

Job Title: DevOps Engineer
Location: Hyderabad & Ahmedabad
Employment Type: Full-Time
Work Model: 3 days from office
Experience Required: 5-8 years in DevOps engineering roles with proven expertise in CI/CD, infrastructure automation, and Kubernetes.

Mandatory:
• OS: Linux
• Cloud: GCP (Compute Engine, Load Balancing, GKE, IAM)
• CI/CD: Jenkins, GitHub Actions, Argo CD
• Containers: Docker, Kubernetes
• IaC: Terraform, Helm
• Monitoring: Prometheus, Grafana, ELK
• Security: Vault, Trivy, OWASP concepts

Nice to Have:
• Service Mesh (Istio), Pub/Sub, API Gateway (Kong)
• Advanced scripting (Python, Bash, Node.js)
• Skywalking, Rancher, Jira, Freshservice

Scope:
• Own CI/CD strategy and configuration
• Implement DevSecOps practices
• Drive an automation-first culture

Roles and Responsibilities:
• Design and implement end-to-end CI/CD pipelines using Jenkins, GitHub Actions, and Argo CD for production-grade deployments.
• Define branching strategies and workflow templates for development teams.
• Automate infrastructure provisioning using Terraform, Helm, and Kubernetes manifests across multiple environments (see the sketch below).
• Implement and maintain container orchestration strategies on GKE, including Helm-based deployments.
• Manage the secrets lifecycle using Vault and integrate it with CI/CD for secure deployments.
• Integrate DevSecOps tools like Trivy, SonarQube, and JFrog into CI/CD workflows.
• Collaborate with engineering leads to review deployment readiness and ensure quality gates are met.
• Monitor infrastructure health and capacity planning using Prometheus, Grafana, and Datadog; implement alerting rules.
• Implement auto-scaling, self-healing, and other resilience strategies in Kubernetes.
• Drive process documentation, review peer automation scripts, and provide mentoring to junior DevOps engineers.

If interested, contact 9346538450 or sowmya.v@acesoftlabs.com
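As a rough illustration of the Helm-based, environment-specific deployment automation the posting describes, here is a minimal Go sketch that shells out to the Helm CLI with per-environment values files. The release name, chart path, and values-file layout are illustrative assumptions, not part of the role description; the same command is often run directly from a CI job instead.

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// deploy runs an idempotent Helm release for a given environment.
// Release name, chart path, and values file names are hypothetical.
func deploy(env string) error {
	args := []string{
		"upgrade", "--install", "payments-api", "./charts/payments-api",
		"--namespace", env,
		"--create-namespace",
		"--values", fmt.Sprintf("values/%s.yaml", env), // per-environment overrides
		"--atomic", // roll back automatically if the release fails
		"--timeout", "5m",
	}
	cmd := exec.Command("helm", args...)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	return cmd.Run()
}

func main() {
	env := "staging"
	if len(os.Args) > 1 {
		env = os.Args[1]
	}
	if err := deploy(env); err != nil {
		fmt.Fprintf(os.Stderr, "deploy to %s failed: %v\n", env, err)
		os.Exit(1)
	}
}
```

The `--atomic` flag gives a simple rollback guarantee per release, which pairs naturally with the quality gates mentioned in the responsibilities.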

Posted 2 months ago

Apply

2.0 - 7.0 years

12 - 22 Lacs

Gurugram, Bengaluru

Work from Office

Lenskart - Tech@Lenskart DevOps
Experience: You are a person whose day-to-day job would involve writing Python scripts and making infrastructure changes via Terraform.
Strong understanding of Linux. We breathe on Linux.
Must have a knack for automating manual efforts. If you have to do something more than 3 times manually, you are the person who would hate it.
Engage with cross-functional teams in design, development and implementation of DevOps capabilities related to enabling higher developer productivity, environment monitoring and self-healing.
Should have good knowledge of AWS.
Excellent troubleshooting skills, as this is part of the day-to-day work.
Working knowledge of Kubernetes and Docker (any container technology) in production.
Understanding of CI/CD pipelines: how they work and how they can be implemented.
Should have a knack for identifying performance bottlenecks and maturing the monitoring and alerting systems.
Good knowledge of monitoring and logging tools like Grafana / Prometheus / ELK / Sumologic / New Relic.
Ability to work on-call and respond to production failures.
Should be self-motivated, as most of the time the person has to drive a project, find performance issues or do POCs independently.
You are a person who will be happy to write articles about your learnings and share them within the company and in the community.
You might be a person who is ready to challenge the architecture for longer performance gains.
You know how SSL/TCP-IP/VPN/CDN/DNS/Load Balancing works.

Essential skills:
1. B.E./B.Tech in CS/IT or equivalent technical qualifications.
2. Knowledge of Amazon Web Services (AWS) would be a big plus.
3. Experience in administering/managing Windows or Linux systems.
4. Hands-on experience in AWS, Jenkins, Git, Chef.
5. Experience with various application servers (Apache, Nginx, Varnish, etc.).
6. Experience in Python, Chef & Terraform, Kubernetes, Docker. Experience installing, upgrading, and maintaining application servers on the Linux platform.

Posted 2 months ago

Apply

7.0 - 9.0 years

17 - 22 Lacs

Bengaluru

Hybrid

Dear Candidate,
EY is currently hiring for a Lead DevOps Engineer role with 6+ years of relevant experience in the skills mentioned below, for EY India - Bengaluru location. ONLY immediate joiners or candidates with a 30-day notice period will be considered.

Key Responsibilities:
Infrastructure Management: Design, implement, and manage scalable infrastructure solutions in Kubernetes for optimal performance and reliability.
Monitoring and Optimization: Monitor Kubernetes clusters for service availability, scaling, and resource optimization to meet SLA requirements.
Automation: Automate scaling of services using Horizontal Pod Autoscaler (HPA) and Cluster Autoscaler (see the sketch below).
CI/CD Development: Develop and maintain CI/CD pipelines for automated deployment, testing, and delivery of infrastructure and services.
Service Management: Set up and manage self-hosted services (MQTT, Kafka, Redis, databases, Nginx) within Kubernetes clusters.
Alerting and Monitoring: Implement alerting and monitoring solutions using Prometheus, Grafana, and Loki for continuous observability.
Deployment and Maintenance: Handle deployment, maintenance, and upgrades of stateful and stateless services across environments.
Cost Optimization: Optimize Kubernetes workloads for cost efficiency, reliability, and performance.
Log Management: Design and implement log aggregation solutions using Loki for centralized log management.
Collaboration: Work with cross-functional teams to troubleshoot and resolve infrastructure issues while adhering to SLA requirements.
Security Compliance: Ensure compliance with IT security standards and pass security assessments and penetration tests.
High Availability: Maintain high availability and performance of production systems through proactive management.

Qualifications:
BE/B.Tech/BS/MS/PhD in Computer Science, Information Technology, or a related field.
7-9 years of professional experience in Software Engineering, with 5-6 years in DevOps and Kubernetes.
Experience managing self-hosted services in Kubernetes (MQTT, Kafka, Redis, databases, Nginx).
Proficiency in using Helm charts for application packaging and deployment.
Experience with monitoring and alerting tools like Prometheus, Grafana, and Loki.
Expertise in cloud platforms (AWS, GCP, Azure) and hybrid infrastructure management.
Strong shell scripting skills (Bash, PowerShell) and familiarity with Python for automation.
Knowledge of source control systems (Git/GitHub) and configuration management tools (Ansible, Chef, etc.).

Additional Attributes:
Strong understanding of Agile methodologies.
Exceptional problem-solving and analytical skills.
Excellent communication and teamwork skills for cross-functional collaboration.

If interested, please share the following details to Krithika.L@in.ey.com with your updated resume:
Name:
Skill:
Notice Period (if serving, mention LWD):
Contact Number:
Email:
Current Location:
Preferred Location:
Total Exp:
Relevant Exp:
Current Company:
Education (mention year of completion):
Current CTC (LPA):
Expected CTC (LPA):
Offer in Hand (mention date of joining):

Regards,
Talent Team
Krithika
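The autoscaling responsibility above can be made concrete with a short, hedged Go sketch that creates a CPU-based HorizontalPodAutoscaler through client-go's autoscaling/v2 API. The deployment name, namespace, and thresholds are illustrative assumptions; in practice the same object is usually applied declaratively from a Helm chart or manifest rather than created in code.

```go
package main

import (
	"context"

	autoscalingv2 "k8s.io/api/autoscaling/v2"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// Assumes the code runs in-cluster with a service account allowed to
	// create HPAs; deployment and namespace names are hypothetical.
	config, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	minReplicas := int32(2)
	targetCPU := int32(70) // scale out when average CPU utilisation exceeds 70%

	hpa := &autoscalingv2.HorizontalPodAutoscaler{
		ObjectMeta: metav1.ObjectMeta{Name: "ingest-api", Namespace: "prod"},
		Spec: autoscalingv2.HorizontalPodAutoscalerSpec{
			ScaleTargetRef: autoscalingv2.CrossVersionObjectReference{
				APIVersion: "apps/v1", Kind: "Deployment", Name: "ingest-api",
			},
			MinReplicas: &minReplicas,
			MaxReplicas: 10,
			Metrics: []autoscalingv2.MetricSpec{{
				Type: autoscalingv2.ResourceMetricSourceType,
				Resource: &autoscalingv2.ResourceMetricSource{
					Name: corev1.ResourceCPU,
					Target: autoscalingv2.MetricTarget{
						Type:               autoscalingv2.UtilizationMetricType,
						AverageUtilization: &targetCPU,
					},
				},
			}},
		},
	}

	// Create the HPA; the controller manager then reconciles replica counts.
	_, err = clientset.AutoscalingV2().
		HorizontalPodAutoscalers("prod").
		Create(context.TODO(), hpa, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
}
```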

Posted 2 months ago

Apply

5.0 - 8.0 years

7 - 10 Lacs

Hyderabad, Bengaluru

Work from Office

Duration: 12 Months
Job Type: Contract
Work Type: Onsite

Roles and Responsibilities:
GitLab DevOps & CI/CD: Expertise in GitLab DevOps tools, CI/CD best practices, and automation.
Pipeline Management: Hands-on experience in designing, implementing, and managing environment-specific pipelines. Proficiency in Shell scripting and YAML scripting for workflow automation. Experience with Terraform or ARM templates is a plus.
Kubernetes: Strong expertise in Docker and Azure Kubernetes Service (AKS). Experience with Helm charts, version upgrades, monitoring, and debugging AKS workloads.
Azure Experience: In-depth knowledge of Azure App Services, Function Apps, and Azure Key Vault management. Experience managing Azure Virtual Networks (VNet), Route Tables, and Network Security Groups (NSGs). Hands-on experience integrating Checkmarx (CX Scan), Snyk, SonarQube, and unit testing frameworks. Strong knowledge of RBAC (Role-Based Access Control), Azure AD (Entra ID), and Azure Policy. Expertise in Azure Monitoring & Alerting, including logs, metrics, and dashboard setup. Understanding of Azure Load Balancers, Application Gateway, and Traffic Manager. Experience in provisioning, scaling, and maintaining VMs on Azure. Deep knowledge of IIS, including dependency installation, configuration, and troubleshooting.
Cost Optimization: Familiarity with Azure Cost Management & optimization strategies.
Release Management: Experience in release planning, Change Request (CR) preparation, and LOP (List of Pending) management.
Cross-functional Coordination: Ability to coordinate with teams for deployment success and issue resolution.

Mandatory Skills:
Primary skills: DevOps with Azure Kubernetes, GitLab, CI/CD, Shell scripting, Helm charts, Checkmarx (CX Scan), Snyk, SonarQube, and unit testing frameworks.

Experience:
Total Exp: 5-8 years
Relevant Exp: 7-8 years relevant experience with the mandatory skills

Posted 2 months ago

Apply

4.0 - 6.0 years

14 - 15 Lacs

Chennai

Work from Office

Strong knowledge of Docker, containers, and Kubernetes is required.
Must have knowledge of telecom IMS products, i.e. I-CSCF, S-CSCF, and P-CSCF.
Solid understanding of Helm charts and Helm-based deployments.
Experience in deploying CFX-5000, TAS, SBC on platforms such as CFX 5000 OpenStack, VMware, CNF, and CNCS-based environments.
Experience with cloud computing service models, including CaaS, PaaS, and IaaS.
Knowledge of CFX-5000 CNF planning tools like Plato, TPD, and Acord.
Practical experience with Cloud for both Cloud-Native Functions (CNF) and Virtual Network Functions (VNF).
Strong understanding of SIP, Diameter and other IMS telecom protocols.
In-depth knowledge of 4G, 5G, VoLTE, interfaces, protocols, and IMS architecture.
Comprehensive understanding of IMS call flows is mandatory.
Hands-on experience with CNF and VNF operations and deployment is essential.
Proficient in Linux/Unix commands.
Strong communication skills and a positive attitude are required.
Practical experience with IMS node operations and deployment is mandatory.

Posted 2 months ago

Apply

3.0 - 8.0 years

15 - 27 Lacs

Hyderabad

Hybrid

Role & responsibilities
Bachelor's degree in computer science or a related field.
5+ years of experience in DevOps or a related field.
Strong experience with cloud-based services.
Strong experience running Kubernetes as a service (GKE, EKS, AKS).
Strong experience with managing Kubernetes clusters.
Strong experience with infrastructure automation and deployment tools such as Terraform, Ansible, Docker, Jenkins, GitHub, GitHub Actions or similar tools.
Strong experience with monitoring tools such as Grafana, Nagios, ELK, OpenTelemetry, Prometheus or similar tools.
Desirable experience with Anthos/Istio Service Mesh or similar tools.
Desirable experience with Cloud Native Computing Foundation (CNCF) projects, Kubernetes Operators and Keycloak.
Strong knowledge of Linux systems administration.

Posted 2 months ago

Apply

5.0 - 8.0 years

12 - 18 Lacs

Bengaluru

Work from Office

Are you an experienced Platform Engineer looking for a new opportunity to showcase your skills and expertise? If so, then Torry Harris is looking for you! We are currently seeking a skilled and motivated individual to join our team and play a critical role in streamlining and automating our cloud infrastructure. As a Senior Platform Engineer at Torry Harris, you will be responsible for designing, building, and maintaining scalable infrastructure that supports software development and deployment. The ideal candidate will have expertise in cloud technologies, automation, and DevOps practices.

Roles and Responsibilities
• Design and maintain scalable, resilient infrastructure on any cloud; AWS is recommended.
• Implement Infrastructure as Code (IaC) using Terraform, Ansible, or CloudFormation.
• Automate provisioning, monitoring, and self-healing mechanisms.
• Develop and enhance continuous integration & deployment pipelines.
• Develop and maintain Helm charts, Kubernetes manifests, and custom operators.
• Implement blue-green deployments, canary releases, and rollback mechanisms.
• Ensure fast, reliable software delivery while minimizing downtime.
• Integrate security scanning tools (SonarQube, Snyk) into CI/CD workflows.
• Ensure secure configurations, RBAC policies, and compliance with industry standards.
• Implement secrets management and identity access control in cloud environments.
• Deploy monitoring tools (Prometheus, Grafana, Datadog) for real-time observability.
• Lead root cause analysis and performance optimization for any platform-related issues.
• Ensure system reliability using automated alerting and logging mechanisms.
• Implement monitoring, logging, and alerting solutions for Kubernetes workloads.
• Troubleshoot and resolve issues related to container orchestration and networking.
• Stay up to date with Kubernetes ecosystem developments and recommend improvements.
• Mentor junior engineers and contribute to technical leadership within the DevOps team.
• Work closely with developers, platform engineers, and SREs to optimize workflows.
• Drive cross-functional collaboration to align DevOps strategies with business objectives.

Posted 2 months ago

Apply

3.0 - 6.0 years

15 - 20 Lacs

Pune, Gurugram, Bengaluru

Work from Office

Roles and Responsibilities
Design and develop application health dashboards, alerting and notification delivery systems to help with observability of the application stack in Azure cloud.
Respond to incidents, perform root cause analysis, troubleshoot issues, and implement solutions to prevent recurrence.
Act as gatekeeper for production deployments, participate in the application release cycles and perform production releases.
Manage and maintain environments hosting Credit, Swaps & FX FO IT microservices and the data lake platform.
Manage and maintain the lifecycle of the core application suite that provides common capabilities such as continuous deployment, observability, and Kafka streaming.
Establish, deploy, and maintain CI/CD pipelines to automate the build, test, and deployment processes, adhering to the firm's audit and compliance policies.
Migrate on-prem build and deployment projects to adopt the existing GitOps and cloud deployment pipeline patterns and branching policies.
Assist the development teams in containerising, building, and migrating on-prem applications to Azure cloud.
Set up, manage and maintain the central observability solution for on-prem and cloud.
Identify areas that benefit from automation and build automated processes wherever possible.
Collaborate with infra teams to provision and manage infra resources required by FO IT development teams in Azure cloud.
Implement backup and disaster recovery strategies, participate in annual DR tests and assist with executing the DR test plan.
Create and maintain documentation related to common issues, fixes, and deployment/release processes; transfer knowledge among DevOps and support team members to remove any key-man dependencies.

Essential Criteria:
2 to 5 years of experience in an SRE/DevOps role, preferably in Investment Banking, with a solid understanding of both.
Strong knowledge of DevOps practices, tools, and technologies.
Experience in working with, managing, and maintaining enterprise-scale production application microservice environments and observability tools.
Strong knowledge of containerization and orchestration of microservices.
Experience with Docker/Podman, Helm, the ArgoCD GitOps tool, and Terraform.
Experience with Azure Kubernetes Service, Azure Storage, and other Azure cloud related technologies.
Experience with Prometheus, Grafana, Loki, Tempo, Grafana Agent, and Azure Monitor logging and observability tools.
Bamboo CI/CD tools, Bitbucket, Git.
Automation scripting (Bash, PowerShell, Python).
Be able to demonstrate a high level of professionalism, organisation, self-motivation, and a desire for self-improvement.
Ability to plan, schedule and manage a demanding workload.
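Several of the responsibilities above centre on observability of microservices. As a small, generic illustration (not specific to this employer's stack), here is a Go sketch that exposes a Prometheus metrics endpoint using the prometheus/client_golang library; Prometheus scrapes /metrics, and Grafana dashboards or alert rules are built on the resulting series. The metric name and port are assumptions.

```go
package main

import (
	"net/http"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

// requestTotal counts HTTP requests by path and status so dashboards and
// alert rules (e.g. error-rate SLOs) can be built on top of it.
var requestTotal = prometheus.NewCounterVec(
	prometheus.CounterOpts{
		Name: "app_http_requests_total",
		Help: "Total HTTP requests handled, by path and status.",
	},
	[]string{"path", "status"},
)

func main() {
	prometheus.MustRegister(requestTotal)

	http.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
		requestTotal.WithLabelValues("/healthz", "200").Inc()
		w.WriteHeader(http.StatusOK)
		w.Write([]byte("ok"))
	})

	// Prometheus scrapes this endpoint; Grafana visualises the resulting series.
	http.Handle("/metrics", promhttp.Handler())
	http.ListenAndServe(":8080", nil)
}
```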

Posted 2 months ago

Apply

4 - 9 years

7 - 11 Lacs

Hyderabad

Work from Office

Primary Skills
1. Java (8/11/17+): Strong expertise in Core Java, multithreading, collections, and functional programming.
2. Spring Boot: Hands-on experience with Spring Boot for developing RESTful microservices.
3. Microservices Architecture: Understanding of microservices design patterns, inter-service communication, and distributed systems.
4. Google Cloud Platform (GCP): Experience with Google Kubernetes Engine (GKE) for deploying and managing containerized applications, Cloud Run for running containerized applications in a serverless environment, Cloud Functions for serverless function execution, Cloud Pub/Sub for event-driven communication, and Firestore / Cloud SQL for working with NoSQL and relational databases on GCP.
5. Containers & Docker: Experience in containerizing applications using Docker and managing images.
6. Kubernetes (GKE preferred): Strong knowledge of Pods, Deployments, Services, ConfigMaps, Secrets, and Helm Charts for Kubernetes resource management.
7. RESTful APIs: Experience in designing, building, and consuming REST APIs with security best practices.
8. CI/CD Pipelines: Hands-on experience with Jenkins, GitHub Actions, GitLab CI/CD, or Google Cloud Build for automated testing and deployment of microservices.
9. Cloud Networking: Understanding of VPCs, Load Balancers, and Service Mesh (Istio).
10. SQL & NoSQL Databases: Experience with PostgreSQL, MySQL, Firestore, or MongoDB.
11. Logging & Monitoring: Familiarity with Google Cloud Logging (Stackdriver), Prometheus, Grafana, ELK Stack (Elasticsearch, Logstash, Kibana).

Secondary Skills
Infrastructure as Code (IaC): Terraform for GCP infrastructure automation.
Event-Driven Architecture: Working knowledge of Kafka, Pub/Sub, or RabbitMQ.
Security Best Practices: Authentication/Authorization using OAuth2, JWT, and IAM roles.
Testing Frameworks: JUnit, Mockito, and integration testing for microservices.
GraphQL: Exposure to GraphQL API development.
Agile Methodologies: Experience working in Agile/Scrum teams.
Performance Tuning: Experience optimizing application performance and memory management.
Multi-Cloud Exposure: Knowledge of AWS or Azure is a plus.
DevSecOps: Exposure to security scanning tools like Snyk, SonarQube, and OWASP best practices.
API Management: Experience with API Gateways like Apigee or Kong is beneficial.

Posted 2 months ago

Apply

5 - 8 years

10 - 15 Lacs

Chennai, Bengaluru

Work from Office

Hiring: DevOps Engineer - Immediate Joiners
Location: Offshore (Chennai / Bangalore preferred)
Experience: 5+ Years

We’re looking for a DevOps Engineer to support our web and mobile dev teams with CI/CD, GitLab, and automation tooling.

Key Skills:
GitLab CI/CD, Docker, Terraform
Kubernetes (Rancher a plus), Helm, Bash/NodeJS scripting
AWS, S3, Infra-as-Code
Mobile DevOps exposure, iOS tooling, JFrog Artifactory
Agile experience, strong troubleshooting skills

Join us immediately! Send your resume or DM now.
#DevOps #ImmediateJoiner #GitLab #Terraform #Docker #AWS #HiringNow #ChennaiJobs #BangaloreJobs

Posted 2 months ago

Apply

12 - 21 years

12 - 22 Lacs

Hyderabad, Ahmedabad

Hybrid

Summary: The SRE Manager at Techblocks India will lead the reliability engineering function, ensuring infrastructure resiliency and optimal operational performance. This hybrid role blends technical leadership with team mentorship and cross-functional coordination.

Experience Required: 10+ years total experience, with 3+ years in a leadership role in SRE or Cloud Operations.

Technical Knowledge and Skills:
Mandatory:
Deep understanding of Kubernetes, GKE, Prometheus, Terraform
Cloud: Advanced GCP administration
CI/CD: Jenkins, Argo CD, GitHub Actions
Incident Management: Full lifecycle, tools like OpsGenie

Nice to Have:
Knowledge of service mesh and observability stacks
Strong scripting skills (Python, Bash)
BigQuery/Dataflow exposure for telemetry

Scope:
Build and lead a team of SREs
Standardize practices for reliability, alerting, and response
Engage with Engineering and Product leaders

If you are interested, please call 9701923036.

Posted 2 months ago

Apply

6 - 10 years

11 - 21 Lacs

Hyderabad, Ahmedabad

Work from Office

Key Responsibilities:

Release & Environment Management:
Manage release schedules, timelines, and coordination with multiple delivery streams.
Own the setup and consistency of lower environments and production cutover readiness.
Ensure effective version control, build validation, and artifact management across CI/CD pipelines.
Oversee rollback strategies, patch releases, and post-deployment validations.

Toolchain Ownership:
Manage and maintain DevOps tools such as Jenkins, GitHub Actions, Bitbucket, SonarQube, JFrog, Argo CD, and Terraform.
Govern container orchestration through Kubernetes and Helm.
Maintain secrets and credential hygiene through HashiCorp Vault and related tools.

Infrastructure & Automation:
Work closely with Cloud, DevOps, and SRE teams to ensure automated and secure deployments.
Leverage GCP (VPC, Compute Engine, GKE, Load Balancer, IAM, VPN, GCS) for scalable infrastructure.
Ensure adherence to infrastructure-as-code (IaC) standards using Terraform and Helm charts.

Monitoring, Logging & Stability:
Implement and manage observability tools such as Prometheus, Grafana, ELK, and Datadog.
Monitor release impact, track service health post-deployment, and lead incident response if required.
Drive continuous improvement for faster and safer releases by implementing lessons from RCAs.

Compliance, Documentation & Coordination:
Use Jira, Confluence, and ServiceNow for release planning, documentation, and service tickets.
Implement basic security standards (OWASP, WAF, GCP Cloud Armor) in release practices.
Conduct cross-team coordination with QA, Dev, CloudOps, and Security for aligned delivery.

Posted 2 months ago

Apply

1 - 3 years

8 - 11 Lacs

Chennai

Work from Office

About the Role:
We are primarily focused on helping our customers address application performance problems. This role goes deep into research and cutting-edge technologies. We are a 100% product development company and are looking for passionate, thoughtful, and compassionate individuals who have experience building backend applications in Golang. It is a challenging role with scope to dive deep into research and development in cutting-edge technologies. You will work with a cross-functional product development team.

Required Skills:
Strong programming fundamentals in Go (Golang)
Exposure to containerization technologies such as Docker or equivalent
Experience in Kubernetes Operator development using Golang, Helm charts, or Ansible
Familiarity with Linux administration and Bash scripting
Strong knowledge of Kubernetes fundamentals
Experience developing new Kubernetes controllers
Hands-on with creating Kubernetes Operators using Kubebuilder or Operator SDK
Working knowledge or experience with Python (preferred)
Experience with Git

Nice to Have:
Knowledge of OpenTelemetry libraries
Additional experience in Python or Java
Experience troubleshooting production issues using external tools (for Go-related applications)
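Since the role centres on building Kubernetes controllers and operators in Go, a minimal controller-runtime reconciler helps make the requirement concrete. The sketch below watches built-in Deployments purely to stay self-contained; a real Kubebuilder operator would instead watch a custom resource and converge cluster state toward its spec.

```go
package main

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/log"
)

// DeploymentReconciler logs the replica counts of every Deployment it is asked
// to reconcile. A real operator would compare desired spec with observed state
// and create, update, or delete child resources to close the gap.
type DeploymentReconciler struct {
	client.Client
}

func (r *DeploymentReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	logger := log.FromContext(ctx)

	var deploy appsv1.Deployment
	if err := r.Get(ctx, req.NamespacedName, &deploy); err != nil {
		// The object may have been deleted between the event and this call.
		return ctrl.Result{}, client.IgnoreNotFound(err)
	}

	desired := int32(1)
	if deploy.Spec.Replicas != nil {
		desired = *deploy.Spec.Replicas
	}
	logger.Info("reconciling", "deployment", req.NamespacedName,
		"desiredReplicas", desired, "readyReplicas", deploy.Status.ReadyReplicas)
	return ctrl.Result{}, nil
}

func main() {
	// Manager wires up caches, clients, and the shared scheme.
	mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{})
	if err != nil {
		panic(err)
	}
	if err := ctrl.NewControllerManagedBy(mgr).
		For(&appsv1.Deployment{}).
		Complete(&DeploymentReconciler{Client: mgr.GetClient()}); err != nil {
		panic(err)
	}
	if err := mgr.Start(ctrl.SetupSignalHandler()); err != nil {
		panic(err)
	}
}
```

Kubebuilder and Operator SDK scaffold exactly this structure around a generated CRD type, so the reconcile loop above is the part a candidate would actually write.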

Posted 2 months ago

Apply

3 - 8 years

10 - 20 Lacs

Bengaluru

Remote

About the Team/Role
We are seeking a highly skilled DevOps Engineer with in-depth knowledge and hands-on experience in Kubernetes, GitOps, GitHub Actions, Argo CD, and Docker. The ideal candidate will be responsible for containerizing all technology applications and ensuring seamless integration and deployment across our infrastructure.

How you'll make an impact
Provide strong technical guidance and leadership in DevOps practices.
Design, implement, and maintain Kubernetes clusters for scalable application deployment.
Utilize GitOps methodologies for continuous delivery and operational efficiency.
Develop and manage CI/CD pipelines using GitHub Actions and Argo CD.
Implement a service mesh using Istio.
Containerize applications using Docker to ensure consistency across different environments.
Collaborate with development and operations teams to deliver services quickly and efficiently.
Monitor and optimize the performance, scalability, and reliability of applications.

Experience you'll bring
Proven experience in Kubernetes, GitOps, GitHub Actions, Argo CD, and Docker.
Strong background in containerizing technology applications.
Demonstrated ability to deliver services quickly without compromising quality.
Excellent problem-solving skills and the ability to troubleshoot complex issues.
Strong communication skills and the ability to provide technical guidance to team members.
Prior experience in a similar role is essential.
Development background is essential.
Monitoring and Logging: Implement and manage comprehensive monitoring and logging solutions to ensure proactive issue detection and resolution.
Debugging and Troubleshooting: Utilize advanced debugging and troubleshooting skills to address complex issues across the infrastructure and application stack.
Architect and Design: Lead the architecture and design of scalable and reliable infrastructure solutions, ensuring alignment with organizational goals and industry best practices.
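To make the Argo CD part of the GitOps workflow above concrete, here is a small Go sketch that shells out to the argocd CLI to sync an application and wait for it to report healthy, the kind of step that might run at the end of a GitHub Actions job. The application name is illustrative, and the sketch assumes an already authenticated argocd CLI; in a pure GitOps setup the sync can also be fully automated by Argo CD itself.

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// run shells out to the argocd CLI and streams its output.
func run(args ...string) error {
	cmd := exec.Command("argocd", args...)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	return cmd.Run()
}

func main() {
	app := "web-frontend" // application name is hypothetical

	// Trigger a sync of the Argo CD application, then block until it
	// reports a healthy status (or the timeout expires).
	if err := run("app", "sync", app); err != nil {
		fmt.Fprintf(os.Stderr, "sync failed: %v\n", err)
		os.Exit(1)
	}
	if err := run("app", "wait", app, "--health", "--timeout", "300"); err != nil {
		fmt.Fprintf(os.Stderr, "app did not become healthy: %v\n", err)
		os.Exit(1)
	}
}
```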

Posted 2 months ago

Apply

1 - 3 years

6 - 12 Lacs

Coimbatore

Hybrid

We are primarily focused on helping our customers address application performance problems. This role goes deep into research and cutting-edge technologies. We are a 100% product development company and are looking for passionate, thoughtful, and compassionate individuals who have experience building backend applications in GoLang. It is a challenging role with scope to dive deep into research and development in cutting-edge technologies. You will work with a cross-functional product development team.

Strong programming fundamentals in the Go language
Exposure to containerization technologies such as Docker or equivalent
Exposure to Kubernetes Operator development using Golang, Helm charts or Ansible
Exposure to Linux administration and Bash scripting
Strong knowledge of Kubernetes and its fundamentals
Exposure to extending Kubernetes by developing new controllers
Exposure to creating Kubernetes Operators using Kubebuilder or Operator SDK
Knowledge/experience with Python programming is a plus
Knowledge of or working experience with Git

Nice to Have:
Knowledge of OpenTelemetry libraries
Python / Java
Experience in troubleshooting production issues using external tools (for Go-related applications)

The selected talent will work on architecting and developing a product for “Go Application Performance”.

Posted 2 months ago

Apply

1 - 3 years

6 - 12 Lacs

Mysuru

Hybrid

We are primarily focused on helping our customers address application performance problems. This role goes deep into research and cutting-edge technologies. We are a 100% product development company and are looking for passionate, thoughtful, and compassionate individuals who have experience building backend applications in GoLang. It is a challenging role with scope to dive deep into research and development in cutting-edge technologies. You will work with a cross-functional product development team.

Strong programming fundamentals in the Go language
Exposure to containerization technologies such as Docker or equivalent
Exposure to Kubernetes Operator development using Golang, Helm charts or Ansible
Exposure to Linux administration and Bash scripting
Strong knowledge of Kubernetes and its fundamentals
Exposure to extending Kubernetes by developing new controllers
Exposure to creating Kubernetes Operators using Kubebuilder or Operator SDK
Knowledge/experience with Python programming is a plus
Knowledge of or working experience with Git

Nice to Have:
Knowledge of OpenTelemetry libraries
Python / Java
Experience in troubleshooting production issues using external tools (for Go-related applications)

The selected talent will work on architecting and developing a product for “Go Application Performance”.

Posted 2 months ago

Apply

1 - 3 years

6 - 12 Lacs

Pune

Hybrid

We are primarily focused on helping our customers address application performance problems. This role goes deep into research and cutting-edge technologies. We are a 100% product development company and are looking for passionate, thoughtful, and compassionate individuals who have experience building backend applications in GoLang. It is a challenging role with scope to dive deep into research and development in cutting-edge technologies. You will work with a cross-functional product development team.

Strong programming fundamentals in the Go language
Exposure to containerization technologies such as Docker or equivalent
Exposure to Kubernetes Operator development using Golang, Helm charts or Ansible
Exposure to Linux administration and Bash scripting
Strong knowledge of Kubernetes and its fundamentals
Exposure to extending Kubernetes by developing new controllers
Exposure to creating Kubernetes Operators using Kubebuilder or Operator SDK
Knowledge/experience with Python programming is a plus
Knowledge of or working experience with Git

Nice to Have:
Knowledge of OpenTelemetry libraries
Python / Java
Experience in troubleshooting production issues using external tools (for Go-related applications)

The selected talent will work on architecting and developing a product for “Go Application Performance”.

Posted 2 months ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.
