
37 Kustomize Jobs

JobPe aggregates listings for easy browsing, but applications are submitted directly on the original job portal.

3.0 years

6 - 8 Lacs

Noida

Remote

Source: Glassdoor

JOB DESCRIPTION

Application Management Services
AMS’s mission is to maximize the contributions of MMC Technology as a business-driven, future-ready and competitive function by reducing the time and cost spent managing applications. AMS, a business unit of Marsh McLennan, is seeking candidates for the following position, based in the Gurgaon/Noida office: Principal Engineer - Kubernetes Platform Engineer.

Position overview:
We are seeking a skilled Kubernetes Platform Engineer with a strong background in cloud technologies (AWS, Azure) to manage, configure, and support Kubernetes infrastructure in a dynamic, high-availability environment. The engineer collaborates with development, DevOps and other technology teams to ensure that the Kubernetes platform ecosystem is reliable, scalable and efficient. The ideal candidate must possess hands-on experience in Kubernetes cluster operations management and container orchestration, along with strong problem-solving skills. Experience in infrastructure platform management is required.

Responsibilities:
- Implement and maintain platform services in the Kubernetes infrastructure.
- Perform upgrades and patch management for Kubernetes and its associated components, including but not limited to the API management system.
- Monitor and optimize Kubernetes resources such as pods, nodes, and namespaces.
- Implement and enforce Kubernetes security best practices, including RBAC, network policies, and secrets management.
- Work with the security team to ensure container and cluster compliance with organizational policies.
- Troubleshoot and resolve issues related to Kubernetes infrastructure in a timely manner.
- Provide technical guidance and support to developers and DevOps teams.
- Maintain detailed documentation of Kubernetes configurations and operational processes.
- Note: maintenance and support of CI/CD pipelines are not part of the support scope of this position.

Preferred skills and experience:
- At least 3 years of experience managing and supporting Kubernetes clusters and their ecosystem at the platform operations layer.
- At least 2 years of infrastructure management and support, including but not limited to SSL certificates and virtual IPs.
- Proficiency in managing Kubernetes clusters using tools such as kubectl, Helm, or Kustomize (a minimal Kustomize sketch appears below).
- In-depth knowledge and experience of container technologies, including Docker.
- Experience with cloud platforms (AWS, GCP, Azure) and managed Kubernetes services (EKS, GKE, AKS).
- Understanding of infrastructure-as-code (IaC) tools such as Terraform or CloudFormation.
- Experience with monitoring tools like Prometheus, Grafana, or Datadog.
- Knowledge of centralized logging systems like Fluentd, Logstash, or Loki.
- Proficiency in scripting languages (e.g., Bash, Python, or Go).
- Experience supporting public cloud or hybrid cloud environments.

Marsh McLennan (NYSE: MMC) is the world’s leading professional services firm in the areas of risk, strategy and people. The Company’s 85,000 colleagues advise clients in 130 countries. With annual revenue of over $20 billion, Marsh McLennan helps clients navigate an increasingly dynamic and complex environment through four market-leading businesses. Marsh advises individual and commercial clients of all sizes on insurance broking and innovative risk management solutions. Guy Carpenter develops advanced risk, reinsurance and capital strategies that help clients grow profitably and pursue emerging opportunities. Mercer delivers advice and technology-driven solutions that help organizations redefine the world of work, reshape retirement and investment outcomes, and unlock health and wellbeing for a changing workforce. Oliver Wyman serves as a critical strategic, economic and brand advisor to private sector and governmental clients. For more information, visit marshmclennan.com, or follow us on LinkedIn and Twitter.

Marsh McLennan is committed to embracing a diverse, inclusive and flexible work environment. We aim to attract and retain the best people regardless of their sex/gender, marital or parental status, ethnic origin, nationality, age, background, disability, sexual orientation, caste, gender identity or any other characteristic protected by applicable law.

Marsh McLennan is committed to hybrid work, which includes the flexibility of working remotely and the collaboration, connections and professional development benefits of working together in the office. All Marsh McLennan colleagues are expected to be in their local office or working onsite with clients at least three days per week. Office-based teams will identify at least one “anchor day” per week on which their full team will be together in person.
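For context on the Kustomize proficiency requested above, here is a minimal sketch of the base-plus-overlay layout such roles typically maintain; the service name, namespace, and values are hypothetical, not taken from this posting:

  # base/kustomization.yaml
  resources:
    - deployment.yaml
    - service.yaml

  # overlays/prod/kustomization.yaml
  resources:
    - ../../base
  namespace: platform-prod            # hypothetical namespace
  replicas:
    - name: payments-api              # hypothetical Deployment defined in the base
      count: 3
  patches:
    - target:
        kind: Deployment
        name: payments-api
      patch: |-
        - op: replace
          path: /spec/template/spec/containers/0/resources/limits/memory
          value: 512Mi

Rendering with kubectl apply -k overlays/prod (or kustomize build overlays/prod) applies the production overrides without modifying the shared base.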

Posted 4 days ago

Apply

7.0 - 10.0 years

0 Lacs

Karnataka, India

On-site

Source: LinkedIn

Who You’ll Work With
You’ll be joining a dynamic, fast-paced Global EADP (Enterprise Architecture & Developer Platforms) team within Nike. Our team is responsible for building innovative cloud-native platforms that scale with the growing demands of the business. Collaboration and creativity are at the core of our culture, and we’re passionate about pushing boundaries and setting new standards in platform development.

Who We Are Looking For
We are looking for an ambitious Lead Software Engineer – Platforms with a passion for cloud-native development and platform ownership. You are someone who thrives in a collaborative environment, is excited by cutting-edge technology, and excels at problem-solving. You have a strong understanding of AWS cloud services, Kubernetes, DevOps, Databricks, Python and other cloud-native platforms. You should be an excellent communicator, able to explain technical details to both technical and non-technical stakeholders, and operate with urgency and integrity.

Key Skills & Traits
- Deep expertise in Kubernetes, AWS services, and full-stack development.
- Working experience designing and building production-grade microservices in any programming language, preferably Python.
- Experience building end-to-end CI/CD pipelines to build, test and deploy to different AWS environments such as Lambda, EC2, ECS, and EKS.
- Experience in AI/ML, with proven knowledge of building chatbots using LLMs.
- Familiarity with software engineering best practices, including unit tests, code reviews, version control, and production monitoring.
- Strong experience with React and Node.js; proficient in managing cloud-native platforms, with a strong PaaS (Platform as a Service) focus.
- A proactive approach, with the ability to work independently in a fast-paced, agile environment.
- Strong collaboration and problem-solving skills.
- Mentoring the team through complex technical problems.

What You’ll Work On
You will play a key role in shaping and delivering Nike’s next-generation platforms. As a Lead Software Engineer, you’ll leverage your technical expertise to build resilient, scalable solutions, manage platform performance, and ensure high standards of code quality. You’ll also be responsible for leading the adoption of open-source and agile methodologies within the organization.

Day-to-Day Activities:
- Deep, hands-on work with Kubernetes, AWS services, Databricks, AI/ML, and related technologies.
- Working with infrastructure-as-code tools such as Helm, Kustomize, or Terraform.
- Implementing open-source projects in Kubernetes.
- Setting up monitoring, logging, and alerting for Kubernetes clusters.
- Implementing Kubernetes security best practices, such as RBAC, network policies, and pod security policies (a minimal RBAC example appears below).
- Working with container runtimes like Docker.
- Automating infrastructure provisioning and configuration using Infrastructure as Code (IaC) tools such as Terraform or CloudFormation.
- Designing, implementing, and maintaining robust CI/CD pipelines using Jenkins for efficient software delivery.
- Managing and optimizing Artifactory repositories for efficient artifact storage and distribution.
- Architecting, deploying, and managing AWS EC2 instances, Lambda functions, Auto Scaling Groups (ASG), and Elastic Block Store (EBS) volumes.
- Collaborating with cross-functional teams to ensure seamless integration of DevOps practices into the software development lifecycle.
- Monitoring, troubleshooting, and optimizing AWS resources to ensure high availability, scalability, and performance.
- Implementing security best practices and compliance standards in the AWS environment.
- Developing and maintaining scripts in Python, Groovy, and Shell for automation and core engineering tasks.
- Collaborating with product managers to scope new features and capabilities.

Qualifications:
- 7-10 years of experience designing and building production-grade platforms.
- Deep expertise in at least one of Python, React, or Node.js.
- Good knowledge of CI/CD pipelines and DevOps tooling: Jenkins, Docker, Kubernetes, etc.
- Technical expertise in Kubernetes, AWS cloud services, and cloud-native architectures.
- Proficiency in Python, Node.js, React, SQL, and AWS.
- Strong understanding of PaaS architecture and DevOps tools such as Kubernetes, Jenkins, Terraform, and Docker.
- Familiarity with governance, security features, and performance optimization.
- Strong collaboration and problem-solving skills.
- Keen attention to detail, a growth mindset, and the desire to explore new technologies.
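To make the RBAC bullet above concrete, here is a minimal namespace-scoped, read-only Role and RoleBinding; the names and the group are hypothetical placeholders:

  apiVersion: rbac.authorization.k8s.io/v1
  kind: Role
  metadata:
    name: app-reader                  # hypothetical
    namespace: platform-dev           # hypothetical
  rules:
    - apiGroups: ["", "apps"]
      resources: ["pods", "pods/log", "deployments"]
      verbs: ["get", "list", "watch"]
  ---
  apiVersion: rbac.authorization.k8s.io/v1
  kind: RoleBinding
  metadata:
    name: app-reader-binding
    namespace: platform-dev
  subjects:
    - kind: Group
      name: platform-developers       # hypothetical identity-provider group
      apiGroup: rbac.authorization.k8s.io
  roleRef:
    kind: Role
    name: app-reader
    apiGroup: rbac.authorization.k8s.io

Granting only get/list/watch in a single namespace keeps the binding aligned with the least-privilege practice the listing describes.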

Posted 6 days ago

Apply

7.0 years

0 Lacs

Chennai, Tamil Nadu, India

Remote

Source: LinkedIn

Location: Bangalore / Hyderabad / Chennai / Pune / Gurgaon
Mode: Hybrid (3 days/week from office)
Relevant Experience: 7+ years (must)
Role Type: Individual Contributor
Client: US-based multinational banking institution

Role Summary
We are hiring a seasoned DevOps Engineer (IC) to drive infrastructure automation, deployment reliability, and engineering velocity for AWS-hosted platforms. You’ll play a hands-on role in building robust CI/CD pipelines, managing Kubernetes (EKS or equivalent), and implementing GitOps, infrastructure as code, and monitoring systems.

Must-Have Skills & Required Depth
- AWS Cloud Infrastructure: Independently provisioned core AWS services (EC2, VPC, S3, RDS, Lambda, SNS, ECR) using the CLI and Terraform. Configured IAM roles, security groups, tagging standards, and cost monitoring dashboards. Familiar with basic networking and serverless deployment models.
- Containerization (EKS / Kubernetes): Deployed containerized services to Amazon EKS or equivalent. Authored Helm charts; configured ingress controllers, pod autoscaling, resource quotas, and health probes. Troubleshot deployment rollouts, service routing, and network policies.
- Infrastructure as Code (Terraform / Ansible / AWS SAM): Created modular Terraform configurations with remote state, reusable modules, and drift detection. Implemented Ansible playbooks for provisioning and patching. Used AWS SAM for packaging and deploying serverless workloads.
- GitOps (Argo CD / equivalent): Built and managed GitOps pipelines using Argo CD or similar tools. Configured application sync policies, rollback strategies, and RBAC for deployment automation (a minimal Application manifest appears below).
- CI/CD (Bitbucket / Jenkins / Jira): Developed multi-stage pipelines covering build, test, scan, and deploy workflows. Used YAML-based pipeline-as-code and integrated Jira workflows for traceability.
- Scripting (Bash / Python): Wrote scripts for log rotation, backups, service restarts, and automated validations. Experienced in handling conditional logic, error management, and parameterization.
- Operating Systems (Linux): Proficient in Ubuntu/CentOS system management, package installation, and performance tuning. Configured Apache or NGINX for reverse proxying, SSL, and redirects.
- Datastores (MySQL / PostgreSQL / Redis): Managed relational and in-memory databases for application integration, backup handling, and basic performance tuning.
- Monitoring & Alerting (tool-agnostic): Configured metrics collection, alert rules, and dashboards using tools like CloudWatch, Prometheus, or equivalent. Experienced in designing actionable alerts and telemetry pipelines.
- Incident Management & RCA: Participated in on-call rotations. Handled incident bridges, triaged failures, communicated status updates, and contributed to root cause analysis and postmortems.

Nice-to-Have Skills
- Kustomize / FluxCD: Exposure to declarative deployment strategies using Kustomize overlays or FluxCD for GitOps workflows.
- Kafka: Familiarity with event-streaming architecture and basic integration/configuration of Kafka clusters in application environments.
- Datadog (or equivalent): Experience with Datadog for monitoring, logging, and alerting. Configured custom dashboards, monitors, and anomaly detection.
- Chaos Engineering: Participated in fault-injection or resilience-testing exercises. Familiar with chaos tools or simulations for validating system durability.
- DevSecOps & Compliance: Exposure to integrating security scans into pipelines and secrets management; contributed to compliance audit readiness.
- Build Tools (Maven / Gradle / NPM): Experience integrating build tools with CI systems. Managed dependency resolution, artifact versioning, and caching strategies.
- Backup / DR Tooling (Veeam / Commvault): Familiar with backup scheduling, data restore processes, and supporting DR drills and RPO/RTO planning.
- Certifications (AWS / Terraform): Certifications such as AWS Certified DevOps Engineer, AWS Developer Associate, or HashiCorp Certified Terraform Associate are preferred.
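As a sketch of the GitOps depth described above, here is a minimal Argo CD Application with automated sync; the repository URL, names, and namespaces are hypothetical:

  apiVersion: argoproj.io/v1alpha1
  kind: Application
  metadata:
    name: orders-service                      # hypothetical
    namespace: argocd
  spec:
    project: default
    source:
      repoURL: https://git.example.com/platform/manifests.git   # hypothetical repo
      targetRevision: main
      path: overlays/prod
    destination:
      server: https://kubernetes.default.svc
      namespace: orders-prod
    syncPolicy:
      automated:
        prune: true        # delete resources removed from Git
        selfHeal: true     # revert out-of-band changes in the cluster
      syncOptions:
        - CreateNamespace=true

In this model a rollback is a Git revert: Argo CD reconciles the cluster back to the previously committed revision.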

Posted 1 week ago

Apply

0 years

0 Lacs

India

On-site

Source: LinkedIn

Who we are.
Newfold Digital (with over $1B in revenue) is a leading web technology company serving nearly seven million customers globally. Established in 2021 through the combination of leading web services providers Endurance Web Presence and Web.com Group, our portfolio of brands includes Bluehost, Crazy Domains, HostGator, Network Solutions, Register.com, Web.com and many others. We help customers of all sizes build a digital presence that delivers results. With our extensive product offerings and personalized support, we take pride in collaborating with our customers to serve their online presence needs.

We’re hiring for our Developer Platform team at Newfold Digital — a team focused on building the internal tools, infrastructure, and systems that improve how our engineers develop, test, and deploy software. In this role, you’ll help design and manage CI/CD pipelines, scale Kubernetes-based infrastructure, and drive adoption of modern DevOps and GitOps practices. You’ll work closely with engineering teams across the company to improve automation, deployment velocity, and overall developer experience. We’re looking for someone who can take ownership, move fast, and contribute to a platform that supports thousands of deployments across multiple environments.

What you'll do & how you'll make your mark.
- Build and maintain scalable CI/CD pipelines using Jenkins, GitHub Actions, or GitLab CI (a minimal workflow sketch appears below).
- Manage and improve Kubernetes clusters (Helm, Kustomize) used across environments.
- Implement GitOps workflows using Argo CD or Argo Workflows.
- Automate infrastructure provisioning and configuration with Terraform and Ansible.
- Develop scripts and tooling in Bash, Python, or Go to reduce manual effort and improve reliability.
- Work with engineering teams to streamline and secure the software delivery process.
- Deploy and manage services across cloud platforms (AWS, GCP, Azure, OCI).

Who you are & what you'll need to succeed.
- Strong understanding of core DevOps concepts including CI/CD, GitOps, and Infrastructure as Code.
- Hands-on experience with Docker, Kubernetes, and container orchestration.
- Proficiency with at least one major cloud provider (AWS, Azure, GCP, or OCI).
- Experience writing and managing Jenkins pipelines or similar CI/CD tools.
- Comfortable working with Terraform, Ansible, or other configuration management tools.
- Strong scripting skills (Bash, Python, Go) and a mindset for automation.
- Familiarity with Linux-based systems and cloud-native infrastructure.
- Ability to work independently and collaboratively across engineering and platform teams.

Good to Have
- Experience with build tools like Gradle or Maven.
- Familiarity with Bitbucket or Git-based workflows.
- Prior experience with Argo CD or other GitOps tooling.
- Understanding of internal developer platforms and shared libraries.
- Prior experience with agile development and project management.

Why you’ll love us.
We’ve evolved; we provide three work environment scenarios. You can feel like a Newfolder in a work-from-home, hybrid, or work-from-the-office environment.
Work-life balance. Our work is thrilling and meaningful, but we know balance is key to living well.
We celebrate one another’s differences. We’re proud of our culture of diversity and inclusion. We foster a culture of belonging. Our company and customers benefit when employees bring their authentic selves to work. We have programs that bring us together on important issues and provide learning and development opportunities for all employees. We have 20+ affinity groups where you can network and connect with Newfolders globally.
We care about you. At Newfold, taking care of our employees is our top priority, and we make sure cutting-edge benefits are in place for you. Some of the benefits you will have: we have partnered with some of the best insurance providers to offer you excellent health insurance options; education/certification sponsorships to give you a chance to further your knowledge; flexi-leave to take personal time off; and much more. Building a community one domain at a time, one employee at a time: all our employees are eligible for a free domain and WordPress blog, as we sponsor the domain registration costs.
Where can we take you? We’re fans of helping our employees learn different aspects of the business, be challenged with new tasks, be mentored, and grow their careers. Unfold new possibilities with #teamnewfold!
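As a sketch of the CI/CD work in the first bullet above, here is a minimal GitHub Actions workflow that builds an image and applies Kustomize-rendered manifests; the registry, paths, and credential handling are hypothetical, and it assumes kubectl and cluster access are already configured on the runner:

  # .github/workflows/deploy.yaml
  name: build-and-deploy
  on:
    push:
      branches: [main]
  jobs:
    deploy:
      runs-on: ubuntu-latest
      steps:
        - uses: actions/checkout@v4
        - name: Build and push image          # hypothetical registry
          run: |
            docker build -t registry.example.com/web/app:${{ github.sha }} .
            docker push registry.example.com/web/app:${{ github.sha }}
        - name: Deploy via Kustomize          # assumes kubeconfig is provisioned earlier
          run: kubectl kustomize overlays/prod | kubectl apply -f -

A real pipeline would add registry login and cluster authentication steps before the build and deploy stages.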

Posted 1 week ago

Apply

6.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Source: LinkedIn

Summary

Position Summary
The AI&E portfolio is an integrated set of offerings that addresses our clients’ heart-of-the-business issues. This portfolio combines our functional and technical capabilities to help clients transform, modernize, and run their existing technology platforms across industries. As our clients navigate dynamic and disruptive markets, these solutions are designed to help them drive product and service innovation, improve financial performance, accelerate speed to market, and operate their platforms to innovate continuously.

Role: Edge CI/CD Specialist
Level: Senior Consultant

As a Senior Consultant at Deloitte Consulting, you will be responsible for individually delivering high-quality work products within due timelines in an agile framework. On a need basis, consultants will mentor and/or direct junior team members and liaise with onsite/offshore teams to understand the functional requirements.

Responsibilities:
The work you will do includes:
- Integrate tools like SonarQube, Snyk, or other widely used code quality analysis tools to detect issues such as bugs, vulnerabilities, and code smells.
- Develop and maintain infrastructure as code using tools like Terraform.
- Work closely with development, operations, and security teams to ensure seamless integration of CI/CD processes.
- Provide support for deployment issues and troubleshoot problems in the CI/CD pipeline.
- Maintain comprehensive documentation of CI/CD processes, tools, and best practices.
- Train and mentor junior team members on edge and cloud computing technologies and best practices.

Qualifications

Skills / Project Experience:
- Cloud Platforms: Extensive experience with cloud platforms such as AWS, Azure, or Google Cloud.
- CI/CD: Proficiency in CI/CD tools such as Jenkins, GitLab CI, CircleCI, Harness, GitHub Actions, Argo CD, or Travis CI.
- Kubernetes Manifests: Proficiency in using Helm charts and Kustomize for templating and managing Kubernetes manifests (a Helm-via-Kustomize sketch appears below).
- Scripting: Strong skills in scripting languages such as Bash, Python, or Groovy for automation tasks.
- Security: Experience working with SonarQube, Snyk, or similar tools.
- Containerization and Orchestration: Expertise in containerization technologies like Docker and proficiency in orchestration tools like Kubernetes.
- Programming Languages: Proficiency in languages such as Python, Java, C++, or similar.
- Infrastructure as Code: Extensive experience with Ansible, Terraform, Chef, Puppet, or similar tools.
- Project Management: Proven track record in leading large-scale cloud infrastructure projects.
- Collaboration: Effective communicator with cross-functional teams.

Must Have:
- Good interpersonal and communication skills.
- Flexibility to adapt and apply innovation to varied business domains, and to apply technical solutioning and learnings to use cases across business domains and industries.
- Knowledge of and experience working with Microsoft Office tools.

Good to Have:
- Problem-Solving: Strong analytical and troubleshooting skills to address client-specific challenges.
- Adaptability: Ability to quickly adapt to changing client requirements and emerging technologies.
- Project Leadership: Demonstrated leadership in managing client projects, ensuring timely delivery and client satisfaction.
- Business Acumen: Understanding of business processes and the ability to align technical solutions with client business goals.

Education: B.E./B.Tech/M.C.A./M.Sc (CS) degree or equivalent from an accredited university.

Prior Experience: 6-10 years of experience working with DevOps, Terraform, Ansible, Jenkins, CI/CD, edge computing, monitoring, Infrastructure as Code (IaC), SonarQube, Snyk, Docker, and Kustomize.

Location: Bengaluru / Hyderabad / Gurugram

The team
Deloitte Consulting LLP’s Technology Consulting practice is dedicated to helping our clients build tomorrow by solving today’s complex business problems involving strategy, procurement, design, delivery, and assurance of technology solutions. Our service areas include analytics and information management, delivery, cyber risk services, and technical strategy and architecture, as well as the spectrum of digital strategy, design, and development services. The Core Business Operations practice optimizes clients’ business operations and helps them take advantage of new technologies. It drives product and service innovation, improves financial performance, accelerates speed to market, and operates client platforms to innovate continuously. Learn more about our Technology Consulting practice on www.deloitte.com. #HC&IE

Our purpose
Deloitte’s purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities.

Our people and culture
Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work.

Professional development
At Deloitte, professionals have the opportunity to work with some of the best and discover what works best for them. Here, we prioritize professional growth, offering diverse learning and networking opportunities to help accelerate careers and enhance leadership skills. Our state-of-the-art DU: The Leadership Center in India, located in Hyderabad, represents a tangible symbol of our commitment to the holistic growth and development of our people. Explore DU: The Leadership Center in India.

Benefits to help you thrive
At Deloitte, we know that great people make a great organization. Our comprehensive rewards program helps us deliver a distinctly Deloitte experience that empowers our professionals to thrive mentally, physically, and financially, and to live their purpose. To support our professionals and their loved ones, we offer a broad range of benefits. Eligibility requirements may be based on role, tenure, type of employment and/or other criteria. Learn more about what working at Deloitte can mean for you.

Recruiting tips
From developing a standout resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters.

Requisition code: 302210
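Because the role pairs Helm charts with Kustomize (see the Kubernetes manifests bullet above), one common pattern is rendering a chart through Kustomize’s helmCharts field. A sketch, assuming kustomize v4.1+ invoked with --enable-helm; the chart shown is just an illustrative public example:

  # kustomization.yaml
  helmCharts:
    - name: podinfo
      repo: https://stefanprodan.github.io/podinfo
      version: 6.5.4            # hypothetical pinned chart version
      releaseName: demo
      valuesInline:
        replicaCount: 2

  # Rendered with: kustomize build --enable-helm .

This keeps Helm’s packaging while allowing Kustomize patches to be layered on the rendered output.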

Posted 1 week ago

Apply

3.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Source: LinkedIn

Summary

Position Summary
The AI&E portfolio is an integrated set of offerings that addresses our clients’ heart-of-the-business issues. This portfolio combines our functional and technical capabilities to help clients transform, modernize, and run their existing technology platforms across industries. As our clients navigate dynamic and disruptive markets, these solutions are designed to help them drive product and service innovation, improve financial performance, accelerate speed to market, and operate their platforms to innovate continuously.

Role: Edge CI/CD Specialist
Level: Consultant

As a Consultant at Deloitte Consulting, you will be responsible for individually delivering high-quality work products within due timelines in an agile framework. On a need basis, consultants will mentor and/or direct junior team members and liaise with onsite/offshore teams to understand the functional requirements.

Responsibilities:
The work you will do includes:
- Design, implement, and maintain CI/CD pipelines tailored for edge computing environments using tools such as Jenkins, Git, and Travis CI.
- Develop automation scripts to streamline the deployment process.
- Manage code repositories and version control systems (e.g., Git).
- Implement automated testing frameworks, using tools such as JUnit, pytest, Selenium, and Cypress for test automation, to ensure code quality and reliability.
- Integrate tools like SonarQube, Snyk, or other widely used code quality analysis tools to detect issues such as bugs, vulnerabilities, and code smells.
- Develop and maintain infrastructure as code using tools like Terraform.
- Work closely with development, operations, and security teams to ensure seamless integration of CI/CD processes.
- Provide support for deployment issues and troubleshoot problems in the CI/CD pipeline.
- Maintain comprehensive documentation of CI/CD processes, tools, and best practices.
- Train and mentor junior team members on edge and cloud computing technologies and best practices.

Qualifications

Skills / Project Experience:
- Cloud Platforms: Extensive experience with cloud platforms such as AWS, Azure, or Google Cloud.
- CI/CD: Proficiency in CI/CD tools such as Jenkins, GitLab CI, CircleCI, Harness, GitHub Actions, Argo CD, or Travis CI.
- Kubernetes Manifests: Proficiency in using Helm charts and Kustomize for templating and managing Kubernetes manifests.
- Scripting: Strong skills in scripting languages such as Bash, Python, or Groovy for automation tasks.
- Security: Experience working with SonarQube, Snyk, or similar tools.
- Containerization and Orchestration: Expertise in containerization technologies like Docker and proficiency in orchestration tools like Kubernetes.
- Programming Languages: Proficiency in languages such as Python, Java, C++, or similar.
- Infrastructure as Code: Extensive experience with Ansible, Terraform, Chef, Puppet, or similar tools.
- Project Management: Proven track record in leading large-scale cloud infrastructure projects.
- Collaboration: Effective communicator with cross-functional teams.

Must Have:
- Good interpersonal and communication skills.
- Flexibility to adapt and apply innovation to varied business domains, and to apply technical solutioning and learnings to use cases across business domains and industries.
- Knowledge of and experience working with Microsoft Office tools.

Good to Have:
- Problem-Solving: Strong analytical and troubleshooting skills to address client-specific challenges.
- Adaptability: Ability to quickly adapt to changing client requirements and emerging technologies.
- Project Leadership: Demonstrated leadership in managing client projects, ensuring timely delivery and client satisfaction.
- Business Acumen: Understanding of business processes and the ability to align technical solutions with client business goals.

Education: B.E./B.Tech/M.C.A./M.Sc (CS) degree or equivalent from an accredited university.

Prior Experience: 3-7 years of experience working with DevOps, Terraform, Ansible, Jenkins, CI/CD, edge computing, monitoring, Infrastructure as Code (IaC), SonarQube, Snyk, Docker, and Kustomize.

Location: Bengaluru / Hyderabad / Gurugram

The team
Deloitte Consulting LLP’s Technology Consulting practice is dedicated to helping our clients build tomorrow by solving today’s complex business problems involving strategy, procurement, design, delivery, and assurance of technology solutions. Our service areas include analytics and information management, delivery, cyber risk services, and technical strategy and architecture, as well as the spectrum of digital strategy, design, and development services. The Core Business Operations practice optimizes clients’ business operations and helps them take advantage of new technologies. It drives product and service innovation, improves financial performance, accelerates speed to market, and operates client platforms to innovate continuously. Learn more about our Technology Consulting practice on www.deloitte.com. #HC&IE

Our purpose
Deloitte’s purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities.

Our people and culture
Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work.

Professional development
At Deloitte, professionals have the opportunity to work with some of the best and discover what works best for them. Here, we prioritize professional growth, offering diverse learning and networking opportunities to help accelerate careers and enhance leadership skills. Our state-of-the-art DU: The Leadership Center in India, located in Hyderabad, represents a tangible symbol of our commitment to the holistic growth and development of our people. Explore DU: The Leadership Center in India.

Benefits to help you thrive
At Deloitte, we know that great people make a great organization. Our comprehensive rewards program helps us deliver a distinctly Deloitte experience that empowers our professionals to thrive mentally, physically, and financially, and to live their purpose. To support our professionals and their loved ones, we offer a broad range of benefits. Eligibility requirements may be based on role, tenure, type of employment and/or other criteria. Learn more about what working at Deloitte can mean for you.

Recruiting tips
From developing a standout resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters.

Requisition code: 303055

Posted 1 week ago

Apply

5.0 years

0 Lacs

Ahmedabad, Gujarat, India

Remote

Source: LinkedIn

Company Description
Prioxis Technologies, formerly known as HypeTeq Software Solutions, is dedicated to delivering exceptional IT services and custom software solutions. With 5+ years of experience, we have successfully completed over 100 projects across various industries, serving clients in more than 8 countries. Our team comprises over 50 certified software developers. As a Microsoft Gold Partner, we are recognized for our innovative approach and technical excellence in technology outsourcing. Our services include custom software development, cloud consulting, front-end and back-end development, enterprise mobility, and DevOps. Founded in 2019, Prioxis Technologies aims to empower businesses with tailor-made technology solutions.

Responsibilities

🛡️ Security & Compliance
- Lead Security Risk Assessments (SRA) and Data Classification Requests (DCR).
- Ensure compliance with Roche’s security standards.
- Conduct security audits and implement remediation plans.

💰 Financial Operations (FinOps)
- Optimize cloud infrastructure costs.
- Manage and monitor MLOps budget plans.
- Provide cost analysis and financial reporting.

🏗️ Architecture & Engineering
- Design, document, and maintain MLOps infrastructure.
- Contribute to architectural best practices.
- Implement and deploy robust MLOps pipelines.

🔍 Technical Evaluations
- Run Proofs of Concept (PoCs) for emerging tools.
- Evaluate solutions and recommend technical direction.

🧩 Task Management
- Break down technical epics into actionable tasks.
- Identify dependencies and propose optimal approaches.

Requirements
- 5+ years in MLOps, DevOps, or related roles.
- Proficient in Python.
- Hands-on with AWS, Docker, and Kubernetes (Helm, Kustomize).
- Experience with Terraform or CDK (Infrastructure as Code).
- Skilled in CI/CD tools: GitLab CI, ArgoCD.
- Familiar with observability tools: Grafana, ELK, or Datadog.
- Bonus: experience with Kubeflow or KServe (a minimal KServe sketch appears below).
- Solid understanding of system architecture and design patterns.

Work Timing: 12:30 PM – 9:30 PM IST
Contract Duration: 6 Months
Location: Remote (India-based candidates preferred)
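For the KServe familiarity listed as a bonus above, here is a minimal InferenceService sketch, assuming KServe v0.8+ with the v1beta1 API; the name and model location are hypothetical:

  apiVersion: serving.kserve.io/v1beta1
  kind: InferenceService
  metadata:
    name: sklearn-demo                                # hypothetical
  spec:
    predictor:
      model:
        modelFormat:
          name: sklearn
        storageUri: gs://example-bucket/models/demo   # hypothetical model path

Behind this single resource, KServe provisions the serving pod, routing, and (in serverless mode) scale-to-zero autoscaling for the model.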

Posted 1 week ago

Apply

0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

Source: LinkedIn

Technical Operations for pRED-MLOps

Job Profile Summary:
Support technical operations for pRED-MLOps, focusing on security, financial operations, architecture, technical evaluations, and task breakdown.

Key Responsibilities

Security
● Drive security processes including Security Risk Assessment (SRA) and Data Classification Request (DCR).
● Ensure compliance with Roche security policies.
● Conduct security audits and lead implementation of remediation plans.

Financial Operations (FinOps)
● Manage and optimize cloud infrastructure costs.
● Develop and monitor budget plans for MLOps operations.
● Provide regular cost analysis and reporting.

Architecture and Engineering Support
● Contribute to the design and maintenance of MLOps solutions and infrastructure.
● Contribute to architectural best practices.
● Support the team in documenting system architecture and configurations.
● Contribute to the hands-on implementation of MLOps solutions and infrastructure.

Technical Explorations/Evaluations
● Conduct Proofs of Concept (PoCs) for new technologies.
● Evaluate technical solutions and make recommendations.

Technical Task Breakdown
● Support the team in:
○ breaking down tasks and epics into manageable components
○ identifying dependencies between tasks
○ proposing an optimal approach

Qualifications
● Security Experience: Experience with security processes, preferably Roche SRA/DCR.
● FinOps Experience: Experience managing and optimizing cloud costs.
● Architecture: Understanding of system architecture principles and design patterns; previous experience in MLOps or a similar area of work is preferred.
● Technical Skills:
○ Proficient in Python.
○ Extensive hands-on experience with cloud technologies, preferably AWS.
○ Extensive hands-on experience with Docker and Kubernetes (incl. Helm, Kustomize).
○ Familiar with Infrastructure-as-Code tools, such as Terraform/CDK.
○ Familiar with CI/CD tools, such as GitLab CI and ArgoCD.
○ Familiar with observability stacks, such as the Grafana Labs stack, ELK, or Datadog.
○ Previous experience with popular MLOps technologies, such as Kubeflow or KServe, is preferred.
● Task Management: Experience in breaking down technical tasks and epics.

Posted 1 week ago

Apply

10.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Source: LinkedIn

Job Summary:
We are seeking a highly skilled and experienced Lead Infrastructure Engineer to join our dynamic team. The ideal candidate will be passionate about building and maintaining complex systems, with a holistic approach to architecting infrastructure that survives and thrives in production. You will play a key role in designing, implementing, and managing cloud infrastructure, ensuring scalability, availability, security, and optimal performance versus spend. You will also provide technical leadership and mentorship to other engineers, and engage with clients to understand their needs and deliver effective solutions.

Responsibilities:
- Design, architect, and implement scalable, highly available, and secure infrastructure solutions, primarily on Amazon Web Services (AWS).
- Develop and maintain Infrastructure as Code (IaC) using Terraform or AWS CDK for enterprise-scale maintainability and repeatability.
- Implement robust access control via IAM roles and policy orchestration, ensuring least privilege and auditability across multi-environment deployments.
- Contribute to secure, scalable identity and access patterns, including OAuth2-based authorization flows and dynamic IAM role mapping across environments.
- Support deployment of infrastructure Lambda functions.
- Troubleshoot issues and collaborate with cloud vendors on managed service reliability and roadmap alignment.
- Utilize Kubernetes deployment tools such as Helm and Kustomize in combination with GitOps tools such as ArgoCD for container orchestration and management (an image-promotion sketch appears below).
- Design and implement CI/CD pipelines using platforms like GitHub, GitLab, Bitbucket, Cloud Build, and Harness, with a focus on rolling deployments, canaries, and blue/green deployments.
- Ensure auditability and observability of pipeline states.
- Implement security best practices, audit, and compliance requirements within the infrastructure.
- Provide technical leadership, mentorship, and training to engineering staff.
- Engage with clients to understand their technical and business requirements, and provide tailored solutions. If needed, lead agile ceremonies and project planning, including developing agile boards and backlogs with support from our Service Delivery Leads.
- Troubleshoot and resolve complex infrastructure issues.
- Potentially participate in pre-sales activities and provide technical expertise to sales teams.

Qualifications:
- 10+ years of experience in an Infrastructure Engineer or similar role.
- Extensive experience with Amazon Web Services (AWS).
- Proven ability to architect for scale, availability, and high-performance workloads.
- Ability to plan and execute zero-disruption migrations.
- Experience with enterprise IAM and familiarity with authentication technologies such as OAuth2 and OIDC.
- Deep knowledge of Infrastructure as Code (IaC) with Terraform and/or AWS CDK.
- Strong experience with Kubernetes and related tools (Helm, Kustomize, ArgoCD).
- Solid understanding of Git, branching models, CI/CD pipelines, and deployment strategies.
- Experience with security, audit, and compliance best practices.
- Excellent problem-solving and analytical skills.
- Strong communication and interpersonal skills, with the ability to engage with both technical and non-technical stakeholders.
- Experience in technical leadership, mentoring, team-forming, and fostering self-organization and ownership.
- Experience with client relationship management and project planning.

Certifications:
- Relevant certifications (for example, Certified Kubernetes Administrator, AWS Certified Solutions Architect - Professional, AWS Certified DevOps Engineer - Professional, etc.).
- Software development experience (for example, Terraform, Python).
- Experience with machine learning infrastructure.

Education: B.Tech/BE in computer science, a related field, or equivalent experience.
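As a sketch of the Helm/Kustomize-plus-ArgoCD flow named in the responsibilities above, here is an overlay that pins the image tag a CD pipeline advances per environment; the registry and names are hypothetical:

  # overlays/prod/kustomization.yaml
  resources:
    - ../../base
  images:
    - name: app                                    # image name as written in the base manifests
      newName: registry.example.com/platform/app   # hypothetical registry
      newTag: "1.42.0"                             # bumped by the pipeline, e.g. via
                                                   # `kustomize edit set image app=...:1.43.0`

With ArgoCD watching the overlay path, committing a tag bump is the deployment; canary and blue/green strategies can then be layered on with tooling such as Argo Rollouts.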

Posted 1 week ago

Apply

6.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Source: LinkedIn

Job Summary:
We are seeking a highly skilled and experienced Senior Infrastructure Engineer to join our dynamic team. The ideal candidate will be passionate about building and maintaining complex systems, with a holistic approach to architecture. You will play a key role in designing, implementing, and managing cloud infrastructure, ensuring scalability, availability, security, and optimal performance. You will also provide mentorship to other engineers, and engage with clients to understand their needs and deliver effective solutions.

Responsibilities:
- Design, architect, and implement scalable, highly available, and secure infrastructure solutions, primarily on Amazon Web Services (AWS).
- Develop and maintain Infrastructure as Code (IaC) using Terraform or AWS CDK for enterprise-scale maintainability and repeatability.
- Implement robust access control via IAM roles and policy orchestration, ensuring least privilege and auditability across multi-environment deployments.
- Contribute to secure, scalable identity and access patterns, including OAuth2-based authorization flows and dynamic IAM role mapping across environments.
- Support deployment of infrastructure Lambda functions.
- Troubleshoot issues and collaborate with cloud vendors on managed service reliability and roadmap alignment.
- Utilize Kubernetes deployment tools such as Helm and Kustomize in combination with GitOps tools such as ArgoCD for container orchestration and management.
- Design and implement CI/CD pipelines using platforms like GitHub, GitLab, Bitbucket, Cloud Build, and Harness, with a focus on rolling deployments, canaries, and blue/green deployments.
- Ensure auditability and observability of pipeline states.
- Implement security best practices, audit, and compliance requirements within the infrastructure.
- Engage with clients to understand their technical and business requirements, and provide tailored solutions.
- If needed, lead agile ceremonies and project planning, including developing agile boards and backlogs with support from our Service Delivery Leads.
- Troubleshoot and resolve complex infrastructure issues.

Qualifications:
- 6+ years of experience in infrastructure engineering or a similar role.
- Extensive experience with Amazon Web Services (AWS).
- Proven ability to architect for scale, availability, and high-performance workloads.
- Deep knowledge of Infrastructure as Code (IaC) with Terraform.
- Strong experience with Kubernetes and related tools (Helm, Kustomize, ArgoCD).
- Solid understanding of Git, branching models, CI/CD pipelines, and deployment strategies.
- Experience with security, audit, and compliance best practices.
- Excellent problem-solving and analytical skills.
- Strong communication and interpersonal skills, with the ability to engage with both technical and non-technical stakeholders.
- Experience in technical mentoring, team-forming, and fostering self-organization and ownership.
- Experience with client relationship management and project planning.

Certifications:
- Relevant certifications (e.g., Certified Kubernetes Administrator, AWS Certified Machine Learning Engineer - Associate, AWS Certified Data Engineer - Associate, AWS Certified Developer - Associate, etc.).
- Software development experience (e.g., Terraform, Python).
- Experience with, or exposure to, machine learning infrastructure.

Education: B.Tech/BE in computer science, a related field, or equivalent experience.

Posted 1 week ago

Apply

3.0 years

0 Lacs

Noida, Uttar Pradesh, India

Remote

Source: LinkedIn

Application Management Services
AMS’s mission is to maximize the contributions of MMC Technology as a business-driven, future-ready and competitive function by reducing the time and cost spent managing applications. AMS, a business unit of Marsh McLennan, is seeking candidates for the following position, based in the Gurgaon/Noida office: Principal Engineer - Kubernetes Platform Engineer.

Position overview:
We are seeking a skilled Kubernetes Platform Engineer with a strong background in cloud technologies (AWS, Azure) to manage, configure, and support Kubernetes infrastructure in a dynamic, high-availability environment. The engineer collaborates with development, DevOps and other technology teams to ensure that the Kubernetes platform ecosystem is reliable, scalable and efficient. The ideal candidate must possess hands-on experience in Kubernetes cluster operations management and container orchestration, along with strong problem-solving skills. Experience in infrastructure platform management is required.

Responsibilities:
- Implement and maintain platform services in the Kubernetes infrastructure.
- Perform upgrades and patch management for Kubernetes and its associated components, including but not limited to the API management system.
- Monitor and optimize Kubernetes resources such as pods, nodes, and namespaces.
- Implement and enforce Kubernetes security best practices, including RBAC, network policies, and secrets management.
- Work with the security team to ensure container and cluster compliance with organizational policies.
- Troubleshoot and resolve issues related to Kubernetes infrastructure in a timely manner.
- Provide technical guidance and support to developers and DevOps teams.
- Maintain detailed documentation of Kubernetes configurations and operational processes.
- Note: maintenance and support of CI/CD pipelines are not part of the support scope of this position.

Preferred skills and experience:
- At least 3 years of experience managing and supporting Kubernetes clusters and their ecosystem at the platform operations layer.
- At least 2 years of infrastructure management and support, including but not limited to SSL certificates and virtual IPs.
- Proficiency in managing Kubernetes clusters using tools such as kubectl, Helm, or Kustomize.
- In-depth knowledge and experience of container technologies, including Docker.
- Experience with cloud platforms (AWS, GCP, Azure) and managed Kubernetes services (EKS, GKE, AKS).
- Understanding of infrastructure-as-code (IaC) tools such as Terraform or CloudFormation.
- Experience with monitoring tools like Prometheus, Grafana, or Datadog.
- Knowledge of centralized logging systems like Fluentd, Logstash, or Loki.
- Proficiency in scripting languages (e.g., Bash, Python, or Go).
- Experience supporting public cloud or hybrid cloud environments.

Marsh McLennan (NYSE: MMC) is the world’s leading professional services firm in the areas of risk, strategy and people. The Company’s 85,000 colleagues advise clients in 130 countries. With annual revenue of over $20 billion, Marsh McLennan helps clients navigate an increasingly dynamic and complex environment through four market-leading businesses. Marsh advises individual and commercial clients of all sizes on insurance broking and innovative risk management solutions. Guy Carpenter develops advanced risk, reinsurance and capital strategies that help clients grow profitably and pursue emerging opportunities. Mercer delivers advice and technology-driven solutions that help organizations redefine the world of work, reshape retirement and investment outcomes, and unlock health and wellbeing for a changing workforce. Oliver Wyman serves as a critical strategic, economic and brand advisor to private sector and governmental clients. For more information, visit marshmclennan.com, or follow us on LinkedIn and Twitter.

Marsh McLennan is committed to embracing a diverse, inclusive and flexible work environment. We aim to attract and retain the best people regardless of their sex/gender, marital or parental status, ethnic origin, nationality, age, background, disability, sexual orientation, caste, gender identity or any other characteristic protected by applicable law.

Marsh McLennan is committed to hybrid work, which includes the flexibility of working remotely and the collaboration, connections and professional development benefits of working together in the office. All Marsh McLennan colleagues are expected to be in their local office or working onsite with clients at least three days per week. Office-based teams will identify at least one “anchor day” per week on which their full team will be together in person.

R_310034

Posted 1 week ago

Apply

8.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Source: LinkedIn

Job Description
We are seeking a highly skilled and passionate GKE Platform Engineer to join our growing team. This role is ideal for someone with deep experience in managing Google Kubernetes Engine (GKE) platforms at scale, particularly with enterprise-level workloads on Google Cloud Platform (GCP). As part of a dynamic team, you will design, develop, and optimize Kubernetes-based solutions, using tools like GitHub Actions, ACM, KCC, and workload identity to provide high-quality platform services to developers. You will drive CI/CD pipelines across multiple lifecycle stages, manage GKE environments at scale, and enhance the developer experience on the platform. You should have a strong mindset for developer experience, focused on creating reliable, scalable, and efficient infrastructure to support developer needs. This is a fast-paced environment where collaboration across teams is key to delivering impactful results.

Responsibilities:
- GKE Platform Management at Scale: Manage and optimize large-scale GKE environments in a multi-cloud and hybrid-cloud context, ensuring the platform is highly available, scalable, and secure.
- CI/CD Pipeline Development: Build and maintain CI/CD pipelines using tools like GitHub Actions to automate deployment workflows across the GKE platform. Ensure smooth integration and delivery of services throughout their lifecycle.
- Enterprise GKE Management: Leverage advanced features of GKE such as Anthos Config Management (ACM) and Config Connector (KCC) to manage GKE clusters efficiently at enterprise scale.
- Workload Identity & Security: Implement workload identity and security best practices to ensure secure access and management of GKE workloads.
- Custom Operators & Controllers: Develop custom operators and controllers for GKE, automating the deployment and management of custom services to enhance the developer experience on the platform.
- Developer Experience Focus: Maintain a developer-first mindset to create an intuitive, reliable, and easy-to-use platform for developers. Collaborate with development teams to ensure seamless integration with the GKE platform.
- GKE Deployment Pipelines: Provide guidelines and best practices for GKE deployment pipelines, leveraging tools like Kustomize and Helm to manage and deploy GKE configurations effectively. Ensure pipelines are optimized for scalability, security, and repeatability.
- Zero Trust Model: Ensure GKE clusters operate effectively within a Zero Trust security model. Maintain a strong understanding of the principles of Zero Trust security, including identity and access management, network segmentation, and workload authentication.
- Ingress Patterns: Design and manage multi-cluster and multi-regional ingress patterns to ensure seamless traffic management and high availability across geographically distributed Kubernetes clusters.
- Deep Troubleshooting & Support: Provide deep troubleshooting knowledge and support to help developers pinpoint issues across the GKE platform, focusing on debugging complex Kubernetes issues, application failures, and performance bottlenecks. Utilize diagnostic tools and debugging techniques to resolve critical platform-related issues.
- Observability & Logging Tools: Implement and maintain observability across GKE clusters, using monitoring, logging, and alerting tools like Prometheus, Dynatrace, and Splunk. Ensure proper logging and metrics are in place to enable developers to effectively monitor and diagnose issues within their applications.
- Platform Automation & Integration: Automate platform management tasks, such as scaling, upgrading, and patching, using tools like Terraform, Helm, and GKE APIs.
- Continuous Improvement & Learning: Stay up to date with the latest trends and advancements in Kubernetes, GKE, and Google Cloud services to continuously improve platform capabilities.

Qualifications:
Experience:
- 8+ years of overall experience in cloud platform engineering, infrastructure management, and enterprise-scale operations.
- 5+ years of hands-on experience with Google Cloud Platform (GCP), including designing, deploying, and managing cloud infrastructure and services.
- 5+ years of experience specifically with Google Kubernetes Engine (GKE), managing large-scale, production-grade clusters in enterprise environments.
- Experience with deploying, scaling, and maintaining GKE clusters in production environments.
- Hands-on experience with CI/CD practices and automation tools like GitHub Actions.
- Proven track record of building and managing GKE platforms in a fast-paced, dynamic environment.
- Experience developing custom Kubernetes operators and controllers for managing complex workloads.
- Deep Troubleshooting Knowledge: Strong ability to troubleshoot complex platform issues, with expertise in diagnosing problems across the entire GKE stack.

Technical Skills:
Must Have:
- Google Cloud Platform (GCP): Extensive hands-on experience with GCP, particularly Kubernetes Engine (GKE), Cloud Storage, Cloud Pub/Sub, Cloud Logging, and Cloud Monitoring.
- Kubernetes (GKE) at Scale: Expertise in managing large-scale GKE clusters, including security configurations, networking, and workload management.
- CI/CD Automation: Strong experience with CI/CD pipeline automation tools, particularly GitHub Actions, for building, testing, and deploying applications.
- Kubernetes Operators & Controllers: Ability to develop custom Kubernetes operators and controllers to automate and manage applications on GKE.
- Workload Identity & Security: Solid understanding of Kubernetes workload identity and access management (IAM) best practices, including integration with GCP Identity and Google Cloud IAM.
- Anthos & ACM: Hands-on experience with Anthos Config Management (ACM) and Config Connector (KCC) to manage and govern GKE clusters and workloads at scale.
- Infrastructure as Code (IaC): Experience with tools like Terraform to manage GKE infrastructure and cloud resources.
- Helm & Kustomize: Experience in using Helm and Kustomize for packaging, deploying, and managing Kubernetes resources efficiently. Ability to create reusable and scalable Kubernetes deployment templates.
- Observability & Logging Tools: Experience with observability tools such as Prometheus, Dynatrace, and Splunk to monitor and log GKE performance, providing developers with actionable insights for troubleshooting.
Nice to Have:
- Zero Trust Security Model: Strong understanding of implementing and maintaining security in a Zero Trust model for GKE, including workload authentication, identity management, and network security.
- Ingress Patterns: Experience with designing and managing multi-cluster and multi-regional ingress in Kubernetes to ensure fault tolerance, traffic management, and high availability.
- Familiarity with Open Policy Agent (OPA) for policy enforcement in Kubernetes environments.

Education & Certification:
- Bachelor’s degree in Computer Science, Engineering, or a related field.
- Relevant GCP certifications, such as Google Cloud Certified Professional Cloud Architect or Google Cloud Certified Professional Cloud Developer.

Soft Skills:
- Collaboration: Strong ability to work with cross-functional teams to ensure platform solutions meet development and operational needs.
- Problem-Solving: Excellent problem-solving skills with a focus on troubleshooting and performance optimization.
- Communication: Strong written and verbal communication skills, able to communicate effectively with both technical and non-technical teams.
- Initiative & Ownership: Ability to take ownership of platform projects, driving them from conception to deployment with minimal supervision.
- Adaptability: Willingness to learn new technologies and adjust to evolving business needs.
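
Since this posting leans on Kustomize for GKE deployment pipelines, here is a minimal sketch of the overlay pattern it refers to: a shared base plus a production overlay, rendered with kubectl's built-in Kustomize. All names (the podinfo Deployment, the prod overlay, the image tag) are illustrative, and kubectl is assumed to be on PATH.

```python
# Minimal sketch of a Kustomize base + overlay, rendered with "kubectl kustomize".
# Assumes kubectl is on PATH; names ("podinfo", "prod") are illustrative only.
import pathlib
import subprocess

FILES = {
    "base/deployment.yaml": """\
apiVersion: apps/v1
kind: Deployment
metadata:
  name: podinfo
spec:
  replicas: 1
  selector:
    matchLabels: {app: podinfo}
  template:
    metadata:
      labels: {app: podinfo}
    spec:
      containers:
      - name: podinfo
        image: ghcr.io/stefanprodan/podinfo:6.5.0
""",
    "base/kustomization.yaml": """\
resources:
- deployment.yaml
""",
    "overlays/prod/kustomization.yaml": """\
resources:
- ../../base
replicas:
- name: podinfo
  count: 3
commonLabels:
  env: prod
""",
}

for path, body in FILES.items():
    p = pathlib.Path(path)
    p.parent.mkdir(parents=True, exist_ok=True)
    p.write_text(body)

# Render the prod overlay; piping the output to "kubectl apply -f -" would deploy it.
print(subprocess.run(["kubectl", "kustomize", "overlays/prod"],
                     capture_output=True, text=True, check=True).stdout)
```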

Posted 1 week ago

Apply

1.0 - 6.0 years

1 - 6 Lacs

Pune, Maharashtra, India

On-site


The Cloud Platforms team is responsible for the day-to-day maintenance and operation of various cloud platforms and their supporting services. Cloud Platform Engineers are engaged in the project lifecycle as infrastructure is designed and ultimately inherit responsibility for the new environment. The primary responsibility of the team is to react to incidents in Cloud, PaaS, Cloud Foundry, Kubernetes and pipeline infrastructure platforms. Secondary objectives of the team are to improve the resiliency of systems through active contribution back to design as well as automating repetitive tasks. Requires a moderate understanding of various platforms used for delivery of traditional and cloud services. Requires the ability to write automation code to remediate repeat issues. Requires a working understanding of Continuous Delivery principles, software development methodologies and infrastructure automation. This is a hands-on role that deals with reacting to and proactively avoiding issues with infrastructure platforms used for delivery of cloud services. Do you enjoy creating code-based solutions to replace manual processes? Are you the type of person that will drive the solution and team in the best direction instead of the easiest?

Role:
- Interact with and create effective monitoring systems that reduce the need for human intervention in daily web operations
- Respond to incidents in various platforms
- Create high-quality and rugged code solutions to automate detection and recovery of common operational problems
- Identify and create automation that can be used by initial responders for easily resolvable issues

All About You:
- Expert understanding of operational duties, including compensated on-call shifts
- Working knowledge of target platforms (Cloud Foundry, Concourse, Kubernetes, Helm, Kustomize, kubectl, RKE, kubeadm)
- Experience with Python and Bash scripting
- Familiarity with F5 load balancers, API technology and XML gateways preferred
- Highly attuned to security needs and best practices
- Ability to write quality automation code in more than one language
- Familiarity with multiple cloud vendors and past experience a plus
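
For a flavor of the detection-and-recovery automation this role describes, here is a minimal sketch using the official Kubernetes Python client. The namespace and restart threshold are illustrative; the posting does not specify the team's actual tooling.

```python
# Sketch of auto-recovery automation: find pods stuck in CrashLoopBackOff and
# delete them so their controller reschedules fresh replicas.
# Assumes the "kubernetes" Python client and a reachable kubeconfig.
from kubernetes import client, config

RESTART_THRESHOLD = 5  # illustrative cutoff before we intervene

config.load_kube_config()  # inside a cluster, use config.load_incluster_config()
v1 = client.CoreV1Api()

for pod in v1.list_namespaced_pod("production").items:
    for status in pod.status.container_statuses or []:
        waiting = status.state.waiting
        crash_looping = waiting and waiting.reason == "CrashLoopBackOff"
        if crash_looping and status.restart_count >= RESTART_THRESHOLD:
            print(f"recycling {pod.metadata.name} ({status.restart_count} restarts)")
            v1.delete_namespaced_pod(pod.metadata.name, "production")
            break
```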

Posted 1 week ago

Apply

10.0 years

0 Lacs

Pune, Maharashtra, India

On-site


About the role: Want to be on a team full of results-driven individuals who are constantly seeking to innovate? Want to make an impact? At SailPoint, our Data Platform team does just that. SailPoint is seeking a Senior Staff Data/Software Engineer to help build robust data ingestion and processing systems to power our data platform. This role is a critical bridge between teams. It requires excellent organization and communication as the coordinator of work across multiple engineers and projects. We are looking for well-rounded engineers who are passionate about building and delivering reliable, scalable data pipelines. This is a unique opportunity to build something from scratch but have the backing of an organization that has the muscle to take it to market quickly, with a very satisfied customer base.

Responsibilities:
- Spearhead the design and implementation of ELT processes, especially focused on extracting data from and loading data into various endpoints, including RDBMS, NoSQL databases and data warehouses.
- Develop and maintain scalable data pipelines for both stream and batch processing leveraging JVM-based languages and frameworks.
- Collaborate with cross-functional teams to understand diverse data sources and environment contexts, ensuring seamless integration into our data ecosystem.
- Utilize the AWS service stack wherever possible to implement lean design solutions for data storage, data integration and data streaming problems.
- Develop and maintain workflow orchestration using tools like Apache Airflow.
- Stay abreast of emerging technologies in the data engineering space, proactively incorporating them into our ETL processes.
- Organize work from multiple Data Platform teams and customers with other Data Engineers.
- Communicate status, progress and blockers of active projects to Data Platform leaders.
- Thrive in an environment with ambiguity, demonstrating adaptability and problem-solving skills.

Qualifications:
- BS in computer science or a related field.
- 10+ years of experience in data engineering or a related field.
- Demonstrated system-design experience orchestrating ELT processes targeting data.
- Excellent communication skills.
- Demonstrated ability to internalize business needs and drive execution from a small team.
- Excellent organization of work tasks and status of new and in-flight tasks, including impact analysis of new work.
- Strong understanding of Python.
- Good understanding of Java.
- Strong understanding of SQL and data modeling.
- Familiarity with Airflow.
- Hands-on experience with at least one streaming or batch processing framework, such as Flink or Spark.
- Hands-on experience with containerization platforms such as Docker and container orchestration tools like Kubernetes.
- Proficiency in the AWS service stack.
- Experience with DBT, Kafka, Jenkins and Snowflake.
- Experience leveraging tools such as Kustomize, Helm and Terraform for implementing infrastructure as code.
- Strong interest in staying ahead of new technologies in the data engineering space.
- Comfortable working in ambiguous team situations, showcasing adaptability and drive in solving novel problems in the data engineering space.

Preferred:
- Experience with AWS
- Experience with Continuous Delivery
- Experience instrumenting code for gathering production performance metrics
- Experience in working with a Data Catalog tool (e.g., Atlan)

What success looks like in the role
Within the first 30 days you will:
- Onboard into your new role, get familiar with our product offering and technology, proactively meet peers and stakeholders, and set up your test and development environment.
- Seek to deeply understand business problems or common engineering challenges.
- Learn the skills and abilities of your teammates and align expertise with available work.
By 90 days:
- Proactively collaborate on, discuss, debate and refine ideas, problem statements, and software designs with different (sometimes many) stakeholders, architects and members of your team.
- Increase team velocity and show contribution to improving maturation and delivery of the Data Platform vision.
By 6 months:
- Collaborate with Product Management and the Engineering Lead to estimate and deliver small to medium complexity features more independently.
- Occasionally serve as a debugging and implementation expert during escalations of systems issues that have evaded the ability of less experienced engineers to solve in a timely manner.
- Share support of critical team systems by participating in calls with customers, learning the characteristics of currently running systems, and participating in improvements.
- Engage with team members, providing them with challenging work and building cross-skill expertise.
- Plan project support and execution with peers and Data Platform leaders.

SailPoint is an equal opportunity employer and we welcome all qualified candidates to apply to join our team. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, protected veteran status, or any other category protected by applicable law. Alternative methods of applying for employment are available to individuals unable to submit an application through this site because of a disability. Contact hr@sailpoint.com or mail to 11120 Four Points Dr, Suite 100, Austin, TX 78726, to discuss reasonable accommodations.
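
Since workflow orchestration with Apache Airflow is called out above, a minimal sketch of a daily ELT DAG follows. The DAG id, task names, and stubbed extract/load logic are illustrative, and Airflow 2.x is assumed.

```python
# Minimal daily ELT DAG: an extract task hands rows to a load task via XCom.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract(**context):
    # Pull rows from a source RDBMS (stubbed here); the return value is pushed to XCom.
    return [{"id": 1, "value": "example"}]

def load(ti, **context):
    rows = ti.xcom_pull(task_ids="extract")
    print(f"loading {len(rows)} rows into the warehouse")

with DAG(
    dag_id="example_elt",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    PythonOperator(task_id="extract", python_callable=extract) >> \
        PythonOperator(task_id="load", python_callable=load)
```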

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

On-site


SailPoint is the leader in identity security for the cloud enterprise. Our identity security solutions secure and enable thousands of companies worldwide, giving our customers unmatched visibility into the entirety of their digital workforce, ensuring workers have the right access to do their job – no more, no less. Built on a foundation of AI and ML, our Identity Security Cloud Platform delivers the right level of access to the right identities and resources at the right time—matching the scale, velocity, and changing needs of today’s cloud-oriented, modern enterprise.

About the role: Want to be on a team full of results-driven individuals who are constantly seeking to innovate? Want to make an impact? At SailPoint, our Data Platform team does just that. SailPoint is seeking a Senior Data/Software Engineer to help build robust data ingestion and processing systems to power our data platform. We are looking for well-rounded engineers who are passionate about building and delivering reliable, scalable data pipelines. This is a unique opportunity to build something from scratch but have the backing of an organization that has the muscle to take it to market quickly, with a very satisfied customer base.

Responsibilities:
- Spearhead the design and implementation of ELT processes, especially focused on extracting data from and loading data into various endpoints, including RDBMS, NoSQL databases and data warehouses.
- Develop and maintain scalable data pipelines for both stream and batch processing leveraging JVM-based languages and frameworks.
- Collaborate with cross-functional teams to understand diverse data sources and environment contexts, ensuring seamless integration into our data ecosystem.
- Utilize the AWS service stack wherever possible to implement lean design solutions for data storage, data integration and data streaming problems.
- Develop and maintain workflow orchestration using tools like Apache Airflow.
- Stay abreast of emerging technologies in the data engineering space, proactively incorporating them into our ETL processes.
- Thrive in an environment with ambiguity, demonstrating adaptability and problem-solving skills.

Qualifications:
- BS in computer science or a related field.
- 5+ years of experience in data engineering or a related field.
- Demonstrated system-design experience orchestrating ELT processes targeting data.
- Must be willing to work 4 overlapping hours with the US timezone; you will work closely with US-based managers and engineers.
- Hands-on experience with at least one streaming or batch processing framework, such as Flink or Spark.
- Hands-on experience with containerization platforms such as Docker and container orchestration tools like Kubernetes.
- Proficiency in the AWS service stack.
- Experience with DBT, Kafka, Jenkins and Snowflake.
- Experience leveraging tools such as Kustomize, Helm and Terraform for implementing infrastructure as code.
- Strong interest in staying ahead of new technologies in the data engineering space.
- Comfortable working in ambiguous team situations, showcasing adaptability and drive in solving novel problems in the data engineering space.

Preferred:
- Experience with AWS
- Experience with Continuous Delivery
- Experience instrumenting code for gathering production performance metrics
- Experience in working with a Data Catalog tool (e.g., Atlan or Alation)

What success looks like in the role
Within the first 30 days you will:
- Onboard into your new role, get familiar with our product offering and technology, proactively meet peers and stakeholders, and set up your test and development environment.
- Seek to deeply understand business problems or common engineering challenges and propose software architecture designs to solve them elegantly by abstracting useful common patterns.
By 90 days:
- Proactively collaborate on, discuss, debate and refine ideas, problem statements, and software designs with different (sometimes many) stakeholders, architects and members of your team.
- Take a committed approach to prototyping and co-implementing systems alongside less experienced engineers on your team—there’s no room for ivory towers here.
By 6 months:
- Collaborate with Product Management and the Engineering Lead to estimate and deliver small to medium complexity features more independently.
- Occasionally serve as a debugging and implementation expert during escalations of systems issues that have evaded the ability of less experienced engineers to solve in a timely manner.
- Share support of critical team systems by participating in calls with customers, learning the characteristics of currently running systems, and participating in improvements.

SailPoint is an equal opportunity employer and we welcome everyone to our team. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or veteran status.
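
For the stream/batch processing requirement above, here is a minimal PySpark batch sketch. The input path, column names, and output location are all illustrative; pyspark is assumed to be installed.

```python
# Small batch job: read raw events, aggregate logins per user, write Parquet.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("daily-user-rollup").getOrCreate()

events = spark.read.json("data/events.json")  # illustrative local input

rollup = (
    events
    .filter(F.col("event_type") == "login")
    .groupBy("user_id")
    .agg(F.count("*").alias("logins"), F.max("ts").alias("last_seen"))
)

rollup.write.mode("overwrite").parquet("out/user_rollup")
spark.stop()
```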

Posted 2 weeks ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


The Opportunity "We are seeking a senior software engineer to undertake a range of feature development tasks that continue the evolution of our DMP Streaming product. You will demonstrate the required potential and technical curiosity to work on software that utilizes a range of leading edge technologies and integration frameworks. Given your depth of experience, we also want you to technically guide more junior members of the team, instilling both good engineering practices and inspiring them to grow" What You'll Contribute Implement product changes, undertaking detailed design, programming, unit testing and deployment as required by our SDLC process Investigate and resolve reported software defects across supported platforms Work in conjunction with product management to understand business requirements and convert them into effective software designs that will enhance the current product offering Produce component specifications and prototypes as necessary Provide realistic and achievable project estimates for the creation and development of solutions. This information will form part of a larger release delivery plan Develop and test software components of varying size and complexity Design and execute unit, link and integration test plans, and document test results. Create test data and environments as necessary to support the required level of validation Work closely with the quality assurance team and assist with integration testing, system testing, acceptance testing, and implementation Produce relevant system documentation Participate in peer review sessions to ensure ongoing quality of deliverables. Validate other team members' software changes, test plans and results Maintain and develop industry knowledge, skills and competencies in software development What We're Seeking A Bachelor’s or Master’s degree in Computer Science, Engineering, or related field 10+ Java software development experience within an industry setting Ability to work in both Windows and UNIX/Linux operating systems Detailed understanding of software and testing methods Strong foundation and grasp of design models and database structures Proficient in Kubernetes, Docker, and Kustomize Exposure to the following technologies: Apache Storm, MySQL or Oracle, Kafka, Cassandra, OpenSearch, and API (REST) development Familiarity with Eclipse, Subversion and Maven Ability to lead and manage others independently on major feature changes Excellent communication skills with the ability to articulate information clearly with architects, and discuss strategy/requirements with team members and the product manager Quality-driven work ethic with meticulous attention to detail Ability to function effectively in a geographically-diverse team Ability to work within a hybrid Agile methodology Understand the design and development approaches required to build a scalable infrastructure/platform for large amounts of data ingestion, aggregation, integration and advanced analytics Experience of developing and deploying applications into AWS or a private cloud Exposure to any of the following: Hadoop, JMS, Zookeeper, Spring, JavaScript, Angular, UI Development Our Offer to You An inclusive culture strongly reflecting our core values: Act Like an Owner, Delight Our Customers and Earn the Respect of Others. The opportunity to make an impact and develop professionally by leveraging your unique strengths and participating in valuable learning experiences. 
Highly competitive compensation, benefits and rewards programs that encourage you to bring your best every day and be recognized for doing so. An engaging, people-first work environment offering work/life balance, employee resource groups, and social events to promote interaction and camaraderie. Show more Show less
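
As a rough illustration of the Kafka exposure this role asks for, a minimal consumer sketch with the kafka-python client follows. The topic, group id, and broker address are illustrative and not taken from the posting.

```python
# Consume JSON events from a Kafka topic and print them.
import json

from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "dmp-events",
    bootstrap_servers="localhost:9092",
    group_id="dmp-streaming-example",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
    auto_offset_reset="earliest",
)

for message in consumer:
    event = message.value
    # Real processing would hand the event to a Storm topology or similar.
    print(f"partition={message.partition} offset={message.offset} event={event}")
```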

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


Banyan Software provides the best permanent home for successful enterprise software companies, their employees, and customers. We are on a mission to acquire, build and grow great enterprise software businesses all over the world that have dominant positions in niche vertical markets. In recent years, Banyan was named the #1 fastest-growing private software company in the US on the Inc. 5000 and amongst the top 10 fastest-growing companies by the Deloitte Technology Fast 500. Founded in 2016 with a permanent capital base set up to preserve the legacy of founders, Banyan focuses on a buy-and-hold-for-life strategy for growing software companies that serve specialized vertical markets.

About SmartDocuments
Are you ready for the next step in your career as a Senior Software Developer? At SmartDocuments, you will work in a multidisciplinary team on innovative software solutions. With room for initiative, the latest technologies, and an Agile work environment, you will actively contribute to the development of our products.

Your Role as a Lead DevOps Engineer
We're looking for an experienced DevOps Engineer to help build, automate, and maintain both our SaaS cloud infrastructure and on-premise client installations. You'll work closely with development teams to implement robust CI/CD pipelines, manage Kubernetes deployments, and ensure security across our microservices architecture in multiple environments, with a focus on search, AI, and vector database technologies.

What Will You Do?
- Design and implement automated CI/CD pipelines for both cloud and on-premise environments.
- Develop and maintain automated installation and update processes for on-premise client deployments.
- Manage SaaS infrastructure and maintain Kubernetes clusters across multiple environments.
- Deploy and support Elasticsearch clusters and vector database solutions.
- Set up and manage Large Language Model (LLM) deployments using Azure AI services.
- Implement and maintain authentication systems across microservices and platforms.
- Provide support for PostgreSQL database infrastructure, including performance tuning and backup management.
- Monitor system health and performance across distributed systems using observability tools.
- Apply security best practices across all deployment models—cloud, hybrid, and on-premise.
- Collaborate with development teams to streamline and optimize deployment workflows and automation strategies.

Must-haves
- Version Control & CI/CD: Proficient in using Bitbucket for source control and managing CI/CD pipelines.
- Identity & Access Management: Hands-on experience with Keycloak and Azure SSO for secure authentication and user management.
- Kubernetes: In-depth knowledge of Kubernetes, including implementing SSO in containerized environments.
- Database Management: Skilled in PostgreSQL administration, including performance tuning and optimization.
- Search & Analytics: Experienced in configuring, optimizing, and scaling Elasticsearch for high-performance search and analytics workloads.
- Vector Databases: Practical exposure to vector database technologies, supporting AI/ML-driven applications.
- Azure AI & LLMs: Familiar with deploying and managing Large Language Models (LLMs) using Azure AI, particularly through the Azure OpenAI Service.
- System Architecture: Expertise in implementing microservices and MACH architecture (Microservices, API-first, Cloud-native, Headless).
- Code Quality & Governance: Proficient in integrating SonarQube for continuous code quality analysis and enforcement across the development lifecycle.

Nice To Have
- Deployment Automation: Experience in creating reproducible, automated installation processes for on-premise environments.
- Infrastructure as Code (IaC): Proficient with Terraform, Ansible, or similar tools for automating deployments in both cloud and on-premise setups.
- Cloud Platforms: Strong hands-on experience with AWS, Azure, and Google Cloud Platform (GCP).
- CI/CD Pipelines: Skilled in using Jenkins, GitHub Actions, and Azure DevOps for continuous integration and delivery.
- Monitoring & Observability: Experience with Prometheus, Grafana, and the ELK stack in managing distributed systems.
- Security & DevSecOps: Knowledge of integrating vulnerability scanning, secret management (e.g., HashiCorp Vault), and DevSecOps best practices into the delivery pipeline.
- Containerization: Proficient with Docker and managing container registries.
- Scripting & Automation: Strong scripting skills in Bash, Python, and PowerShell for automating tasks and processes.
- Network Management: Experience working with ingress controllers and service mesh technologies such as Istio or Linkerd.
- Configuration Management: Hands-on with Helm charts and Kustomize for Kubernetes resource configuration.

Qualifications
- 5+ years of DevOps/SRE experience in both cloud and on-premise environments
- Demonstrated experience with microservices architecture
- Experience with Elasticsearch and modern AI infrastructure components
- Familiarity with vector databases (such as Pinecone, Milvus, or Weaviate)
- Experience deploying LLMs on Azure AI or similar platforms
- Experience automating complex installation processes
- Strong problem-solving abilities and communication skills
- Relevant certifications (e.g., CKA, AWS/Azure certifications) a plus

Diversity, Equity, Inclusion & Equal Employment Opportunity at Banyan: Banyan affirms that inequality is detrimental to our Global Teams, associates, our Operating Companies, and the communities we serve. As a collective, our goal is to impact lasting change through our actions. Together, we unite for equality and equity. Banyan is committed to equal employment opportunities regardless of any protected characteristic, including race, color, genetic information, creed, national origin, religion, sex, affectional or sexual orientation, gender identity or expression, lawful alien status, ancestry, age, marital status, or protected veteran status and will not discriminate against anyone on the basis of a disability. We support an inclusive workplace where associates excel based on personal merit, qualifications, experience, ability, and job performance.
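
To make the Elasticsearch requirement concrete, here is a minimal sketch with the official Python client (8.x assumed). The index name, fields, and local cluster URL are illustrative.

```python
# Index a document and run a simple full-text query against Elasticsearch.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

es.index(
    index="documents",
    id="doc-1",
    document={"title": "Contract template", "body": "Standard SaaS agreement"},
)
es.indices.refresh(index="documents")  # make the document searchable immediately

hits = es.search(
    index="documents",
    query={"match": {"body": "agreement"}},
)["hits"]["hits"]

for hit in hits:
    print(hit["_id"], hit["_source"]["title"])
```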

Posted 2 weeks ago

Apply

9.0 years

0 Lacs

Pune, Maharashtra, India

On-site


Dear Candidate,

Greetings from Peoplefy Info Solutions! We are recruiting for a Senior DevOps Engineer role for one of our clients in Pune.

Role - Senior DevOps Engineer
Experience - 9 to 14 years
Location - Pune (5 days in office)
We prefer candidates who are currently working in a product-based company.

About the Client (product company) - Our client is a rapidly growing, mission-driven technology company focused on transforming the public sector through cutting-edge cloud solutions. They specialize in modernizing critical government operations, including budgeting and planning, procurement, asset and financial management, permitting, and citizen engagement. With a strong emphasis on transparency, efficiency, and data-driven decision-making, the company supports thousands of public agencies in delivering smarter, more responsive services to their communities. Backed by top-tier investors and recognized for their innovative approach, they are a leader in the government technology space. This is a unique opportunity to join a high-impact organization that is reshaping how government works in the digital age.

Job Description:
As a Sr. DevOps Engineer at the company, you'll build best-in-class multi-tenant SaaS solutions that enable efficiency, transparency, and accountability. You'll be a key member of our engineering team, writing software, delivering new cloud infrastructure, and automating CI/CD processes in a fast-paced, agile environment using modern technologies, including GitHub Actions, Terraform, Kubernetes, AWS, Cloudflare, and Grafana. In this role, you'll have the opportunity to collaborate closely with engineering leadership and application engineers. Your strong collaboration skills and ability to execute quickly will be key to your success. A typical day would involve optimizing deployment processes; building, upgrading, and re-architecting infrastructure; fine-tuning resource utilization; and ensuring monitoring and alerting are configured for all aspects of the application. At the company, we value natural self-starters who can effectively communicate ideas and contribute to our respect, dedication, and fun culture. If you have a passion for good deployment design and solid cloud architecture and love clean code, principles over dogma, and making the world a little better every day, you'll find a perfect alignment with our values.

Responsibilities:
- Architect, deploy, and maintain a highly available and scalable multi-tenant SaaS environment in AWS.
- Implement and manage infrastructure as code using tools such as Terraform.
- Optimize services for cost-efficiency, performance, and reliability.
- Ensure high reliability and disaster recovery readiness across all services.
- Enforce security policies, procedures, and standards to protect sensitive data.
- Implement best practices for securing resources, including VPC configurations, IAM roles, security groups, and network ACLs.
- Work with compliance teams to ensure adherence to regulations and industry standards such as SOC 2 and StateRAMP.
- Design, build, and maintain robust CI/CD pipelines using tools like GitHub Actions and CircleCI.
- Facilitate seamless integration and continuous deployment processes to ensure rapid and reliable delivery of new features and patches.
- Set up and manage comprehensive monitoring and alerting systems using tools like CloudWatch, Prometheus, or Grafana.
- Develop and implement incident response protocols; lead incident investigations and post-mortems.
- Provide mentorship and training to junior engineers, fostering a culture of continuous learning and improvement.
- Collaborate with cross-functional teams to define technical requirements and deliver high-quality solutions.

Requirements and Preferred Experience:
- Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field from a premier institution.
- 10+ years of experience in software engineering with a deep focus on cloud operations, security, and DevOps.
- 4+ years of running Kubernetes at scale and in production on public clouds.
- Proficiency in at least one modern programming language (e.g., Python, Java, Go, or Ruby).
- Proven track record in designing and managing AWS infrastructure for SaaS applications.
- Extensive experience with infrastructure-as-code tools such as Terraform.
- Strong knowledge of AWS services, including EC2, RDS, S3, VPC, IAM, and Lambda, among others.
- Experience working with Redis, AWS (S3, CloudFront), Cloudflare, Kubernetes, Kustomize, and Docker.
- Experience with PostgreSQL and/or MS SQL Server.
- Expertise in CI/CD tools like GitHub Actions, CircleCI, Jenkins or equivalent.
- Advanced skills in monitoring and logging tools such as CloudWatch, Prometheus, or Grafana.
- In-depth understanding of cloud security practices, including encryption, identity and access management (IAM), and network security.
- Strong problem-solving skills with a proactive approach to identifying issues and delivering solutions.
- Excellent communication skills with the ability to collaborate effectively across various teams.

If you have any queries, please write to bhuvaneshwaran.se@peoplefy.com.
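
One small, hedged example of the "securing resources" work described above: a boto3 sweep that flags security groups open to the world on SSH. The region and port choice are illustrative; AWS credentials are assumed to be configured.

```python
# Flag security groups that allow SSH (port 22) from 0.0.0.0/0.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

for group in ec2.describe_security_groups()["SecurityGroups"]:
    for rule in group.get("IpPermissions", []):
        open_to_world = any(
            ip_range.get("CidrIp") == "0.0.0.0/0"
            for ip_range in rule.get("IpRanges", [])
        )
        if open_to_world and rule.get("FromPort") == 22:
            print(f"{group['GroupId']} ({group['GroupName']}) allows SSH from 0.0.0.0/0")
```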

Posted 2 weeks ago

Apply

2.5 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


Comcast brings together the best in media and technology. We drive innovation to create the world's best entertainment and online experiences. As a Fortune 50 leader, we set the pace in a variety of innovative and fascinating businesses and create career opportunities across a wide range of locations and disciplines. We are at the forefront of change and move at an amazing pace, thanks to our remarkable people, who bring cutting-edge products and services to life for millions of customers every day. If you share in our passion for teamwork, our vision to revolutionize industries and our goal to lead the future in media and technology, we want you to fast-forward your career at Comcast.

Job Summary
Responsible for planning and designing new software and web applications. Edits new and existing applications. Implements, tests and debugs defined software components. Documents all development activity. Works with moderate guidance in own area of knowledge.

Job Description
Position: DevOps Engineer 2
Experience: 2.5 years to 4.5 years
Job Location: Chennai, Tamil Nadu

Technical Skills
Must have: Terraform, Docker and Kubernetes, CI/CD, AWS, Bash, Python, Linux/Unix, Git, DBMS (e.g. MySQL), NoSQL (e.g. MongoDB)
Good to have: Ansible, Helm, Prometheus, ELK stack, R, GCP/Azure

Key Responsibilities
- Design, build, and maintain efficient, reusable, and reliable code
- Work with analysis, operations, and test teams to achieve the best possible outcome within time and budget
- Troubleshoot infrastructure issues
- Attend cloud engineering meetings
- Participate in code reviews and quality assurance activities
- Participate in estimation discussions with the product team
- Continuously improve knowledge and coding skills

Qualifications & Requirements
- Bachelor’s degree in Computer Science, Engineering, or a related field
- Experience in a scripting language (e.g. Bash, Python)
- 3+ years of hands-on experience with Docker and Kubernetes
- 3+ years of hands-on experience with CI tools (e.g. Jenkins, GitLab CI, GitHub Actions, Concourse CI, ...)
- 2+ years of hands-on experience with CD tools (e.g. ArgoCD, Helm, Kustomize)
- 2+ years of hands-on experience with Linux/Unix systems
- 2+ years of hands-on experience with cloud providers (e.g. AWS, GCP, Azure)
- 2+ years of hands-on experience with one IaC framework (e.g. Terraform, Pulumi, Ansible)
- Basic knowledge of virtualization technologies (e.g. VMware) is a plus
- Basic knowledge of one database (MySQL, SQL Server, Couchbase, MongoDB, Redis, ...) is a plus
- Basic knowledge of Git and one Git provider (e.g. GitLab, GitHub)
- Basic knowledge of networking
- Experience writing technical documentation
- Good communication and time management skills
- Able to work independently and as part of a team
- Analytical thinking and a problem-solving attitude

Comcast is proud to be an equal opportunity workplace. We will consider all qualified applicants for employment without regard to race, color, religion, age, sex, sexual orientation, gender identity, national origin, disability, veteran status, genetic information, or any other basis protected by applicable law. Base pay is one part of the Total Rewards that Comcast provides to compensate and recognize employees for their work. Most sales positions are eligible for a Commission under the terms of an applicable plan, while most non-sales positions are eligible for a Bonus. Additionally, Comcast provides best-in-class Benefits to eligible employees. We believe that benefits should connect you to the support you need when it matters most, and should help you care for those who matter most. That’s why we provide an array of options, expert guidance and always-on tools, that are personalized to meet the needs of your reality – to help support you physically, financially and emotionally through the big milestones and in your everyday life. Please visit the compensation and benefits summary on our careers site for more details.

Education
Bachelor's Degree
While possessing the stated degree is preferred, Comcast also may consider applicants who hold some combination of coursework and experience, or who have extensive related professional experience.

Relevant Work Experience
2-5 Years
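
As a small illustration of the "troubleshoot infrastructure issues" duty above, here is a read-only sketch with the official Kubernetes Python client that lists pods stuck outside Running/Succeeded. Cluster access via kubeconfig is assumed.

```python
# List unhealthy pods across all namespaces with their phase and reason.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

for pod in v1.list_pod_for_all_namespaces().items:
    phase = pod.status.phase
    if phase in ("Running", "Succeeded"):
        continue
    reason = pod.status.reason or "unknown"
    print(f"{pod.metadata.namespace}/{pod.metadata.name}: phase={phase} reason={reason}")
```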

Posted 2 weeks ago

Apply

2.0 - 4.0 years

0 Lacs

Indore, Madhya Pradesh, India

On-site


Position - DevOps Engineer
Years of Experience - 2-4 years
Location - Onsite, Indore

Core Responsibilities
- Infrastructure Management: Design and manage scalable AWS infrastructure for microservices.
- CI/CD & Deployment: Build and maintain CI/CD pipelines using GitHub Actions and ArgoCD.
- Container Orchestration: Manage Docker containers with Kubernetes, EKS, ECS, and Fargate.
- Automation: Use Terraform and Ansible to automate infrastructure setup and configuration.
- Security & Configuration: Manage secrets and app settings securely using AWS Secrets Manager.
- Release Management: Handle deployments, rollbacks, and releases using Helm and Kustomize.
- Troubleshooting: Fix infrastructure, deployment, and performance issues across all environments.
- Documentation: Maintain clear documentation for systems, processes, and operations.

Required Technical Expertise
- Cloud: AWS (EC2, S3, VPC, ELB, Route 53, Lambda, IAM, CloudFront, etc.)
- Containers: Docker, Kubernetes, ECS, EKS, Fargate
- Microservices: Experience with scaling, routing, monitoring, and service discovery
- IaC & Config Management: Terraform, CloudFormation, Ansible
- CI/CD Tools: GitHub Actions, Jenkins, GitLab CI, ArgoCD
- Deployment Tools: Helm, Kustomize
- Monitoring: Prometheus, Grafana, CloudWatch
- Scripting: Bash, Python
- Version Control: Git, GitLab
- Operating Systems: Linux (Ubuntu/CentOS), Windows Server
- Databases & Messaging: MongoDB, PostgreSQL, MySQL, RabbitMQ, Redis
- Code Quality & Security: SonarQube
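
To ground the AWS Secrets Manager item above, a minimal boto3 sketch follows; the secret name, region, and JSON shape are illustrative, and AWS credentials are assumed to be configured.

```python
# Fetch a JSON secret at startup instead of baking credentials into the image.
import json

import boto3

def load_db_config(secret_id: str = "prod/app/db") -> dict:
    client = boto3.client("secretsmanager", region_name="ap-south-1")
    response = client.get_secret_value(SecretId=secret_id)
    return json.loads(response["SecretString"])

if __name__ == "__main__":
    cfg = load_db_config()
    print(f"connecting to {cfg['host']}:{cfg['port']} as {cfg['username']}")
```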

Posted 2 weeks ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site


Our Purpose
Mastercard powers economies and empowers people in 200+ countries and territories worldwide. Together with our customers, we’re helping build a sustainable economy where everyone can prosper. We support a wide range of digital payments choices, making transactions secure, simple, smart and accessible. Our technology and innovation, partnerships and networks combine to deliver a unique set of products and services that help people, businesses and governments realize their greatest potential.

Title And Summary
Software Engineer II

Overview
The Cloud Platforms team is responsible for the day-to-day maintenance and operation of various cloud platforms and their supporting services. Cloud Platform Engineers are engaged in the project lifecycle as infrastructure is designed and ultimately inherit responsibility for the new environment. The primary responsibility of the team is to react to incidents in Cloud, PaaS, Cloud Foundry, Kubernetes and pipeline infrastructure platforms. Secondary objectives of the team are to improve the resiliency of systems through active contribution back to design as well as automating repetitive tasks. Requires a moderate understanding of various platforms used for delivery of traditional and cloud services. Requires the ability to write automation code to remediate repeat issues. Requires a working understanding of Continuous Delivery principles, software development methodologies and infrastructure automation. This is a “hands-on” role that deals with reacting to and proactively avoiding issues with infrastructure platforms used for delivery of cloud services. Do you enjoy creating code-based solutions to replace manual processes? Are you the type of person that will drive the solution and team in the best direction instead of the easiest?

Role
- Interact with and create effective monitoring systems that reduce the need for human intervention in daily web operations
- Respond to incidents in various platforms
- Create high-quality and rugged code solutions to automate detection and recovery of common operational problems
- Identify and create automation that can be used by initial responders for easily resolvable issues

All About You
- Expert understanding of operational duties, including compensated on-call shifts
- Working knowledge of target platforms (Cloud Foundry, Concourse, Kubernetes, Helm, Kustomize, kubectl, RKE, kubeadm)
- Experience with Python and Bash scripting
- Familiarity with F5 load balancers, API technology and XML gateways preferred
- Highly attuned to security needs and best practices
- Ability to write quality automation code in more than one language
- Familiarity with multiple cloud vendors and past experience a plus

Corporate Security Responsibility
All activities involving access to Mastercard assets, information, and networks come with an inherent risk to the organization and, therefore, it is expected that every person working for, or on behalf of, Mastercard is responsible for information security and must:
- Abide by Mastercard’s security policies and practices;
- Ensure the confidentiality and integrity of the information being accessed;
- Report any suspected information security violation or breach; and
- Complete all periodic mandatory security trainings in accordance with Mastercard’s guidelines.

R-245067
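
A minimal sketch of the "effective monitoring systems" idea using prometheus_client: a tiny exporter whose gauge an alert rule could watch. The probed URL and port are illustrative; the team's actual stack is not specified in the posting.

```python
# Tiny custom exporter: expose a health gauge, scraped on :8000/metrics.
import time
import urllib.request

from prometheus_client import Gauge, start_http_server

UP = Gauge("example_endpoint_up", "1 if the probed endpoint responds, else 0")

def probe(url: str = "http://localhost:8080/healthz") -> None:
    try:
        with urllib.request.urlopen(url, timeout=2):
            UP.set(1)
    except OSError:
        UP.set(0)

if __name__ == "__main__":
    start_http_server(8000)  # Prometheus scrapes http://host:8000/metrics
    while True:
        probe()
        time.sleep(15)
```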

Posted 3 weeks ago

Apply

5.0 years

0 Lacs

Hyderābād

On-site


Who We Are
At Kyndryl, we design, build, manage and modernize the mission-critical technology systems that the world depends on every day. So why work at Kyndryl? We are always moving forward – always pushing ourselves to go further in our efforts to build a more equitable, inclusive world for our employees, our customers and our communities.

The Role
In Systems Management at Kyndryl, you will be critical in ensuring the smooth operation of our customers’ IT infrastructure. You'll be the mastermind behind maintaining and optimizing their systems, ensuring they're always running at peak performance.

Key Responsibilities:
- Develop and customise OpenTelemetry Collectors to support platform-specific instrumentation (Linux, Windows, Docker, Kubernetes).
- Build processors, receivers, and exporters in OTEL to align with Elastic APM data schemas.
- Create robust and scalable pipelines for telemetry data collection and delivery to the Elastic Stack.
- Work closely with platform and application teams to enable auto-instrumentation and custom telemetry.
- Automate deployment of collectors via Ansible, Terraform, Helm, or Kubernetes operators.
- Collaborate with the Elastic Observability team to validate ingestion formats, indices, and dashboard readiness.
- Benchmark performance and recommend cost-effective designs.

Your Future at Kyndryl
Kyndryl's focus on providing innovative IT solutions to its customers means that in Systems Management, you will be working with the latest technology and will have the opportunity to learn and grow your skills. You may also have the opportunity to work on large-scale projects and collaborate with other IT professionals from around the world.

Who You Are
You’re good at what you do and possess the required experience to prove it. However, equally as important – you have a growth mindset; keen to drive your own personal and professional development. You are customer-focused – someone who prioritizes customer success in their work. And finally, you’re open and borderless – naturally inclusive in how you work with others.

Required Technical and Professional Expertise
- 5+ years of experience with Golang or similar languages in a systems development context.
- Deep understanding of OpenTelemetry Collector architecture, pipelines, and customization.
- Experience with Elastic APM ingestion endpoints and schema alignment.
- Familiarity with Docker, Kubernetes, system observability (eBPF optional but preferred).
- Hands-on with deployment automation tools: Ansible, Terraform, Helm, Kustomize.
- Strong grasp of telemetry protocols: OTLP, gRPC, HTTP, and metrics formats like Prometheus, StatsD.
- Strong knowledge of Elastic APM, Fleet, and integrations with OpenTelemetry and metric sources.
- Experience with data ingest and transformation using Logstash, Filebeat, Metricbeat, or custom agents.
- Proficiency in designing dashboards, custom visualizations, and alerting in Kibana.
- Understanding of ILM, hot-warm-cold tiering, and Elastic security controls.

Preferred Technical and Professional Experience
- Contributions to OpenTelemetry Collector or related CNCF projects.
- Elastic Observability certifications or demonstrable production experience.
- Experience in cost modeling and telemetry data optimization.
- Exposure to Elastic Cloud, ECE, or ECK.
- Familiarity with alternatives like Dynatrace, Datadog, AppDynamics or SigNoz for benchmarking.

Being You
Diversity is a whole lot more than what we look like or where we come from, it’s how we think and who we are. We welcome people of all cultures, backgrounds, and experiences. But we’re not doing it single-handedly: our Kyndryl Inclusion Networks are only one of many ways we create a workplace where all Kyndryls can find and provide support and advice. This dedication to welcoming everyone into our company means that Kyndryl gives you – and everyone next to you – the ability to bring your whole self to work, individually and collectively, and support the activation of our equitable culture. That’s the Kyndryl Way.

What You Can Expect
With state-of-the-art resources and Fortune 100 clients, every day is an opportunity to innovate, build new capabilities, new relationships, new processes, and new value. Kyndryl cares about your well-being and prides itself on offering benefits that give you choice, reflect the diversity of our employees and support you and your family through the moments that matter – wherever you are in your life journey. Our employee learning programs give you access to the best learning in the industry to receive certifications, including Microsoft, Google, Amazon, Skillsoft, and many more. Through our company-wide volunteering and giving platform, you can donate, start fundraisers, volunteer, and search over 2 million non-profit organizations. At Kyndryl, we invest heavily in you; we want you to succeed so that together, we will all succeed.

Get Referred!
If you know someone that works at Kyndryl, when asked ‘How Did You Hear About Us’ during the application process, select ‘Employee Referral’ and enter your contact's Kyndryl email address.
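
The collector customisation described above is typically done in Go, but the telemetry flow is easy to sketch from the application side in Python: emit a counter over OTLP/gRPC to a local OpenTelemetry Collector, which would forward it to Elastic APM. The packages and endpoint here are assumptions, not the team's actual setup.

```python
# Emit a counter metric over OTLP/gRPC to a collector on localhost:4317.
# Assumes the opentelemetry-sdk and opentelemetry-exporter-otlp packages.
import time

from opentelemetry import metrics
from opentelemetry.exporter.otlp.proto.grpc.metric_exporter import OTLPMetricExporter
from opentelemetry.sdk.metrics import MeterProvider
from opentelemetry.sdk.metrics.export import PeriodicExportingMetricReader

reader = PeriodicExportingMetricReader(
    OTLPMetricExporter(endpoint="http://localhost:4317", insecure=True),
    export_interval_millis=5000,
)
metrics.set_meter_provider(MeterProvider(metric_readers=[reader]))

meter = metrics.get_meter("example.app")
requests_total = meter.create_counter("requests_total", description="Handled requests")

for _ in range(10):
    requests_total.add(1, {"route": "/healthz"})
    time.sleep(1)
```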

Posted 3 weeks ago

Apply

10.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Job Summary:
We are seeking a highly skilled and experienced Lead Infrastructure Engineer to join our dynamic team. The ideal candidate will be passionate about building and maintaining complex systems, with a holistic approach to architecting infrastructure that survives and thrives in production. You will play a key role in designing, implementing, and managing cloud infrastructure, ensuring scalability, availability, security, and optimal performance versus spend. You will also provide technical leadership and mentorship to other engineers, and engage with clients to understand their needs and deliver effective solutions.

Responsibilities:
- Design, architect, and implement scalable, highly available, and secure infrastructure solutions, primarily on Amazon Web Services (AWS).
- Develop and maintain Infrastructure as Code (IaC) using Terraform or AWS CDK for enterprise-scale maintainability and repeatability.
- Implement robust access control via IAM roles and policy orchestration, ensuring least privilege and auditability across multi-environment deployments.
- Contribute to secure, scalable identity and access patterns, including OAuth2-based authorization flows and dynamic IAM role mapping across environments.
- Support deployment of infrastructure Lambda functions.
- Troubleshoot issues and collaborate with cloud vendors on managed service reliability and roadmap alignment.
- Utilize Kubernetes deployment tools such as Helm/Kustomize in combination with GitOps tools such as ArgoCD for container orchestration and management.
- Design and implement CI/CD pipelines using platforms like GitHub, GitLab, Bitbucket, Cloud Build, Harness, etc., with a focus on rolling deployments, canaries, and blue/green deployments. Ensure auditability and observability of pipeline states.
- Implement security best practices, audit, and compliance requirements within the infrastructure.
- Provide technical leadership, mentorship, and training to engineering staff.
- Engage with clients to understand their technical and business requirements, and provide tailored solutions.
- If needed, lead agile ceremonies and project planning, including developing agile boards and backlogs with support from our Service Delivery Leads.
- Troubleshoot and resolve complex infrastructure issues.
- Potentially participate in pre-sales activities and provide technical expertise to sales teams.

Qualifications:
- 10+ years of experience in an Infrastructure Engineer or similar role.
- Extensive experience with Amazon Web Services (AWS).
- Proven ability to architect for scale, availability, and high-performance workloads.
- Ability to plan and execute zero-disruption migrations.
- Experience with enterprise IAM and familiarity with authentication technology such as OAuth2 and OIDC.
- Deep knowledge of Infrastructure as Code (IaC) with Terraform and/or AWS CDK.
- Strong experience with Kubernetes and related tools (Helm, Kustomize, ArgoCD).
- Solid understanding of git, branching models, CI/CD pipelines and deployment strategies.
- Experience with security, audit, and compliance best practices.
- Excellent problem-solving and analytical skills.
- Strong communication and interpersonal skills, with the ability to engage with both technical and non-technical stakeholders.
- Experience in technical leadership, mentoring, team-forming and fostering self-organization and ownership.
- Experience with client relationship management and project planning.

Certifications:
- Relevant certifications (for example Certified Kubernetes Administrator (CKA), AWS Certified Solutions Architect - Professional, AWS Certified DevOps Engineer - Professional, etc.).
- Software development experience (for example Terraform, Python).
- Experience with machine learning infrastructure.

Education:
- B.Tech/B.E. in Computer Science, a related field, or equivalent experience.

Sandeep Kumar
sandeep.vinaganti@quesscorp.com
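
For the "dynamic IAM role mapping across environments" item above, a hedged boto3/STS sketch follows; the account IDs and role names are placeholders, and configured base credentials are assumed.

```python
# Exchange current credentials for a short-lived, environment-specific role.
import boto3

ENV_ROLES = {
    "staging": "arn:aws:iam::111111111111:role/deployer",
    "prod": "arn:aws:iam::222222222222:role/deployer",
}

def session_for(env: str) -> boto3.Session:
    creds = boto3.client("sts").assume_role(
        RoleArn=ENV_ROLES[env],
        RoleSessionName=f"deploy-{env}",
        DurationSeconds=900,  # least privilege: keep the session short-lived
    )["Credentials"]
    return boto3.Session(
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],
    )

print(session_for("staging").client("sts").get_caller_identity()["Arn"])
```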

Posted 3 weeks ago

Apply

3.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


Job Description
The Cloud Engineer will be part of the Engineering team and will require strong knowledge of application monitoring, infrastructure monitoring, automation, maintenance, and service reliability improvements. Specifically, we are searching for someone who brings fresh ideas, demonstrates a unique and informed viewpoint, and enjoys collaborating with a cross-functional team to develop real-world solutions and positive user experiences at every interaction.

Role & Responsibilities:
- Design, automate and manage a highly available and scalable cloud deployment that allows development teams to deploy and run their services.
- Collaborate with engineering and architecture teams to evaluate and identify optimal cloud solutions, leveraging scalability, high performance and security.
- Design and implement sustainable cloud and platform services.
- Build a robust, scalable and stable infrastructure.
- Manage hosting of external containers in a private cloud.
- Extensively automate deployments and manage applications in GCP.
- Develop and maintain cloud solutions in accordance with best practices.
- Ensure efficient functioning of data storage and processing functions in accordance with company security policies and best practices in cloud security.
- Collaborate with engineering teams to identify optimization strategies and help develop self-healing capabilities.
- Develop strong observability capabilities.
- Identify, analyse, and resolve infrastructure vulnerabilities and application deployment issues.
- Regularly review existing systems and make recommendations for improvements.

Required Skills and Selection Criteria:
- Proven work experience in designing, deploying and operating mid- to large-scale public cloud environments.
- Proven work experience with Docker/Kubernetes (image building, k8s scheduling).
- Experience in package, config and deployment management via Helm, Kustomize, ArgoCD.
- Proven working experience in onboarding and troubleshooting cloud services.
- Proven work experience in provisioning Infrastructure as Code (IaC) using Terraform Enterprise or community edition.
- Proven work experience in writing custom Terraform providers/plug-ins with Sentinel policy as code.
- Professional certification is an advantage; public cloud experience with GCP is good to have.
- Strong knowledge of GitHub and DevOps (Cloud Build is an advantage).
- Should be proficient in scripting and coding, including traditional languages like Python, PowerShell, GoLang, Java, JS and Node.js.
- Proven working experience in messaging middleware - Apache Kafka, RabbitMQ, Apache ActiveMQ.
- Proven working experience with API gateways; Apigee is an advantage.
- Proven working experience in API development, REST.
- Proven working experience in security and IAM, SSL/TLS, OAuth and JWT.
- Extensive knowledge and hands-on experience with Grafana and Prometheus micro libraries.
- Exposure to Cloud Monitoring and logging.
- Experience with distributed storage technologies like NFS, HDFS, Ceph, S3, as well as dynamic resource management frameworks (Mesos, Kubernetes, Yarn).
- Experience with automation tools should be a priority.

Preferred Qualifications:
- Previous success in technical engineering.
- Must have 5+ years of overall experience.
- Must have 3+ years of experience in public cloud.
- Must have 3+ years of experience in cloud infrastructure provisioning.
- Must have 3+ years of experience in cloud engineering.
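
Since the posting lists OAuth and JWT, here is a minimal PyJWT sketch of issuing and then verifying a token. The shared secret and HS256 choice are illustrative; production OAuth flows normally use asymmetric keys from the identity provider.

```python
# Issue and verify a JWT with PyJWT, pinning the algorithm and audience.
import time

import jwt  # PyJWT

SECRET = "example-shared-secret"  # illustrative; real services use asymmetric keys

token = jwt.encode(
    {"sub": "user-42", "aud": "my-api", "exp": int(time.time()) + 300},
    SECRET,
    algorithm="HS256",
)

claims = jwt.decode(
    token,
    SECRET,
    algorithms=["HS256"],  # pin the algorithm; never trust the token header
    audience="my-api",
)
print(claims["sub"])
```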

Posted 3 weeks ago

Apply

0 years

0 Lacs

Hyderābād

On-site


Do you love understanding every detail of how new technologies work? Join the team that serves as Apple’s nerve center, our Information Systems and Technology group. There are countless ways you’ll contribute here, whether you’re coordinating technology needs for product launches, designing music solutions for retail locations, or ensuring the strength of in-store Wi-Fi connections. From Apple Pay to the Apple website to our data centers around the globe, you’ll help design and manage the massive systems that countless employees and customers rely on every day. You’ll also build custom tools for employees, empowering them to solve complex problems on their own. Join our team, and together we’ll explore all the ways to improve how Apple operates, freeing our employees to do what they do best: craft magical experiences for our customers. Are you a passionate operations engineer who wants to work on solving large-scale problems? Join us in building best-in-class solutions and implementing sophisticated software applications across IS&T. At Apple, we support both open source and home-grown technologies to provide internal Apple developers with the best possible CI/CD solutions. In this role you will have the unique opportunity to own and improve tooling for best-in-class, large-scale platform solutions to help build modern software systems! This role is primarily responsible for building and managing tools that enable software releases in a fast-paced enterprise environment. We operate with on-prem, private, and public cloud platforms. A DevOps Engineer would be partnering closely with global software development teams and infrastructure teams.

Description
As part of this team you would be exposed to a variety of challenges supporting and building highly available systems, working closely with U.S.- and India-based teams, and have the opportunity to expand the capabilities the team has to offer to the wider organization. This may include:
- Designing and implementing new solutions to streamline manual operations.
- Triaging security and production issues along with other operational team members. Conducting root cause analysis of critical issues.
- Expanding the capacity and performance of current operational systems.
The ideal candidate will be a self-motivated, hands-on, dynamic and detail-oriented individual with a strong technical background.

Minimum Qualifications
- 1-2 years of experience in software engineering
- Bachelor’s or Master’s degree (or equivalent) in Computer Science or a related field (equivalent practical experience)

Key Qualifications
- Knowledge of software engineering standard processes.
- Understanding of software architecture; deploy and optimize infrastructure across on-prem and third-party cloud.
- Good foundation in at least one or more programming and scripting languages.
- Ability to support, troubleshoot and maintain aspects of infrastructure, including compute, system, network, storage and datastore.
- Implement applications in private/public cloud infrastructure and container technologies, like Kubernetes and Docker.
- Experience developing software tooling to deliver programmable infrastructure and environments, and building CI/CD pipelines with tools like Terraform, CloudFormation, Ansible, and the Kubernetes toolset (e.g., kubectl, kustomize).

Preferred Qualifications
- Familiarity with build and deployment systems using Maven and Git
- Familiarity with observability tools (e.g., Grafana, Splunk) is a plus
- Experience or interest in automation is a huge plus
- Self-motivated, independent, and dedicated with great organizational skills
- Excellent written and verbal communication skills
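
In the spirit of the CI/CD tooling described above, here is a minimal sketch of a pipeline step that applies a Kustomize overlay with kubectl and fails loudly on error. The overlay path is illustrative, and kubectl is assumed to be on PATH.

```python
# Render and apply a Kustomize overlay as a single CI/CD pipeline step.
import subprocess
import sys

def deploy(overlay: str) -> None:
    # "kubectl apply -k" builds the kustomization and applies it in one step.
    result = subprocess.run(
        ["kubectl", "apply", "-k", overlay, "--server-side"],
        capture_output=True, text=True,
    )
    if result.returncode != 0:
        sys.exit(f"deploy failed for {overlay}:\n{result.stderr}")
    print(result.stdout)

if __name__ == "__main__":
    deploy("deploy/overlays/staging")
```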

Posted 4 weeks ago

Apply