
72 GKE Jobs

JobPe aggregates results for easy access, but you apply directly on the employer's job portal.

5.0 - 9.0 years

0 Lacs

Coimbatore, Tamil Nadu

On-site

As a GCP Architect with AI expertise, you will lead the design, implementation, and optimization of cloud and AI-driven solutions on Google Cloud Platform (GCP). Your deep expertise in cloud infrastructure, AI/ML, networking, and security will be essential for architecting scalable, secure, and highly available cloud solutions while leveraging AI to drive automation and intelligence across enterprise environments.

Your responsibilities will include:
- Designing and implementing scalable, secure, and cost-effective architectures on GCP
- Developing AI/ML-driven solutions using tools such as Vertex AI, BigQuery ML, TensorFlow, and AutoML
- Architecting end-to-end data pipelines to support AI model training, deployment, and monitoring
- Building high-availability, backup, and disaster recovery solutions in GCP

Infrastructure and security:
- Implementing identity management, IAM policies, security protocols, and network segmentation
- Designing and automating provisioning of networks, firewalls, ACLs, and access control policies
- Ensuring compliance with security and governance frameworks for cloud-based AI workloads
- Managing multi-VPC networking, hybrid cloud connectivity, and firewall configurations

Migration and automation:
- Leading large-scale application and network migrations to GCP
- Developing Terraform-based infrastructure automation and managing Infrastructure as Code (IaC)
- Implementing CI/CD pipelines for AI/ML workloads using tools like GitHub, Jenkins, and Ansible
- Designing DevOps solutions with cloud-native tools and containerized workloads on GKE

As a technical leader and advisor, you will:
- Provide strategic guidance on cloud adoption, AI/ML use cases, and infrastructure modernization
- Lead requirements gathering, solution design, and implementation for GCP cloud-based AI solutions
- Support pre-sales activities, including POCs, RFP responses, and SOW creation
- Mentor and provide technical oversight to engineering teams, serving as a trusted cloud advisor for customers solving complex cloud infrastructure and AI challenges
- Optimize cloud resource usage, cost management, and AI model performance
- Conduct cloud usage analytics, security audits, and AI model monitoring
- Develop strategies to improve network reliability, redundancy, and high availability

If you are passionate about cloud transformation, AI, and solving complex infrastructure challenges, this role offers the opportunity to make a meaningful impact in cloud and AI technologies.

Posted 22 hours ago

Apply

3.0 - 8.0 years

0 Lacs

Hyderabad, Telangana

On-site

As a GCP CloudOps Engineer, you will be responsible for deploying, integrating, and testing solutions using Infrastructure as Code (IaC) and DevSecOps techniques. With over 8 years of experience in infrastructure design and delivery, including 5 years of hands-on experience with Google Cloud technologies, you will play a key role in ensuring continuous, repeatable, secure, and automated deployment processes. Your responsibilities will also include:
- Utilizing monitoring tools such as Datadog, New Relic, or Splunk for effective performance analysis and troubleshooting.
- Implementing container orchestration services like Docker or Kubernetes, with a preference for GKE.
- Collaborating with diverse teams across different time zones and cultures.
- Maintaining comprehensive documentation, including principles, standards, practices, and project plans.
- Building data warehouses using Databricks and IaC patterns with tools like Terraform, Jenkins, Spinnaker, and CircleCI.
- Enhancing platform observability and optimizing monitoring and alerting tools for better performance.
- Developing CI/CD frameworks to streamline application deployment processes.
- Contributing to Cloud strategy discussions and implementing best practices for Cloud solutions.

Your role will involve proactive collaboration, automation of long-term solutions, and adherence to incident, problem, and change management best practices. You will also be responsible for debugging applications, enhancing deployment architectures, and measuring cost and performance metrics of cloud services to drive informed decision-making. Preferred qualifications include experience with Databricks, multi-cloud environments (GCP, AWS, Azure), GitHub, and GitHub Actions. Strong communication skills, a proactive approach to problem-solving, and a deep understanding of Cloud technologies and tools are essential for success in this position.

Key Skills: Splunk, Terraform, Google Cloud Platform, GitHub Workflows, AWS, Datadog, Python, Azure DevOps, Infrastructure as Code (IaC), Data Warehousing (Databricks), New Relic, CircleCI, Container Orchestration (Docker, Kubernetes, GKE), Spinnaker, DevSecOps, Jenkins

Posted 1 day ago

Apply

7.0 - 11.0 years

0 Lacs

Hyderabad, Telangana

On-site

As a Java Full Stack Developer with 7-10 years of experience, you will be responsible for designing, developing, and maintaining scalable Java-based backend systems. Your role will involve building dynamic and responsive user interfaces using React.js, as well as developing and integrating RESTful APIs and microservices. You will work with distributed systems and event-driven architecture, collaborating closely with cross-functional teams in an Agile environment. Your key responsibilities will include participating in code reviews, troubleshooting, and performance tuning. You will also be expected to integrate applications with cloud services (AWS preferred), work with containerized environments, and use Kubernetes for deployment. Ensuring code quality, scalability, and maintainability will be essential aspects of your work.

To excel in this role, you should possess:
- A Bachelor's degree in Computer Science or a related field (preferred) and at least 8 years of experience in Java application development
- Proficiency in Java 11+, including Streams, Lambdas, and functional programming
- Strong knowledge of Spring Boot, the Spring Framework, and RESTful API development
- Experience with microservices architecture and monitoring tools
- A solid understanding of persistence layers such as JPA, Hibernate, MS SQL, and PostgreSQL
- Hands-on experience with React.js and strong frontend development skills with HTML, CSS3/Tailwind, and responsive design
- Experience with CI/CD pipelines (Jenkins, GitLab CI, GitHub Actions, or AWS DevOps)
- Familiarity with cloud platforms like AWS, Azure, or GCP (AWS preferred)

Exposure to container orchestration using EKS, AKS, or GKE, knowledge of Domain-Driven Design (DDD) and Backend-for-Frontend (BFF) patterns, and working knowledge of Kafka, MQ, or other event-driven technologies are advantageous. Strong problem-solving, debugging, and optimization skills, along with proficiency in Agile methodologies, version control (Git), and SDLC best practices, are also required. Experience in the hospitality domain is a plus.

Posted 1 day ago

Apply

12.0 - 16.0 years

0 Lacs

Karnataka

On-site

Join the Agentforce team in AI Cloud at Salesforce and make a real impact with your software designs and code! This position requires technical skills, outstanding analytical and influencing skills, and extraordinary business insight. It is a multi-functional role that requires building alignment and communication with several engineering organisations. We work in a highly collaborative environment, and you will partner with a highly cross-functional team of Data Scientists, Software Engineers, Machine Learning Engineers, UX experts, and Product Managers to build upon Agentforce, our cutting-edge new AI framework. We value execution, clear communication, feedback, and making learning fun.

Your impact - You will:
- Architect, design, implement, test, and deliver highly scalable AI solutions: Agents, AI Copilots/assistants, Chatbots, AI Planners, and RAG solutions
- Be accountable for defining and driving software architecture and enterprise capabilities (scalability, fault tolerance, extensibility, maintainability, etc.)
- Independently design sophisticated software systems for high-end solutions, while working in a consultative fashion with other senior engineers and architects in AI Cloud and across the company
- Determine overall architectural principles, frameworks, and standards to craft vision and roadmaps
- Analyze and provide feedback on product strategy and technical feasibility
- Drive long-term design strategies that span multiple sophisticated projects, and deliver technical reports and performance presentations to customers and at industry events
- Actively communicate with, encourage, and motivate all levels of staff
- Be a subject matter expert for multiple products, while writing code and working closely with other developers, PM, and UX to ensure features are delivered to meet business and quality requirements
- Troubleshoot complex production issues and interface with support and customers as needed

Required Skills:
- 12+ years of experience building highly scalable Software-as-a-Service applications/platforms
- Experience building technical architectures that address complex performance issues
- Ability to thrive in dynamic environments, working on cutting-edge projects that often come with ambiguity; an innovation/startup mindset and adaptability
- Deep knowledge of object-oriented programming and experience with at least one object-oriented programming language, preferably Java
- Proven ability to mentor team members to support their understanding and growth of software engineering architecture concepts and aid in their technical development
- High proficiency in at least one high-level programming language and web framework (NodeJS, Express, Hapi, etc.)
- Proven understanding of web technologies such as JavaScript, CSS, HTML5, XML, JSON, and/or Ajax
- Data model design, database technologies (RDBMS & NoSQL), and languages such as SQL and PL/SQL
- Experience delivering, or partnering with teams that ship, AI products at high scale
- Experience in automated testing, including unit and functional testing using Java, JUnit, JSUnit, and Selenium
- Demonstrated ability to drive long-term design strategies that span multiple complex projects
- Experience delivering technical reports and presentations to customers and at industry events
- Demonstrated track record of cultivating strong working relationships and driving collaboration across multiple technical and business teams to resolve critical issues
- Experience with the full software lifecycle in highly agile and ambiguous environments
- Excellent interpersonal and communication skills

Preferred Skills:
- Solid experience in API development, API lifecycle management, and/or client SDK development
- Experience with machine learning or cloud technology platforms such as AWS SageMaker, Terraform, Spinnaker, EKS, and GKE
- Experience with AI/ML and data science, including predictive and generative AI
- Experience with data engineering, data pipelines, or distributed systems
- Experience with continuous integration (CI), continuous deployment (CD), and service ownership
- Familiarity with Salesforce APIs and technologies
- Ability to support and resolve production customer escalations with excellent debugging and problem-solving skills
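Several listings on this page ask for experience with RAG (retrieval-augmented generation) solutions. As an illustrative sketch only (not taken from any posting), the retrieval step can be as simple as ranking stored embedding vectors by cosine similarity against a query embedding; the toy 3-dimensional vectors and function names below are assumptions made for the example:

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def retrieve(query_vec: list[float], corpus: dict, top_k: int = 2):
    """Return the top_k (score, text) pairs most similar to the query."""
    scored = [(cosine_similarity(query_vec, vec), text)
              for text, vec in corpus.items()]
    scored.sort(reverse=True)
    return scored[:top_k]

# Toy "embeddings"; a real pipeline would use an embedding model and a
# vector store (e.g. pgvector), then feed the retrieved text to an LLM.
corpus = {
    "reset your password": [0.9, 0.1, 0.0],
    "billing and invoices": [0.1, 0.9, 0.1],
    "api rate limits": [0.2, 0.2, 0.9],
}
hits = retrieve([0.85, 0.15, 0.05], corpus, top_k=1)
print(hits[0][1])  # the closest document's text
```

In production the retrieved passages would be concatenated into the LLM prompt; this sketch only shows the ranking idea.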

Posted 3 days ago

Apply

5.0 - 9.0 years

0 Lacs

Noida, Uttar Pradesh

On-site

As a seasoned software development professional with 5-7 years of experience, you will play a crucial role in researching and designing cutting-edge technology architectures to meet the strategic objectives of the company. Your responsibilities will include designing, developing, and maintaining products while collaborating with Product teams to align with functional and non-functional requirements in line with the business strategy. You will also engage with senior technical leaders to ensure alignment with the overall technology strategy and coordinate with QA for seamless product releases.

Your role will involve facilitating technology and methodology decision-making within the team, focusing on standardization, code reviews, developing reusable code bases, implementing best practices, managing source control, and streamlining deployment processes. Additionally, you will drive hiring initiatives, create career plans, provide training, and oversee OKR reviews for the software engineering team, encompassing employee coaching, mentoring, development, and fostering team cohesion.

The ideal candidate should have a strong background in software development, with expertise in Node.js and proficiency in languages like TypeScript, Go, Python, and Java. Experience building applications on a microservices architecture and a solid grasp of object-oriented and functional programming concepts are essential. Proficiency in MySQL and MongoDB, along with a foundation in container orchestration using Kubernetes or GKE, is highly desirable. Familiarity with Nginx, Redis, IoC, CI/CD, and unit testing will be advantageous.

At AeroQube, you will enjoy a range of benefits, including medical insurance, flexible working hours, skills development opportunities, food and beverage provisions, employee clubs and activities, gifts, a focus on work-life balance, and various financial benefits. Join our team and be part of a dynamic work environment that values your expertise and promotes professional growth and well-being.

Posted 5 days ago

Apply

5.0 - 9.0 years

0 Lacs

Thiruvananthapuram, Kerala

On-site

You should have a minimum of 5 years of experience in DevOps, SRE, or Infrastructure Engineering, including:
- A strong command of Azure Cloud and Infrastructure as Code using tools such as Terraform and CloudFormation
- Proficiency in Docker and Kubernetes
- Hands-on experience with CI/CD tools and scripting languages like Bash, Python, or Go
- Solid knowledge of Linux, networking, and security best practices
- Experience with monitoring and logging tools such as ELK, Prometheus, and Grafana
- Familiarity with GitOps, Helm charts, and automation (an advantage)

Your key responsibilities will involve:
- Designing and managing CI/CD pipelines using tools like Jenkins, GitLab CI/CD, and GitHub Actions
- Automating infrastructure provisioning with tools like Terraform, Ansible, and Pulumi
- Monitoring and optimizing cloud environments
- Implementing containerization and orchestration with Docker and Kubernetes (EKS/GKE/AKS)
- Maintaining logging, monitoring, and alerting systems (ELK, Prometheus, Grafana, Datadog)
- Ensuring system security, availability, and performance tuning
- Managing secrets and credentials using tools like Vault and Secrets Manager
- Troubleshooting infrastructure and deployment issues
- Implementing blue-green and canary deployments
- Collaborating with developers to enhance system reliability and productivity

Preferred skills include certification as an Azure DevOps Engineer, experience with multi-cloud environments, microservices, and event-driven systems, as well as exposure to AI/ML pipelines and data engineering workflows.
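The canary deployments this role mentions shift a small slice of traffic to a new release and promote it only while error rates stay healthy. As a rough illustration (the thresholds and function name below are assumptions, not from the posting), the promotion decision can be sketched as:

```python
def canary_decision(canary_errors: int, canary_requests: int,
                    baseline_error_rate: float,
                    max_ratio: float = 2.0,
                    min_requests: int = 100) -> str:
    """Decide whether to promote, roll back, or keep observing a canary.

    Promote when the canary's error rate is no worse than max_ratio times
    the stable baseline; roll back when it is worse; otherwise wait for
    more traffic before judging.
    """
    if canary_requests < min_requests:
        return "wait"  # not enough traffic for a meaningful comparison
    canary_rate = canary_errors / canary_requests
    if canary_rate <= baseline_error_rate * max_ratio:
        return "promote"
    return "rollback"

# A healthy canary: 1 error in 500 requests against a 0.5% baseline.
print(canary_decision(1, 500, 0.005))   # promote
# An unhealthy canary: 30 errors in 500 requests.
print(canary_decision(30, 500, 0.005))  # rollback
```

In practice tools like Argo Rollouts or a service mesh automate this gate against live metrics; the sketch only shows the comparison at its core.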

Posted 6 days ago

Apply

3.0 - 7.0 years

0 Lacs

Indore, Madhya Pradesh

On-site

As a GCP Cloud Engineer at Ascentt, you will play a crucial role in designing, deploying, and managing cloud infrastructure on Google Cloud Platform to provide scalable solutions for our development teams. Your expertise will contribute to turning enterprise data into real-time decisions using advanced machine learning and GenAI, with a focus on solving hard engineering problems with real-world industry impact.

Your key responsibilities will include:
- Designing and managing GCP infrastructure such as Compute Engine, GKE, Cloud Run, and networking components
- Implementing CI/CD pipelines and infrastructure as code, preferably using Terraform
- Configuring monitoring, logging, and security using the Cloud Operations Suite
- Automating deployments and maintaining disaster recovery procedures
- Collaborating closely with development teams on architecture and troubleshooting

To excel in this role, you should possess at least 5 years of GCP experience with core services like Compute, Storage, Cloud SQL, and BigQuery. Strong knowledge of Kubernetes, Docker, and networking is essential, along with proficiency in Terraform and scripting languages such as Python and Bash. Experience with CI/CD tools, cloud migrations, and GitHub is required, and holding a GCP Associate or Professional certification would be advantageous. A Bachelor's degree or equivalent experience is also necessary.

Preferred skills include experience with multi-cloud environments like AWS/Azure, familiarity with configuration management tools such as Ansible and Puppet, database administration knowledge, and expertise in cost optimization strategies. If you are a passionate builder looking to shape the future of industrial intelligence through cutting-edge data analytics and AI/ML solutions, Ascentt welcomes your application to join our team and make a significant impact in the automotive and manufacturing industries.

Posted 1 week ago

Apply

6.0 - 10.0 years

0 Lacs

Ahmedabad, Gujarat

On-site

As a Team Lead in DevOps with 6+ years of experience, you will manage, mentor, and develop a team of DevOps engineers. Your role will involve overseeing the deployment and maintenance of applications such as Odoo (Python/PostgreSQL), Magento (PHP/MySQL), and Node.js (JavaScript/TypeScript). You will design and manage CI/CD pipelines using tools like Jenkins, GitHub Actions, and GitLab CI, and handle environment-specific configurations for staging, production, and QA.

Your responsibilities will include:
- Containerizing legacy and modern applications using Docker and deploying them via Kubernetes (EKS/AKS/GKE) or Docker Swarm
- Implementing and maintaining Infrastructure as Code using tools like Terraform, Ansible, or CloudFormation
- Monitoring application health and infrastructure using tools such as Prometheus, Grafana, ELK, and Datadog
- Ensuring systems are secure, resilient, and compliant with industry standards
- Optimizing cloud costs and infrastructure performance
- Collaborating with development, QA, and IT support teams
- Troubleshooting performance, deployment, and scaling issues across tech stacks

To excel in this role, you must have at least 6 years of experience in DevOps/Cloud/System Engineering roles, with a minimum of 2 years managing or leading DevOps teams. Proficiency in supporting and deploying Odoo on Ubuntu/Linux with PostgreSQL, Magento with Apache/Nginx, PHP-FPM, and MySQL/MariaDB, and Node.js with PM2/Nginx or containerized setups is required. Experience with AWS, Azure, or GCP infrastructure in production, strong scripting skills (Bash, Python, PHP CLI, or Node CLI), and a deep understanding of Linux system administration and networking fundamentals are essential, as is experience with Git, SSH, reverse proxies (Nginx), and load balancers. Good communication skills and exposure to managing clients are crucial.

Preferred certifications include AWS Certified DevOps Engineer Professional, Azure DevOps Engineer Expert, and Google Cloud Professional DevOps Engineer; experience with Magento Cloud DevOps or Odoo deployment is considered a bonus. Nice-to-have skills include experience with multi-region failover, HA clusters, and RPO/RTO-based design; familiarity with MySQL/PostgreSQL optimization; knowledge of Redis, RabbitMQ, or Celery; previous experience with GitOps, ArgoCD, Helm, or Ansible Tower; and knowledge of VAPT 2.0, WCAG compliance, and infrastructure security best practices.

Posted 1 week ago

Apply

2.0 - 8.0 years

0 Lacs

Karnataka

On-site

Dexcom Corporation is a pioneer and global leader in continuous glucose monitoring (CGM), with a mission to revolutionize diabetes management and improve health outcomes. With a vision to empower individuals to take control of their health, Dexcom is dedicated to providing personalized, actionable insights to address important health challenges. As part of the team, you will contribute to innovative solutions that are transforming the healthcare industry and improving human health on a global scale.

Joining the high-growth and fast-paced environment at Dexcom, you will collaborate with leading-edge cloud and cybersecurity teams to develop cutting-edge diabetes medical device systems. As a Cloud Operations Engineer specializing in Google Cloud Platform (GCP) and PKI operations, you will play a crucial role in deploying and operating cloud-based services for the next generation of Dexcom products. Your responsibilities will include supporting the secure design, development, and deployment of products and services, working closely with various teams to ensure the PKI system is securely deployed and operated in the cloud.

In this role, you will:
- Support designs and architectures for enterprise cloud-based systems in GCP, ensuring careful planning and consideration of changes within the internal platform ecosystem
- Deploy and support PKI, device identity management, and key management solutions for Dexcom products and services
- Assist in software building and delivery processes using CI/CD tools, with a focus on automation and traceability
- Monitor and maintain production systems, troubleshoot issues, and participate in on-call rotations to address service outages
- Collaborate with stakeholders and development teams to integrate capabilities into the system architecture and align delivery plans with project timelines
- Provide cloud training to team members and support system testing, integration, and deployment

To be successful in this role, you should have experience with:
- CI/CD processes and GitHub Actions
- Scripting languages such as Bash and Python
- Automation of operational work
- Cloud development and containerized Docker applications
- GKE, Datadog, and cloud services
- Developing and deploying cloud-based systems via CI/CD pipelines and cloud-native tools

Additionally, knowledge of PKI fundamentals, cryptography, and cloud security is preferred. By joining Dexcom, you will have the opportunity to work with life-changing CGM technology, access comprehensive benefits, grow on a global scale, and contribute to an innovative organization committed to its employees, customers, and communities.

Travel Required: 0-5%

Experience and Education Requirements:
- Bachelor's degree in a technical discipline with 5-8 years of related experience
- Master's degree with 2-5 years of equivalent industry experience
- PhD with 0-2 years of experience

Posted 1 week ago

Apply

3.0 - 7.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

As a hands-on backend expert, you will be responsible for taking our FastAPI-based platform to the next level by building production-grade model-inference services, agentic AI workflows, and seamless integration with third-party LLMs and NLP tooling. Please note that this role is being hired for one of our client companies; the company name will be disclosed during the interview process.

In this role, you will work on the following key areas:

Core Backend Enhancements:
- Build APIs
- Harden security with OAuth2/JWT, rate limiting, Secret Manager, and observability with structured logging and tracing
- Implement CI/CD, test automation, health checks, and SLO dashboards

Awesome UI Interfaces:
- Develop UI interfaces using React.js/Next.js, Redux/Context, and CSS frameworks such as Tailwind, MUI, custom CSS, and shadcn

LLM & Agentic Services:
- Design micro/mini-services to host and route to OpenAI, Anthropic, local HF models, embeddings, and RAG pipelines
- Implement autonomous/recursive agents that orchestrate multi-step chains including Tools, Memory, and Planning

Model-Inference Infrastructure:
- Set up GPU/CPU inference servers behind an API gateway
- Optimize throughput with batching, streaming, quantization, and caching using technologies like Redis and pgvector

NLP & Data Services:
- Own the NLP stack, focusing on Transformers for classification, extraction, and embedding generation
- Build data pipelines that combine aggregated business metrics with model telemetry for analytics

You will be working with the following tech stack:
- Python, FastAPI, Starlette, Pydantic
- Async SQLAlchemy, Postgres, Alembic, pgvector
- Docker, Kubernetes, or ECS/Fargate on AWS or GCP
- Redis, RabbitMQ, Celery for jobs and caching
- Prometheus, Grafana, OpenTelemetry
- Hugging Face Transformers, LangChain, Torch, TensorRT
- OpenAI, Anthropic, Azure OpenAI, Cohere APIs
- Pytest, GitHub Actions
- Terraform or CDK

To be successful in this role, you must have:
- 3+ years of experience building production Python REST APIs using FastAPI, Flask, or Django REST
- Strong SQL schema design and query optimization skills in Postgres
- Deep knowledge of async patterns and concurrency
- Hands-on experience with UI applications that integrate with backend APIs
- Experience with RAG, LLM/embedding workflows, prompt engineering, and agent-ops frameworks
- Cloud container orchestration experience
- Proficiency in CI/CD pipelines and infrastructure as code

Nice-to-have experience includes familiarity with streaming protocols, NGINX Ingress, RBAC, multi-tenant SaaS security, data privacy, event-sourced data models, and more.

This role is crucial as our products are live and evolving rapidly. You will have the opportunity to own systems end-to-end, scale AI services, work closely with the founder, and shape the future of our platform. If you are seeking meaningful ownership and enjoy working on challenging, forward-looking problems, this role is perfect for you.
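The throughput optimizations this posting mentions for inference servers (batching alongside streaming, quantization, and caching) can be illustrated with a toy micro-batcher: requests are buffered and flushed to the model as one batch once the buffer fills or a deadline passes. This is only a schematic sketch; the class and parameter names are assumptions, and a production service would do this asynchronously in front of the model server:

```python
import time

class MicroBatcher:
    """Buffer inference requests and flush them to the model as one batch.

    A batch is flushed when it reaches max_size, or when max_wait_s has
    elapsed since the first buffered request, whichever comes first.
    """

    def __init__(self, model_fn, max_size: int = 4, max_wait_s: float = 0.05):
        self.model_fn = model_fn          # callable: list[str] -> list[str]
        self.max_size = max_size
        self.max_wait_s = max_wait_s
        self._buffer: list[str] = []
        self._first_at: float | None = None

    def submit(self, prompt: str):
        """Add a prompt; return the batch results if this submission flushed."""
        if self._first_at is None:
            self._first_at = time.monotonic()
        self._buffer.append(prompt)
        deadline_hit = time.monotonic() - self._first_at >= self.max_wait_s
        if len(self._buffer) >= self.max_size or deadline_hit:
            return self.flush()
        return None

    def flush(self) -> list[str]:
        """Run the model on everything buffered and reset the buffer."""
        batch, self._buffer, self._first_at = self._buffer, [], None
        return self.model_fn(batch)

# Hypothetical "model" that upper-cases each prompt, one batch at a time.
batcher = MicroBatcher(lambda prompts: [p.upper() for p in prompts], max_size=3)
assert batcher.submit("hi") is None
assert batcher.submit("there") is None
print(batcher.submit("friend"))  # third submission fills and flushes the batch
```

Batching amortizes per-call overhead (and, on GPUs, fills the hardware) at the cost of a small bounded latency, which is why the deadline exists.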

Posted 1 week ago

Apply

3.0 - 7.0 years

0 Lacs

Maharashtra

On-site

You have an exciting opportunity to join Ripplehire as a Senior DevOps Team Lead - GCP Specialist. In this role, you will play a crucial part in shaping and executing the cloud infrastructure strategy using Google Cloud Platform (GCP), with a particular focus on GKE, networking, and optimization strategies.

As the Senior DevOps Team Lead, your responsibilities will include designing, implementing, and managing GCP-based infrastructure; optimizing GKE clusters for performance and cost-efficiency; establishing secure VPC architectures and firewall rules; setting up logging and monitoring systems; driving cost optimization initiatives; mentoring team members on GCP best practices; and collaborating with development teams on CI/CD pipelines.

To excel in this role, you must possess extensive experience with GCP, including GKE, networking, logging, monitoring, and cost optimization, along with a strong background in Infrastructure as Code, CI/CD pipeline design, container orchestration, troubleshooting, incident management, and performance optimization. Qualifications include at least 5 years of DevOps experience focused on GCP environments, GCP Professional certifications (Cloud Architect or DevOps Engineer preferred), experience leading technical teams, cloud security expertise, and a track record of scaling infrastructure for high-traffic applications.

If you are ready for a new challenge and an opportunity to advance your career in a supportive work environment, click Apply, complete the screening form, and upload your resume to increase your chances of being shortlisted for an interview. Uplers is committed to making the hiring process reliable, simple, and fast, and we are here to support you throughout your engagement. Apply today and take the next step in your career journey with us!

Posted 1 week ago

Apply

6.0 - 10.0 years

0 Lacs

Ahmedabad, Gujarat

On-site

You will be responsible for leading a team of DevOps engineers in Ahmedabad. Your main duties will include managing and mentoring the team and overseeing the deployment and maintenance of applications such as Odoo, Magento, and Node.js. You will also be in charge of designing and managing CI/CD pipelines using tools like Jenkins and GitLab CI, handling environment-specific configurations, and containerizing applications using Docker.

In addition, you will implement and maintain Infrastructure as Code using tools like Terraform and Ansible, monitor application health and infrastructure, and ensure systems are secure, resilient, and compliant with industry standards. Collaboration with development, QA, and IT support teams is essential for seamless delivery, and troubleshooting performance, deployment, or scaling issues across tech stacks will also be part of your responsibilities.

To be successful in this role, you should have at least 6 years of experience in DevOps/Cloud/System Engineering roles, with a minimum of 2 years managing or leading DevOps teams. Hands-on experience with Odoo, Magento, Node.js, and AWS/Azure/GCP infrastructure is required. Strong scripting skills in Bash, Python, PHP, or Node CLI, as well as a deep understanding of Linux system administration and networking fundamentals, are essential, along with experience with Git, SSH, reverse proxies, and load balancers, good communication skills, and client management exposure.

Preferred certifications include AWS Certified DevOps Engineer Professional, Azure DevOps Engineer Expert, and Google Cloud Professional DevOps Engineer. Nice-to-have bonus skills include experience with multi-region failover, HA clusters, MySQL/PostgreSQL optimization, GitOps, ArgoCD, Helm, VAPT 2.0, WCAG compliance, and infrastructure security best practices.

Posted 1 week ago

Apply

5.0 - 9.0 years

13 - 20 Lacs

Bengaluru

Hybrid

Apigee SRE, Bangalore (Hybrid), India. Seeking a talented Apigee SRE in Bangalore for a hybrid role (3 days in office) to drive the reliability and scalability of our API platforms.

Intro: We are looking for a dedicated Apigee Site Reliability Engineer (SRE) to join our team in Bangalore. In this hybrid role, you will be crucial in ensuring the robustness, scalability, and performance of our Apigee API platform and related API ecosystems. You'll design, implement, and maintain reliable systems, and actively contribute to an engineering culture that values innovation and continuous improvement. You'll gain broad exposure working with diverse clients, expand your skillset, and have a direct impact on our success.

What Experience is Mandatory:
- Proven hands-on experience with Google Apigee Edge (or Apigee X/Hybrid), including API proxy development, security, and traffic management.
- Strong understanding and practical application of Site Reliability Engineering (SRE) principles and practices.
- Experience with monitoring, logging, and alerting tools relevant to API platforms (e.g., Cloud Monitoring, Prometheus, Grafana, ELK Stack).
- Proficiency in diagnosing and resolving complex production issues related to APIs and distributed systems.
- Solid understanding of cloud platforms, with a preference for Google Cloud Platform (GCP) services relevant to Apigee.
- Experience with scripting and automation (e.g., Python, Bash, Go).

What Experience is Beneficial but Optional:
- Familiarity with Infrastructure as Code (IaC) tools like Terraform for managing cloud resources and Apigee configurations.
- Knowledge of containerization (Docker) and orchestration (Kubernetes/GKE).
- Experience with other API Gateway solutions or API management platforms.
- Background in network troubleshooting and security best practices for APIs.
- Experience with CI/CD pipelines for API deployments.

What We Offer:
- Real Impact: You'll have a direct impact on our clients' success and the growth of our company. Your work ensures critical systems remain highly available and performant.
- Learning and Growth: We invest in your development. We cover education expenses, encourage certifications, and provide opportunities to work with the latest technologies.
- Equity in the Company: Become a part-owner of Aviato (after 6 months). We believe in sharing our success with the team.
- Competitive Compensation and Benefits: We offer a competitive salary and a comprehensive benefits package, including an annual bonus.
- Hybrid Work Model: Enjoy a balanced work life with our hybrid setup in Bangalore (3 days in office).
- Great Place to Work: Join a company certified as a Great Place to Work, founded by ex-Googlers who prioritize a positive and innovative culture.

Ready to build something amazing? Apply now!
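The SRE principles this role centers on are often made concrete through error budgets: an availability SLO implies a bounded allowance of failures, and the remaining budget informs how aggressively changes can ship. A hedged sketch (the SLO target and request counts are illustrative, not from the posting):

```python
def error_budget_remaining(slo: float, total_requests: int, failed_requests: int) -> float:
    """Fraction of the error budget still unspent for the current window.

    slo: availability target, e.g. 0.999 allows 0.1% of requests to fail.
    Returns 1.0 when no budget is spent, 0.0 or below when exhausted.
    """
    allowed_failures = (1.0 - slo) * total_requests
    if allowed_failures == 0:
        return 0.0 if failed_requests else 1.0
    return 1.0 - failed_requests / allowed_failures

# With a 99.9% SLO over 1,000,000 requests, roughly 1,000 failures are allowed.
print(round(error_budget_remaining(0.999, 1_000_000, 250), 6))  # 0.75
```

A team might freeze risky rollouts once this value approaches zero and prioritize reliability work instead.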

Posted 1 week ago

Apply

12.0 - 16.0 years

0 Lacs

karnataka

On-site

Join the Agentforce team in AI Cloud at Salesforce and make a real impact with your software designs and code! This position requires technical skills, outstanding analytical and influencing skills, and extraordinary business insight. It is a multi-functional role that requires building alignment and communication with several engineering organizations. We work in a highly collaborative environment, and you will partner with a highly cross-functional team of Data Scientists, Software Engineers, Machine Learning Engineers, UX experts, and Product Managers to build upon Agentforce, our cutting-edge new AI framework. We value execution, clear communication, feedback, and making learning fun.

Your impact - you will:
- Architect, design, implement, test, and deliver highly scalable AI solutions: Agents, AI Copilots/assistants, Chatbots, AI Planners, and RAG solutions.
- Be accountable for defining and driving software architecture and enterprise capabilities (scalability, fault tolerance, extensibility, maintainability, etc.).
- Independently design sophisticated software systems for high-end solutions, while working in a consultative fashion with other senior engineers and architects in AI Cloud and across the company.
- Determine overall architectural principles, frameworks, and standards to craft vision and roadmaps.
- Analyze and provide feedback on product strategy and technical feasibility.
- Drive long-term design strategies that span multiple sophisticated projects; deliver technical reports and performance presentations to customers and at industry events.
- Actively communicate with, encourage, and motivate all levels of staff.
- Be a subject matter expert for multiple products, while writing code and working closely with other developers, PM, and UX to ensure features are delivered to meet business and quality requirements.
- Troubleshoot complex production issues and interface with support and customers as needed.

Required Skills:
- 12+ years of experience building highly scalable Software-as-a-Service applications/platforms.
- Experience building technical architectures that address complex performance issues.
- Thrive in dynamic environments, working on cutting-edge projects that often come with ambiguity; an innovation/startup mindset and the ability to adapt.
- Deep knowledge of object-oriented programming and experience with at least one object-oriented programming language, preferably Java.
- Proven ability to mentor team members to support their understanding and growth of software engineering architecture concepts and aid in their technical development.
- High proficiency in at least one high-level programming language and web framework (Node.js, Express, Hapi, etc.).
- Proven understanding of web technologies such as JavaScript, CSS, HTML5, XML, JSON, and/or Ajax.
- Data model design, database technologies (RDBMS & NoSQL), and languages such as SQL and PL/SQL.
- Experience delivering, or partnering with teams that ship, AI products at high scale.
- Experience in automated testing, including unit and functional testing using Java, JUnit, JSUnit, and Selenium.
- Demonstrated ability to drive long-term design strategies that span multiple complex projects.
- Experience delivering technical reports and presentations to customers and at industry events.
- Demonstrated track record of cultivating strong working relationships and driving collaboration across multiple technical and business teams to resolve critical issues.
- Experience with the full software lifecycle in highly agile and ambiguous environments.
- Excellent interpersonal and communication skills.

Preferred Skills:
- Solid experience in API development, API lifecycle management, and/or client SDK development.
- Experience with machine learning or cloud technology platforms such as AWS SageMaker, Terraform, Spinnaker, EKS, and GKE.
- Experience with AI/ML and data science, including predictive and generative AI.
- Experience with data engineering, data pipelines, or distributed systems.
- Experience with continuous integration (CI), continuous deployment (CD), and service ownership.
- Familiarity with Salesforce APIs and technologies.
- Ability to support and resolve production customer escalations with excellent debugging and problem-solving skills.

Posted 1 week ago

Apply

2.0 - 3.0 years

0 Lacs

Navi Mumbai, Maharashtra, India

On-site

Job Role: DevOps Engineer. Years of Experience: 2-3 years. Location: Ghansoli. Education: BE/B.Tech.

Overview: Looking for a motivated and skilled DevSecOps Engineer with 2-3 years of hands-on experience in implementing DevSecOps practices, CI/CD pipelines, and integrating security into the development lifecycle. The ideal candidate will have working knowledge of Kubernetes (K8s), cloud platforms like GKE and AKS, and build/deployment automation tools including Azure DevOps and Jenkins. Experience with security scanning tools (SAST, DAST, Fortify, SonarQube) and scripting knowledge in Groovy, ANT, and JavaScript is essential.

Job Role:
- Design, implement, and maintain secure and scalable CI/CD pipelines.
- Integrate security tools and processes into DevOps workflows (DevSecOps).
- Automate infrastructure and deployments using Azure DevOps and Jenkins.
- Deploy to on-premises K8s clusters and manage Kubernetes clusters on GKE and AKS.
- Deploy to Windows-based servers (IIS).
- Implement and maintain Static and Dynamic Application Security Testing (SAST/DAST) tools.
- Integrate and configure Fortify, SonarQube, and other security tools into pipelines.
- Write and maintain automation scripts using Groovy, ANT, and JavaScript.
- Collaborate with development, QA, and security teams to ensure secure software delivery.
- Conduct security assessments and remediations as part of the SDLC.

Required Skills & Qualifications:
- Bachelor's degree in Engineering or equivalent.
- 2-3 years of hands-on experience in DevSecOps/DevOps.
- Strong knowledge of and hands-on experience with: Azure DevOps Pipelines and Jenkins for CI/CD; security tools such as Fortify, SonarQube, Black Duck, and DAST/SAST tools (e.g., OWASP ZAP, Burp Suite); and Kubernetes (K8s) with GKE and AKS.
- Proficiency in scripting languages such as Groovy, ANT, and JavaScript.
- Basic programming/scripting capabilities to automate security checks and workflows.
- Understanding of application security principles and best practices.
- Experience working in Agile and collaborative team environments.
- Excellent troubleshooting, documentation, and communication skills.

Posted 2 weeks ago

Apply

8.0 - 13.0 years

15 - 20 Lacs

Bengaluru

Hybrid

Hello, we are hiring a "GCP Cloud Engineer" for our Bangalore location.

Experience: 8+ years. Location: Bangalore (Hybrid). Shift timings: 7:30 AM to 4:00 PM. Notice period: immediate joiners only (notice period served or currently serving). Please apply only if you can join immediately.

Job Description:
- 3 years of experience building modern applications utilizing GCP services like Cloud Build, Cloud Functions/Cloud Run, GKE, Logging, GCS, Cloud SQL, and IAM.
- Primary proficiency in Python and experience with a secondary language such as Golang or Java.
- In-depth knowledge and hands-on experience with GKE/Kubernetes.
- A high emphasis on software engineering fundamentals such as code and configuration management, CI/CD/automation, and automated testing.
- Working with operations, security, compliance, and architecture groups to develop secure, scalable, and supportable solutions.
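Python scripting against cloud APIs, as this posting's requirements imply, usually involves retrying transient failures. A minimal, generic sketch of exponential backoff with jitter (the flaky call below is a stand-in for illustration, not a real GCP client):

```python
import random
import time

def with_retries(fn, max_attempts=5, base_delay=0.5, sleep=time.sleep):
    """Call fn(), retrying transient failures with exponential backoff and jitter."""
    for attempt in range(1, max_attempts + 1):
        try:
            return fn()
        except Exception:
            if attempt == max_attempts:
                raise  # out of attempts: surface the last error
            # Full jitter: wait a random amount up to base_delay * 2^(attempt-1).
            sleep(random.uniform(0, base_delay * 2 ** (attempt - 1)))

# Hypothetical flaky call: fails twice, then succeeds.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient")
    return "ok"

print(with_retries(flaky, sleep=lambda s: None))  # ok
```

Injecting the `sleep` function keeps the helper trivially testable; in real code you would catch only the specific transient exception types your client library raises.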

Posted 2 weeks ago

Apply

5.0 - 10.0 years

18 - 25 Lacs

Pune, Bengaluru, Mumbai (All Areas)

Hybrid

Experience: 5-12 years. Location: Mumbai, Bangalore, Pune, Chennai, Hyderabad, Kolkata, Noida, Kochi, Coimbatore, Mysore, Nagpur, Bhubaneswar, Indore, Warangal. Notice period: 0-30 days.

Roles and Responsibilities:
- Design, develop, test, deploy, and maintain large-scale Java applications using a Spring Boot microservices architecture.
- Collaborate with cross-functional teams to identify requirements and implement solutions that meet business needs.
- Ensure high availability, scalability, security, and performance of cloud-based systems on GCP/AWS platforms.
- Participate in code reviews to ensure adherence to coding standards and best practices.
- Troubleshoot complex issues related to application deployment on Kubernetes clusters.

Desired Candidate Profile:
- 5-10 years of experience developing Java applications, with expertise in Spring Boot microservices architecture.
- Strong understanding of cloud computing concepts (GCP/AWS) and containerization technologies (Kubernetes).
- Proficiency in building RESTful APIs using the Spring WebFlux framework.
- Experience working with CI/CD pipelines for automated testing and deployment.

Posted 2 weeks ago

Apply

10.0 - 20.0 years

10 - 20 Lacs

Hyderabad, Chennai, Bengaluru

Work from Office

Job Description:

Cloud Infrastructure & Deployment:
- Design and implement secure, scalable, and highly available cloud infrastructure on GCP.
- Provision and manage compute, storage, network, and database services.
- Automate infrastructure using Infrastructure as Code (IaC) tools such as Terraform or Deployment Manager.

Architecture & Design:
- Translate business requirements into scalable cloud solutions.
- Recommend GCP services aligned with application needs and cost optimization.
- Participate in high-level architecture and solution design discussions.

DevOps & Automation:
- Build and maintain CI/CD pipelines (e.g., using Cloud Build, Jenkins, GitLab CI).
- Integrate monitoring, logging, and alerting (e.g., Stackdriver / Cloud Operations Suite).
- Enable autoscaling, load balancing, and zero-downtime deployments.

Security & Compliance:
- Ensure compliance with security standards and best practices.

Migration & Optimization:
- Support cloud migration projects from on-premise or other cloud providers to GCP.
- Optimize performance, reliability, and cost of GCP workloads.

Documentation & Support:
- Maintain technical documentation and architecture diagrams.
- Provide L2/L3 support for GCP-based services and incidents.

Required Skills and Qualifications:
- Google Cloud certification: Associate Cloud Engineer or Professional Cloud Architect/Engineer.
- Hands-on experience with GCP services (Compute Engine, GKE, Cloud SQL, BigQuery, etc.).
- Strong command of Linux, shell scripting, and networking fundamentals.
- Proficiency in Terraform, Cloud Build, Cloud Functions, or other GCP-native tools.
- Experience with containers and orchestration: Docker, Kubernetes (GKE).
- Familiarity with monitoring/logging: Cloud Monitoring, Prometheus, Grafana.
- Understanding of IAM, VPCs, firewall rules, service accounts, and Cloud Identity.
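The autoscaling this description calls for is often reasoned about with the Kubernetes Horizontal Pod Autoscaler formula: desired replicas = ceil(current replicas x current metric / target metric), clamped to configured bounds. A small illustrative sketch (the utilization figures and bounds are hypothetical):

```python
import math

def desired_replicas(current: int, current_util: float, target_util: float,
                     min_replicas: int = 1, max_replicas: int = 10) -> int:
    """HPA-style scaling decision: scale so that per-replica utilization
    moves toward the target, clamped to the configured bounds."""
    raw = math.ceil(current * current_util / target_util)
    return max(min_replicas, min(max_replicas, raw))

# 4 replicas at 90% CPU with a 60% target -> scale out to 6.
print(desired_replicas(4, current_util=90, target_util=60))  # 6
# 4 replicas at 20% CPU with a 60% target -> scale in to 2.
print(desired_replicas(4, current_util=20, target_util=60))  # 2
```

In GKE the real controller adds stabilization windows and tolerance bands on top of this core calculation, but the proportional formula is the same idea.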

Posted 2 weeks ago

Apply

9.0 - 13.0 years

45 - 60 Lacs

Bengaluru

Work from Office

About the Role: As our new Senior Infrastructure Engineer, you'll be part of the GDI (Getting Data In) group under NG-SIEM, with a focus on data ingestion at petabyte scale. The group focuses on the data ingestion and onboarding experience for NG-SIEM customers, ingesting several petabytes of data per day across multiple regions from 100+ first-party and third-party data sources. The 3P Ingestion team within the GDI group is responsible for all data ingestion from third-party data sources. As a member of this team, you will have the opportunity to design and scale infrastructure for a microservices-based architecture using cell-based design principles. You'll work closely with engineers and other infrastructure engineering teams to build highly available, observable, and secure systems deployed across multiple regions. Your work will directly impact our ability to deliver fast, resilient, and scalable services to support multi-petabyte data ingestion. This role targets candidates located in Bangalore who are willing to work in a hybrid environment.

What You'll Do:
- Design and maintain Kubernetes clusters and supporting infrastructure across multiple regions and cloud providers using Infrastructure as Code (IaC).
- Design and implement infrastructure for a cell-based architecture, logically isolating workloads by region, business unit, or customer group for scale and fault isolation.
- Set up and manage observability systems (monitoring, logging, tracing) for microservices across cells and regions.
- Automate infrastructure provisioning, deployment pipelines, and service rollout strategies (e.g., canary, blue/green).
- Work with developers to define SLOs and SLIs, and build self-healing, observable services.
- Ensure compliance, security, and cost optimization in cloud-native infrastructure.
- Own the incident response process across infrastructure and service-level issues.

What You'll Need:
- 8+ years of experience in DevOps, Infrastructure, or SRE roles, preferably in large-scale environments.
- Strong hands-on experience with Kubernetes (EKS, GKE, AKS, or self-hosted) and Helm.
- Demonstrated experience designing and scaling cell-based architectures or similar isolation-based infrastructure models.
- Deep knowledge of multi-region deployment strategies and global traffic routing.
- Proficiency with observability stacks such as Prometheus, Grafana, Loki, ELK, OpenTelemetry, Jaeger, and Datadog.
- Expert-level skills in Terraform, Pulumi, or similar IaC frameworks.
- Fluency in at least one scripting language (Python, Bash; Go preferred).
- Cloud platform experience (AWS/GCP/Azure), especially with services related to networking, compute, and identity.
- Strong communication and cross-functional collaboration skills.

Bonus Points:
- Experience with GitOps tools like Argo CD or Flux CD.
- Experience with service meshes (e.g., Istio, Linkerd).
- Familiarity with chaos engineering or fault-injection testing.
- Exposure to compliance/regulatory needs (e.g., SOC 2, HIPAA) in infrastructure.
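The canary rollout strategies this role covers are typically driven by a staged traffic schedule with an error-rate gate at each step: advance the canary's traffic share only while its error rate stays healthy, otherwise shift everything back to the stable version. A simplified sketch (the step percentages and threshold are hypothetical, not from the posting):

```python
def next_canary_weight(current: int, error_rate: float,
                       max_error_rate: float = 0.01,
                       steps=(5, 25, 50, 100)) -> int:
    """Advance a staged canary rollout one step, or roll back to 0 on errors.

    current: canary traffic percentage now (0 means not started).
    Returns the canary traffic percentage for the next stage.
    """
    if error_rate > max_error_rate:
        return 0  # roll back: shift all traffic to the stable version
    for step in steps:
        if step > current:
            return step
    return current  # already at full rollout

print(next_canary_weight(0, error_rate=0.0))    # 5
print(next_canary_weight(25, error_rate=0.0))   # 50
print(next_canary_weight(50, error_rate=0.05))  # 0
```

In practice this decision function would be wired to real SLI queries (e.g., from Prometheus) and to whatever traffic-splitting mechanism the platform exposes, such as weighted backend services or a service mesh.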

Posted 2 weeks ago

Apply

6.0 - 8.0 years

3 - 7 Lacs

Mumbai, Maharashtra, India

On-site

We are looking for a skilled GCP Cloud Engineer with a minimum of 5+ years of hands-on experience in designing, implementing, and managing cloud infrastructure on Google Cloud Platform (GCP). The ideal candidate must have strong expertise in Terraform for Infrastructure as Code (IaC) and should be well versed in GCP-native services, cloud networking, automation, and CI/CD processes.

Required Skills & Qualifications:
- 5+ years of experience in GCP cloud engineering (mandatory).
- Strong hands-on experience with Terraform.
- Proficient in GCP services including Compute Engine, VPC, IAM, GKE, Cloud Storage, Cloud Functions, etc.
- Solid understanding of cloud networking, security, and automation tools.
- Experience with CI/CD tools and DevOps practices.
- Familiarity with scripting languages (e.g., Python, Shell).
- Excellent problem-solving and communication skills.

Posted 3 weeks ago

Apply

8.0 - 13.0 years

30 - 45 Lacs

Pune, Bengaluru

Hybrid

Technical Project Manager - GCP DevOps (immediate joiner preferred)

Job Summary: We are looking for a seasoned Project Manager with a strong background in Google Cloud Platform (GCP) and DevOps methodologies. The ideal candidate will be responsible for planning, executing, and finalizing projects according to strict deadlines and within budget. This includes acquiring resources and coordinating the efforts of team members and third-party contractors or consultants in order to deliver projects according to plan. The GCP DevOps Project Manager will also define the project's objectives and oversee quality control throughout its life cycle.

Key Responsibilities:
- Project Leadership: Lead and manage the end-to-end lifecycle of complex cloud infrastructure and DevOps projects on Google Cloud Platform.
- Planning & Scoping: Define project scope, goals, and deliverables that support business objectives, in collaboration with senior management and stakeholders.
- Agile/Scrum Management: Facilitate sprint planning, daily stand-ups, retrospectives, and sprint demos within an Agile framework.
- Resource Management: Effectively communicate project expectations to team members and stakeholders in a timely and clear fashion; manage and allocate resources efficiently.
- Risk & Issue Management: Proactively identify, track, and mitigate project risks and issues. Develop and implement effective contingency plans.
- Budget & Timeline: Develop and manage project budgets, timelines, and resource allocation plans. Track project milestones and deliverables.
- Stakeholder Communication: Serve as the primary point of contact for project stakeholders. Prepare and present regular status reports on project progress, problems, and solutions.
- Technical Oversight: Work closely with technical leads and architects to ensure solutions are designed and implemented in line with best practices for security, reliability, and scalability on GCP.
- CI/CD Pipeline Management: Oversee the implementation and optimization of CI/CD pipelines to automate the deployment, testing, and delivery of software.
- Quality Assurance: Ensure that all project deliverables meet high quality standards and are fully tested before release.

Required Skills and Qualifications:
- Experience: 5+ years of experience in technical project management, with at least 2-3 years focused on cloud infrastructure projects, specifically on GCP.
- GCP Expertise: Strong understanding of core GCP services (e.g., Compute Engine, GKE, Cloud Storage, BigQuery, Cloud SQL, IAM, Cloud Build).
- DevOps Acumen: In-depth knowledge of DevOps principles and hands-on experience with CI/CD tools (e.g., Jenkins, GitLab CI, CircleCI, Cloud Build), Infrastructure as Code (e.g., Terraform, Deployment Manager), and containerization (e.g., Docker, Kubernetes).
- Project Management Methodology: Proven experience with Agile, Scrum, and/or Kanban methodologies. PMP or Certified ScrumMaster (CSM) certification is a strong plus.
- Leadership: Demonstrated ability to lead and motivate cross-functional technical teams in a fast-paced environment.
- Communication: Exceptional verbal, written, and interpersonal communication skills, with the ability to articulate complex technical concepts to both technical and non-technical audiences.
- Problem-Solving: Strong analytical and problem-solving skills with a high attention to detail.

Preferred Qualifications:
- GCP Professional Cloud Architect or Professional Cloud DevOps Engineer certification.
- Experience with hybrid or multi-cloud environments.
- Background in software development or systems administration.
- Experience with monitoring and logging tools (e.g., Prometheus, Grafana, ELK Stack, Google Cloud's operations suite).

Posted 4 weeks ago

Apply

3.0 - 5.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Job Description: We are seeking a skilled and proactive GCP Cloud Engineer with 3-5 years of hands-on experience in managing and optimizing cloud infrastructure using Google Cloud Platform (GCP). The ideal candidate will be responsible for designing, deploying, and maintaining secure and scalable cloud environments, collaborating with cross-functional teams, and driving automation and reliability across our cloud infrastructure.

Key Responsibilities:
- Design and implement cloud-native solutions on Google Cloud Platform.
- Deploy and manage infrastructure using Terraform, Cloud Deployment Manager, or similar IaC tools.
- Manage GCP services such as Compute Engine, GKE (Kubernetes), Cloud Storage, Pub/Sub, Cloud Functions, BigQuery, etc.
- Optimize cloud performance, cost, and scalability.
- Ensure security best practices and compliance across the GCP environment.
- Monitor and troubleshoot issues using Stackdriver / Cloud Monitoring.
- Collaborate with development, DevOps, and security teams.
- Automate workflows and CI/CD pipelines using tools like Jenkins, GitLab CI, or Cloud Build.

Technical Requirements:
- 3-5 years of hands-on experience with GCP.
- Strong expertise in Terraform, GCP networking, and cloud security.
- Proficient in container orchestration using Kubernetes (GKE).
- Experience with CI/CD and DevOps practices, and shell scripting or Python.
- Good understanding of IAM, VPCs, firewall rules, and service accounts.
- Familiarity with monitoring/logging tools like Stackdriver or Prometheus.
- Strong problem-solving and troubleshooting skills.

Additional Responsibilities:
- GCP Professional certification (e.g., Professional Cloud Architect or Cloud Engineer).
- Experience with hybrid-cloud or multi-cloud architecture.
- Exposure to other cloud platforms (AWS, Azure) is a plus.
- Strong communication and teamwork skills.

Preferred Skills: Google Cloud Platform; GCP/Google Cloud development; Java; Spring Boot; .NET; Python.

Posted 4 weeks ago

Apply

5.0 - 10.0 years

20 - 35 Lacs

Pune

Work from Office

GCP Infrastructure Engineer (Experience: 5+ years). Deploy, configure, and maintain GCP infrastructure. Work with CI/CD pipelines for infrastructure automation and application code deployment.

Required Past Experience:
- 5 to 10 years of demonstrated relevant experience deploying, configuring, and supporting public cloud infrastructure (GCP as primary), IaaS, and PaaS.
- Experience configuring and managing GCP infrastructure environment components:
  - Foundation components: networking (VPC, VPN, Interconnect, firewall, and routes), IAM, folder structure, Organization Policy, VPC Service Controls, Security Command Center, etc.
  - Application components: BigQuery, Cloud Composer, Cloud Storage, Google Kubernetes Engine (GKE), Compute Engine, Cloud SQL, Cloud Monitoring, Dataproc, Data Fusion, Bigtable, Dataflow, etc.
  - Operational components: audit logs, Cloud Monitoring, alerts, billing exports, etc.
  - Security components: KMS, Secret Manager, etc.
- Experience with infrastructure automation using Terraform.
- Experience designing and implementing CI/CD pipelines with Cloud Build, Jenkins, GitLab, Bitbucket Pipelines, etc., and source code management tools like Git.
- Experience with scripting: shell scripting and Python.

Required Skills and Abilities:
- Mandatory skills: GCP networking & IAM, Terraform, shell/Python scripting, CI/CD pipelines.
- Secondary skills: Composer, BigQuery, GKE, Dataproc, GCP networking.
- Good to have: certifications in any of the following: Cloud DevOps Engineer, Cloud Security Engineer, Cloud Network Engineer.
- Good verbal and written communication skills; strong team player.

About Us: A global leader in data warehouse migration and modernisation to the cloud, we empower businesses by migrating their data, workloads, ETL, and analytics to the cloud by leveraging automation. We have expertise in transforming legacy Teradata, Oracle, Hadoop, Netezza, Vertica, and Greenplum platforms, along with ETL tools like Informatica, DataStage, Ab Initio, and others, to cloud-based data warehousing, with further capabilities in data engineering, advanced analytics solutions, data management, data lakes, and cloud optimization. Datametica is a key partner of the major cloud service providers: Google, Microsoft, Amazon, and Snowflake.
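Governance components like Organization Policy, mentioned in the role above, are often complemented by small audit scripts that check resources against house rules, for example required billing labels. An illustrative sketch (the required label set and resource shape are hypothetical, not a real GCP API):

```python
# Hypothetical org convention: every resource must carry these labels.
REQUIRED_LABELS = {"env", "owner", "cost-center"}

def missing_labels(resource: dict) -> set:
    """Return the governance labels a resource description is missing."""
    return REQUIRED_LABELS - set(resource.get("labels", {}))

# A resource as it might appear in an inventory export.
vm = {"name": "etl-worker-1", "labels": {"env": "prod", "owner": "data-eng"}}
print(missing_labels(vm))  # {'cost-center'}
```

A script like this can run in a pipeline over an asset inventory export and fail the build (or open a ticket) for any non-compliant resource.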

Posted 4 weeks ago

Apply

4.0 - 8.0 years

3 - 12 Lacs

Hyderabad, Telangana, India

On-site

4 to 8 years of experience as a DevOps Engineer.

- We will align with the client's existing agile methodology, terminology, and backlog management tool.
- DevOps tools have been chosen (Jenkins, GitHub, etc.) and agreed with developers; some specific tools (e.g., scanning) may need to be selected and acquired.
- Focus is on native GCP services (GKE; no third-party or OSS platforms).
- GCP will be provisioned as code using Terraform.
- The initial dev sandbox for October will have limited platform capabilities (enable development with an MVP).
- Security testing and scanning tools will be decided during discovery.
- Databases will be re-platformed to Cloud SQL.
- The environment will be specified for up to 8 initial microservices (release plan finalized before the end of Discovery).
- The design will include multi-zone, not multi-region.

Essential functions: The primary objective of this project is to unify the e-commerce experience into a single, best-of-breed platform that not only caters to current needs but also sets the stage for seamless migration of other e-commerce experiences in the future.

Qualifications: GCP, Kubernetes, Terraform, Cloud Run, Ansible; GKE would be a plus.

Posted 4 weeks ago

Apply