
215 GCP Cloud Jobs - Page 2

JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

5.0 - 10.0 years

10 - 20 Lacs

Bengaluru

Remote

Naukri logo

Role & responsibilities: This SRO Engineer role is part of the GPU Cloud Business Unit, where the team does not distinguish between L0-L3 support tiers. The engineer will help build clusters from scratch and will support, monitor, and troubleshoot them. SRO Engineers are expected to automate their daily tasks using Ansible playbooks, support firmware upgrades, perform failure analysis, and read logs. Incoming incidents are raised as tickets, which SRO Engineers pick up and resolve; they must be available to respond to any alert or alarm, perform additional troubleshooting as needed, and document their work so the next shift of SRO Engineers can refer to it. They should be able to recognize hardware failures or configuration issues that surface as incidents.
Preferred candidate profile:
• 5+ years of hands-on Linux administration experience
• Understanding of hardware clusters
• Experience in any Cloud Service Provider (CSP) environment (AWS/Azure/Oracle/GCP)
• Ansible/Python scripting experience
• A reliable team player
• Proven experience with SSH, DNS, DHCP, bare metal, etc.
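Much of the shift-to-shift triage described above lends itself to small scripts. As a hedged illustration (the keyword patterns and incident categories here are invented, not taken from the posting), a Python sketch that classifies incident log lines before a ticket is routed:

```python
import re

# Hypothetical keyword map for incident categories an SRO shift might triage.
PATTERNS = {
    "hardware": re.compile(r"\b(ECC error|PSU fail|fan fault|DIMM)\b", re.I),
    "network":  re.compile(r"\b(link down|packet loss|DHCP timeout)\b", re.I),
    "firmware": re.compile(r"\b(firmware mismatch|BMC)\b", re.I),
}

def triage(log_line: str) -> str:
    """Return the first matching incident category, or 'unclassified'."""
    for category, pattern in PATTERNS.items():
        if pattern.search(log_line):
            return category
    return "unclassified"

print(triage("kernel: EDAC MC0: 1 CE ECC error on DIMM_A1"))  # hardware
print(triage("eth0: link down"))                               # network
```

A script like this could feed the documentation handoff the role requires, tagging each alert before the next shift reviews it.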

Posted 3 days ago

Apply

8.0 - 10.0 years

14 - 24 Lacs

Pune, Mumbai (All Areas)

Work from Office


Job Title: GCP Cloud Engineer
Experience: 8 to 9 years (5+ mandatory in GCP)
Location: Mumbai & Pune
Employment Type: Full-Time
Notice Period: Immediate joiners preferred
Job Description: We are looking for a skilled GCP Cloud Engineer with a minimum of 5 years of hands-on experience designing, implementing, and managing cloud infrastructure on Google Cloud Platform (GCP). The ideal candidate must have strong expertise in Terraform for infrastructure as code (IaC) and should be well versed in GCP-native services, cloud networking, automation, and CI/CD processes.
Required Skills & Qualifications:
• 5+ years of experience in GCP cloud engineering (mandatory)
• Strong hands-on experience with Terraform
• Proficiency in GCP services including Compute Engine, VPC, IAM, GKE, Cloud Storage, Cloud Functions, etc.
• Solid understanding of cloud networking, security, and automation tools
• Experience with CI/CD tools and DevOps practices
• Familiarity with scripting languages (e.g., Python, Shell)
• Excellent problem-solving and communication skills
Thanks & Regards, Chetna Gidde | HR Associate - Talent Acquisition | chetna.gidde@rigvedit.com
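Terraform work of the kind this role describes is often driven from scripts that render variable files. A minimal sketch (the variable names `project_id`, `region`, and `node_count` are invented for illustration): Terraform natively reads `*.tfvars.json` files, so a plain JSON dump is enough to hand values to `terraform apply`.

```python
import json

# Hypothetical inputs a pipeline might gather before running `terraform apply`.
tfvars = {
    "project_id": "demo-project",
    "region": "asia-south1",
    "node_count": 3,
}

# Written to terraform.tfvars.json, this is picked up automatically by Terraform.
rendered = json.dumps(tfvars, indent=2)
print(rendered)
```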

Posted 3 days ago

Apply

3.0 - 8.0 years

5 - 15 Lacs

Hyderabad, Chennai, Bengaluru

Hybrid


Databuzz is hiring a Java Developer - 3+ yrs - Bangalore/Chennai/Hyderabad - Hybrid. Please mail your profile to haritha.jaddu@databuzzltd.com with the details below if you are interested.
About Databuzz Ltd: Databuzz is a one-stop shop for data analytics, specializing in Data Science, Big Data, Data Engineering, AI & ML, Cloud Infrastructure, and DevOps. We are an MNC based in both the UK and India, and an ISO 27001 & GDPR compliant company.
Details to include: CTC, ECTC, Notice Period/LWD (candidates serving notice period will be preferred)
Position: Java Developer
Location: Bangalore/Chennai/Hyderabad
Experience: 3+ yrs
Mandatory skills:
• Overall 3-7 years of professional experience
• Experience developing microservices and cloud-native apps using Java/J2EE, REST APIs, Spring Core, Spring MVC, Spring Boot, JPA (Java Persistence API) or any other ORM, Spring Security, and similar tech stacks (open source and proprietary)
• Proficiency in Java and related technologies, e.g., Spring, Spring Boot, Hibernate, JPA
• Experience with unit testing using frameworks such as JUnit, Mockito, JBehave
• Building and deploying services using Gradle, Maven, Jenkins, etc. as part of a CI/CD process
• Experience working in Google Cloud Platform
• Experience with any relational database, such as Oracle, SQL Server, MySQL, or PostgreSQL
• Experience building and deploying components as part of a CI/CD process
• Experience building and consuming RESTful APIs
• Experience with version control systems, e.g., Git
• Solid understanding of the software development life cycle (SDLC) and agile methodologies
Regards, Haritha, Talent Acquisition Specialist, haritha.jaddu@databuzzltd.com

Posted 3 days ago

Apply

6.0 - 11.0 years

20 - 35 Lacs

Gurugram

Hybrid


Greetings from BCforward India Technologies Private Limited.
Contract-to-Hire (C2H) role
Location: Gurgaon
Payroll: BCforward
Work Mode: Hybrid
JD: Skills: React/React JS; Kotlin; Spring Boot; REST web services; JUnit; Git (GitHub, GitLab, Bitbucket, SVN); GCP. Full-stack developer with expertise in React, Java/Kotlin, Spring Boot, RESTful APIs, JUnit, GitHub Actions, Postgres, and GCP.
Please share your updated resume, PAN card soft copy, passport-size photo, and UAN history. Interested applicants can send an updated resume to g.sreekanth@bcforward.com
Note: Looking for immediate to 15-day joiners at most. All the best.

Posted 3 days ago

Apply

7.0 - 11.0 years

10 - 20 Lacs

Indore, Pune

Work from Office


Experience Range: 7+ years
Location: Pune/Indore (work from office)
Notice: Immediate
Senior DevOps Engineer
Job Summary: We are seeking an experienced and enthusiastic Senior DevOps Engineer with 7+ years of dedicated experience to join our growing team. In this pivotal role, you will be instrumental in designing, implementing, and maintaining our continuous integration/continuous delivery (CI/CD) pipelines and infrastructure automation. You will champion DevOps best practices, optimize our cloud-native environments, and ensure the reliability, scalability, and security of our systems. This role demands deep technical expertise, an initiative-taking mindset, and a strong commitment to operational excellence.
Key Responsibilities:
• CI/CD Pipeline Management: Design, build, and maintain robust, automated CI/CD pipelines using GitHub Actions to ensure efficient and reliable software delivery from code commit to production deployment.
• Infrastructure Automation: Develop and manage infrastructure as code (IaC) using shell scripting and the gcloud CLI to provision, configure, and manage resources within Google Cloud Platform (GCP).
• Deployment Orchestration: Implement and optimize deployment strategies, leveraging GitHub for version control of deployment scripts and configurations, ensuring repeatable and consistent releases.
• Containerization & Orchestration: Work extensively with Docker for containerizing applications, including building, optimizing, and managing Docker images.
• Artifact Management: Administer and optimize artifact repositories, specifically Artifactory in GCP, to manage dependencies and build artifacts efficiently.
• System Reliability & Performance: Monitor, troubleshoot, and optimize the performance, scalability, and reliability of our cloud infrastructure and applications.
• Collaboration & Documentation: Work closely with development, QA, and operations teams. Utilize Jira for task tracking and Confluence for comprehensive documentation of systems, processes, and best practices.
• Security & Compliance: Implement and enforce security best practices within the CI/CD pipelines and cloud infrastructure, ensuring compliance with relevant standards.
• Mentorship & Leadership: Provide technical guidance and mentorship to junior engineers, fostering a culture of learning and continuous improvement within the team.
• Incident Response: Participate in on-call rotations, provide rapid response to production incidents, perform root cause analysis, and implement preventative measures.
Required Skills & Experience (Mandatory, 7+ years):
• Proven experience (7+ years) in a DevOps, Site Reliability Engineering (SRE), or similar role.
• Expert-level proficiency with Git and GitHub, including advanced branching strategies, pull requests, and code reviews.
• Experience designing and implementing CI/CD pipelines using GitHub Actions.
• Deep expertise in Google Cloud Platform (GCP), including compute, networking, storage, and identity services.
• Advanced proficiency in shell scripting for automation, system administration, and deployment tasks.
• Strong firsthand experience with Docker for containerization, image optimization, and container lifecycle management.
• Solid understanding and practical experience with Artifactory (or similar artifact management tools) in a cloud environment.
• Expertise in using the gcloud CLI to automate GCP resource management and deployments.
• Demonstrable experience with continuous integration (CI) principles and practices.
• Proficiency with Jira for agile project management and Confluence for knowledge sharing.
• Strong understanding of networking concepts, security best practices, and system monitoring.
• Excellent critical thinking skills and an initiative-taking approach to identifying and resolving issues.
Nice-to-Have Skills:
• Experience with Kubernetes (GKE) for container orchestration.
• Familiarity with other infrastructure as code (IaC) tools like Terraform.
• Experience with monitoring and logging tools such as Prometheus, Grafana, or GCP's Cloud Monitoring/Logging.
• Proficiency in other scripting or programming languages (e.g., Python, Go) for automation and tool development.
• Experience with database management in a cloud environment (e.g., Cloud SQL, Firestore).
• Knowledge of DevSecOps principles and tools for integrating security into the CI/CD pipeline.
• GCP Professional Cloud DevOps Engineer or other relevant GCP certifications.
• Experience with large-scale distributed systems and microservices architectures.
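The gcloud CLI automation this role describes is often wrapped in small scripts. A minimal sketch (the instance name, zone, and machine type are invented for illustration) that builds a `gcloud` command as an argument list, so it could later be handed to `subprocess.run` without shell-quoting issues:

```python
import shlex

def gcloud_create_vm(name: str, zone: str, machine_type: str = "e2-medium") -> list:
    """Build (but do not execute) a gcloud command to create a Compute Engine VM."""
    return [
        "gcloud", "compute", "instances", "create", name,
        "--zone", zone,
        "--machine-type", machine_type,
    ]

cmd = gcloud_create_vm("build-agent-1", "asia-south1-a")
print(shlex.join(cmd))
```

Keeping the command as a list (rather than one interpolated string) is the usual defensive choice when names come from user input.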

Posted 3 days ago

Apply

4.0 - 9.0 years

8 - 18 Lacs

Hyderabad, Chennai

Work from Office


About the Role: We are looking for a highly skilled and experienced Machine Learning / AI Engineer to join our team at Zenardy. The ideal candidate should have a proven track record of building, deploying, and optimizing machine learning models in real-world applications. You will be responsible for designing scalable ML systems, collaborating with cross-functional teams, and driving innovation through AI-powered solutions.
Location: Chennai, Hyderabad
Key Responsibilities:
• Design, develop, and deploy machine learning models to solve complex business problems
• Work across the full ML lifecycle: data collection, preprocessing, model training, evaluation, deployment, and monitoring
• Collaborate with data engineers, product managers, and software engineers to integrate ML models into production systems
• Conduct research and stay up to date with the latest ML/AI advancements, applying them where appropriate
• Optimize models for performance, scalability, and robustness
• Document methodologies, experiments, and findings clearly for both technical and non-technical audiences
• Mentor junior ML engineers or data scientists as needed
Required Qualifications:
• Bachelor's or Master's degree in Computer Science, Machine Learning, Data Science, or a related field (Ph.D. is a plus)
• Minimum of 5 hands-on ML/AI projects, preferably in production or with real-world datasets
• Proficiency in Python and ML libraries/frameworks such as TensorFlow, PyTorch, scikit-learn, and XGBoost
• Solid understanding of core ML concepts: supervised/unsupervised learning, neural networks, NLP, computer vision, etc.
• Experience with model deployment using APIs, containers (Docker), and cloud platforms (AWS/GCP/Azure)
• Strong data manipulation and analysis skills using Pandas, NumPy, and SQL
• Knowledge of software engineering best practices: version control (Git), CI/CD, unit testing
Preferred Skills:
• Experience with MLOps tools (MLflow, Kubeflow, SageMaker, etc.)
• Familiarity with big data technologies like Spark, Hadoop, or distributed training frameworks
• Experience working in fintech environments is a plus
• Strong problem-solving mindset with excellent communication skills
• Experience working with vector databases
• Understanding of RAG vs. fine-tuning vs. prompt engineering
Why Join Us:
• Work on impactful, real-world AI challenges
• Collaborate with a passionate and innovative team
• Opportunities for career advancement and learning
• Flexible work environment (remote/hybrid options)
• Competitive compensation and benefits
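The evaluation step of the ML lifecycle named above reduces to simple counting for classification tasks. A hedged sketch (the label vectors are invented) computing precision and recall from true/predicted labels with no libraries:

```python
# Hypothetical binary labels; the values are invented for illustration.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

# Count true positives, false positives, and false negatives.
tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)

precision = tp / (tp + fp)  # of predicted positives, how many were right
recall = tp / (tp + fn)     # of actual positives, how many were found
print(f"precision={precision:.2f} recall={recall:.2f}")  # precision=0.75 recall=0.75
```

In practice scikit-learn's metrics module does this, but the arithmetic is worth knowing when debugging a pipeline.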

Posted 3 days ago

Apply

3.0 - 8.0 years

3 - 7 Lacs

Bengaluru

Work from Office


Project Role: Application Support Engineer
Project Role Description: Act as software detectives; provide a dynamic service identifying and solving issues within multiple components of critical business systems.
Must-have skills: Google Kubernetes Engine
Good-to-have skills: Kubernetes, Google Cloud Compute Services
Minimum 3 year(s) of experience is required
Educational Qualification: 15 years full-time education
Job Summary: We are seeking a motivated and talented GCP & Kubernetes Engineer to join our growing cloud infrastructure team. This role will be a key contributor in building and maintaining our Kubernetes platform, working closely with architects to design, deploy, and manage cloud-native applications on Google Kubernetes Engine (GKE).
Responsibilities:
• Extensive hands-on experience with Google Cloud Platform (GCP) and Kubernetes implementations.
• Demonstrated expertise in operating and managing container orchestration engines such as Docker or Kubernetes.
• Knowledge of or experience with various Kubernetes tools such as Kubekafka, Kubegres, Helm, Ingress, Redis, Grafana, and Prometheus.
• Proven track record in supporting and deploying various public cloud services.
• Experience in building or managing self-service platforms to boost developer productivity.
• Proficiency in using infrastructure as code (IaC) tools like Terraform.
• Skilled in diagnosing and resolving complex issues in automation and cloud environments.
• Advanced experience in architecting and managing highly available, high-performance multi-zonal or multi-regional systems.
• Strong understanding of infrastructure CI/CD pipelines and associated tools.
• Collaborate with internal teams and stakeholders to understand user requirements and implement technical solutions.
• Experience working in GKE and Edge/GDCE environments.
• Assist development teams in building and deploying microservices-based applications in public cloud environments.
Technical Skillset:
• Minimum of 3 years of hands-on experience in migrating or deploying GCP cloud-based solutions.
• At least 3 years of experience in architecting, implementing, and supporting GCP infrastructure and topologies.
• Over 3 years of experience with GCP IaC, particularly Terraform, including writing and maintaining Terraform configurations and modules.
• Experience deploying container-based systems such as Docker or Kubernetes on both private and public clouds (GCP GKE).
• Familiarity with CI/CD tools (e.g., GitHub) and processes.
Certifications:
• GCP ACE certification is mandatory.
• CKA certification is highly desirable.
• HashiCorp Terraform certification is a significant plus.
Qualification: 15 years full-time education
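Deploying applications on GKE ultimately comes down to submitting manifests like the one sketched below. As a hedged illustration (the deployment name and image tag are invented), a Python helper that builds a minimal Kubernetes Deployment object; kubectl accepts JSON as well as YAML, so a plain `json.dumps` is enough to apply it:

```python
import json

def deployment_manifest(name: str, image: str, replicas: int = 2) -> dict:
    """Build a minimal apps/v1 Deployment manifest as a plain dict."""
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name},
        "spec": {
            "replicas": replicas,
            "selector": {"matchLabels": {"app": name}},
            "template": {
                "metadata": {"labels": {"app": name}},
                "spec": {"containers": [{"name": name, "image": image}]},
            },
        },
    }

manifest = deployment_manifest("web", "gcr.io/demo/web:1.0")
print(json.dumps(manifest, indent=2))
```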

Posted 3 days ago

Apply

10.0 - 11.0 years

24 - 30 Lacs

Kochi

Work from Office


7+ years in data architecture, including 3 years in GCP environments. Expertise in BigQuery, Cloud Dataflow, Cloud Pub/Sub, Cloud Storage, and related GCP services; data warehousing, data lakes, and real-time data pipelines; SQL, Python.

Posted 3 days ago

Apply

4.0 - 9.0 years

20 - 25 Lacs

Bengaluru

Hybrid


The Product team forms the crux of our powerful platforms and connects millions of customers to the product magic. This team of innovative problem solvers develops and builds products that help Epsilon be a market differentiator. They map the future and set new standards for our products, empowered with industry-standard processes and ML and AI capabilities. The team passionately delivers intelligent end-to-end solutions and plays a key role in Epsilon's success story.
The candidate will be a member of the Skynet Development Team, responsible for developing, managing, and implementing a multi-cloud infrastructure as code (IaC) framework for the product engineering group using GCP, AWS, Azure, Terraform, and Ansible.
Why we are looking for you:
• You have experience in cloud engineering and use Terraform to develop infrastructure as code.
• You have strong hands-on experience in GCP, AWS, and Azure.
• You enjoy new challenges and are solution-oriented.
• You have a flair for writing scripts in Python.
What you will enjoy in this role:
• As part of the Epsilon Product Engineering team, the pace of the work matches the fast-evolving demands of Fortune 500 clients across the globe.
• As part of an innovative team that is not afraid to try new things, your ideas will come to life in digital marketing products that support more than 50% of automotive dealers in the US.
• An open and transparent environment that values innovation and efficiency.
• The opportunity to explore various GCP, AWS & Azure services in depth and enrich your experience with these fast-growing cloud services.
What you will do:
• Evaluate GCP services and use Terraform to design and develop reusable infrastructure as code modules.
• Work across the product engineering team to learn about their deployment challenges and help them overcome these by delivering reliable solutions.
• Be part of an enriching team and tackle real production engineering challenges.
• Improve your knowledge in the areas of DevOps and cloud engineering by using enterprise tools and contributing to project success.
Qualification: BE / B.Tech / MCA (no correspondence courses); 2-8 years of experience.
• Proven senior-level experience designing, developing, and handling complex infrastructure-as-code solutions using Terraform, specifically for Google Cloud Platform, including extensive work with GCP modules, state management, and standard methodologies for large-scale environments.
• At least 2 years of experience working on GCP (primary).
• Strong experience working with Terraform.
• Experience working with Git (or an equivalent source control system).
• Experience in AWS, Azure & Python will be an advantage.

Posted 4 days ago

Apply

5.0 - 8.0 years

12 - 15 Lacs

Pune

Remote


Job Title: Java Developer
Required Experience: 7+ years
Job Overview: We are looking for a passionate Java developer with 7+ years of experience to join our dynamic team. The ideal candidate should have a solid understanding of Java programming, experience with web frameworks, and a strong desire to develop efficient, scalable, and maintainable applications.
Key Responsibilities:
• Design, develop, and maintain scalable, high-performance Java applications.
• Write clean, modular, well-documented code that follows industry best practices.
• Collaborate with cross-functional teams to define, design, and implement new features.
• Debug, test, and troubleshoot applications across various platforms and environments.
• Participate in code reviews and contribute to the continuous improvement of development processes.
• Work with databases such as MySQL and PostgreSQL to manage application data.
• Implement and maintain RESTful APIs for communication between services and front-end applications.
• Assist in optimizing application performance and scalability.
• Stay updated with emerging technologies and apply them in development projects when appropriate.
Requirements:
• 2+ years of experience in Java development.
• Strong knowledge of Core Java, OOP concepts, Struts, and Java SE/EE.
• Experience with the Spring Framework (Spring Boot, Spring MVC) or Hibernate for developing web applications.
• Familiarity with RESTful APIs and web services.
• Proficiency in working with relational databases like MySQL or PostgreSQL.
• Familiarity with JavaScript, HTML5, and CSS3 for front-end integration.
• Basic knowledge of version control systems like Git.
• Experience with Agile/Scrum development methodologies.
• Understanding of unit testing frameworks such as JUnit or TestNG.
• Strong problem-solving and analytical skills.
• Experience with Kafka.
• Experience with GCP.
Preferred Skills:
• Experience with DevOps tools like Docker, Kubernetes, or CI/CD pipelines.
• Familiarity with microservice architecture and containerization.
• Experience with NoSQL databases like MongoDB is a plus.
Company Overview: Aventior is a leading provider of innovative technology solutions for businesses across a wide range of industries. At Aventior, we leverage cutting-edge technologies like AI, MLOps, and DevOps to help our clients solve complex business problems and drive growth. We also provide a full range of data development and management services, including Cloud Data Architecture, Universal Data Models, Data Transformation & ETL, Data Lakes, User Management, Analytics and Visualization, and automated data capture (for scanned documents and unstructured/semi-structured data sources). Our team of experienced professionals combines deep industry knowledge with expertise in the latest technologies to deliver customized solutions that meet the unique needs of each of our clients. Whether you are looking to streamline your operations, enhance your customer experience, or improve your decision-making process, Aventior has the skills and resources to help you achieve your goals. We bring a well-rounded cross-industry and multi-client perspective to our client engagements. Our strategy is grounded in design, implementation, innovation, migration, and support. We have a global delivery model, a multi-country presence, and a team well-equipped with professionals and experts in the field.

Posted 4 days ago

Apply

8.0 - 12.0 years

20 - 25 Lacs

Bengaluru

Hybrid


We are seeking a highly skilled and experienced Cloud Data Engineer to join our dynamic team. You will be responsible for designing, building, and maintaining scalable data pipelines and infrastructure on GCP/AWS/Azure, ensuring data is accessible, reliable, and available for business use.
Key Responsibilities:
• Data Pipeline Development: Design, develop, and maintain data pipelines using GCP/AWS/Azure services such as Dataflow, Dataproc, BigQuery, and Cloud Storage.
• Data Integration: Integrate data from various sources (structured, semi-structured, and unstructured) into GCP/AWS/Azure environments.
• Data Modeling: Develop and maintain efficient data models in BigQuery to support analytics and reporting needs.
• Data Warehousing: Implement data warehousing solutions on GCP, optimizing performance and scalability.
• ETL/ELT Processes: Build and manage ETL/ELT processes using tools like Apache Airflow, Data Fusion, and Python.
• Data Quality & Governance: Implement data quality checks, data lineage, and data governance best practices to ensure high data integrity.
• Automation: Automate data pipelines and workflows to reduce manual effort and improve efficiency.
• Collaboration: Work closely with data scientists, analysts, and other stakeholders to understand data requirements and deliver data solutions that meet business needs.
• Optimization: Continuously monitor and optimize the performance of data pipelines and queries for cost and efficiency.
• Security: Ensure data security and compliance with industry standards and best practices.
Required Skills & Qualifications:
• Education: Bachelor's degree in Computer Science, Information Technology, Engineering, or a related field.
• Experience: 8+ years of experience in data engineering, with at least 2 years working with GCP/Azure/AWS.
• Technical Skills: Strong programming skills in Python, SQL, and PySpark, with familiarity with Java/Scala. Experience with orchestration tools like Apache Airflow. Knowledge of ETL/ELT processes and tools. Experience with data modeling and designing data warehouses in BigQuery. Familiarity with CI/CD pipelines and version control systems like Git. Understanding of data governance, security, and compliance.
• Soft Skills: Excellent problem-solving and analytical skills. Strong communication and collaboration abilities. Ability to work in a fast-paced environment and manage multiple priorities.
Preferred Qualifications:
• Certifications: GCP Professional Data Engineer or GCP Professional Cloud Architect certification.
• Domain Knowledge: Experience in the finance, e-commerce, or healthcare domain is a plus.
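The data-quality checks mentioned in the responsibilities above are typically small, composable rules run against each batch. A hedged sketch (the field names and rows are invented, not from the posting) of a completeness and range check a pipeline stage might run before loading data:

```python
# Hypothetical rows as a pipeline stage might see them.
rows = [
    {"order_id": "A1", "amount": 120.0},
    {"order_id": None, "amount": 80.0},   # fails completeness check
    {"order_id": "A3", "amount": -5.0},   # fails range check
]

def quality_report(rows, required=("order_id",), non_negative=("amount",)):
    """Count rows failing simple completeness and range checks."""
    failures = {"missing": 0, "negative": 0}
    for row in rows:
        if any(row.get(field) is None for field in required):
            failures["missing"] += 1
        if any((row.get(field) or 0) < 0 for field in non_negative):
            failures["negative"] += 1
    return failures

print(quality_report(rows))  # {'missing': 1, 'negative': 1}
```

In an Airflow DAG, a check like this would sit in its own task and fail the run (or route to quarantine) when the counts exceed a threshold.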

Posted 4 days ago

Apply

12.0 - 18.0 years

35 - 60 Lacs

Hyderabad

Hybrid


Senior Manager, Site Reliability Engineering - Hyderabad
Shift Timings: 1:00 PM - 10:00 PM
Duties and Responsibilities:
People Leader Responsibility: The position will manage 5 to 10 engineers, both directly and indirectly. The engineers will include Site Reliability Engineers, Observability Engineers, Performance Engineers, DevSecOps Engineers, and others, ranging from entry-level to senior titles.
Responsibilities:
• Lead and manage a team of Site Reliability Engineers, providing mentorship, guidance, and support to ensure the team's success.
• Develop and implement strategies for improving system reliability, scalability, and performance.
• Establish and enforce SRE best practices, including monitoring, alerting, error budget tracking, and post-incident reviews.
• Collaborate with software engineering teams to design and implement reliable, scalable, and efficient systems.
• Implement and maintain monitoring and alerting systems to proactively identify and address issues before they impact customers.
• Implement performance engineering processes to ensure the reliability of products, services, and platforms.
• Drive automation and tooling efforts to streamline operations and improve efficiency.
• Continuously evaluate and improve our infrastructure, processes, and practices to ensure reliability and scalability.
• Provide technical leadership and guidance on complex engineering projects and initiatives.
• Stay up to date with industry trends and emerging technologies in site reliability engineering and cloud computing.
• Other duties as assigned.
Required Work Experience:
• 10+ years of experience in site reliability engineering or a related field.
• 5+ years of experience in a leadership or management role, managing a team of engineers.
• 5+ years of hands-on working experience with Dynatrace (administration, deployment, etc.).
• Strong understanding of DevSecOps principles.
• Strong understanding of cloud computing principles and technologies, preferably AWS, Azure, or GCP.
• Strong communication and interpersonal skills, with the ability to collaborate effectively with cross-functional teams.
• Proven track record of driving projects to successful completion in a fast-paced, dynamic environment.
• Experience driving cultural change in technical excellence, quality, and efficiency.
• Experience managing and growing technical leaders and teams.
• Constructing, interpreting, and applying metrics to your work and decision making; able to use those metrics to identify correlations between drivers and results, and to use that information to drive prioritization and action.
Preferred Work Experience:
• Proficiency in programming/scripting languages such as Python, Go, or Bash.
• Experience with infrastructure as code tools such as Terraform or CloudFormation.
• Deep understanding of Linux systems administration and networking principles.
• Experience with containerization and orchestration technologies such as Docker and Kubernetes.
• Experience or familiarity with IIS, HTML, Java, JBoss.
Knowledge: Site Reliability Engineering principles; DevSecOps principles; Agile (SAFe); healthcare industry; ITIL; ServiceNow; Jira/Confluence.
Skills: Strong communication skills; leadership; programming languages (see above); project management; mentorship; continuous learning.
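The error-budget tracking named in the responsibilities above is plain arithmetic: for a 99.9% availability SLO, the error budget is the 0.1% of requests allowed to fail in the window. A hedged sketch (the SLO target and request counts are invented for illustration):

```python
# Invented example numbers for one measurement window.
slo_target = 0.999        # 99.9% availability SLO
total_requests = 1_000_000
failed_requests = 600

budget = (1 - slo_target) * total_requests  # failures the SLO permits
remaining = budget - failed_requests        # budget left this window
print(f"budget={budget:.0f} remaining={remaining:.0f}")
```

When `remaining` approaches zero, SRE practice is to slow feature releases and spend engineering time on reliability instead.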

Posted 4 days ago

Apply

8.0 - 11.0 years

25 - 32 Lacs

Hyderabad

Work from Office


This is a full-stack developer role providing the technical expertise for the implementation of DevSecOps practices. The Principal DevSecOps Engineer is a senior technical expert role focused on DevSecOps engineering practices, enabling automation across the enterprise. The purpose of this role is not only to provide technical direction but also to oversee DevSecOps enablement and enhancements to deliver business applications in the AWS/GCP cloud. The Principal DevSecOps Engineer will also be accountable for successful implementation, deployments, and configuration management, along with the development of new automation and agile practices (CI/CD). Lastly, this resource will work with DevOps engineers, architects, and a team of developers to enhance DevOps standards across the organization.

Posted 4 days ago

Apply

8.0 - 13.0 years

35 - 55 Lacs

Bengaluru

Hybrid


8+ yrs of experience as a Cloud Architect with GCP. Expertise in GCP services like Compute Engine and Kubernetes; understanding of cloud security; proficiency in DevOps tools. Looking for immediate joiners.

Posted 4 days ago

Apply

8.0 - 13.0 years

18 - 30 Lacs

Noida, Hyderabad, Bengaluru

Hybrid


Genpact (NYSE: G) is a global professional services and solutions firm delivering outcomes that shape the future. Our 125,000+ people across 30+ countries are driven by our innate curiosity, entrepreneurial agility, and desire to create lasting value for clients. Powered by our purpose the relentless pursuit of a world that works better for people – we serve and transform leading enterprises, including the Fortune Global 500, with our deep business and industry knowledge, digital operations services, and expertise in data, technology, and AI. Inviting applications for the role of Senior Principal Consultant – Google Cloud Engineer We are seeking a highly skilled and visionary Google Cloud Engineer – Senior Principal Consultant to architect, design, and implement cutting-edge cloud infrastructure and automation solutions on Google Cloud Platform (GCP) . The ideal candidate will bring deep technical expertise, leadership, and strategic insight to drive enterprise-level cloud adoption, modernization, and optimization initiatives. 
Responsibilities Design and implement cloud-native infrastructure solutions on GCP using Terraform, Deployment Manager, or Pulumi Build and manage CI/CD pipelines using Cloud Build, GitHub Actions, Jenkins, or Spinnaker Architect and manage secure, scalable environments using GKE, Cloud Run, Compute Engine, and Cloud Functions Implement monitoring, logging, and alerting using Google Cloud Operations Suite (formerly Stackdriver) Automate cloud operations using Python, Bash, or Go, following SRE and DevOps best practices Collaborate with solution architects, security teams, and application teams to meet performance, security, and compliance goals Participate in cloud migration and modernization projects, including lift-and-shift, re-platforming, and refactoring Review and improve cost optimization, scalability, and resiliency strategies Lead proof-of-concepts (PoCs), assessments, and workshops with clients Mentor engineering teams and support pre-sales efforts for technical proposals. Lead initiatives related to cloud networking, storage, and security, ensuring compliance with industry standards. Author and review technical documentation for infrastructure processes, providing clear guidelines for future implementations. Qualifications we seek in you! Minimum Qualifications / Skills Bachelor's degree in information technology, Computer Science, or a related field. Deep hands-on experience with Google Cloud Platform (GCP) and core services (GKE, VPC, IAM, Cloud SQL, Pub/Sub, etc.) Deep understanding of cloud security, access controls, and compliance frameworks. Experience with containers and orchestration tools such as Docker, Kubernetes, and Helm Solid understanding of networking, security policies, and identity/access management in GCP Strong programming/scripting skills in Python, Go, or Shell Ability to independently solve complex problems while mentoring and collaborating with peers. 
Excellent problem-solving, debugging, and performance tuning skills
Strong interpersonal and communication skills in both technical and business contexts

Preferred Qualifications / Skills
Additional GCP certifications: DevOps Engineer, Security Engineer, or Network Engineer
GCP certifications (Professional Cloud Architect, Cloud DevOps Engineer, etc.)
Experience with multi-cloud environments or hybrid cloud architecture
Background in regulated industries such as finance, healthcare, or telecom
Strong knowledge of networking concepts, Kubernetes orchestration, and enterprise security

Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values diversity and inclusion, respect and integrity, customer focus, and innovation. Get to know us at genpact.com and on LinkedIn, X, YouTube, and Facebook.

Furthermore, please note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.
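The responsibility "automate cloud operations using Python ... following SRE and DevOps best practices" usually involves patterns like retrying transient failures with backoff. A minimal, illustrative Python sketch of that pattern, not tied to any specific GCP API (the `flaky_api_call` function is invented to simulate a transient error):

```python
import random
import time
from functools import wraps

def retry_with_backoff(max_attempts=4, base_delay=0.1, max_delay=2.0):
    """Retry a flaky operation with exponential backoff and jitter."""
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            for attempt in range(1, max_attempts + 1):
                try:
                    return func(*args, **kwargs)
                except Exception:
                    if attempt == max_attempts:
                        raise
                    # Exponential backoff capped at max_delay, plus jitter
                    # so many clients do not retry in lockstep.
                    delay = min(base_delay * 2 ** (attempt - 1), max_delay)
                    time.sleep(delay + random.uniform(0, delay / 2))
        return wrapper
    return decorator

calls = {"n": 0}

@retry_with_backoff(max_attempts=4, base_delay=0.01)
def flaky_api_call():
    # Simulated transient failure: fails twice, then succeeds.
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient error")
    return "ok"

result = flaky_api_call()
```

The jitter keeps retries from synchronizing across many callers, which is why it is a standard part of the pattern.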

Posted 4 days ago

Apply

5.0 - 10.0 years

15 - 25 Lacs

Hyderabad, Chennai, Bengaluru

Hybrid


Warm Greetings from SP Staffing!!
Role: Data Scientist
Experience Required: 5 to 10 yrs
Work Location: Hyderabad/Bangalore/Chennai
Required Skills: Python, GCP AI Platform, BigQuery ML, Cloud AutoML, and Vertex AI
Interested candidates can send resumes to nandhini.spstaffing@gmail.com

Posted 5 days ago

Apply

4.0 - 9.0 years

14 - 20 Lacs

Hyderabad

Work from Office


Interested candidates, share your updated CV to dikshith.nalapatla@motivitylabs.com

Job Title: GCP Data Engineer

Overview: We are looking for a skilled GCP Data Engineer with 4 to 5 years of real hands-on experience in data ingestion, data engineering, data quality, data governance, and cloud data warehouse implementations using GCP data services. The ideal candidate will be responsible for designing and developing data pipelines, participating in architectural discussions, and implementing data solutions in a cloud environment.

Key Responsibilities:
Collaborate with stakeholders to gather requirements and create high-level and detailed technical designs.
Develop and maintain data ingestion frameworks and pipelines from various data sources using GCP services.
Participate in architectural discussions, conduct system analysis, and suggest optimal solutions that are scalable, future-proof, and aligned with business requirements.
Design data models suitable for both transactional and big data environments, supporting Machine Learning workflows.
Build and optimize ETL/ELT infrastructure using a variety of data sources and GCP services.
Develop and implement data and semantic interoperability specifications.
Work closely with business teams to define and scope requirements.
Analyze existing systems to identify appropriate data sources and drive continuous improvement.
Implement and continuously enhance automation processes for data ingestion and data transformation.
Support DevOps automation efforts to ensure smooth integration and deployment of data pipelines.
Provide design expertise in Master Data Management (MDM), Data Quality, and Metadata Management.

Skills and Qualifications:
Overall 4-5 years of hands-on experience as a Data Engineer, with at least 3 years of direct GCP Data Engineering experience.
Strong SQL and Python development skills are mandatory.
Solid experience in data engineering, working with distributed architectures, ETL/ELT, and big data technologies.
Demonstrated knowledge and experience with Google Cloud BigQuery is a must.
Experience with Dataproc and Dataflow is highly preferred.
Strong understanding of serverless data warehousing on GCP and familiarity with DWBI modeling frameworks.
Extensive experience in SQL across various database platforms.
Experience in data mapping and data modeling.
Familiarity with data analytics tools and best practices.
Hands-on experience with one or more programming/scripting languages such as Python, JavaScript, Java, R, or UNIX Shell.
Practical experience with Google Cloud services including, but not limited to: BigQuery, Bigtable, Cloud Dataflow, Cloud Dataproc, Cloud Storage, Pub/Sub, Cloud Functions, Cloud Composer, Cloud Spanner, and Cloud SQL.
Knowledge of modern data mining, cloud computing, and data management tools (such as Hadoop, HDFS, and Spark).
Familiarity with GCP tools like Looker, Airflow DAGs, Data Studio, App Maker, etc.
Hands-on experience implementing enterprise-wide cloud data lake and data warehouse solutions on GCP.
GCP Data Engineer Certification is highly preferred.

To apply, share your updated CV to dikshith.nalapatla@motivitylabs.com along with:
Total Experience:
Relevant Experience:
Current Role / Skillset:
Current CTC:
Fixed:
Variables (if any):
Bonus (if any):
Payroll Company (Name):
Client Company (Name):
Expected CTC:
Official Notice Period:
Serving Notice (Yes / No):
CTC of offer in hand:
Last Working Day (in current organization):
Location of the offer in hand:
Willing to work from office:

************* 5 DAYS WORK FROM OFFICE MANDATORY ****************
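The ETL/ELT work this role describes follows a common extract-transform-load shape, with a data-quality step that validates rows before loading. A minimal, library-free Python sketch of that pattern (the records, field names, and sinks below are invented for illustration; a real pipeline would read from sources like Cloud Storage and load into a warehouse such as BigQuery):

```python
# Minimal extract-transform-load sketch; records and field names
# are invented for illustration only.

def extract():
    # In a real pipeline this would read from a source system
    # (e.g. files in Cloud Storage or a Pub/Sub subscription).
    return [
        {"user_id": "u1", "amount": "19.99", "country": "in"},
        {"user_id": "u2", "amount": "5.00", "country": "IN"},
        {"user_id": "u1", "amount": "bad-value", "country": "in"},
    ]

def transform(rows):
    # Data-quality step: normalize fields and set aside rows that
    # fail validation for dead-letter review instead of dropping them.
    clean, rejected = [], []
    for row in rows:
        try:
            clean.append({
                "user_id": row["user_id"],
                "amount": float(row["amount"]),
                "country": row["country"].upper(),
            })
        except (KeyError, ValueError):
            rejected.append(row)
    return clean, rejected

def load(rows, sink):
    # In a real pipeline this would be a warehouse load job;
    # here the sink is just an in-memory list.
    sink.extend(rows)

warehouse = []
good, bad = transform(extract())
load(good, warehouse)
```

Keeping rejected rows in a dead-letter collection, rather than silently discarding them, is what makes the data-quality responsibilities above auditable.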

Posted 5 days ago

Apply

7.0 - 12.0 years

30 - 35 Lacs

Noida

Hybrid


Job Title: Java Technical Lead / Developer (Full Stack)
Location: Noida
Experience: 7-15 Years

Key Responsibilities:
Lead and mentor a team of Java developers through the full software development lifecycle.
Architect, design, and develop scalable and secure Java-based applications.
Collaborate with cross-functional teams including DevOps, QA, and Product Management.
Conduct code reviews, enforce coding standards, and ensure high code quality.
Translate business requirements into technical specifications and solutions.
Drive adoption of best practices in design, development, and deployment.
Troubleshoot and resolve complex technical issues in development and production.
Stay current with emerging technologies and propose their adoption where relevant.

Required Skills & Qualifications:
Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
10-12 years of hands-on experience in Java development with React.js.
Strong expertise in Java 11+ / Java 17, React.js, Spring Boot, Spring Cloud, and microservices architecture.
Experience with RESTful APIs, JPA/Hibernate, and SQL/NoSQL databases (MySQL, PostgreSQL, MongoDB).
Proficiency in CI/CD tools (Jenkins, GitHub Actions), Docker, and Kubernetes.
Familiarity with cloud platforms (AWS, Azure, or GCP).
Solid understanding of design patterns, system architecture, and performance tuning.
Experience with Agile/Scrum methodologies.

Preferred Skills:
Exposure to reactive programming (Spring WebFlux, Project Reactor).
Experience with GraphQL, Kafka, or gRPC.
Knowledge of DevSecOps and application security best practices.
Familiarity with observability tools (ELK, Prometheus, Grafana).

What We Offer:
Competitive compensation and performance-based bonuses.
Flexible work environment and hybrid/remote options.
Opportunities for leadership, innovation, and continuous learning.
A collaborative and inclusive work culture.

Posted 5 days ago

Apply

0.0 - 5.0 years

2 - 6 Lacs

New Delhi, Gurugram, Delhi / NCR

Work from Office


CUSTOMER SERVICE ROLE FOR INTERNATIONAL PROCESS
KAJAL - 8860800235
TRAVEL/BANKING/TECHNICAL
GRAD/UG/FRESHER/EXPERIENCE
SALARY DEPENDING ON LAST TAKE-HOME (UP TO 5 LPA)
LOCATION - GURUGRAM/NOIDA
WFO, 5 DAYS, 24*7 SHIFTS
CAB + INCENTIVES

Required Candidate profile:
GOOD COMMUNICATION SKILLS
IMMEDIATE JOINERS
SHOULD BE WILLING TO DO 24*7 SHIFTS

Posted 6 days ago

Apply


1.0 - 3.0 years

2 - 4 Lacs

Kolkata

Hybrid


Required Skills
Strong proficiency in Python (3.x) and Django (2.x/3.x/4.x)
Hands-on experience with Django REST Framework (DRF)
Expertise in relational databases like PostgreSQL or MySQL
Proficiency with Git and Bitbucket
Solid understanding of RESTful API design and integration
Experience in domain pointing and hosting setup on AWS or GCP
Deployment knowledge on EC2, GCP Compute Engine, etc.
SSL certificate installation and configuration
Familiarity with CI/CD pipelines (GitHub Actions, Bitbucket Pipelines, GitLab CI)
Basic usage of Docker for development and containerization
Ability to independently troubleshoot server/deployment issues
Experience managing cloud resources like S3, Load Balancers, and IAM roles

Preferred Skills
Experience with Celery and Redis/RabbitMQ for asynchronous task handling
Familiarity with front-end frameworks like React or Vue.js
Exposure to Cloudflare or similar CDN/DNS tools
Experience with monitoring tools: Prometheus, Grafana, Sentry, or CloudWatch

Why Join Us?
Work on impactful and modern web solutions
Growth opportunities across technologies and cloud platforms
Collaborative, inclusive, and innovation-friendly work environment
Exposure to challenging and rewarding projects
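A recurring task behind the "RESTful API design and integration" skill above is paginating list endpoints. A minimal, framework-free Python sketch of offset/limit pagination (DRF ships its own pagination classes; the function and field names here are illustrative, not DRF's API):

```python
# Minimal offset/limit pagination sketch for a REST list endpoint;
# item data and response field names are illustrative only.

def paginate(items, page=1, page_size=10):
    """Return one page of results plus the metadata a JSON API
    response would typically carry alongside them."""
    total = len(items)
    start = (page - 1) * page_size
    return {
        "results": items[start:start + page_size],
        "page": page,
        "page_size": page_size,
        "total": total,
        "has_next": start + page_size < total,
    }

records = [{"id": i} for i in range(1, 26)]  # 25 fake records
page2 = paginate(records, page=2, page_size=10)
```

Returning `total` and `has_next` with every page lets clients render paging controls without a second request.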

Posted 1 week ago

Apply

5.0 - 10.0 years

20 - 30 Lacs

Hyderabad, Pune, Bengaluru

Hybrid


Role & responsibilities

Key Skills:
3 years of experience building modern applications utilizing GCP services like Cloud Build, Cloud Functions/Cloud Run, GKE, Logging, GCS, Cloud SQL & IAM.
Primary proficiency in Python and experience with a secondary language such as Golang or Java.
In-depth knowledge and hands-on experience with GKE/K8s.
High emphasis on software engineering fundamentals such as code and configuration management, CI/CD/automation, and automated testing.
Working with operations, security, compliance and architecture groups to develop secure, scalable and supportable solutions.
Working on and delivering solutions in a complex enterprise environment.
Proficiency in designing and developing scalable and decoupled microservices, and adeptness in implementing event-driven architecture to ensure seamless and responsive service interactions.
Proficiency in designing scalable and robust solutions leveraging cloud-native technologies and architectures.
Expertise in managing diverse stakeholder expectations and adept at prioritizing tasks to align with strategic objectives and deliver optimal outcomes.

Good to have knowledge, skills and experience:
Ability to integrate Kafka to handle real-time data.
Proficiency in monitoring tools.
Experience using Robot Framework for automated UAT is highly desirable.
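The "decoupled microservices ... event-driven architecture" skill above comes down to services communicating through events on topics rather than calling each other directly. A minimal in-memory Python sketch of that decoupling (in production the bus role is played by a broker such as Pub/Sub or Kafka; topic and event names here are invented):

```python
# Minimal in-memory event bus illustrating the decoupling behind
# event-driven microservices; topic and event names are invented.
from collections import defaultdict

class EventBus:
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        # A consumer registers interest in a topic, not in a publisher.
        self._subscribers[topic].append(handler)

    def publish(self, topic, event):
        # The publisher knows nothing about who consumes the event.
        for handler in self._subscribers[topic]:
            handler(event)

bus = EventBus()
shipped, notified = [], []

# Two independent "services" react to the same event without the
# publisher knowing either exists.
bus.subscribe("order.created", lambda e: shipped.append(e["order_id"]))
bus.subscribe("order.created", lambda e: notified.append(e["order_id"]))

bus.publish("order.created", {"order_id": "o-42"})
```

Because neither consumer is named by the publisher, either can be added, removed, or redeployed without touching the publishing service, which is the responsiveness and decoupling the posting asks for.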

Posted 1 week ago

Apply

8.0 - 12.0 years

15 - 30 Lacs

Pune, Chennai, Thiruvananthapuram

Work from Office


Job description: Hiring Technical Project Managers with an experience range of 9 years & above. Mandatory Skills: Project Management -> UI -> Cloud -> Angular -> React -> Node -> Azure Cloud -> AWS Cloud. Education: B.Tech -> BCA -> B.Sc

Posted 1 week ago

Apply

4.0 - 7.0 years

10 - 20 Lacs

Gurugram

Work from Office


Role & responsibilities
Skills: Big Data, PySpark, Hive, Spark optimization
Good to have: GCP

Preferred candidate profile
Skills: Big Data, PySpark, Hive, Spark optimization
Good to have: GCP

Develop efficient ETL pipelines as per business requirements, following development standards and best practices.
Perform integration testing of the different pipelines created in GCP.
Provide estimates for development, testing & deployment across different environments.
Participate in code peer reviews to ensure our applications comply with best practices.
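One core idea behind the "Spark Optimization" skill above is combining values locally before the shuffle (the reason `reduceByKey` is preferred over `groupByKey` in Spark). A pure-Python sketch of that map-side combining idea, with no Spark dependency and made-up data:

```python
# Pure-Python sketch of map-side combining: each "partition"
# pre-aggregates locally, so the "shuffle" moves one record per key
# per partition instead of every raw record. Data is made up.
from collections import defaultdict

partitions = [
    [("a", 1), ("b", 2), ("a", 3)],   # partition 0
    [("b", 4), ("a", 5)],             # partition 1
]

def combine_locally(partition):
    # Map-side combine: sum values per key within one partition.
    acc = defaultdict(int)
    for key, value in partition:
        acc[key] += value
    return dict(acc)

# "Shuffle" stage: only the small combined maps cross partitions.
combined = [combine_locally(p) for p in partitions]

# "Reduce" stage: merge the partial sums into final totals.
totals = defaultdict(int)
for partial in combined:
    for key, value in partial.items():
        totals[key] += value
```

With skewed keys the difference is large: the shuffle traffic becomes proportional to the number of distinct keys per partition rather than the number of raw records.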

Posted 1 week ago

Apply

10.0 - 15.0 years

10 - 20 Lacs

Hyderabad, Chennai

Work from Office


Job Summary / Purpose
The Network Engineer job family has responsibility for infrastructure/technical planning, implementation and support activities for systems owned by the HA IT Operations Team. Specific responsibilities include installing and supporting system hardware and software, performing system upgrades, and evaluating and installing patches and software updates. Responsibilities also include operational support activities such as resolving software- and hardware-related problems, managing backup and recovery activity, administering technology layers, managing monitoring and alerting functions, performing capacity planning and conducting version management. Network Engineers work closely with architects, infrastructure support, database administrators and application support teams to ensure seamless and quality IT support for HA customers and alignment with HA IT standards, controls and governance.

Essential Key Job Responsibilities
Provides evaluation, engineering/design and implementation services for new products, technologies and solutions to address corporate business requirements.
Provides escalation support to Tier 1 and Tier 2 engineers.
Design, implement, and maintain large-scale network infrastructure on Google Cloud Platform (GCP).
Demonstrates creativity and takes initiative in problem solving. Resolves or facilitates resolution of complex problems for assigned programs.
Has a thorough and comprehensive mastery of supported platforms/products and environments.
Focuses the majority of time on complex engineering, architectural and implementation tasks. Project assignments are large in scope and highly complex.
Provide exceptional customer service to HA end users, business stakeholders and other members of HA IT.
Perform daily production support activities.
Participate in team on-call rotations.
Perform patching and code deployments across all environments.
Lead projects associated with the enhancement, upgrade/patching, or implementation of new or existing technology solutions.
Coordinate implementation and support efforts between IT Operations and other HA IT teams.
Conduct performance tuning and troubleshooting.
Provide oversight and facilitation of the HA IT change management process as it applies to the IT Operations team.
Review, recommend and monitor the source code/versioning management function, adhering to technical management guidelines.
Provide technical leadership and ownership of issues across multiple disciplines and technologies.
Design, implement and maintain a comprehensive monitoring and alerting process across all IT Operations platforms.
With oversight from the System Engineering Lead and Operations Manager(s), initiate and facilitate strategic planning activities (capacity planning, process improvement, maintenance, upgrade and end-of-life planning, roadmap development).
Build new test and production environments on existing or new hardware as required.
Identify automation opportunities and implement scripted solutions.
Identify technical innovation and process improvement opportunities.
Design and implement a comprehensive and ongoing release management and planning program.
Utilize standard tools and methodology to develop system and support performance metrics.
Demonstrate potential leadership qualities through team motivation, coaching, and mentoring.
Demonstrate comprehensive knowledge & expertise with HA business processes and routines.
Research and recommend appropriate system administration best practices and tools.
Perform crisis management during high-severity operational incidents.

The job summary and responsibilities listed above are designed to indicate the general nature of the work performed within this job.
They are not designed to contain or be interpreted as a comprehensive inventory of all job responsibilities required of employees assigned to this job. Employees may be required to perform other duties as assigned.

Minimum Qualifications

Required Education and Experience
BA or BS in a Computer Science, Technology, or Business discipline, or equivalent experience, is required.
6-10 years of professional experience in an IT technical or infrastructure field is required.
5-7 years of experience in Google Cloud with oversight of large, complex projects.
Established understanding of infrastructure and related technologies.
Established understanding of network connectivity and protocols.

Required Licensure and Certifications
CCNA or CCNP; Google Cloud Network Engineer certification preferred.

Required Minimum Knowledge, Skills, Abilities and Training
7+ years of experience with Cisco route/switch and wireless infrastructure.
Experience with datacenter and MDF/IDF closet construction design (cabling, racks, power, cooling).
Experience with or understanding of the Cisco Nexus platform.
5-7 years of experience with vendor management.
Advanced experience with Cisco wireless, and a strong understanding of wireless design and coverage.
Understanding of and experience with WAN design: MPLS, QoS, BGP, Metro Optical, Ethernet MultiPoint (MOE).
Healthcare industry experience a plus.
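The monitoring and alerting responsibilities in this role often start with simple reachability probes. A minimal stdlib-only Python sketch of a TCP port check of the kind ops teams script into monitoring jobs (the host and port below are illustrative):

```python
# Minimal TCP reachability probe for monitoring/alerting scripts;
# host and port values are illustrative only.
import socket

def tcp_port_open(host, port, timeout=1.0):
    """Return True if a TCP connection to host:port succeeds
    within the timeout, False otherwise."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Probing a localhost port with no listener fails fast with a
# connection-refused error rather than waiting for the timeout.
reachable = tcp_port_open("127.0.0.1", 1)
```

A monitoring job would run such probes on a schedule and raise an alert when a previously reachable endpoint starts returning False.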

Posted 1 week ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.
