3.0 - 5.0 years
12 - 16 Lacs
Bengaluru
Work from Office
Overview
Analyzes, develops, designs, and maintains software for the organization's products and systems. Performs system integration of software and hardware to maintain throughput and program consistency. Develops, validates, and tests structures and user documentation. Work may be reviewed for accuracy and overall adequacy. Follows established processes and directions.

We are seeking a passionate and detail-oriented SDET to join our QA engineering team. The ideal candidate will have strong Python programming skills and hands-on experience testing cloud-native microservices deployed on Google Cloud Platform (GCP) or an equivalent cloud platform. You will be responsible for designing and implementing automated test frameworks, ensuring the reliability and scalability of distributed systems.

Responsibilities
SDET – Python & GCP Microservices Testing
Experience Level: 2–4 Years

Key Responsibilities
- Develop and maintain automated test suites using Python and frameworks like Pytest and Robot Framework
- Design and execute test cases for RESTful APIs, microservices, and event-driven architectures
- Collaborate with developers and DevOps to integrate tests into CI/CD pipelines (e.g., Jenkins, GitHub Actions)
- Perform functional, integration, regression, and performance testing
- Validate deployments and configurations in GCP environments using tools like Cloud Build, GKE, and Terraform
- Monitor and troubleshoot test failures, log issues, and ensure timely resolution
- Contribute to test strategy, documentation, and quality metrics
- Implement AI-driven testing methodologies to enhance test coverage and efficiency
Required Skills
- 2–4 years of experience in software testing or SDET roles
- Strong proficiency in Python for test automation
- Experience testing microservices and cloud-native applications
- Familiarity with GCP services such as Cloud Functions, Pub/Sub, Cloud Run, and GKE
- Hands-on experience with Docker, Kubernetes, and Linux-based environments
- Knowledge of Git, Jenkins, and CI/CD workflows
- Understanding of QA methodologies, SDLC, and Agile practices

Preferred Skills
- Exposure to performance testing tools like JMeter, Locust, or k6
- Experience with Jenkins
- Familiarity with GitHub Copilot
- MongoDB
- Knowledge of BDD/TDD practices

Nice-to-Have
- Experience with LLMs and generative AI platforms
- Contributions to open-source testing frameworks or AI communities

Preferred Education: Bachelor's or Master's degree in an appropriate engineering discipline required.
Preferred Work Experience (years): Bachelor's degree and 2+ years, or Master's degree with no experience.
All Other Regions: Bachelor's or Master's degree in an appropriate engineering discipline required; 2+ years of experience.
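As a concrete illustration of the Pytest-style API testing this role describes, here is a minimal sketch. The payload shape, field names, and status values are invented for the example (not taken from any real service); in practice the payload would come from an HTTP call to the service under test.

```python
# Hypothetical sketch: validating a JSON API response the way a Pytest
# suite might. The order payload schema below is an invented example.

def validate_order(payload: dict) -> list[str]:
    """Return a list of validation errors for an order payload (empty = valid)."""
    errors = []
    for field in ("id", "status", "total"):
        if field not in payload:
            errors.append(f"missing field: {field}")
    if payload.get("status") not in {"NEW", "PAID", "SHIPPED"}:
        errors.append(f"unexpected status: {payload.get('status')}")
    total = payload.get("total")
    if not isinstance(total, (int, float)) or total is None or total < 0:
        errors.append("total must be a non-negative number")
    return errors


# Pytest discovers and runs functions named test_*; they also run standalone.
def test_valid_order():
    payload = {"id": 42, "status": "PAID", "total": 199.99}
    assert validate_order(payload) == []


def test_rejects_bad_status():
    payload = {"id": 42, "status": "LOST", "total": 10.0}
    assert any("unexpected status" in e for e in validate_order(payload))
```

In a real suite the hard-coded payloads would be replaced by responses from the deployed microservice, with fixtures handling setup and teardown.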
Posted 1 day ago
6.0 - 11.0 years
20 - 35 Lacs
Bengaluru
Work from Office
Education: Bachelor's
Field: Information Technology, Data Management, Data Analytics, Business, Supply Chain, Operations

Experience:
- Type: Bachelor's or Master's degree in Data Science, Computer Science, Statistics, or a related field
- Number of Years: 5+ years in relevant roles
- Other: At least five years of relevant project experience successfully launching, planning, and executing data science projects, including statistical analysis, data engineering, and data visualization
- Proven experience in conducting statistical analysis and building models with advanced scripting languages
- Experience leading projects that apply ML and data science to business functions
- Specialization in text analytics, image recognition, graph analysis, or other specialized ML techniques, such as deep learning, is preferred

Skills:
- Fluency in multiple programming languages and statistical analysis tools such as Python, PySpark, C++, JavaScript, R, SAS, Excel, SQL
- Knowledge of distributed data/computing tools such as MapReduce, Hadoop, Hive, or Kafka
- Knowledge of statistical and data mining techniques such as generalized linear models (GLM)/regression, random forests, boosting, trees, text mining, hierarchical clustering, deep learning, convolutional neural networks (CNN), and recurrent neural networks (RNN)
- Strong understanding of AI, its potential roles in solving business problems, and the future trajectory of generative AI models
- Willingness and ability to learn new technologies on the job
- Ability to communicate complex projects, models, and results to a diverse audience with a wide range of understanding
- Ability to work in diverse, cross-functional teams in a dynamic business environment
- Superior presentation skills, including storytelling and other techniques to guide and inspire
- Familiarity with big data, versioning, and cloud technologies such as Apache Spark, Azure Data Lake Storage, Git, Jupyter Notebooks, Azure Machine Learning, and Azure Databricks
- Familiarity with data visualization tools (Power BI experience preferred)
- Knowledge of database systems and SQL
- Strong communication and collaboration abilities
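The GLM/regression techniques listed above start from ordinary least squares; the sketch below fits a simple linear regression in pure Python using the closed-form solution, with tiny made-up data for illustration.

```python
# Illustrative sketch: simple linear regression via ordinary least squares,
# the baseline for the GLM/regression family mentioned above. No external
# libraries; the data points are invented.

def fit_line(xs: list[float], ys: list[float]) -> tuple[float, float]:
    """Return (slope, intercept) minimizing squared error."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Closed form: slope = cov(x, y) / var(x)
    sxx = sum((x - mean_x) ** 2 for x in xs)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, mean_y - slope * mean_x


# The data lie exactly on y = 2x + 1, so OLS recovers slope 2, intercept 1.
slope, intercept = fit_line([0, 1, 2, 3], [1, 3, 5, 7])
```

In practice this is what libraries such as statsmodels or scikit-learn compute under the hood for the single-feature case.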
Posted 1 day ago
7.0 - 12.0 years
37 - 45 Lacs
Pune
Work from Office
Job Title: Corporate Bank Technology – Commercial Banking Senior Data Engineer, AVP
Location: Pune, India

Role Description
- Provide fast and reliable data solutions for warehousing, reporting, and Customer and Business Intelligence solutions.
- Load data from various systems of record into our platform and make it available for further use.
- Automate deployment and test processes to deliver fast, incremental improvements of our application and platform.
- Implement data governance and protection to adhere to regulatory requirements and policies.
- Transform and combine data into a data model which supports our data analysts or can easily be consumed by operational databases.
- Keep hygiene, risk and control, and stability at the core of every delivery.
- Be a role model for the team. Work in an agile setup, helping with feedback to improve our way of working.

Commercial Banking Tribe
You'll be joining the Commercial Bank Tribe, which focuses on the special needs of small and medium enterprise clients in Germany, a designated area for further growth and investment within Corporate Bank. We are responsible for the digital transformation of ~800,000 clients across 3 brands, i.e. the establishment of the BizBanking platform, including the development of digital sales and service processes as well as the automation of processes for this client segment. Our tribe is on a journey of extensive digitalisation of business processes and migration of our applications to the cloud. We work jointly with our business colleagues in an agile setup and collaborate closely with stakeholders and engineers from other areas, striving to achieve a highly automated and adaptable process and application landscape.
What we'll offer you
- 100% reimbursement under childcare assistance benefit (gender neutral)
- Sponsorship for industry-relevant certifications and education
- Accident and term life insurance

Your key responsibilities
- Design, develop, and deploy data processing pipelines and data-driven applications on GCP
- Write and maintain SQL queries and use data modeling tools like Dataform or dbt for data management
- Write clean, maintainable code in Java and/or Python, adhering to clean code principles
- Apply concepts of deployments and configurations in GKE/OpenShift, and implement infrastructure as code using Terraform
- Set up and maintain CI/CD pipelines using GitHub Actions; write and maintain unit and integration tests

Your skills and experience
- Bachelor's degree in Computer Science, Data Science, or a related field, or equivalent work experience
- Proven experience as a Data Engineer, Backend Engineer, or similar role
- Strong experience with cloud, Terraform, and GitHub Actions
- Proficiency in SQL and Java and/or Python; experience with tools and frameworks like Apache Beam, Spring Boot, and Apache Airflow
- Familiarity with data modeling tools like Dataform or dbt, and experience writing unit and integration tests
- Understanding of clean code principles and commitment to writing maintainable code
- Excellent problem-solving skills, attention to detail, and strong communication skills

How we'll support you
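The "transform and combine data into a data model" duty this role describes can be sketched with plain SQL; the snippet below uses SQLite as a stand-in for the warehouse, and the table and column names are invented for illustration.

```python
# Hedged sketch: joining raw source tables into a reporting-friendly model,
# the kind of transformation dbt/Dataform models express. SQLite stands in
# for the warehouse; schema and data are invented.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE clients (id INTEGER PRIMARY KEY, segment TEXT);
    CREATE TABLE payments (client_id INTEGER, amount REAL);
    INSERT INTO clients VALUES (1, 'SME'), (2, 'SME'), (3, 'Corporate');
    INSERT INTO payments VALUES (1, 100.0), (1, 50.0), (2, 25.0), (3, 500.0);
""")

# The "data model" layer: revenue aggregated per client segment,
# ready for analysts or an operational consumer.
rows = conn.execute("""
    SELECT c.segment, SUM(p.amount) AS revenue
    FROM clients c
    JOIN payments p ON p.client_id = c.id
    GROUP BY c.segment
    ORDER BY c.segment
""").fetchall()
```

In a dbt or Dataform project the SELECT above would live in its own model file, with tests and lineage tracked by the tool.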
Posted 4 days ago
3.0 - 6.0 years
5 - 10 Lacs
Mumbai, Mumbai Suburban, Navi Mumbai
Work from Office
Description
Education: B.E./B.Tech/MCA in Computer Science
Experience: 3 to 6 years of experience in Kubernetes/GKE/AKS/OpenShift administration

Mandatory Skills (Docker and Kubernetes)
- Good understanding of the various components of different types of Kubernetes clusters (Community/AKS/GKE/OpenShift)
- Provisioning experience with various types of Kubernetes clusters (Community/AKS/GKE/OpenShift)
- Upgrade and monitoring experience with various types of Kubernetes clusters (Community/AKS/GKE/OpenShift)
- Good experience with container security
- Good experience with container storage
- Good experience with CI/CD workflows (preferably Azure DevOps, Ansible, and Jenkins)
- Good experience/knowledge of cloud platforms, preferably Azure/Google/OpenStack
- Good experience with container runtimes like Docker/containerd
- Basic understanding of application life cycle management on container platforms
- Good understanding of container registries
- Good understanding of Helm and Helm charts
- Good understanding of container monitoring tools like Prometheus, Grafana, and ELK
- Good experience with the Linux operating system
- Basic understanding of enterprise networks and container networks
- Able to handle Severity#2 and Severity#3 incidents
- Good communication skills
- Capable of providing support
- Analytical and problem-solving capabilities; ability to work with teams
- Experience with a 24x7 operations support framework
- Knowledge of the ITIL process

Preferred Skills/Knowledge
- Container Platforms: Docker, Kubernetes, GKE, AKS, or OpenShift
- Automation Platforms: shell scripts, Ansible, Jenkins
- Cloud Platforms: GCP/Azure/OpenStack
- Operating System: Linux/CentOS/Ubuntu
- Container storage and backup

Desired Skills
1. Certified Kubernetes Administrator, or
2. Certified Red Hat OpenShift Administrator
3. Certification in administration of any cloud platform is an added advantage

Soft Skills
1. Must have good troubleshooting skills
2. Must be ready to learn new technologies and acquire new skills
3. Must be a team player
4. Should be good in spoken and written English
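A small, concrete slice of the cluster-upgrade duty above: Kubernetes control planes are generally upgraded one minor version at a time, so planning an upgrade means walking the intermediate versions. The helper below is an illustrative sketch of that rule, not a tool from any vendor.

```python
# Illustrative sketch: compute the ordered minor-version upgrade path for a
# Kubernetes cluster, reflecting the one-minor-version-at-a-time guidance.
# Version strings here are simplified to "MAJOR.MINOR".

def upgrade_path(current: str, target: str) -> list[str]:
    """Return the intermediate versions to step through, e.g. 1.27 -> 1.30."""
    major, cur_minor = (int(p) for p in current.split("."))
    tgt_major, tgt_minor = (int(p) for p in target.split("."))
    if tgt_major != major:
        raise ValueError("major-version jumps need a separate plan")
    if tgt_minor < cur_minor:
        raise ValueError("downgrades are not supported")
    return [f"{major}.{m}" for m in range(cur_minor + 1, tgt_minor + 1)]
```

For example, `upgrade_path("1.27", "1.30")` yields the three upgrades to schedule, each of which would be validated (workloads, add-ons, API deprecations) before the next.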
Posted 5 days ago
7.0 - 12.0 years
30 - 35 Lacs
Pune
Work from Office
Job Title: Senior Engineer
Location: Pune, India
Corporate Title: AVP

Role Description
Investment Banking is a technology-centric business, with an increasing move to real-time processing and an increasing appetite from customers for integrated systems and access to supporting data. This means that technology is more important than ever for the business.

The IB CARE Platform aims to increase the productivity of both Google Cloud and on-prem application development by providing a frictionless build and deployment platform that offers service and data reusability. The platform provides the chassis and standard components of an application, ensuring reliability, usability, and safety, and gives on-demand access to the services needed to build, host, and manage applications on the cloud/on-prem. In addition to technology services, the platform aims to have compliance baked in, enforcing controls/security, reducing application team involvement in SDLC and ORR controls, and enabling teams to focus more on application development and release to production faster.

We are looking for a platform engineer to join a global team working across all aspects of the platform, from GCP/on-prem infrastructure and application deployment through to the development of CARE-based services. Deutsche Bank is one of the few banks with the scale and network to compete aggressively in this space, and the breadth of investment in this area is unmatched by our peers. Joining the team is a unique opportunity to help build a platform to support some of our most mission-critical processing systems.

What we'll offer you
- 100% reimbursement under childcare assistance benefit (gender neutral)
- Sponsorship for industry-relevant certifications and education
- Accident and term life insurance

Your Key Responsibilities
As a CARE platform engineer you will be working across the board on activities to build and support the platform and liaising with tenants. Key responsibility areas:
- Manage and monitor cloud computing systems and provide technical support to ensure the systems' efficiency and security
- Work with platform leads and platform engineers at a technical level
- Liaise with tenants regarding onboarding and provide platform expertise
- Contribute to the platform offering as part of Sprint deliverables
- Support the production platform as part of the wider team

Your skills and experience
- Understanding of GCP and services such as GKE, IAM, identity services, and Cloud SQL
- Kubernetes/service mesh configuration
- Experience with IaC tooling such as Terraform
- Proficient in SDLC/DevOps best practices
- GitHub experience, including Git workflow
- Exposure to modern deployment tooling, such as Argo CD, desirable
- Programming experience (such as Java/Python) desirable
- A strong team player comfortable in a cross-cultural and diverse operating environment
- Results-oriented with the ability to deliver under tight timelines
- Ability to successfully resolve conflicts in a globally matrixed organization
- Excellent communication and collaboration skills
- Comfortable navigating ambiguity to extract meaningful risk insights

How we'll support you

About us and our teams
Please visit our company website for further information: https://www.db.com/company/company.htm
We strive for a culture in which we are empowered to excel together every day. This includes acting responsibly, thinking commercially, taking initiative, and working collaboratively. Together we share and celebrate the successes of our people. Together we are Deutsche Bank Group. We welcome applications from all people and promote a positive, fair and inclusive work environment.
Posted 6 days ago
7.0 - 12.0 years
35 - 40 Lacs
Pune
Work from Office
Job Title: DevOps, Test Automation & AI, AVP
Location: Pune, India

Role Description
We are seeking a results-driven engineer with a strong foundation in test automation, DevSecOps, and the use of AI-enhanced developer tools like Gemini, GitHub Copilot, and OpenRewrite. The role involves building robust automated test solutions, integrating secure DevOps practices on GCP, and continuously improving software quality and delivery through intelligent tooling. If you are actively coding, have a passion for AI, and want to be part of developing innovative products, then apply today.

What we'll offer you
- 100% reimbursement under childcare assistance benefit (gender neutral)
- Sponsorship for industry-relevant certifications and education
- Accident and term life insurance

Your key responsibilities
- Develop and maintain automated test frameworks for APIs, UI, and integration workflows
- Implement and manage CI/CD pipelines on Google Cloud Platform (GCP) using Cloud Build, Cloud Functions, and related services
- Utilize Gemini, GitHub Copilot, and OpenRewrite to accelerate test development, modernize codebases, and enforce best practices
- Integrate tools like Dependabot, SonarQube, Veracode, and CodeQL to drive secure, high-quality code
- Promote and apply shift-left testing strategies and DevSecOps principles across all stages of the SDLC
- Collaborate cross-functionally to deliver scalable, intelligent automation capabilities embedded within engineering workflows

Your skills and experience
Skills You'll Need
- Experience with test automation frameworks such as Selenium, Cypress, REST Assured, or Playwright
- Deep understanding of DevOps and cloud-native delivery pipelines, especially using GCP
- Hands-on with AI/ML tools in the development lifecycle, including Gemini, GitHub Copilot, and OpenRewrite
- Familiar with DevSecOps tools: SonarQube, Veracode, CodeQL, Dependabot
- Proficient in scripting (Python, Shell) and using version control systems like Git
- Knowledge of Agile methodologies (Scrum, Kanban), TDD, and BDD
- Experience with Infrastructure-as-Code (Terraform, GCP Deployment Manager)

Skills That Will Help You Excel
- Stakeholder Communication: ability to explain AI concepts to non-technical audiences and collaborate cross-functionally
- Adaptability & Innovation: flexibility in learning new tools and developing innovative solutions
- Experience with GCP Vertex AI
- Exposure to GKE, Docker, or Kubernetes
- Knowledge of performance/load testing tools (e.g., JMeter, k6)
- Relevant certifications in GCP, DevOps, Test Automation, or AI/ML

How we'll support you

About us and our teams
Please visit our company website for further information: https://www.db.com/company/company.htm
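Shift-left quality gates of the kind wired into such CI/CD pipelines can be as simple as failing the build when coverage drops below a threshold. The sketch below is a toy illustration; the report dictionary format is invented, not SonarQube's or any real tool's API.

```python
# Illustrative sketch of a CI quality gate: fail the pipeline when line
# coverage falls below a threshold. The report format is a made-up
# minimal example, not a real tool's output schema.

def coverage_gate(report: dict, threshold: float = 80.0) -> bool:
    """Return True if the build passes the coverage gate."""
    covered = report["covered_lines"]
    total = report["total_lines"]
    pct = 100.0 * covered / total if total else 100.0
    return pct >= threshold


# In a pipeline step, a failing gate would exit non-zero to block the merge.
passing = coverage_gate({"covered_lines": 85, "total_lines": 100})
```

Real gates (SonarQube quality gates, CodeQL alerts, Dependabot checks) expose richer conditions, but the pass/fail contract with the pipeline is the same idea.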
Posted 6 days ago
7.0 - 12.0 years
9 - 14 Lacs
Bengaluru
Work from Office
FICO (NYSE: FICO) is a leading global analytics software company, helping businesses in 100+ countries make better decisions. Join our world-class team today and fulfill your career potential!

The Opportunity
Join our dynamic and forward-thinking Platform Engineering team at a world-class analytics company. Our mission is to accelerate innovation by delivering a cohesive internal developer platform that combines an enterprise-grade Spotify Backstage portal, Buf Schema Registry, GitOps automation, and cloud-native tooling. As a Lead Platform Engineer, you'll architect and own the services, plugins, and pipelines that power a world-class developer experience for thousands of engineers building fraud, risk, marketing, and customer-management solutions.
Sr. Director, 1ES Engineering

What You'll Contribute
- Operate and scale Backstage as the single pane of glass for developers
- Design and publish custom Backstage plugins, templates, and software catalog integrations that reduce cognitive load and surface business context
- Define governance & RBAC models for Backstage groups, entities, and APIs
- Establish and maintain BSR as the system of record for Protobuf and gRPC APIs
- Automate linting, breaking-change detection, versioning, and dependency insights in CI/CD
- Integrate BSR metadata into Backstage to provide full API lineage and documentation
- Collaborate with product and infrastructure teams to deliver resilient, self-service platform building blocks
- Own GitHub Actions, Argo CD, Crossplane, and policy-as-code workflows that enable secure, audit-ready deployments
- Continuously experiment with new ideas (hack days, proofs-of-concept, and brown-bag sessions) to push the envelope of DevEx
- Champion data-driven improvements using DORA/SPACE metrics and developer feedback loops
- Instrument, monitor, and tune platform components for scale (Prometheus/Grafana, Splunk, Cribl, CloudWatch)
- Embed security controls (SCA, SAST, OPA/Kyverno) early in the SDLC
- Guide engineers across domains, codify best practices, and foster a culture of psychological safety, creativity, and ownership

What We're Seeking
- Deep Backstage Expertise: proven experience deploying, customizing, and scaling Backstage in production, including authoring plugins (React/Node), scaffolder templates, and catalog processors
- Buf Schema Registry Mastery: hands-on knowledge of managing API contracts in BSR, enforcing semantic versioning, and integrating breaking-change gates into CI/CD
- Cloud-Native & GitOps Proficiency: Kubernetes (EKS/GKE/AKS), Argo CD, Crossplane, Docker, Helm; expert-level GitHub Actions workflow design
- Programming Skills: strong in TypeScript/JavaScript (for Backstage), plus one or more of Go, Python, or Node.js for platform services
- Infrastructure as Code & Automation: Terraform, Pulumi, or Ansible to codify cloud resources and policies
- Observability & Incident Management: Prometheus, Grafana, Datadog, PagerDuty; ability to design SLO/SLA dashboards
- Creative Problem-Solving & Growth Mindset: demonstrated ability to think big, prototype quickly, and iterate based on data and feedback
- Excellent Communication & Collaboration: clear written and verbal skills; ability to translate technical details to diverse stakeholders
- Education/Experience: Bachelor's in Computer Science or equivalent experience; 7+ years in platform, DevOps, or developer-experience roles, with 2+ years focused on Backstage and/or BSR

Our Offer to You
- An inclusive culture strongly reflecting our core values: Act Like an Owner, Delight Our Customers, and Earn the Respect of Others
- The opportunity to make an impact and develop professionally by leveraging your unique strengths and participating in valuable learning experiences
- Highly competitive compensation, benefits, and rewards programs that encourage you to bring your best every day and be recognized for doing so
- An engaging, people-first work environment offering work/life balance, employee resource groups, and social events to promote interaction and camaraderie

Why Make a Move to FICO
At FICO, you can develop your career with a leading organization in one of the fastest-growing fields in technology today: Big Data analytics. You'll play a part in our commitment to help businesses use data to improve every choice they make, using advances in artificial intelligence, machine learning, optimization, and much more. FICO makes a real difference in the way businesses operate worldwide:
- Credit Scoring: FICO Scores are used by 90 of the top 100 US lenders
- Fraud Detection and Security: 4 billion payment cards globally are protected by FICO fraud systems
- Lending: 3/4 of US mortgages are approved using the FICO Score

Global trends toward digital transformation have created tremendous demand for FICO's solutions, placing us among the world's top 100 software companies by revenue. We help many of the world's largest banks, insurers, retailers, telecommunications providers, and other firms reach a new level of success. Our success depends on really talented people, just like you, who thrive on the collaboration and innovation that's nurtured by a diverse and inclusive environment. We'll provide the support you need, while ensuring you have the freedom to develop your skills and grow your career. Join FICO and help change the way business thinks!

Learn more about how you can fulfil your potential at www.fico.com/Careers

FICO promotes a culture of inclusion and seeks to attract a diverse set of candidates for each job opportunity. We are an equal employment opportunity employer and we're proud to offer employment and advancement opportunities to all candidates without regard to race, color, ancestry, religion, sex, national origin, pregnancy, sexual orientation, age, citizenship, marital status, disability, gender identity or Veteran status.
Research has shown that women and candidates from underrepresented communities may not apply for an opportunity if they don't meet all stated qualifications. While our qualifications are clearly related to role success, each candidate's profile is unique, and strengths in certain skill and/or experience areas can be equally effective. If you believe you have many, but not necessarily all, of the stated qualifications, we encourage you to apply. Information submitted with your application is subject to the FICO Privacy Policy at https://www.fico.com/en/privacy-policy
Posted 6 days ago
3.0 - 5.0 years
5 - 7 Lacs
Bengaluru
Work from Office
Key Responsibilities:
- Design, develop, and maintain backend services and APIs using Python (Flask/Django/FastAPI)
- Develop scalable and secure microservices for data processing, analytics, and APIs
- Manage and optimize data storage with SQL (PostgreSQL/MySQL) and NoSQL databases (MongoDB/Firestore/Bigtable)
- Design and implement CI/CD pipelines and automate cloud deployments on GCP (App Engine, Cloud Run, Cloud Functions, GKE)
- Collaborate with front-end developers, product owners, and other stakeholders to integrate backend services with business logic and UI
- Optimize application performance and troubleshoot issues across backend systems
- Implement best practices in code quality, testing (unit/integration), security, and scalability

Qualifications:
- Bachelor's or Master's degree in Computer Science, Data Science, or a related field
- 3+ years of relevant IT experience
- Strong hands-on programming experience in Python
- Experience with one or more Python frameworks: Flask, Django, or FastAPI
- Deep understanding of RESTful API design and development
- Proficient in working with relational databases (PostgreSQL, MySQL) and NoSQL databases (MongoDB, Firestore, BigQuery)
- Solid understanding of and experience with GCP services
- Familiarity with Git and CI/CD tools (e.g., Cloud Build, Jenkins, GitHub Actions)
- Strong debugging, problem-solving, and performance tuning skills
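A minimal sketch of the backend-API work this role centers on: the posting names Flask/Django/FastAPI, but to keep the example self-contained this uses only the standard library's WSGI interface, the protocol those frameworks build on. The `/health` route and response shape are invented for illustration.

```python
# Hedged sketch: a JSON endpoint as a bare WSGI app (the interface Flask
# and Django implement under the hood). Route and payload are invented.
import json
from wsgiref.util import setup_testing_defaults


def app(environ, start_response):
    """Route requests; return JSON bodies as lists of bytes per WSGI."""
    if environ.get("PATH_INFO") == "/health":
        body = json.dumps({"status": "ok"}).encode()
        start_response("200 OK", [("Content-Type", "application/json")])
        return [body]
    start_response("404 Not Found", [("Content-Type", "application/json")])
    return [json.dumps({"error": "not found"}).encode()]


# Exercise the app in-process instead of binding a real port.
environ = {}
setup_testing_defaults(environ)
environ["PATH_INFO"] = "/health"
captured = {}

def start_response(status, headers):
    captured["status"] = status

result = b"".join(app(environ, start_response))
```

In Flask the same endpoint would be a `@app.route("/health")` function returning a dict; the in-process call pattern above mirrors what framework test clients do.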
Posted 1 week ago
8.0 - 10.0 years
6 - 10 Lacs
Bengaluru
Work from Office
Company Overview: Maximus is a leading innovator in the government space, providing transformative solutions in the management and service delivery of government health and human services programs. We pride ourselves on our commitment to excellence, innovation, and a customer-first approach, driven by our core values. This has fostered our continual support of public programs and improved access to government services for citizens. Maximus continues to grow its Digital Solutions organization to better serve the needs of our organization and our customers in the government, health, and human services space, while improving access to government services for citizens. We use an approach grounded in design thinking, lean, and agile to help solve complicated problems and turn bold ideas into delightful solutions.

Job Description: We are seeking a hands-on and strategic Lead DevOps Engineer to architect, implement, and lead the automation and CI/CD practices across our cloud infrastructure. This role demands deep expertise in cloud-native technologies and modern DevOps tooling, with a strong emphasis on AWS, Kubernetes, ArgoCD, and Infrastructure as Code.
The ideal candidate is also expected to be a motivated self-starter with a proactive approach to resolving problems and issues with minimal supervision.

Key Responsibilities:
- Design and manage scalable infrastructure across AWS and Azure using Terraform (IaC)
- Define and maintain reusable Terraform modules to enforce infrastructure standards and best practices
- Implement secrets management, configuration management, and automated environment provisioning
- Architect and maintain robust CI/CD pipelines using Jenkins and ArgoCD
- Implement GitOps workflows for continuous delivery and environment promotion
- Automate testing, security scanning, and deployment processes across multiple environments
- Design and manage containerized applications with Docker
- Deploy and manage scalable, secure workloads using Kubernetes (EKS/ECS/GKE/AKS/self-managed)
- Create and maintain Helm charts, Kustomize configs, or other manifest templating tools
- Manage Git repositories, branching strategies, and code review workflows
- Promote version control best practices, including commit hygiene and semantic release tagging
- Set up and operate observability stacks: any of Prometheus, Grafana, ELK, Loki, Alertmanager
- Define SLAs, SLOs, and SLIs for critical services
- Lead incident response, perform root cause analysis, and publish post-mortem documentation
- Integrate security tools and checks directly into CI/CD workflows
- Manage access control and secrets, and ensure compliance with standards such as FedRAMP
- Mentor and guide DevOps engineers to build a high-performing team
- Collaborate closely with software engineers, QA, product managers, and security teams
- Promote a culture of automation, reliability, and continuous improvement

Qualifications:
- Bachelor's degree in Computer Science, Information Security, or a related field (or equivalent experience)
- 8+ years of experience in DevOps or a similar role, with a strong security focus
- Preferred: AWS Certified Cloud Practitioner, AWS Certified DevOps Engineer – Professional, AWS Certified Solutions Architect, or similar
- Knowledge of cloud platforms (AWS; Azure good to have) and containerization technologies (Docker, Kubernetes), with a key focus on AWS, EKS, and ECS
- Experience with infrastructure as code (IaC) tools such as Terraform
- Proficiency in CI/CD tools like AWS CodePipeline, Jenkins, and Azure DevOps Server
- Familiarity with programming and scripting languages (e.g., Python, Bash, Go)
- Excellent problem-solving skills and the ability to work in a fast-paced, collaborative environment
- Strong communication skills, with the ability to convey complex security concepts to technical and non-technical stakeholders

Preferred Qualifications:
- Strong understanding of and working experience with enterprise applications and containerized application workloads
- Knowledge of networking concepts
- Knowledge of network security principles and technologies (e.g., firewalls, VPNs, IDS/IPS)
Posted 1 week ago
3.0 - 8.0 years
3 - 7 Lacs
Coimbatore
Work from Office
Project Role: Application Support Engineer
Project Role Description: Act as software detectives; provide a dynamic service identifying and solving issues within multiple components of critical business systems.
Must-have skills: Cloud Infrastructure
Good-to-have skills: Google Cloud Platform Administration, AWS Administration, Google Cloud Compute Services
Minimum 3 year(s) of experience is required
Educational Qualification: 15 years of full-time education

We are seeking an engineer proficient in cloud computing platforms to collaborate with our business teams in implementing solutions that enhance agility, flexibility, and durability across diverse environments.

Responsibilities:
- Extensive hands-on experience with Google Cloud Platform (GCP) and Google Kubernetes Engine (GKE) implementations
- Proven track record in supporting and deploying various public cloud services
- Experience in building or managing self-service platforms to boost developer productivity
- Proficiency in using Infrastructure as Code (IaC) tools like Terraform
- Demonstrated expertise in operating and managing container orchestration engines such as Kubernetes
- Skilled in diagnosing and resolving complex issues in automation and cloud environments
- Advanced experience in architecting and managing highly available and high-performance multi-zonal or multi-regional systems
- Strong understanding of infrastructure CI/CD pipelines and associated tools
- Collaborate with internal teams and stakeholders to understand user requirements and implement technical solutions
- Experience working in GKE and Edge/GDCE environments
- Assist development teams in building and deploying microservices-based applications in public cloud environments

Technical Skillset:
- Minimum of 3 years of hands-on experience in migrating or deploying GCP cloud-based solutions
- At least 3 years of experience in architecting, implementing, and supporting GCP infrastructure and topologies
- Over 3 years of experience with GCP IaC, particularly with Terraform, including writing and maintaining Terraform configurations and modules
- Experience in deploying container-based systems such as Docker or Kubernetes on both private and public clouds (GCP GKE)
- Familiarity with CI/CD tools (e.g., GitHub) and processes
- GCP certification is mandatory
- CKA or CKAD certification is highly desirable
- HashiCorp Terraform certification is a significant plus

Qualification: 15 years of full-time education
Posted 1 week ago
3.0 - 8.0 years
3 - 7 Lacs
Bengaluru
Work from Office
Project Role: Application Support Engineer
Project Role Description: Act as software detectives, providing a dynamic service that identifies and solves issues within multiple components of critical business systems.
Must-have skills: Cloud Infrastructure, Kubernetes, Red Hat OpenShift
Good-to-have skills: NA
Minimum experience: 3 year(s)
Educational Qualification: 15 years full-time education

Summary: You will deploy and manage Kubernetes clusters in Azure and GCP. You will act as a software detective, providing a dynamic service that identifies and resolves issues within various components of critical business systems. Your typical day will involve collaborating with team members to troubleshoot problems, analyzing system performance, and ensuring the smooth operation of applications. You will engage with stakeholders to understand their needs and provide timely solutions, all while maintaining a focus on enhancing system reliability and user satisfaction.

Roles & Responsibilities:
1. Hands-on experience in managing Kubernetes clusters in the cloud (Azure/GCP), with the ability to upgrade the Kubernetes version and cluster version as required.
2. Troubleshoot and fix issues related to pod deployment and connectivity.
3. Configure the cluster with a monitoring solution and manage it with security best practices.
4. Manage storage for pods and configure CI/CD tools for application deployment to the container cluster.
5. Experience with other container management tools such as OpenShift and Rancher is a plus.

Professional & Technical Skills:
- 4+ years of hands-on experience in Kubernetes management (AKS/GKE/OpenShift)
- Strong knowledge of container management in cloud managed services (AKS/EKS/GKE)
- Good understanding of microservice architecture, with the ability to build and scale container clusters
- Hands-on experience with Kubernetes upgrades and container cluster upgrades
- Ability to troubleshoot and fix container and cluster service issues
- Experience integrating with network, storage, monitoring, and firewall components
- Configuration of Helm charts, ConfigMaps, automatic rollout, and service discovery
- Knowledge of application deployments to container clusters using CI/CD tools
- Experience with container monitoring tools, access management, and security best practices
- Hands-on experience with AKS, GKE, and OpenShift
- Tools: Kubernetes Dashboard, Kube-monkey, Kube-hunter, Project Quay, Kube-burner, Kube-bench
- Certifications: Kubernetes and OpenShift
- Bridge relationships between offshore and onshore/client/stakeholder/third-party vendor support teams
- Maintain client confidence by adhering to SLAs and deliverables
- Maximize the contribution of offshore teams for better, more effective support
- Build a good understanding of client processes, architecture, and execution
- Quick response, timely follow-up, and ownership until closure

Additional Information:
- The candidate should have a minimum of 3 years of experience in Kubernetes.
- High personal drive; results oriented; makes things happen.
- Excellent communication and interpersonal skills.
- Effective in building close working relationships with others.
- Innovative, creative, and adaptive to new environments.
- Strong analytical and teamwork skills.
- Good attitude toward learning and development.
- A 15 years full-time education is required.

Qualification: 15 years full-time education
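The upgrade responsibilities above turn on Kubernetes' supported version-skew policy: control planes are upgraded one minor version at a time. A minimal stdlib sketch of that rule (the helper names are hypothetical and not tied to any vendor tooling):

```python
def parse_version(v: str) -> tuple[int, int]:
    """Parse 'major.minor[.patch]' (optionally 'v'-prefixed) into (major, minor)."""
    parts = v.lstrip("v").split(".")
    return int(parts[0]), int(parts[1])

def valid_upgrade(current: str, target: str) -> bool:
    """Kubernetes control planes move at most one minor version per upgrade."""
    cur, tgt = parse_version(current), parse_version(target)
    if tgt[0] != cur[0]:
        return False  # major-version jumps are not supported in place
    return 0 <= tgt[1] - cur[1] <= 1

def upgrade_path(current: str, target: str) -> list[str]:
    """List the intermediate minor versions an upgrade must step through."""
    cur, tgt = parse_version(current), parse_version(target)
    return [f"{cur[0]}.{m}" for m in range(cur[1] + 1, tgt[1] + 1)]
```

For example, moving a cluster from 1.27 to 1.30 requires stepping through 1.28 and 1.29 rather than jumping directly.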
Posted 1 week ago
1.0 - 3.0 years
5 - 10 Lacs
Navi Mumbai
Work from Office
Job Role: DevOps Engineer
Years of Experience: 2–3 years
Location: Ghansoli
Education: BE/B.Tech

Overview: Looking for a motivated and skilled DevSecOps Engineer with 2–3 years of hands-on experience in implementing DevSecOps practices, CI/CD pipelines, and integrating security into the development lifecycle. The ideal candidate will have working knowledge of Kubernetes (K8s), cloud platforms like GKE and AKS, and build/deployment automation tools including Azure DevOps and Jenkins. Experience with security scanning tools (SAST, DAST, Fortify, SonarQube) and scripting knowledge in Groovy, ANT, and JavaScript is essential.

Job Role:
• Design, implement, and maintain secure and scalable CI/CD pipelines.
• Integrate security tools and processes into DevOps workflows (DevSecOps).
• Automate infrastructure and deployments using Azure DevOps and Jenkins.
• Deploy to on-premises K8s clusters and manage Kubernetes clusters on GKE and AKS.
• Deploy to Windows-based servers (IIS).
• Implement and maintain Static and Dynamic Application Security Testing (SAST/DAST) tools.
• Integrate and configure Fortify, SonarQube, and other security tools into pipelines.
• Write and maintain automation scripts using Groovy, ANT, and JavaScript.
• Collaborate with development, QA, and security teams to ensure secure software delivery.
• Conduct security assessments and remediations as part of the SDLC.

Required Skills & Qualifications:
• Bachelor's degree in Engineering or equivalent.
• 2–3 years of hands-on experience in DevSecOps/DevOps.
• Strong knowledge and hands-on experience with:
 - Azure DevOps Pipelines and Jenkins for CI/CD.
 - Security tools: Fortify, SonarQube, Black Duck, and DAST/SAST tools (e.g., OWASP ZAP, Burp Suite).
 - Kubernetes (K8s), with GKE and AKS.
• Proficiency in scripting languages such as Groovy, ANT, and JavaScript.
• Basic programming/scripting capabilities to automate security checks and workflows.
• Understanding of application security principles and best practices.
• Experience working in Agile and collaborative team environments.
• Excellent troubleshooting, documentation, and communication skills.
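The SAST/DAST integration described above typically ends in a quality gate that breaks the build when findings exceed agreed thresholds. A minimal sketch of that gate logic (the thresholds and finding shape are illustrative assumptions; real gates come from SonarQube or Fortify policy configuration):

```python
# Assumed severity thresholds -- adjust per organization policy.
THRESHOLDS = {"critical": 0, "high": 0, "medium": 5}

def gate(findings: list[dict]) -> tuple[bool, dict]:
    """Count scanner findings per severity and fail the build when any
    threshold is exceeded. Each finding is a dict with a 'severity' key."""
    counts: dict[str, int] = {}
    for finding in findings:
        sev = finding["severity"].lower()
        counts[sev] = counts.get(sev, 0) + 1
    passed = all(counts.get(sev, 0) <= limit for sev, limit in THRESHOLDS.items())
    return passed, counts
```

A pipeline step would call `gate(...)` on the parsed scanner report and exit non-zero when `passed` is false.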
Posted 1 week ago
5.0 - 10.0 years
15 - 30 Lacs
Kolkata
Work from Office
Job Title: Senior SAP EHS Consultant Location: Kolkata Job Summary: Seeking an experienced Senior SAP EHS Consultant to lead and support the implementation, configuration, and maintenance of SAP EHS solutions. The role focuses on end-to-end delivery of Environment, Health & Safety (EHS) processes with particular emphasis on regulatory compliance, substance volume tracking, WWI report generation, and global label management. Key Responsibilities: Manage, implement, and support end-to-end SAP EHS processes within ECC or S/4HANA environments. Lead and contribute to SAP EHS implementation, rollout, and enhancement projects. Design, develop, and maintain WWI templates for Safety Data Sheets (SDS), Labels, and other compliance-related documentation. Manage SERC content updates and ensure synchronization with WWI servers. Configure and maintain modules such as: Product Safety, Dangerous Goods Management, WWI Report Shipping, Global Label Management (GLM) Ensure compliance with global regulatory requirements (e.g., REACH, OSHA, GHS ). Handle Specification Management for raw materials, intermediates, and finished goods. Oversee Report Management , including document generation, printing, and distribution. Support and monitor Substance Volume Tracking (SVT) processes in alignment with legal thresholds. Coordinate with cross-functional teams in Product Safety, Compliance, and Legal for timely and accurate documentation delivery. Advise on best practices for SERC deployment , integration testing, and regulatory impact assessments. Required Skills: Strong hands-on expertise in the SAP EHS Suite (ECC/S4HANA). Proven experience with: SVT (Substance Volume Tracking), WWI Report Management and Label Logic, Specification Database , Substance Hierarchy , and Property Trees, SERC regulatory content. Solid understanding of EHS regulatory frameworks (REACH, OSHA, GHS, etc.). Familiarity with SAP module integration including MM, PP, SD, and PLM . 
Strong knowledge of material master dependencies and regulatory compliance workflows. Ability to troubleshoot WWI server issues and address SDS/label rendering errors. Educational Qualifications: Bachelor's degree in Engineering, Environmental Sciences, or a related technical field. Domain knowledge or work experience in the pharmaceuticals, manufacturing, or FMCG sectors is preferred. Experience: Minimum 5 years of hands-on experience in SAP EHS implementation or support, including full lifecycle project involvement.
Posted 1 week ago
3.0 - 5.0 years
5 - 9 Lacs
Bengaluru
Work from Office
Educational Qualification: Master of Engineering, MCA, MTech, BTech, BE, BCA, Bachelor of Engineering, Bachelor of Science, Master of Science
Service Line: Application Development and Maintenance

Responsibilities:
- Design and implement cloud-native solutions on Google Cloud Platform
- Deploy and manage infrastructure using Terraform, Cloud Deployment Manager, or similar IaC tools
- Manage GCP services such as Compute Engine, GKE (Kubernetes), Cloud Storage, Pub/Sub, Cloud Functions, BigQuery, etc.
- Optimize cloud performance, cost, and scalability
- Ensure security best practices and compliance across the GCP environment
- Monitor and troubleshoot issues using Stackdriver/Cloud Monitoring
- Collaborate with development, DevOps, and security teams
- Automate workflows and CI/CD pipelines using tools like Jenkins, GitLab CI, or Cloud Build

Additional Responsibilities:
- GCP Professional certification (e.g., Professional Cloud Architect, Cloud Engineer)
- Experience with hybrid cloud or multi-cloud architecture
- Exposure to other cloud platforms (AWS/Azure) is a plus
- Strong communication and teamwork skills

Technical and Professional:
- 3–5 years of hands-on experience with GCP
- Strong expertise in Terraform, GCP networking, and cloud security
- Proficient in container orchestration using Kubernetes (GKE)
- Experience with CI/CD, DevOps practices, and shell scripting or Python
- Good understanding of IAM, VPC, firewall rules, and service accounts
- Familiarity with monitoring/logging tools like Stackdriver or Prometheus
- Strong problem-solving and troubleshooting skills

Preferred Skills: .Net, Java, Python, Java-Spring Boot, Cloud Platform - Google Cloud Platform, Developer - GCP/Google Cloud
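Reviewing firewall rules and service accounts, as listed above, often starts with a simple audit pass over rule definitions. A hedged stdlib sketch (the rule shape and the sensitive-port set are assumptions for illustration, not GCP API types) that flags ingress rules open to the whole internet:

```python
import ipaddress

SENSITIVE_PORTS = {22, 3389}  # SSH and RDP -- assumed policy, not a GCP default

def risky_rules(rules: list[dict]) -> list[str]:
    """Flag ingress rules exposing sensitive ports to the entire internet.
    Each rule: {'name': str, 'source': CIDR string, 'ports': [int, ...]}."""
    flagged = []
    for rule in rules:
        source = ipaddress.ip_network(rule["source"])
        open_to_world = source.prefixlen == 0  # 0.0.0.0/0 (or ::/0)
        if open_to_world and SENSITIVE_PORTS & set(rule["ports"]):
            flagged.append(rule["name"])
    return flagged
```

The same check is straightforward to run against exported rule lists in a CI step before Terraform applies changes.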
Posted 1 week ago
3.0 - 6.0 years
3 - 7 Lacs
Mumbai, Mumbai Suburban, Delhi
Work from Office
Job Description
Education: B.E./B.Tech/MCA in Computer Science
Experience: 3 to 6 years of experience in Kubernetes/GKE/AKS/OpenShift administration

Mandatory Skills (Docker and Kubernetes):
- Good understanding of the components of various types of Kubernetes clusters (Community/AKS/GKE/OpenShift)
- Provisioning experience with various types of Kubernetes clusters (Community/AKS/GKE/OpenShift)
- Upgrade and monitoring experience with various types of Kubernetes clusters (Community/AKS/GKE/OpenShift)
- Good experience with container security
- Good experience with container storage
- Good experience with CI/CD workflows (preferably Azure DevOps, Ansible, and Jenkins)
- Good experience/knowledge of cloud platforms, preferably Azure/Google/OpenStack
- Good experience with container runtimes such as Docker/containerd
- Basic understanding of application lifecycle management on container platforms
- Good understanding of container registries
- Good understanding of Helm and Helm charts
- Good understanding of container monitoring tools such as Prometheus, Grafana, and ELK
- Good experience with the Linux operating system
- Basic understanding of enterprise networks and container networks
- Able to handle Severity#2 and Severity#3 incidents
- Good communication skills
- Capable of providing support
- Analytical and problem-solving capabilities; ability to work with teams
- Experience with a 24x7 operations support framework
- Knowledge of the ITIL process

Preferred Skills/Knowledge:
- Container platforms: Docker, Kubernetes, GKE, AKS, or OpenShift
- Automation platforms: shell scripts, Ansible, Jenkins
- Cloud platforms: GCP/Azure/OpenStack
- Operating systems: Linux/CentOS/Ubuntu
- Container storage and backup

Desired Skills:
1. Certified Kubernetes Administrator, or
2. Certified Red Hat OpenShift Administrator
3. Certification in administration of any cloud platform is an added advantage

Soft Skills:
1. Good troubleshooting skills
2. Ready to learn new technologies and acquire new skills
3. A team player
4. Good spoken and written English
Posted 1 week ago
0.0 years
9 - 14 Lacs
Noida
Work from Office
Required Skills:
- GCP proficiency: strong expertise in Google Cloud Platform (GCP) services and tools, including Compute Engine, Google Kubernetes Engine (GKE), Cloud Storage, Cloud SQL, Cloud Load Balancing, IAM, Google Workflows, Google Cloud Pub/Sub, App Engine, Cloud Functions, Cloud Run, API Gateway, Cloud Build, Cloud Source Repositories, Artifact Registry, Google Cloud Monitoring, Logging, and Error Reporting.
- Cloud-native applications: experience in designing and implementing cloud-native applications, preferably on GCP.
- Workload migration: proven expertise in migrating workloads to GCP.
- CI/CD: experience with CI/CD tools and practices.
- Python and IaC: proficiency in Python and Infrastructure as Code (IaC) tools such as Terraform.

Responsibilities:
- Cloud architecture and design: design and implement scalable, secure, and highly available cloud infrastructure solutions using GCP services and tools such as Compute Engine, Kubernetes Engine, Cloud Storage, Cloud SQL, and Cloud Load Balancing.
- Cloud-native application design: develop high-level architecture designs and guidelines for the development, deployment, and lifecycle management of cloud-native applications on GCP, ensuring they are optimized for security, performance, and scalability using services like App Engine, Cloud Functions, and Cloud Run.
- API management: develop and implement guidelines for securely exposing interfaces served by workloads running on GCP, with granular access control using the IAM platform, RBAC platforms, and API Gateway.
- Workload migration: lead the design and migration of on-premises workloads to GCP, ensuring minimal downtime and data integrity.
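The granular access control mentioned under API management can be pictured, at its simplest, as a role-to-permission lookup. A toy sketch assuming a flat role model (real GCP IAM bindings attach members to roles on resources with far richer semantics):

```python
# Illustrative role-to-permission mapping -- not real GCP role definitions.
ROLE_PERMISSIONS = {
    "viewer": {"get", "list"},
    "editor": {"get", "list", "create", "update"},
    "admin":  {"get", "list", "create", "update", "delete", "setIamPolicy"},
}

def is_allowed(roles: set[str], permission: str) -> bool:
    """Grant access if any of the caller's roles carries the permission."""
    return any(permission in ROLE_PERMISSIONS.get(role, set()) for role in roles)
```

An API gateway filter would resolve the caller's roles from their token and call `is_allowed` before forwarding the request.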
Posted 1 week ago
6.0 - 10.0 years
6 - 11 Lacs
Mumbai
Work from Office
Primary Skills:
- Google Cloud Platform (GCP): expertise in Compute (VMs, GKE, Cloud Run), Networking (VPC, Load Balancers, Firewall Rules), IAM (Service Accounts, Workload Identity, Policies), Storage (Cloud Storage, Cloud SQL, BigQuery), and Serverless (Cloud Functions, Eventarc, Pub/Sub). Strong experience with Cloud Build for CI/CD, automating deployments and managing artifacts efficiently.
- Terraform: skilled in Infrastructure as Code (IaC) with Terraform for provisioning and managing GCP resources. Proficient in modules for reusable infrastructure, state management (remote state, locking), and provider configuration. Experience in CI/CD integration with Terraform Cloud and automation pipelines.
- YAML: proficient in writing Kubernetes manifests for deployments, services, and configurations. Experience with Cloud Build pipelines, automating builds and deployments. Strong understanding of configuration management using YAML in GitOps workflows.
- PowerShell: expert in scripting for automation, managing GCP resources, and interacting with APIs. Skilled in cloud resource management, automating deployments, and optimizing cloud operations.

Secondary Skills:
- CI/CD pipelines: GitHub Actions, GitLab CI/CD, Jenkins, Cloud Build
- Kubernetes (K8s): Helm, Ingress, RBAC, cluster administration
- Monitoring & logging: Stackdriver (Cloud Logging & Monitoring), Prometheus, Grafana
- Security & IAM: GCP IAM policies, service accounts, Workload Identity
- Networking: VPC, firewall rules, load balancers, Cloud DNS
- Linux & shell scripting: Bash scripting, system administration
- Version control: Git, GitHub, GitLab, Bitbucket
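Since kubectl accepts JSON as well as YAML, a Kubernetes manifest of the kind described above can be generated from plain Python without extra dependencies. A minimal sketch (the name, image, and replica count are illustrative):

```python
import json

def deployment(name: str, image: str, replicas: int = 2) -> dict:
    """Build a minimal apps/v1 Deployment manifest as a dict. The selector
    must match the pod template labels, which this helper guarantees."""
    labels = {"app": name}
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name, "labels": labels},
        "spec": {
            "replicas": replicas,
            "selector": {"matchLabels": labels},
            "template": {
                "metadata": {"labels": labels},
                "spec": {"containers": [{"name": name, "image": image}]},
            },
        },
    }

# JSON output can be fed straight to `kubectl apply -f -`.
manifest = json.dumps(deployment("web", "nginx:1.27"), indent=2)
```

Generating manifests programmatically keeps the selector/label contract consistent, a common source of hand-written YAML mistakes.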
Posted 1 week ago
8.0 - 12.0 years
25 - 27 Lacs
Bengaluru
Work from Office
About the Role
As DevOps Engineer IV, you will design systems capable of serving as the brains of complex distributed products. You will also closely mentor younger engineers on the team and contribute to team building. Overall, you will be a strong technologist at Meesho who cares about code modularity, scalability, and reusability.

What you will do:
- Develop reusable infrastructure code and testing frameworks for infrastructure.
- Develop tools and frameworks that allow Meesho engineers to provision and manage infrastructure access controls.
- Design and develop solutions for cloud security, secrets management, and key rotation.
- Design a centralized logging and metrics platform that can handle Meesho's scale.
- Take on new infrastructure requirements and develop infrastructure code.
- Work with service teams to help them onboard the container platform.
- Scale the Meesho platform to handle millions of requests concurrently.
- Drive solutions to reduce MTTR and MTTD, enabling high availability and disaster recovery.

What you will need (must have):
- Bachelor's/Master's in Computer Science
- 8-12 years of in-depth, hands-on professional experience in the DevOps/systems engineering domain
- Proficiency in systems, Linux, open source, infrastructure engineering, and DevOps fundamentals
- Proficiency with container platforms like Docker, Kubernetes, EKS/GKE, etc.
- Exceptional design and architectural skills
- Experience building large-scale distributed systems
- Experience with scalable transactional systems (B2C)
- Expertise in capacity planning, design, cost and effort estimation, and cost optimization
- Ability to deliver the best operations tooling and practices, including CI/CD
- In-depth understanding of the SDLC
- Ability to write infrastructure as code for public or private clouds
- Ability to implement modern cloud integration architecture
- Knowledge of configuration and infrastructure management (Terraform) or CI tools (any)
- Knowledge of a coding language: Python or Go (proficiency in any one)
- Ability to architect and implement end-to-end monitoring of solutions in the cloud
- Ability to design for failover, high availability, MTTR, MTTD, RTO, RPO, and so on

Good to have:
- Hands-on experience with data processing frameworks (e.g., Spark, Databricks)
- Familiarity with big data technologies
- Experience with DataOps concepts and tools (e.g., Airflow, Zeppelin)
- Expertise in security hardening of cloud infrastructure and application/web servers against known/unknown vulnerabilities
- Understanding of compliance and security
- Ability to assess business needs and requirements to ensure appropriate approaches
- Ability to define and report on business and process metrics
- Ability to balance governance, ownership, and freedom against reliability
- Ability to develop and motivate individual contributors on the team
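MTTR and MTTD, which this role aims to reduce, are straightforward to compute once incident timestamps are recorded. A small sketch, assuming each incident record carries started/detected/resolved datetimes (field names are illustrative):

```python
from datetime import datetime, timedelta

def mttd_mttr(incidents: list[dict]) -> tuple[timedelta, timedelta]:
    """Mean time to detect (started -> detected) and mean time to restore
    (started -> resolved), averaged over a list of incident records."""
    n = len(incidents)
    detect = sum((i["detected"] - i["started"] for i in incidents), timedelta())
    restore = sum((i["resolved"] - i["started"] for i in incidents), timedelta())
    return detect / n, restore / n
```

Trend-lining these two numbers per week is a common way to show whether availability work is actually paying off.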
Posted 1 week ago
5.0 - 9.0 years
20 - 25 Lacs
Pune
Work from Office
Primary Responsibilities:
- Provide engineering leadership, mentorship, and technical direction to a small team of engineers (~6 members).
- Partner with your Engineering Manager to ensure engineering tasks are understood, broken down, and implemented to the highest quality standards.
- Collaborate with team members to solve challenging engineering tasks on time and with high quality.
- Engage in code reviews and training of team members.
- Support continuous deployment pipeline code.
- Situationally troubleshoot production issues alongside the support team.
- Continually research and recommend product improvements.
- Create and integrate features for our enterprise software solution using the latest Python technologies.
- Assist with and adhere to project deadlines and schedules.
- Evaluate, recommend, and propose solutions to existing systems.
- Actively communicate with team members to clarify requirements and overcome obstacles to meet team goals.
- Leverage open-source and other technologies and languages outside of the Python platform.
- Develop cutting-edge solutions to maximize the performance, scalability, and distributed processing capabilities of the system.
- Provide troubleshooting and root cause analysis for production issues escalated to the engineering team.
- Work with development teams in an agile context as it relates to software development, including Kanban, automated unit testing, test fixtures, and pair programming.

Requirements:
- 4-8 or more years of experience as a Python developer on enterprise projects using Python, Flask, FastAPI, Django, PyTest, Celery, and other Python frameworks.
- Software development experience including object-oriented programming, concurrency programming, modern design patterns, RESTful service implementation, microservice architecture, test-driven development, and acceptance testing.
- Familiarity with tools used to automate the deployment of an enterprise software solution to the cloud: Terraform, GitHub Actions, Concourse, Ansible, etc.
- Proficiency with Git as a version control system.
- Experience with Docker and Kubernetes.
- Experience with relational SQL and NoSQL databases, including MongoDB and MSSQL.
- Experience with object-oriented languages: Python, Java, Scala, C#, etc.
- Experience with testing tools such as PyTest, WireMock, xUnit, and mocking frameworks.
- Experience with GCP technologies such as BigQuery, GKE, GCS, Dataflow, Kubeflow, and/or Vertex AI.
- Excellent problem-solving and communication skills.
- Experience with Java and Spring is a big plus.

For individuals with disabilities that need additional assistance at any point, please email .
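In microservice work of the sort described above, transient failures between services are routinely absorbed with retries and exponential backoff. A self-contained sketch of the pattern (delays kept tiny for illustration; production values and the exceptions to catch are a design decision):

```python
import time
from functools import wraps

def retry(attempts: int = 3, base_delay: float = 0.1):
    """Retry a flaky call with exponential backoff: delays of
    base_delay, 2*base_delay, 4*base_delay, ... between attempts."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            for i in range(attempts):
                try:
                    return fn(*args, **kwargs)
                except Exception:
                    if i == attempts - 1:
                        raise  # out of attempts: surface the failure
                    time.sleep(base_delay * 2 ** i)
        return wrapper
    return decorator
```

In practice the bare `except Exception` would be narrowed to retryable errors (timeouts, 5xx responses), often with jitter added to the delay.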
Posted 2 weeks ago
10.0 - 16.0 years
12 - 18 Lacs
Pune
Work from Office
We are looking for a Cloud Technology Manager to manage the cloud platform for our strategic Identity & Access Management (IAM) GCP-based application. The role is to set up, stabilize, and manage the platform activities for this GCP application; it also includes further activities that enable management of the platform: vendor engagement, migration from on-premises to cloud, production incident management, audit coordination, and compliance requirement enablement.

The team Access Lifecycle Solutions, within the Identity & Access Management (IAM) area, is responsible for providing centralized IAM solutions. These provide permissions and roles to application users, and recertification of those in a standardized and compliant process. The assignment and revocation of roles can be either automated or manual. There is currently a multi-year cloud migration program transferring five legacy IAM applications from on-premises to a GCP-hosted ForgeRock product, namely Accessio. This application has been live since 2023, providing certain request, approval, provisioning, recertification, and further IAM services. In the coming years, more IAM services of various asset scopes will be added through the migration program. This role demands a strategic leader with strong technical skills in GCP, experience in cloud security, and expertise in managing enterprise IAM applications on GCP while ensuring seamless cloud operations.

Your key responsibilities

GCP platform management, which includes:
- Oversee GCP cloud environments, ensuring optimal performance, scalability, and security.
- Manage GCP-based applications, projects, networking, storage, and compute resources.
- Collaborate with DevOps teams to implement CI/CD pipelines using Cloud Build, Artifact Registry, and Git repositories.
- Oversee GCP cloud repositories, ensuring proper version control and release management.
- Manage Artifact Registry, Cloud Source Repositories, and Terraform automation.
- Conduct security assessments, vulnerability scans, and penetration testing.
- Set up automated patching and upgrade processes for GCP components to meet compliance requirements.
- Define and implement disaster recovery (DR) and backup strategies for applications hosted on GCP.

Manage recertification services as a subset of IAM, which includes:
- Vendor management: review Run-the-Bank contracts for both the operations and engineering teams; suggest changes to adapt to new business and compliance requirements; drive negotiations with the vendor and procurement team to final agreement; perform vendor risk management and vendor assessment to ensure compliance around vendor management; review vendor performance against the agreed SLA/KPIs and further agreements in the contract, and release invoices accordingly.
- Migration lead for legacy on-premises recertification applications to the cloud: responsible for the technical part of the migration (non-functional requirements, interfaces, design); support the migration team in unblocking technical issues by engaging with various stakeholders; keep the migration plan and reporting transparent to the team and management.

Your skills and experience

Skills:
- Completed degree in IT or a comparable qualification.
- GCP Professional Cloud Engineer certificate, GCP Security Engineer certificate, or equivalent preferred.
- Excellent communication skills (fluent English; German is a plus)
- Very strong analytical and problem-solving skills
- Ability to work under pressure, reliability, flexibility, and leadership
- High degree of customer orientation

Experience:
- Experience in cloud technology, with a focus on Google Cloud Platform (GCP).
- Strong expertise in GCP infrastructure components (compute, storage, networking, IAM, security, and Kubernetes).
- Hands-on experience with GCP IAM, cloud security, and compliance frameworks.
- Expertise in SDLC, DevOps, CI/CD pipelines, and application release management within GCP.
- Experience with IAM solutions such as ForgeRock and SailPoint preferable
- Experience in application vulnerability management and security best practices.
- Knowledge of disaster recovery planning and implementation in GCP.
- Proficiency in Terraform, Kubernetes (GKE), Cloud Functions, and serverless architectures.
- Experience in production services and managing the technology of larger, complex IT systems
- Experience in managing vendor teams, including contract negotiation
- Knowledge of access lifecycle systems (with a focus on request & approval, provisioning, recertification, admissions/exits) is desired.
- DevOps knowledge of mainframe access, Active Directory access, and cloud solutions is a plus
- Minimum of 8 years of experience
Posted 2 weeks ago
3.0 - 8.0 years
11 - 16 Lacs
Pune
Work from Office
Job Title: Lead Engineer
Location: Pune
Corporate Title: Director

As a lead engineer within the Transaction Monitoring department, you will lead and drive forward critical engineering initiatives and improvements to our application landscape while supporting and leading the engineering teams to excel in their roles. You will be closely aligned with the architecture function and delivery leads, ensuring alignment with planning and that correct design and architecture governance is followed for all implementation work. You will lead by example, and drive and contribute to automation and innovation initiatives with the engineering teams. Join the fight against financial crime with us!

Your key responsibilities:
- Experienced hands-on cloud and on-premises engineer, leading by example with engineering squads
- Think analytically, with a systematic and logical approach to solving complex problems, and high attention to detail
- Design and document complex technical solutions at varying levels, in an inclusive and participatory manner, with a range of stakeholders
- Liaise and interface directly with stakeholders in technology, business, and modelling areas
- Collaborate with application development teams to design and prototype solutions (both on-premises and on-cloud), supporting and presenting these via the Design Authority forum for approval, and providing good practice and guidelines to the teams
- Ensure engineering and architecture compliance with bank-standard processes for deploying new applications, working directly with central functions such as Group Architecture, Chief Security Office, and Data Governance
- Innovate and think creatively, showing a willingness to apply new approaches to solving problems and to learn new methods, technologies, and potentially outside-the-box solutions

Your skills and experience:
- Proven hands-on engineering and design experience in a delivery-focused (preferably agile) environment
- Solid technical/engineering background, preferably with at least two high-level languages and multiple relational databases or big-data technologies
- Proven experience with cloud technologies, preferably GCP (GKE / Dataproc / Cloud SQL / BigQuery), GitHub, and Terraform
- Competence/expertise in technical skills across a wide range of technology platforms, and the ability to use and learn new frameworks, libraries, and technologies
- A deep understanding of the software development life cycle and the waterfall and agile methodologies
- Experience leading complex engineering initiatives and engineering teams
- Excellent communication skills, with a demonstrable ability to interface and converse at both junior and senior levels and with non-IT staff
- Line management experience, including working in a matrix management configuration

How we'll support you:
- Training and development to help you excel in your career
- Flexible working to help you balance your personal priorities
- Coaching and support from experts on your team
- A culture of continuous learning to aid progression
- A range of flexible benefits that you can tailor to suit your needs
Posted 2 weeks ago
5.0 - 10.0 years
15 - 22 Lacs
Thane, Navi Mumbai, Mumbai (All Areas)
Work from Office
5+ years of relevant experience. Experience with Integrated Supply Chain processes. Experience in SAP EHS Product Safety, such as Dangerous Goods, Global Label Management (GLM), or SDS. Proficiency in WWI, the tool for EHS document creation. Knowledge of product labeling and barcoding.
Posted 2 weeks ago
6.0 - 10.0 years
12 - 16 Lacs
Pune
Work from Office
We are on the lookout for a hands-on DevOps/SRE expert who thrives in a dynamic, cloud-native environment! Join a high-impact project where your infrastructure and reliability skills will shine.

Key Responsibilities:
- Design and implement resilient deployment strategies (blue-green, canary, GitOps)
- Manage observability tools: logs, metrics, traces, and alerts
- Tune backend services and GKE workloads (Node.js, Django, Go, Java)
- Build and manage Terraform infrastructure (VPC, Cloud SQL, Pub/Sub, Secrets)
- Lead incident responses and perform root cause analyses
- Standardize secrets, tagging, and infrastructure consistency across environments
- Enhance CI/CD pipelines and collaborate on better rollout strategies

Must-Have Skills:
- 5-10 years in DevOps/SRE/infrastructure roles
- Kubernetes (GKE preferred)
- IaC with Terraform and Helm
- CI/CD: GitHub Actions + GitOps (Argo CD/Flux)
- Cloud architecture expertise (IAM, VPC, Secrets)
- Strong scripting/coding and backend debugging skills (Node.js, Django, etc.)
- Incident management with tools like Datadog and PagerDuty
- Excellent communicator and documenter

Tech Stack: GKE, Kubernetes, Terraform, Helm, GitHub Actions, Argo CD/Flux, Datadog, PagerDuty, Cloud SQL, Cloudflare, IAM, Secrets

You're a proactive team player and strong individual contributor; confident yet humble; curious, driven, and always learning; not afraid to solve deep infrastructure challenges. (ref:hirist.tech)
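A canary rollout like the one called for above ultimately reduces to a promote-or-rollback decision against the stable baseline's error rate. A deliberately simple sketch (the tolerance and inputs are illustrative, not a production SLO policy; real systems also check latency and sample size):

```python
def canary_verdict(canary_errors: int, canary_total: int,
                   baseline_errors: int, baseline_total: int,
                   tolerance: float = 0.01) -> str:
    """Promote the canary only if its error rate stays within `tolerance`
    of the stable baseline's rate; otherwise roll back."""
    canary_rate = canary_errors / canary_total
    baseline_rate = baseline_errors / baseline_total
    return "promote" if canary_rate <= baseline_rate + tolerance else "rollback"
```

Progressive-delivery tools (e.g., Argo Rollouts) automate exactly this kind of metric comparison between the canary and stable revisions.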
Posted 2 weeks ago
6.0 - 11.0 years
27 - 32 Lacs
Pune
Work from Office
Job Title: Service Operations Specialist AVP Location: Pune, India Role Description Private Bank Germany Service Operations - provides 2nd Level Application Support for business applications used in branches, by mobile sales or via internet. The department is responsible for the stability of the applications. Incident Management and Problem Management are the main processes that account for the required stability. In-depth application knowledge and understanding of the business processes that the applications support are our main assets. Best in class leave policy Gender neutral parental leaves 100% reimbursement under childcare assistance benefit (gender neutral) Flexible working arrangements Sponsorship for Industry relevant certifications and education Employee Assistance Program for you and your family members Comprehensive Hospitalization Insurance for you and your dependents Accident and Term life Insurance Complementary Health screening for 35 yrs. and above Your key responsibilities Experience: 10+ years Monitor production systems for performance, availability, and anomalies. Collaborate with development teams for bug fixes and enhancements. Provide application support by handling and consulting on BAU, Incidents/emails/alerts for the respective applications. Act as an escalation point for user issues and requests and from Level 1/L2 support. Report issues to management. Manage and mentor regional L2 team to ensure the team is up to speed and picks up the support duties. Gain detailed knowledge of all business flows, the application architecture, and the hardware configuration for supported applications. Define, document, and maintain procedures, SLAs, and knowledge base to support the platforms to ensure consistent service levels are achieved across the global support team. Build and maintain effective and productive relationships with the stakeholders in business, development, infrastructure, and third-party systems / data providers. 
Manage incidents through to resolution, keeping all stakeholders abreast of the situation and working to minimize impact wherever possible. Conduct post-mortems of incidents and drive the resulting feedback into Incident, Problem, and Change management programs.
Facilitate coordination across L1/L2 and L3/Engineering teams to investigate and resolve ongoing infrastructure, platform, or application issues impacting multiple business lines.
Drive the development and implementation of the tools and best practices needed to provide effective support.
Collaborate on and deliver initiatives that drive stability in the environment.
Assist in the process of approving all new releases and production configuration changes; ensure development includes all necessary documentation for each change, and conduct post-release testing where required.
Review all open production items with the development team and push for updates and resolutions to outstanding tasks and recurring issues.
Regularly review and analyze the service requests and issues that are raised; seek to improve the process and remove recurring tasks where possible.
Review the existing monitoring for the platform and make improvements where possible.
The candidate will work in shifts as part of a rota covering EMEA hours; in the event of major outages or issues, we may ask for flexibility to help provide appropriate cover.

Your skills and experience
Business and technical competency: hands-on experience in the banking domain and technology. Credit card business and operations knowledge is a must.

Technologies:
Hands-on experience with a log analyser such as Splunk (primarily).
Knowledge of container platforms such as Kubernetes, OpenShift, or GKE.
Knowledge of an observability tool such as New Relic.
Hands-on experience with job scheduling tools, SQL, Oracle DB, etc.
Strong understanding of SOAP and REST API technologies.
Knowledge of IBM MQ and SFTP is an added advantage.
Basic understanding of Helm and GitHub.

Incident and Operations Management:
Strong knowledge of incident management processes and the relevant ITIL concepts.
Strong skills in application monitoring, performance troubleshooting, and root cause analysis.

Soft Skills:
Excellent problem-solving abilities in high-pressure scenarios.
Strong communication skills to work effectively with stakeholders and cross-functional teams.
Ability to prioritize tasks and manage time effectively in a fast-paced environment.
English language skills mandatory; German (CEFR A1 level) highly desirable.

Education
Bachelor's degree from an accredited college or university with a concentration in IT or a Computer Science-related discipline (or an equivalent diploma or technical faculty)
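The responsibilities above center on monitoring, alert handling, and incident triage. As a purely illustrative sketch (the log format, priority labels, and thresholds are all hypothetical, not this employer's actual process), this is the kind of small Python helper an L2 support engineer might script to classify log lines before raising incidents:

```python
import re

# Hypothetical log format: "<timestamp> <LEVEL> <app> <message>"
LOG_PATTERN = re.compile(
    r"^(?P<ts>\S+)\s+(?P<level>INFO|WARN|ERROR)\s+(?P<app>\S+)\s+(?P<msg>.+)$"
)

def triage(line: str) -> str:
    """Map a raw application log line to an illustrative incident priority."""
    m = LOG_PATTERN.match(line)
    if not m:
        return "unparsed"          # malformed line: flag for manual review
    level = m.group("level")
    if level == "ERROR":
        return "P2"                # raise an incident for L2 follow-up
    if level == "WARN":
        return "P4"                # track the warning, no immediate escalation
    return "ok"                    # informational, no action needed

print(triage("2024-05-01T10:00:00 ERROR payment-service timeout after 30s"))
```

In practice such classification would live in the monitoring stack (e.g. a Splunk alert query or New Relic condition) rather than a standalone script; the snippet only illustrates the triage logic itself.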
Posted 2 weeks ago
10.0 - 15.0 years
20 - 25 Lacs
Pune
Work from Office
Role Overview: As a Senior Principal Software Engineer, you will be a key technical leader responsible for shaping the design and development of scalable, reliable, and innovative AI/GenAI solutions. You will lead high-priority projects, set technical direction for teams, and ensure alignment with organizational goals. This role demands a high degree of technical expertise, strategic thinking, and the ability to collaborate effectively across diverse teams while mentoring and elevating others to meet a very high technical bar.

Key Responsibilities:
Strategic Technical Leadership: Define and drive the technical vision and roadmap for AI/GenAI systems, aligning with company objectives and future growth. Provide architectural leadership for complex, large-scale AI systems, ensuring scalability, performance, and maintainability. Act as a thought leader in AI technologies, influencing cross-functional technical decisions and long-term strategies.
Advanced AI Product Development: Lead the development of state-of-the-art generative AI solutions, leveraging advanced techniques such as transformer models, diffusion models, and multi-modal architectures. Drive innovation by exploring and integrating emerging AI technologies and best practices.
Mentorship & Team Growth: Mentor senior and junior engineers, fostering a culture of continuous learning and technical excellence. Elevate the team's capabilities through coaching, training, and guidance on best practices and complex problem-solving.
End-to-End Ownership: Take full ownership of high-impact projects, from ideation and design to implementation, deployment, and monitoring in production. Ensure the successful delivery of projects with a focus on quality, timelines, and alignment with organizational goals.
Collaboration & Influence: Collaborate with cross-functional teams, including product managers, data scientists, and engineering leadership, to deliver cohesive and impactful solutions.
Act as a trusted advisor to stakeholders, clearly articulating technical decisions and their business impact.
Operational Excellence: Champion best practices for software development, CI/CD, and DevOps, ensuring robust and reliable systems. Monitor and improve the health of deployed services, conducting root cause analyses and driving preventive measures for long-term reliability.
Innovation & Continuous Improvement: Advocate for and lead the adoption of new tools, frameworks, and methodologies to enhance team productivity and product capabilities. Stay at the forefront of AI/GenAI research, driving thought leadership and contributing to the AI community through publications or speaking engagements.

Minimum Qualifications:
Educational Background: Bachelor's or Master's degree in Computer Science, Engineering, or a related technical field; a Ph.D. is preferred but not required.
Experience: 10+ years of professional software development experience, including 5+ years in AI/ML or GenAI. Proven track record of designing and deploying scalable, production-grade AI solutions. Deep expertise in Python and frameworks such as TensorFlow, PyTorch, FastAPI, and LangChain. Advanced knowledge of AI/ML algorithms, generative models, and LLMs. Proficiency with cloud platforms (e.g., GCP, AWS, Azure) and modern DevOps practices. Strong understanding of distributed systems, microservices architecture, and database systems (SQL/NoSQL).
Leadership Skills: Demonstrated ability to lead complex technical initiatives, influence cross-functional teams, and mentor engineers at all levels.
Problem-Solving Skills: Exceptional analytical and problem-solving skills, with a proven ability to navigate ambiguity and deliver impactful solutions.
Collaboration: Excellent communication and interpersonal skills, with the ability to engage and inspire both technical and non-technical stakeholders.
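The qualifications above reference generative models and LLMs. As a purely illustrative sketch of one fundamental mechanism in that space (the logit values are made up, and real systems would use a framework like PyTorch rather than hand-rolled math), this shows temperature-scaled softmax sampling, the standard way a generative model turns raw logits into a token choice:

```python
import math
import random

def softmax(logits, temperature=1.0):
    """Convert logits to probabilities; lower temperature sharpens the distribution."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]   # subtract max for numerical stability
    total = sum(exps)
    return [e / total for e in exps]

def sample(logits, temperature=1.0):
    """Draw one token index from the temperature-scaled distribution."""
    probs = softmax(logits, temperature)
    r, acc = random.random(), 0.0
    for i, p in enumerate(probs):
        acc += p
        if r < acc:
            return i
    return len(probs) - 1

# Toy example: three candidate tokens with hypothetical logits.
probs = softmax([2.0, 1.0, 0.1], temperature=0.5)
print([round(p, 3) for p in probs])
```

Lowering the temperature below 1.0 concentrates probability mass on the highest-logit token (more deterministic output), while raising it flattens the distribution (more diverse output), which is why temperature is a standard knob on LLM APIs.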
Preferred Qualifications:
AI/ML Expertise: Experience with multi-modal models, reinforcement learning, and responsible AI principles.
Cloud & Infrastructure: Advanced knowledge of GCP technologies such as VertexAI, BigQuery, GKE, and DataFlow.
Thought Leadership: Contributions to the AI/ML community through publications, open-source projects, or speaking engagements.
Agile Experience: Familiarity with agile methodologies and working in a DevOps model.
Disability Accommodation: UKGCareers@ukg.com.
Posted 2 weeks ago