0.0 - 5.0 years
3 - 4 Lacs
Jaipur
Work from Office
Job Description
Position: Customer Retention Officer
No. of Positions: 1
Department: Customer Retention Team
Function: Operations
Reporting to: Manager - Collections
Band: 5
Location: Jaipur
Last date of submission:

Key Responsibilities
Collections:
- Sole responsibility for renewal collection and persistency targets on the allocated base, through customer calling or physical visits to customer residences with prior appointment
- Maintain and share daily customer trackers and their conversion, including reviews with Manager - Collections
- Convince customers to continue their insurance and collect the renewal premium, offering retention tools such as mode change, partial withdrawal, PUA surrender, loan, etc.
- Deposit cheques at the nearest MLI branch/bank branch within 24 hours of collection
- Handle customer escalations effectively, with 100% resolution of customer queries and concerns, including tracking of service requests raised; provide clarity to customers on the special revival scheme, change in term, and short premium bucket to resolve financial constraints and retain customers
- Follow up on critical documents such as the Health Declaration Form, tele-underwriting, home-visit medicals, ECS mandate, and complaint letters on silent grief; obtain updated contact details through branch operations

Measure of Success
- 100% completion of assigned activities by the 7th of every month
- Completing daily face-to-face meetings as per the goal sheet
- Driving the top 30 high-value cases with 100% meetings
- Providing 100% resolution to customer queries
- 100% tracking and timely closure of service requests logged for any of the allocated customers
- Maximizing the positive portfolio (Promise to Pay) by giving customers the right advice to continue their insurance and offering retention tools such as mode change, partial withdrawal, PUA surrender, and loan
- Driving the special revival scheme and change in term to resolve financial constraints
- Ensuring 100% meetings on silent grief cases; collecting written grievances from customers and getting them logged in the system
- Percentage of customers retained from the allocated base of those surrendering policies, by explaining insurance benefits, ULIP retention tools, switching, long-term growth, and comparisons with other financial products
- Conversion of the non-ECS base to E-NACH/ECS by explaining the benefits of continuing the policy on E-NACH/ECS for hassle-free payment
- Ensuring collection and registration, along with necessary documents, of auto-pay mandates from customers not on auto pay

Desired Qualification and Experience
Graduate in any stream with prior experience in an insurance or customer service profile. Prior experience in a renewal/retention team is an added advantage.

Knowledge and Skills Required
Ability to handle pressure and work in a competitive environment; openness to learning; very high process orientation; effective communication skills. Should have a personal vehicle and smartphone.
Posted 2 weeks ago
4.0 - 8.0 years
6 - 10 Lacs
Noida
Work from Office
Job Summary: We are seeking an experienced and results-driven Senior Java Developer to join our team in Noida. The ideal candidate should have strong hands-on experience with Java, the Spring Framework, JPA, Hibernate, Kubernetes, AWS, and microservices architecture. You will play a critical role in the design, development, and deployment of scalable, high-performance applications.

Key Responsibilities:
- Design, develop, and implement robust and scalable Java-based applications.
- Develop microservices using Spring Boot.
- Apply hands-on experience with Docker-based containerization and Kubernetes for application deployment.
- Work with JPA and Hibernate for effective data persistence and database operations.
- Deploy and manage services on AWS cloud infrastructure.
- Collaborate with architects, DevOps engineers, QA, and other developers to deliver enterprise-grade solutions.
- Optimize application performance and ensure responsiveness to front-end requests.
- Ensure code quality and maintainability through code reviews and best practices.
- Participate in the full software development life cycle: requirement analysis, design, development, testing, and deployment.

Required Skills & Qualifications:
- 4 to 8 years of strong Java development experience.
- Hands-on experience with Spring Boot, Spring Core, and other Spring modules.
- Strong knowledge of the JPA and Hibernate ORM frameworks.
- Experience with Kubernetes for container orchestration and microservices management.
- Working knowledge of AWS services (EC2, S3, RDS, ECS, etc.).
- Strong understanding of RESTful APIs and microservices architecture.
- Familiarity with CI/CD tools and version control systems (e.g., Git, Jenkins).
- Solid problem-solving skills and a strong sense of ownership.
- Bachelor's or Master's degree in Computer Science, Engineering, or a related discipline.

Preferred Skills:
- Experience with Docker and containerization.
- Exposure to monitoring tools like Prometheus, Grafana, etc.
- Knowledge of Agile/Scrum methodologies.
Posted 2 weeks ago
0.0 - 2.0 years
3 - 4 Lacs
Thane
Work from Office
The Kiran Academy Java By Kiran is looking for a Trainee Software Engineer (Python) to join our dynamic team and embark on a rewarding career journey. A Trainee Software Engineer will be expected to learn the basic principles of software development, including coding standards, software testing, debugging, and documentation.
1. Developing and testing software: A Trainee Software Engineer may be involved in the development of software products, applications, and features. This includes coding, testing, and debugging software.
2. Learning new technologies and tools: Software development is a constantly evolving field, and a Trainee Software Engineer will need to stay up to date with the latest technologies and tools in order to be effective.
3. Troubleshooting and resolving technical issues: A Trainee Software Engineer may be responsible for identifying and resolving technical issues that arise during the software development process.
Posted 2 weeks ago
0.0 - 2.0 years
3 - 4 Lacs
Thane
Work from Office
The Kiran Academy Java By Kiran is looking for a Trainee Software Engineer (Java) to join our dynamic team and embark on a rewarding career journey. A Trainee Software Engineer will be expected to learn the basic principles of software development, including coding standards, software testing, debugging, and documentation.
1. Developing and testing software: A Trainee Software Engineer may be involved in the development of software products, applications, and features. This includes coding, testing, and debugging software.
2. Learning new technologies and tools: Software development is a constantly evolving field, and a Trainee Software Engineer will need to stay up to date with the latest technologies and tools in order to be effective.
3. Troubleshooting and resolving technical issues: A Trainee Software Engineer may be responsible for identifying and resolving technical issues that arise during the software development process.
Posted 2 weeks ago
3.0 - 7.0 years
10 - 14 Lacs
Bengaluru
Work from Office
Why this job matters
The Cloud Engineering Professional supports the implementation of core infrastructure and security services in Amazon Web Services: delivering highly available and scalable services, producing automation solutions for time-intensive processes, and upholding security and compliance objectives in all aspects of the secure cloud ecosystem.

What you'll be doing
- Cloud Infrastructure Implementation: Implement the patterns established by the Enterprise Architecture Governance team and engage with them for better collaboration.
- Documentation: Create detailed architecture diagrams and documentation to communicate designs and implementation strategies.
- Cloud Deployment: Implement and manage cloud infrastructure using IaC tools.
- Cloud Management: Automate the deployment, scaling, and management of cloud resources to ensure consistency and repeatability.
- Cloud Migration: Plan and execute the migration of applications, data, and services from on-premises to the cloud with minimal downtime, ensuring a seamless transition to the cloud environment.
- Security and Compliance: Implement and enforce cloud security best practices.
- Performance Monitoring and Optimization: Set up and configure monitoring tools to track the performance, availability, and health of cloud resources; analyse performance metrics and logs to identify and resolve performance bottlenecks and optimize resource utilization.
- Cost Management: Monitor cloud spending and implement cost-saving strategies, such as resource rightsizing and reserved instances.
- Governance and Policies: Develop, enforce, and review governance policies for cloud usage, security, and compliance.

Skills Required
- Cloud Services: AWS (ECS, EKS, Lambda, CloudFront, Route 53, ALB, REST API Gateway, Aurora Serverless, ECR, and Transit Gateway)
- Containerization: Docker
- Orchestration: Kubernetes
- Observability: Prometheus/Grafana/ELK Stack (Elasticsearch, Logstash, Kibana)
- Protocols: SAML, OAuth, OpenID Connect
- Serverless Architecture: AWS Lambda
- Scripting: Python, Shell
- Tools: Proficiency with CI/CD tools like Jenkins, GitLab CI/CD, Helm charts
- Web servers: Apache, Nginx
Posted 2 weeks ago
10.0 - 15.0 years
25 - 40 Lacs
Bengaluru
Work from Office
Description: We are seeking an experienced DevOps Architect to lead the design, implementation, and management of scalable DevOps solutions and cloud infrastructure. The ideal candidate will have a strong background in CI/CD pipeline development, infrastructure automation, container orchestration, and AWS cloud services, with hands-on expertise in tools such as Jenkins, Ansible, Terraform, Kubernetes, and AWS EKS/ECS.

Requirements:
- AWS (VPC/ECS/EC2/CloudFormation/RDS)
- Artifactory
- Some knowledge of CircleCI/SaltStack is preferred but not required.

Responsibilities:
- Manage containerized applications using Kubernetes, Docker, etc.
- Automate builds/deployments (CI/CD) and other repetitive tasks using shell/Python scripts or tools like Ansible, Jenkins, etc.
- Coordinate with development teams to fix issues and release new code.
- Set up configuration management using tools like Ansible.
- Implement highly available, auto-scaling, fault-tolerant, secure setups.
- Implement automated jobs/tasks such as backups, cleanup, start-stop, and reports.
- Configure monitoring alerts/alarms and act on any outages/incidents.
- Ensure that the infrastructure is secured and can be accessed only from limited IPs and ports.
- Understand client requirements, propose solutions, and ensure delivery.
- Innovate and actively look for improvements in the overall infrastructure.

Must Have:
- Bachelor's degree, with at least 7+ years of experience in DevOps.
- Should have worked on various DevOps tools: GitLab, Jenkins, SonarQube, Nexus, Ansible, etc.
- Should have worked on various AWS services: EC2, S3, RDS, CloudFront, CloudWatch, CloudTrail, Route53, ECS, ASG, etc.
- Well-versed in shell/Python scripting and Linux.
- Well-versed in web servers (Apache, Tomcat, etc.).
- Well-versed in containerized applications (Docker, Docker Compose, Docker Swarm, Kubernetes).
- Has worked on configuration management tools like Puppet, Ansible, etc.
- Has experience in CI/CD implementation (Jenkins, Bamboo, etc.).
- Self-starter with the ability to deliver under tight timelines.

Good to have:
- Exposure to tools like New Relic, ELK, Jira, Confluence, etc.
- Prior experience in managing infrastructure for public-facing web applications.
- Prior experience in handling client communications.
- Basic networking knowledge: VLAN, subnet, VPC, etc.
- Knowledge of databases (PostgreSQL).

Key Skills (must have): Jenkins, Docker, Python, Groovy, shell scripting, Artifactory, GitLab, Terraform, VMware, PostgreSQL, AWS, Kafka

What We Offer:
- Exciting Projects: We focus on industries like high-tech, communication, media, healthcare, retail, and telecom. Our customer list is full of fantastic global brands and leaders who love what we build for them.
- Collaborative Environment: You can expand your skills by collaborating with a diverse team of highly talented people in an open, laid-back environment, or even abroad in one of our global centers or client facilities!
- Work-Life Balance: GlobalLogic prioritizes work-life balance, which is why we offer flexible work schedules, opportunities to work from home, and paid time off and holidays.
- Professional Development: Our dedicated Learning & Development team regularly organizes communication skills training (GL Vantage, Toastmasters), stress management programs, professional certifications, and technical and soft-skill trainings.
- Excellent Benefits: We provide our employees with competitive salaries, family medical insurance, Group Term Life Insurance, Group Personal Accident Insurance, NPS (National Pension Scheme), periodic health awareness programs, extended maternity leave, annual performance bonuses, and referral bonuses.
- Fun Perks: We want you to love where you work, which is why we host sports events and cultural activities, offer food at subsidized rates, and throw corporate parties. Our vibrant offices also include dedicated GL Zones, rooftop decks, and the GL Club, where you can drink coffee or tea with your colleagues over a game of table tennis, and we offer discounts for popular stores and restaurants!
Posted 2 weeks ago
5.0 - 10.0 years
10 - 20 Lacs
Hyderabad
Work from Office
URGENT HIRING (WORK FROM OFFICE, US SHIFT TIMING). Bachelor's degree in Computer Science, Information Technology, or a related field. At least 5 years of hands-on experience in cloud engineering, with a focus on AWS. Expertise in cloud architecture and design.
Posted 2 weeks ago
5.0 - 10.0 years
30 - 37 Lacs
Pune, Ahmedabad
Work from Office
Join our vibrant team at Zymr as a Senior DevOps CI/CD Engineer and become a driving force behind the exciting world of continuous integration and deployment automation. We're a dynamic group dedicated to building a high-quality product while maintaining exceptional speed and efficiency. This is a fantastic opportunity to be part of our rapidly growing team.

Job Title: Sr. DevOps Engineer
Shift Time: 6 pm to 2 am
Location: Ahmedabad/Pune
Experience: 5+ years
Educational Qualification: UG: BS/MS in Computer Science or another engineering/technical degree

Responsibilities:
Deployments to development, staging, and production. Take charge of managing deployments to each environment with ease:
- Skillfully utilize GitHub protocols to identify and resolve root causes of merge conflicts and version mismatches.
- Deploy hotfixes promptly by leveraging deployment automation and scripts.
- Provide guidance and approval for Ruby on Rails (Ruby) scripting performed by junior engineers, ensuring smooth code deployment across various development environments.
- Review and approve CI/CD scripting pull requests from engineers, offering valuable feedback to enhance code quality.

Ensure the smooth operation of each environment on a daily basis, promptly addressing any issues that arise:
- Leverage Datadog monitoring to maintain a remarkable uptime of 99.999% for each development environment.
- Develop strategic plans for Bash and Ruby scripting to automate health checks and enable auto-healing mechanisms in the event of errors.
- Implement effective auto-scaling strategies to handle higher-than-usual traffic on these development environments.
- Evaluate historical loads and implement autoscaling mechanisms to provide additional resources and computing power, optimizing workload performance.
- Collaborate with DevOps to plan capacity and monitoring using Datadog.
- Analyze developer workflows in close collaboration with team leads and attend squad standup meetings, providing valuable suggestions for improvement.
- Harness the power of Ruby and Bash to create tools that enhance engineers' development workflow.
- Script infrastructure using Terraform to facilitate infrastructure creation.
- Leverage CI/CD to add security scanning to code pipelines.
- Develop Bash and Ruby scripts to automate code deployment while incorporating robust security checks for vulnerabilities.
- Enhance our CI/CD pipeline by building canary stages with CircleCI, GitHub Actions, YAML, and Bash scripting.
- Integrate stress-testing mechanisms using Ruby on Rails, Python, and Bash scripting into the pipeline's stages.
- Look for ways to reduce engineering toil and replace manual processes with automation!

Nice to have: Terraform is required; GitHub and AWS tooling (however, the pipeline is outside of AWS); Rails (other scripting languages okay).
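The health-check and auto-healing duty described above is, at its core, a small control loop: probe the service, restart it on failure, give up after a bounded number of attempts. A minimal illustrative sketch follows (the listing calls for Bash/Ruby; Python is used here only for brevity, and `check_health`/`restart` are hypothetical stand-ins for real probes and restart commands):

```python
import time

def check_and_heal(check_health, restart, max_retries=3, backoff_seconds=0):
    """Poll a service's health check and attempt restarts on failure.

    check_health: callable returning True when the service is healthy.
    restart: callable that attempts to restart the service.
    Returns True if the service is (or becomes) healthy, False once all
    restart attempts are exhausted.
    """
    for attempt in range(max_retries + 1):
        if check_health():
            return True
        if attempt < max_retries:
            restart()  # auto-heal: try to bring the service back
            if backoff_seconds:
                time.sleep(backoff_seconds)  # wait before re-probing
    return False
```

In practice the probe would be an HTTP health endpoint and the restart a systemd or container-orchestrator action; the loop structure is the same.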
Posted 2 weeks ago
6.0 - 8.0 years
6 - 10 Lacs
Bengaluru
Work from Office
Strong BASIS working experience with products including SAP S/4, Solman, BTP, Azure, and SAP web applications. Expertise in supporting S/4 applications and resolving critical issues related to memory management, performance tuning, etc. Willing to work on SAP Basis support and projects. Able to handle P1s and P2s on the BASIS end. Should have experience in database administration (HANA and Sybase). Should have knowledge of HA/DR setup. Update/upgrade, migration, and conversion concepts for NetWeaver and non-NetWeaver systems. Understanding of and working experience in SLA/KPI tracking. Working experience in a global team with 24/7 support. De-escalation experience on IT incidents and requests. Management of multiple activities requiring cross-team coordination. Prior working experience in ECS is an added advantage. Keywords: Implementation, SAP RISE, DBA, Basis, HANA, SAP, Migration.
Posted 2 weeks ago
6.0 - 10.0 years
6 - 7 Lacs
Mumbai, New Delhi, Bengaluru
Work from Office
Experience: 6+ years
Expected Notice Period: 15 days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote, New Delhi, Bengaluru, Mumbai

Technical Lead

What you'll own
- Leading the re-architecture of Zoom's database foundation with a focus on scalability, query performance, and vector-based search support
- Replacing or refactoring our current in-house object store and metadata database with a modern, high-performance elastic solution
- Collaborating closely with our core platform engineers and AI/search teams to ensure seamless integration and zero disruption to existing media workflows
- Designing an extensible system that supports object-style relationships across millions of assets, including LLM-generated digital asset summaries, time-coded video metadata, AI-generated tags, and semantic vectors
- Driving end-to-end implementation: schema design, migration tooling, performance benchmarking, and production rollout, all with aggressive timelines

Skills & Experience We Expect
We're looking for candidates with 7-10 years of hands-on engineering experience, including 3+ years in a technical leadership role. Your experience should span the following core areas:

System Design & Architecture (3-4 yrs)
- Strong hands-on experience with the Java/JVM stack (GC tuning) and Python in production environments
- Led system-level design for scalable, modular AWS microservices architectures
- Designed high-throughput, low-latency media pipelines capable of scaling to billions of media records
- Familiar with multitenant SaaS patterns, service decomposition, and elastic scale-out/in models
- Deep understanding of infrastructure observability, failure handling, and graceful degradation

Database & Metadata Layer Design (3-5 yrs)
- Experience redesigning or implementing object-style metadata stores used in MAM/DAM systems
- Strong grasp of schema-less models for asset relationships, time-coded metadata, and versioned updates
- Practical experience with DynamoDB, Aurora, PostgreSQL, or similar high-scale databases
- Comfortable evaluating trade-offs between memory, query latency, and write throughput

Semantic Search & Vectors (1-3 yrs)
- Implemented vector search using systems like Weaviate, Pinecone, Qdrant, or Faiss
- Able to design hybrid (structured + semantic) search pipelines for similarity and natural-language use cases
- Experience tuning vector indexers for performance, memory footprint, and recall
- Familiar with the basics of embedding-generation pipelines and how they are used for semantic search and similarity-based retrieval
- Worked with MLOps teams to deploy ML inference services (e.g., FastAPI/Docker + GPU-based EC2 or SageMaker endpoints)
- Understands the limitations of recognition models (e.g., OCR, face/object detection, logo recognition), even if not directly building them

Media Asset Workflow (2-4 yrs)
- Deep familiarity with broadcast and OTT formats: MXF, IMF, DNxHD, ProRes, H.264, HEVC
- Understanding of proxy workflows in video post-production
- Experience with the digital asset lifecycle: ingest, AI metadata enrichment, media transformation, S3 cloud archiving
- Hands-on experience with time-coded metadata (e.g., subtitles, AI tags, shot changes) management in media archives

Cloud-Native Architecture (AWS) (3-5 yrs)
- Strong hands-on experience with ECS, Fargate, Lambda, S3, DynamoDB, Aurora, SQS, EventBridge
- Experience building serverless or service-based compute models for elastic scaling
- Familiarity with managing multi-region deployments, failover, and IAM configuration
- Built cloud-native CI/CD deployment pipelines with event-driven microservices and queue-based workflows

Frontend Collaboration & React App Integration (2-3 yrs)
- Worked closely with React-based frontend teams, especially on desktop-style web applications
- Familiar with component-based design systems, REST/GraphQL API integration, and optimizing media-heavy UI workflows
- Able to guide frontend teams on data modeling, caching, and efficient rendering of large asset libraries
- Experience with Electron for desktop apps

Skills: MAM, app integration
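As a rough illustration of the vector-search portion of the role: the semantic half of a hybrid search pipeline reduces to nearest-neighbor lookup over embedding vectors. The sketch below uses brute-force cosine similarity; systems like Faiss, Pinecone, or Qdrant replace this scan with approximate indexes at scale, and the asset IDs and vectors here are invented purely for the example:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def top_k(query, index, k=3):
    """Return the k asset IDs whose embeddings are closest to the query.

    index: dict mapping asset_id -> embedding vector (a toy stand-in for
    a real vector index).
    """
    ranked = sorted(index.items(),
                    key=lambda kv: cosine_similarity(query, kv[1]),
                    reverse=True)
    return [asset_id for asset_id, _ in ranked[:k]]
```

A hybrid pipeline would first filter candidates on structured metadata (format, date, tags) and only then rank the survivors by vector similarity.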
Posted 2 weeks ago
7.0 - 12.0 years
9 - 19 Lacs
Bengaluru, Mumbai (All Areas)
Hybrid
Technical Aspects:
1) Proficiency in designing, developing, testing, and deploying Node.js applications in AWS.
2) Primary skill: Node.js; secondary skill: AWS.
3) Should have at least 5+ years of Node.js experience, including its frameworks (Express.js).
4) Should have at least 3+ years of AWS experience.
5) Good knowledge of microservices architecture.
6) Good knowledge of advanced JavaScript concepts and ES6+ features.
7) Thorough understanding of relational and NoSQL databases.
8) Excellent coding skills in Node.js with JavaScript, with good debugging techniques.
9) Excellent logical and problem-solving skills.
10) Understanding of API architecture styles and their security best practices.
11) Experience in writing unit test cases for backend API components using the Mocha or Jest frameworks.
12) Knowledge of Docker operations such as image creation, pushing images to a repository, containerization, etc.
13) Understanding of the CI/CD pipeline process.
14) Understanding of core AWS services, their uses, and basic AWS architecture best practices.
15) Proficiency in developing, deploying, and debugging cloud-based applications using AWS.
16) Should have knowledge of AWS services like SNS, SQS, CloudTrail, Load Balancers, and Route53.
17) Proficiency in AWS services like ECS, EC2, S3, Lambda, API Gateway, CloudWatch, CloudFormation, DynamoDB, AppConfig, and Secrets Manager.
18) Ability to rapidly learn and take advantage of new concepts, business models, and technologies.
19) Working experience with dev and support tools like Jira, Confluence, and ServiceNow.

Role / Responsibilities / Soft Skills:
1) Working experience in dev/support projects following the Agile methodology.
2) Work experience with US-based clients in client-facing roles is preferred.
3) Excellent communication, analytical, and team-leading abilities.
4) Proactively identify recurring issues and suggest improvements to products, services, or processes to enhance customer satisfaction.
5) Provide prompt and effective technical support to customers via channels such as email or a ticketing system.
6) Collaborate with cross-functional teams, including development, DevOps, business, and QA, to resolve complex technical issues.
7) Document technical design documents, troubleshooting steps, solutions, and best practices for internal and external knowledge sharing.
Posted 2 weeks ago
6.0 - 10.0 years
8 - 13 Lacs
Mumbai, Bengaluru, Delhi / NCR
Work from Office
Expected Notice Period: 15 days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote

What do you need for this opportunity?
Must-have skills: ML, Python

Looking for: We're looking for a hands-on engineering lead to own the delivery of our GenAI-centric product from the backend up to the UI, while integrating visual AI pipelines built by ML engineers. You'll be both a builder and a leader: writing clean Python, Java, and TypeScript, scaling AWS-based systems, mentoring engineers, and making architectural decisions that stand the test of scale. You won't be working in a silo; this is a role for someone who thrives in fast-paced, high-context environments where product, design, and AI are deeply intertwined. (Note: This role requires both technical mastery and leadership skills; we're looking for someone who can write production code, make architectural decisions, and lead a team to success.)

What You'll Do
- Lead development of our Java, Python (FastAPI), and Node.js backend services on AWS
- Deploy ML pipelines (built by the ML team) into containerized inference workflows using FastAPI, Docker, and GPU-enabled ECS on EC2
- Deploy and manage services on AWS ECS/Fargate, Lambda, API Gateway, and GPU-powered EC2
- Contribute to the React/TypeScript frontend when needed to accelerate product delivery
- Work closely with the founder, product, and UX team to translate business needs into working product
- Make architecture and infrastructure decisions, from media processing to task queues to storage
- Own the performance, reliability, and cost-efficiency of our core services
- Hire and mentor junior/mid engineers over time
- Drive technical planning, sprint prioritization, and trade-off decisions

Also expected:
- A customer-centric approach: you think about how your work affects end users and the product experience, not just model performance
- A quest for high-quality deliverables: you write clean, tested code and debug edge cases until they're truly fixed
- The ability to frame problems from scratch and work without strict handoffs: you build from a goal, not a ticket

Skills & Experience We Expect

Core Engineering Experience
- 6-8 years of professional software engineering experience in production environments
- 2-3 years of experience leading engineering teams of 5+ engineers

Cloud Infrastructure & AWS Expertise (5+ years)
- Deep experience with AWS Lambda, ECS, and container orchestration tools
- Familiarity with API Gateway and microservices architecture best practices
- Proficient with S3, DynamoDB, and other AWS-native data services
- CloudWatch, X-Ray, or similar tools for monitoring and debugging distributed systems
- Strong grasp of IAM, roles, and security best practices in cloud environments

Backend Development (5-7 years)
- Java: advanced concurrency, scalability, and microservice design
- Python: experience with FastAPI and building production-grade MLOps pipelines
- Node.js & TypeScript: strong backend engineering and API development
- Deep understanding of RESTful API design and implementation
- Docker: 3+ years of containerization experience for building/deploying services
- 2+ years of hands-on experience deploying ML inference pipelines (built by the ML team) using Docker, FastAPI, and GPU-based AWS infrastructure (e.g., ECS, EC2)

System Optimization & Middleware (3-5 years)
- Application performance optimization and AWS cloud cost optimization
- Use of background job frameworks (e.g., Celery, BullMQ, AWS Step Functions)
- Media/image processing using tools like Sharp, PIL, ImageMagick, or OpenCV
- Database design and optimization for low-latency, high-availability systems

Frontend Development (2-3 years)
- Hands-on experience with React and TypeScript in modern web apps
- Familiarity with Redux, the Context API, and modern state management patterns
- Comfortable with modern build tools, CI/CD, and frontend deployment practices

System Design & Architecture (4-6 years)
- Designing and implementing microservices-based systems
- Experience with event-driven architectures using queues or pub/sub
- Implementing caching strategies (e.g., Redis, CDN edge caching)
- Architecting high-performance image/media pipelines

Leadership & Communication (2-3 years)
- Proven ability to lead engineering teams and drive project delivery
- Skilled at writing clear and concise technical documentation
- Experience mentoring engineers, conducting code reviews, and fostering growth
- Track record of shipping high-impact products in fast-paced environments
- Strong customer-centric and growth-oriented mindset, especially in startup settings; able to take high-level goals and independently drive toward outcomes without requiring constant handoffs or back-and-forth with the founder
- Proactive in using tools like ChatGPT, GitHub Copilot, or similar AI copilots to improve personal and team efficiency, remove blockers, and iterate faster
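The caching strategies mentioned above (Redis, CDN edge caching) share one core mechanism: entries expire after a time-to-live. A minimal in-process sketch of that mechanism, with an injectable clock so the expiry logic is testable (the class name and policy are illustrative, not any specific library's API):

```python
import time

class TTLCache:
    """Tiny in-process TTL cache; Redis or a CDN edge plays this role
    in the architectures described above."""

    def __init__(self, ttl_seconds, clock=time.monotonic):
        self.ttl = ttl_seconds
        self.clock = clock          # injectable for deterministic tests
        self._store = {}            # key -> (value, expiry timestamp)

    def set(self, key, value):
        self._store[key] = (value, self.clock() + self.ttl)

    def get(self, key, default=None):
        entry = self._store.get(key)
        if entry is None:
            return default
        value, expiry = entry
        if self.clock() >= expiry:  # lazily evict expired entries
            del self._store[key]
            return default
        return value
```

Real deployments add eviction under memory pressure (LRU), cache stampede protection, and invalidation hooks; the TTL expiry above is the common foundation.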
Posted 3 weeks ago
3.0 - 7.0 years
15 - 22 Lacs
Gurugram, Bengaluru
Work from Office
Role & responsibilities
- Design, implement, and maintain AWS infrastructure using Terraform (IaC, Infrastructure as Code).
- Manage CI/CD pipelines and automate operational tasks using tools like Jenkins, GitHub Actions, or CodePipeline.
- Monitor infrastructure health using CloudWatch, Prometheus, Grafana, etc., and handle alerting with PagerDuty or similar tools.
- Implement and maintain backup, disaster recovery, and high-availability strategies in AWS.
- Manage VPCs, subnets, routing, security groups, and IAM roles and policies.
- Perform cost optimization and rightsizing of AWS resources.
- Ensure security compliance and apply cloud security best practices (e.g., encryption, access control).
- Collaborate with development and security teams to support application deployment and governance.

Preferred candidate profile
- 3+ years of hands-on experience in AWS Cloud (EC2, S3, IAM, RDS, Lambda, EKS/ECS, VPC, etc.).
- 2+ years of experience with Terraform and a strong understanding of IaC principles.
- Hands-on experience with Linux system administration and scripting (Bash, Python).
- Experience with DevOps tools such as Git, Docker, Jenkins, or similar.
- Proficiency in monitoring/logging tools like CloudWatch, ELK stack, Datadog, or New Relic.
- Familiarity with incident management, change management, and postmortem analysis processes.
- Knowledge of networking, DNS, TLS/SSL, firewalls, and cloud security concepts.
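Rightsizing, as described above, boils down to comparing observed utilization against thresholds and recommending a resize. A hedged sketch of such a policy follows; the thresholds and function name are invented for illustration, and a real workflow would pull utilization samples from CloudWatch rather than receive them as a list:

```python
def rightsizing_recommendation(cpu_samples, low=20.0, high=80.0):
    """Suggest a resize action from CPU utilization samples (percent).

    Hypothetical policy: downsize when average utilization is below
    `low`, upsize when above `high`, otherwise keep the instance as-is.
    """
    if not cpu_samples:
        raise ValueError("no utilization samples provided")
    average = sum(cpu_samples) / len(cpu_samples)
    if average < low:
        return "downsize"
    if average > high:
        return "upsize"
    return "keep"
```

Production tooling would also weigh memory, network, and burst patterns, and look at peak rather than only average utilization before acting.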
Posted 3 weeks ago
6.0 - 11.0 years
20 - 22 Lacs
Hyderabad
Hybrid
Hi, We are hiring for the below requirement, for one of the top MNC also under their direct payroll. Please read the JD and interested candidate please drop your application in payel.chowdhury@in.experis.com - Job Title: DevOps Engineer Location: Only Hyderabad Employment Type: Full-Time Experience Level: 6+ Years Notice Period : Immediate Joiners only Job Summary: We are looking for a skilled and motivated DevOps Engineer to join our growing technology team. The ideal candidate will have hands-on experience with AWS cloud services , strong knowledge of CI/CD pipelines , and be proficient in tools such as GitLab, Kubernetes, Terraform, Jenkins , and scripting languages like Python, Bash, or Shell . You will play a key role in automating infrastructure, improving deployment processes, and ensuring system reliability and scalability. Key Responsibilities: Design, implement, and manage scalable and secure infrastructure on AWS . Develop and maintain CI/CD pipelines using GitLab CI, Jenkins, or similar tools . Automate infrastructure provisioning using Terraform and Infrastructure as Code (IaC) best practices. Manage and monitor Kubernetes clusters for container orchestration and workload management. Write and maintain automation scripts using Python , Shell , or Bash to streamline DevOps processes. Collaborate with development and QA teams to ensure smooth code deployments and operational efficiency. Implement monitoring, logging, and alerting solutions to proactively identify issues. Troubleshoot production issues and perform root cause analysis. Ensure system security through access controls, firewalls, and other policies. Stay current with industry trends and emerging technologies to continuously improve the DevOps toolchain. Required Skills & Qualifications: 3+ years of experience as a DevOps Engineer or similar role. Strong hands-on experience with Amazon Web Services (AWS) including EC2, S3, VPC, IAM, RDS, Lambda, CloudWatch, and ECS/EKS. 
- Proficiency in CI/CD tools such as GitLab CI/CD, Jenkins, or similar.
- Experience with Kubernetes for container orchestration and Docker for containerization.
- Expertise in Terraform for infrastructure automation and provisioning.
- Strong scripting skills in Python, Shell, or Bash.
- Solid understanding of Git workflows and version-control best practices.
- Experience with monitoring tools such as Prometheus, Grafana, the ELK Stack, or CloudWatch.
- Familiarity with Agile methodologies and DevOps culture.

Preferred Qualifications:
- AWS certifications such as AWS Certified DevOps Engineer or Solutions Architect.
- Experience with other DevOps tools such as Ansible, Helm, or ArgoCD.
- Familiarity with security and compliance frameworks in cloud environments.
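Postings like this one call for scripting skills to "streamline DevOps processes." As a minimal illustrative sketch (not part of the posting), here is the kind of Python retry-with-backoff helper often used to harden a flaky deployment or API step; `flaky_deploy` is a hypothetical stand-in for a real operation:

```python
import time

def retry(operation, attempts=3, base_delay=1.0, sleep=time.sleep):
    """Retry a flaky operation with exponential backoff.

    Returns the operation's result, or re-raises the last error
    once all attempts are exhausted. `sleep` is injectable so the
    backoff can be disabled in tests.
    """
    for attempt in range(attempts):
        try:
            return operation()
        except Exception:
            if attempt == attempts - 1:
                raise
            # back off 1s, 2s, 4s, ... between attempts
            sleep(base_delay * (2 ** attempt))

# Example: a (hypothetical) deployment step that fails twice, then succeeds
calls = {"n": 0}
def flaky_deploy():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient failure")
    return "deployed"

print(retry(flaky_deploy, sleep=lambda s: None))  # → deployed
```

In practice the same pattern is usually wired around cloud API calls or health checks inside a pipeline stage, with the delay left at its real value.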
Posted 3 weeks ago
2.0 - 7.0 years
4 - 6 Lacs
Hyderabad, Pune, Bengaluru
Work from Office
Develop scalable microservices using Java Spring Boot. Design and implement REST APIs and integrate them with frontend and external services. Deploy and manage services using AWS offerings such as EC2, S3, Lambda, RDS, and ECS. Required Candidate profile: Use CI/CD pipelines for automated builds and deployments (e.g., Jenkins, GitHub Actions). Collaborate with frontend, QA, DevOps, and business teams. Write unit and integration tests to ensure code quality.
Posted 3 weeks ago
3.0 - 6.0 years
5 - 8 Lacs
Thane
Work from Office
Job Description
Position: Customer Retention Officer / AM Customer Retention Officer
No. of Positions: 100
Department: Customer Retention Team
Function: Operations
Reporting to: Manager - Collections
Band: 5
Location:
Last date of submission:

Key Responsibilities

Collections:
- Sole responsibility for renewal-collection and persistency targets on the allocated base, through customer calling or physical visits to customer residences with prior appointment.
- Ensure maintenance and sharing of daily customer trackers and their conversion, including review with the Manager - Collections.
- Convince customers to pay the renewal premium and continue their policies, offering retention tools such as mode change, partial withdrawal, PUA surrender, or loan.
- Deposit collected cheques at the nearest MLI branch or bank branch within 24 hours of collection.
- Handle customer escalations effectively, with 100% resolution of customer queries and concerns, including tracking of service requests raised; give customers clarity on the special revival scheme, change in term, and short-premium options to resolve financial constraints and retain them.
- Follow up on critical documents such as the Health Declaration Form, tele-underwriting, home-visit medicals, ECS mandates, and complaint letters on silent grief, and obtain updated contact details through branch operations.

Measure of Success
- 100% completion of assigned activities by the 7th of every month.
- Complete daily face-to-face meetings as per the goal sheet.
- Drive the top 30 high-value cases with 100% meetings.
- Provide 100% resolution of customer queries.
- 100% tracking and timely closure of service requests logged for any allocated customer.
- Maximize the positive portfolio (Promise to Pay) by advising customers to continue their insurance and offering retention tools such as mode change, partial withdrawal, PUA surrender, or loan.
- Drive the special revival scheme and change in term to resolve financial constraints.
- Ensure 100% meetings on silent grief cases; collect written grievances from customers and get them logged in the system.
- Percentage of customer retention from the allocated base of customers surrendering policies, achieved by explaining insurance benefits, ULIP retention tools, switching, long-term growth, and comparisons with other financial products.
- Convert the non-ECS base to E-NACH/ECS by explaining the benefits of continuing the policy on E-NACH/ECS for hassle-free payment.
- Ensure collection and registration of auto-pay mandates, along with the necessary documents, from customers not on auto-pay.

Desired qualification and experience
- Graduate in any stream with prior experience in insurance or customer-service roles; prior experience in a renewal/retention team is an added advantage.

Knowledge and skills required
- Ability to handle pressure and work in a competitive environment.
- Open to learning, very high process orientation, and effective communication skills.
- Should have a personal vehicle and a smartphone.
Posted 3 weeks ago
4.0 - 8.0 years
6 - 10 Lacs
Hyderabad, Ahmedabad, Gurugram
Work from Office
About the Role: Grade Level (for internal use): 10

The Team: As a member of the EDO, Collection Platforms & AI - Cognitive Engineering team, you will design, build, and optimize enterprise-scale data extraction, automation, and ML model deployment pipelines that power data sourcing and information retrieval solutions for S&P Global. You will help define architecture standards, mentor junior engineers, and champion best practices in an AWS-based ecosystem. You'll lead by example in a highly engaging, global environment that values thoughtful risk-taking and self-initiative.

What's in it for you:
- Drive solutions at enterprise scale within a global organization
- Collaborate with and coach a hands-on, technically strong team (including junior and mid-level engineers)
- Solve high-complexity, high-impact problems from end to end
- Shape the future of our data platform: build, test, deploy, and maintain production-ready pipelines

Responsibilities:
- Architect, develop, and operate robust data extraction and automation pipelines in production
- Integrate, deploy, and scale ML models within those pipelines (real-time inference and batch scoring)
- Lead full-lifecycle delivery of complex data projects, including:
  - Designing cloud-native ETL/ELT and ML deployment architectures on AWS (EKS/ECS, Lambda, S3, RDS/DynamoDB)
  - Implementing and maintaining DataOps processes with Celery/Redis task queues, Airflow orchestration, and Terraform IaC
  - Establishing and enforcing CI/CD pipelines on Azure DevOps (build, test, deploy, rollback) with automated quality gates
  - Writing and maintaining comprehensive test suites (unit, integration, load) using pytest and coverage tools
- Optimize data quality, reliability, and performance through monitoring, alerting (CloudWatch, Prometheus/Grafana), and automated remediation
- Define, and continuously improve, platform standards, coding guidelines, and operational runbooks
- Conduct code reviews and pair-programming sessions, and provide technical mentorship
- Partner with data scientists, ML engineers, and product teams to translate requirements into scalable solutions, meet SLAs, and ensure smooth hand-offs

Technical Requirements:
- 4-8 years' hands-on experience in data engineering, with a proven track record on critical projects
- Expert in Python for building extraction libraries, RESTful APIs, and automation scripts
- Deep AWS expertise: EKS/ECS, Lambda, S3, RDS/DynamoDB, IAM, CloudWatch, and Terraform
- Containerization and orchestration: Docker (mandatory) and Kubernetes (advanced)
- Proficient with task queues and orchestration frameworks: Celery, Redis, Airflow
- Demonstrable experience deploying ML models at scale (SageMaker, ECS/Lambda endpoints)
- Strong CI/CD background on Azure DevOps; skilled in pipeline authoring, testing, and rollback strategies
- Advanced testing practices: unit, integration, and load testing; high coverage enforcement
- Solid SQL and NoSQL database skills (PostgreSQL, MongoDB) and data-modeling expertise
- Familiarity with monitoring and observability tools (e.g., Prometheus, Grafana, ELK stack)
- Excellent debugging, performance-tuning, and automation capabilities
- Openness to evaluating and adopting emerging tools, languages, and frameworks

Good to have:
- Master's or Bachelor's degree in Computer Science, Engineering, or a related field
- Prior contributions to open-source projects, GitHub repos, or technical publications
- Experience with infrastructure as code beyond Terraform (e.g., CloudFormation, Pulumi)
- Familiarity with GenAI model integration (calling LLM or embedding APIs)

What's In It For You

Our Purpose: Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology: the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow.
At S&P Global we transform data into Essential Intelligence, pinpointing risks and opening possibilities. We Accelerate Progress.

Our People / Our Values: Integrity, Discovery, Partnership. At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals.

Benefits: We take care of you, so you can take care of business. We care about our people. That's why we provide everything you, and your career, need to thrive at S&P Global.
- Health & Wellness: Health care coverage designed for the mind and body.
- Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills.
- Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs.
- Family Friendly Perks: It's not just about you. S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families.
- Beyond the Basics: From retail discounts to referral incentive awards, small perks can make a big difference.
For more information on benefits by country visit https://spgbenefits.com/benefit-summaries

Global Hiring and Opportunity at S&P Global: At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets.
Recruitment Fraud Alert If you receive an email from a spglobalind.com domain or any other regionally based domains, it is a scam and should be reported to reportfraud@spglobal.com. S&P Global never requires any candidate to pay money for job applications, interviews, offer letters, pre-employment training or for equipment/delivery of equipment. Stay informed and protect yourself from recruitment fraud by reviewing our guidelines, fraudulent domains, and how to report suspicious activity here. ----------------------------------------------------------- Equal Opportunity Employer S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person. US Candidates Only The EEO is the Law Poster http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision - https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf ----------------------------------------------------------- IFTECH202.1 - Middle Professional Tier I (EEO Job Group)
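The role above stresses extraction libraries in Python plus comprehensive pytest test suites with coverage enforcement. As a hedged, self-contained sketch (the function and the three-field record layout are invented for illustration, not taken from the posting), this is what a tiny extraction helper with pytest-style unit tests can look like:

```python
def extract_fields(line, delimiter="|"):
    """Split a delimited record into named fields.

    Hypothetical layout (source, date, value) assumed purely for
    illustration; real pipelines define their own schemas.
    """
    parts = [p.strip() for p in line.split(delimiter)]
    if len(parts) != 3:
        raise ValueError(f"expected 3 fields, got {len(parts)}")
    source, date, value = parts
    return {"source": source, "date": date, "value": float(value)}

# pytest-style unit tests: plain functions named test_*, which pytest
# discovers automatically, but which can also be called directly.
def test_extract_happy_path():
    rec = extract_fields("EDGAR | 2024-05-01 | 42.5")
    assert rec == {"source": "EDGAR", "date": "2024-05-01", "value": 42.5}

def test_extract_rejects_short_records():
    try:
        extract_fields("only|two")
    except ValueError as e:
        assert "expected 3 fields" in str(e)
    else:
        assert False, "should have raised"

test_extract_happy_path()
test_extract_rejects_short_records()
print("all tests passed")
```

Running `pytest --cov` over a module like this is how the "high coverage enforcement" mentioned in the requirements is typically wired into a CI gate.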
Posted 3 weeks ago
2.0 - 6.0 years
4 - 8 Lacs
Hyderabad, Ahmedabad, Gurugram
Work from Office
About the Role: Grade Level (for internal use): 09

The Team: As a member of the EDO, Collection Platforms & AI - Cognitive Engineering team, you will build and maintain enterprise-scale data extraction, automation, and ML model deployment pipelines that power data sourcing and information retrieval solutions for S&P Global. You will learn to design resilient, production-ready systems in an AWS-based ecosystem while leading by example in a highly engaging, global environment that encourages thoughtful risk-taking and self-initiative.

What's in it for you:
- Be part of a global company and deliver solutions at enterprise scale
- Collaborate with a hands-on, technically strong team (including leadership)
- Solve high-complexity, high-impact problems end-to-end
- Build, test, deploy, and maintain production-ready pipelines from ideation through deployment

Responsibilities:
- Develop, deploy, and operate data extraction and automation pipelines in production
- Integrate and deploy machine learning models into those pipelines (e.g., inference services, batch scoring)
- Lead critical stages of the data engineering lifecycle, including:
  - End-to-end delivery of complex extraction, transformation, and ML deployment projects
  - Scaling and replicating pipelines on AWS (EKS, ECS, Lambda, S3, RDS)
  - Designing and managing DataOps processes, including Celery/Redis task queues and Airflow orchestration
  - Implementing robust CI/CD pipelines on Azure DevOps (build, test, deployment, rollback)
  - Writing and maintaining comprehensive unit, integration, and end-to-end tests (pytest, coverage)
- Strengthen data quality, reliability, and observability through logging, metrics, and automated alerts
- Define and evolve platform standards and best practices for code, testing, and deployment
- Document architecture, processes, and runbooks to ensure reproducibility and smooth hand-offs
- Partner closely with data scientists, ML engineers, and product teams to align on requirements, SLAs, and delivery timelines

Technical Requirements:
- Expert proficiency in Python, including building extraction libraries and RESTful APIs
- Hands-on experience with task queues and orchestration: Celery, Redis, Airflow
- Strong AWS expertise: EKS/ECS, Lambda, S3, RDS/DynamoDB, IAM, CloudWatch
- Containerization and orchestration: Docker (mandatory), basic Kubernetes (preferred)
- Proven experience deploying ML models to production (e.g., SageMaker, ECS, Lambda endpoints)
- Proficient in writing tests (unit, integration, load) and enforcing high coverage
- Solid understanding of CI/CD practices and hands-on experience with Azure DevOps pipelines
- Familiarity with SQL and NoSQL stores for extracted data (e.g., PostgreSQL, MongoDB)
- Strong debugging, performance-tuning, and automation skills
- Openness to evaluating and adopting emerging tools and languages as needed

Good to have:
- Master's or Bachelor's degree in Computer Science, Engineering, or a related field
- 2-6 years of relevant experience in data engineering, automation, or ML deployment
- Prior contributions on GitHub, technical blogs, or open-source projects
- Basic familiarity with GenAI model integration (calling LLM or embedding APIs)
Posted 3 weeks ago
5.0 - 10.0 years
12 - 22 Lacs
Hyderabad, Gurugram, Bengaluru
Work from Office
Role Summary: Join our platform team to streamline deployments, manage hybrid cloud infrastructure, and automate CI/CD pipelines across GenAI and risk systems.

Responsibilities:
- Design and maintain CI/CD pipelines using GitHub Actions, GitLab, or Azure DevOps
- Manage Docker-based microservices and deployments on AWS and GCP
- Handle infrastructure as code using Terraform or Pulumi
- Set up observability tools (Prometheus, CloudWatch, Grafana)
- Manage secrets, RBAC, and secure networking across environments
- Work closely with data, backend, and GenAI teams to enable safe releases

Required Skills:
- Strong hands-on experience with AWS (ECS, Lambda, RDS, IAM) and/or GCP (GKE, Cloud Run, BigQuery)
- Expertise with Docker, Kubernetes, and container orchestration
- Proficiency with IaC tools like Terraform and Helm
- Experience setting up secure CI/CD pipelines
- Good understanding of network security and compliance principles

Preferred:
- Experience deploying LLM/GenAI APIs securely
- Exposure to hybrid on-prem/cloud setups (e.g., Exadata Cloud@Customer, Cloud SQL)
Posted 3 weeks ago
6.0 - 11.0 years
8 - 13 Lacs
Hyderabad
Work from Office
As a Senior Software Engineer I, you will be a critical member of our technology team, responsible for designing, developing, and deploying scalable software solutions. You will leverage your expertise in Java, ReactJS, AWS, and emerging AI tools to deliver innovative products and services that enhance healthcare outcomes and streamline operations.

Primary Responsibilities:
- Design, develop, test, deploy, and maintain full-stack software solutions leveraging Java, ReactJS, and AWS cloud services
- Collaborate closely with cross-functional teams, including Product Managers, Designers, Data Scientists, and DevOps Engineers, to translate business requirements into technical solutions
- Implement responsive UI/UX designs using ReactJS, ensuring optimal performance and scalability
- Develop robust backend services and APIs using Java and related frameworks (e.g., Spring Boot)
- Leverage AWS cloud services (e.g., EC2, S3, Postgres/DynamoDB, ECS, EKS, CloudFormation) to build scalable, secure, and highly available solutions
- Incorporate AI/ML tools and APIs (such as OpenAI, Claude, Gemini, and Amazon AI services) into existing and new solutions to enhance product capabilities
- Conduct code reviews and adhere to software engineering best practices to ensure quality, security, maintainability, and performance
- Actively participate in agile ceremonies: sprint planning, backlog grooming, retrospectives, and continuous improvement processes
- Troubleshoot, debug, and resolve complex technical issues, identifying root causes to ensure system reliability and performance
- Document technical solutions, system designs, and code effectively for knowledge sharing and future reference
- Mentor junior team members, fostering technical growth and engineering excellence
- Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies regarding flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so.

Required Qualifications:
- Bachelor's degree or higher in Computer Science, Software Engineering, or a related technical discipline
- 6+ years of hands-on software development experience across the full stack
- Solid experience developing front-end applications using ReactJS, TypeScript/JavaScript, HTML5, and CSS3
- Familiarity with AI/ML tools and APIs (such as OpenAI, Claude, Gemini, and AWS AI/ML services) and experience integrating them into software solutions
- Experience with relational and NoSQL databases, along with solid SQL skills
- Experience with agile development methodologies and CI/CD pipelines
- Experience with monitoring tools like Splunk, Datadog, or Dynatrace
- Solid analytical and problem-solving skills, with the ability to troubleshoot complex technical issues independently
- Solid proficiency in Java, J2EE, Spring/Spring Boot, and RESTful API design
- Demonstrable experience deploying and managing applications on AWS (e.g., EC2, S3, Postgres/DynamoDB, RDS, ECS, EKS, CloudFormation)
- Proven excellent written and verbal communication and interpersonal skills

Preferred Qualifications:
- Experience in the healthcare domain and understanding of healthcare data and workflows
- Hands-on experience with containerization technologies (Docker, Kubernetes)
- Experience with performance optimization, monitoring, and logging tools
- Familiarity with DevOps practices, Infrastructure as Code, and tools like Jenkins, Terraform, Git, and GitHub Actions
- Exposure to modern architectural patterns such as microservices, serverless computing, and event-driven architecture.
Posted 3 weeks ago
6.0 - 11.0 years
25 - 40 Lacs
Hyderabad, Pune, Bengaluru
Hybrid
- Design, build, and deploy cloud-native and hybrid solutions on AWS and GCP
- Experience in Glue, Athena, PySpark, Step Functions, Lambda, SQL, ETL, DWH, Python, EC2, EBS/EFS, CloudFront, Cloud Functions, Cloud Run (GCP), GKE, GCE, ECS, S3, etc.
Posted 3 weeks ago
0.0 - 4.0 years
1 - 9 Lacs
Hyderabad
Work from Office
Built high-performance systems with low-level socket optimization, Node.js internals (libuv, V8), clean architecture, Linux and network-stack know-how, DB internals, and AWS in production. I design for humans and machines. Dark joke? I still debug in prod. On Fridays.
Posted 3 weeks ago
2.0 - 4.0 years
10 - 13 Lacs
Bengaluru
Work from Office
About Rippling: Rippling gives businesses one place to run HR, IT, and Finance. It brings together all of the workforce systems that are normally scattered across a company, like payroll, expenses, benefits, and computers. For the first time ever, you can manage and automate every part of the employee lifecycle in a single system. Take onboarding, for example. With Rippling, you can hire a new employee anywhere in the world and set up their payroll, corporate card, computer, benefits, and even third-party apps like Slack and Microsoft 365, all within 90 seconds. Based in San Francisco, CA, Rippling has raised $1.85B from the world's top investors, including Kleiner Perkins, Founders Fund, Sequoia, Greenoaks, and Bedrock, and was named one of America's best startup employers by Forbes. We prioritize candidate safety: please be aware that official communication will only be sent from @Rippling.com addresses.

About The Role: The compute infrastructure team takes care of running the application in our prod and non-prod environments, our CI pipeline infrastructure, and governance for product-team-specific infra needs. Our primary code base is a Python monolith. We have home-grown tools and frameworks for running background jobs, deployments, and managing cloud infra. You will have an opportunity to work at the framework level on code used by the developers. The team is split across the US and India. To mention a few tools: AWS EKS & ECS, Python, Datadog, and Terraform.

What You Will Do:
- Solve problems on deployment infrastructure, in unconventional ways at times
- Conceive and build tools that make developers efficient and push high-quality code
- Work on enhancements to the home-grown frameworks
- Evolve the deployment infra as we refactor and split the monolith
- Design scalable and robust systems, and make decisions that will keep pace with the rapid growth of Rippling
- Build relationships with developers across all of our teams to understand their needs and satisfy them with projects you develop

What You Will Need:
- 8+ years of professional work experience in cloud infrastructure
- Experience in build, development, or release systems (e.g., Git, Jenkins, Chef, Docker)
- Backend development experience would be a big plus; it would help you to work on the home-grown frameworks written in Python
- A desire to be part of a team of the most talented, forward-thinking engineers in the industry
Posted 3 weeks ago
5.0 - 10.0 years
8 - 12 Lacs
Gurugram
Work from Office
Production experience on AWS (IAM, ECS, EC2, VPC, ELB, RDS, Auto Scaling, cost optimisation, Trusted Advisor, GuardDuty, security, etc.). Must have monitoring experience with tools like Nagios, Prometheus, Grafana, Datadog, New Relic, etc. Required Candidate profile: Must have experience in Linux administration. Must have a working knowledge of scripting (Python/Shell).
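Roles that pair monitoring tools with Python/Shell scripting often involve gluing the two together. As a minimal sketch under invented data (the log format, threshold, and function names are illustrative, not from the posting), here is a check that computes an error rate from log lines and reports it Nagios-style:

```python
def error_rate(log_lines):
    """Fraction of log lines whose level is ERROR."""
    if not log_lines:
        return 0.0
    errors = sum(1 for line in log_lines if " ERROR " in line)
    return errors / len(log_lines)

def check(log_lines, threshold=0.05):
    """Return a Nagios-style (status, message) pair.

    0 = OK, 2 = CRITICAL, matching Nagios plugin exit codes.
    """
    rate = error_rate(log_lines)
    if rate > threshold:
        return 2, f"CRITICAL: error rate {rate:.1%} exceeds {threshold:.1%}"
    return 0, f"OK: error rate {rate:.1%}"

logs = [
    "2024-05-01T10:00:00 INFO request handled",
    "2024-05-01T10:00:01 ERROR upstream timeout",
    "2024-05-01T10:00:02 INFO request handled",
    "2024-05-01T10:00:03 INFO request handled",
]
status, message = check(logs, threshold=0.10)
print(status, message)  # 1 error in 4 lines = 25% > 10% threshold
```

A script like this would typically exit with `status` as its process exit code so Nagios (or any scheduler) can alert on it.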
Posted 3 weeks ago
3.0 - 6.0 years
5 - 9 Lacs
Gurugram
Work from Office
Experience: 8-10 years
Job Title: DevOps Engineer
Location: Gurugram

Job Summary: We are seeking a highly skilled and experienced Lead DevOps Engineer to drive the design, automation, and maintenance of secure and scalable cloud infrastructure. The ideal candidate will have deep technical expertise in cloud platforms (AWS/GCP), container orchestration, CI/CD pipelines, and DevSecOps practices. You will be responsible for leading infrastructure initiatives, mentoring team members, and collaborating closely with software and QA teams to enable high-quality, rapid software delivery.

Key Responsibilities

Cloud Infrastructure & Automation:
- Design, deploy, and manage secure, scalable cloud environments using AWS, GCP, or similar platforms.
- Develop Infrastructure-as-Code (IaC) using Terraform for consistent resource provisioning.
- Implement and manage CI/CD pipelines using tools like Jenkins, GitLab CI/CD, GitHub Actions, Bitbucket Pipelines, AWS CodePipeline, or Azure DevOps.

Containerization & Orchestration:
- Containerize applications using Docker for seamless development and deployment.
- Manage and scale Kubernetes clusters (on-premise or cloud-managed, like AWS EKS).
- Monitor and optimize container environments for performance, scalability, and cost-efficiency.

Security & Compliance:
- Enforce cloud security best practices including IAM policies, VPC design, and secure secrets management (e.g., AWS Secrets Manager).
- Conduct regular vulnerability assessments and security scans, and implement remediation plans.
- Ensure infrastructure compliance with industry standards and manage incident response protocols.

Monitoring & Optimization:
- Set up and maintain monitoring/observability systems (e.g., Grafana, Prometheus, AWS CloudWatch, Datadog, New Relic).
- Analyze logs and metrics to troubleshoot issues and improve system performance.
- Optimize resource utilization and cloud spend through continuous review of infrastructure configurations.
Scripting & Tooling:
- Develop automation scripts (Shell/Python) for environment provisioning, deployments, backups, and log management.
- Maintain and enhance CI/CD workflows to ensure efficient and stable deployments.

Collaboration & Leadership:
- Collaborate with engineering and QA teams to ensure infrastructure aligns with development needs.
- Mentor junior DevOps engineers, fostering a culture of continuous learning and improvement.
- Communicate technical concepts effectively to both technical and non-technical stakeholders.

Education: Bachelor's degree in Computer Science, Engineering, or a related technical field, or equivalent hands-on experience.

Certification: AWS Certified DevOps Engineer - Professional (preferred) or other relevant cloud certifications.

Experience:
- 8+ years of experience in DevOps or Cloud Infrastructure roles, including at least 3 years in a leadership capacity.
- Strong hands-on expertise in AWS (ECS, EKS, RDS, S3, Lambda, CodePipeline) or GCP equivalents.
- Proven experience with CI/CD tools: Jenkins, GitLab CI/CD, GitHub Actions, Bitbucket Pipelines, Azure DevOps.
- Advanced knowledge of the Docker and Kubernetes ecosystem.
- Skilled in Infrastructure-as-Code (Terraform) and configuration management tools like Ansible.
- Proficient in scripting (Shell, Python) for automation and tooling.
- Experience implementing DevSecOps practices and advanced security configurations.
- Exposure to data tools (e.g., Apache Superset, AWS Athena, Redshift) is a plus.

Soft Skills:
- Strong problem-solving abilities and the capacity to work under pressure.
- Excellent communication and team collaboration.
- Organized, with attention to detail and a commitment to quality.

Additional Skills:
- Experience with alternative cloud platforms (e.g., Oracle Cloud, DigitalOcean).
- Familiarity with advanced observability stacks (Grafana, Prometheus, Loki, Datadog).

(ref:hirist.tech)
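The scripting duties above mention automation for backups and log management. As a small sketch under invented assumptions (the filename pattern and function name are illustrative only), this is the classic backup-rotation logic such a Python script would center on, keeping the N most recent archives:

```python
def rotate_backups(filenames, keep=5):
    """Given backup filenames with sortable embedded timestamps,
    return (to_keep, to_delete), preserving the `keep` most recent.

    Assumes names embed an ISO date, e.g. 'db-2024-05-01.tar.gz',
    so lexicographic order matches chronological order.
    """
    ordered = sorted(filenames, reverse=True)  # newest first
    return ordered[:keep], ordered[keep:]

backups = [
    "db-2024-04-28.tar.gz",
    "db-2024-04-29.tar.gz",
    "db-2024-04-30.tar.gz",
    "db-2024-05-01.tar.gz",
]
keep, delete = rotate_backups(backups, keep=2)
print(keep)    # → ['db-2024-05-01.tar.gz', 'db-2024-04-30.tar.gz']
print(delete)  # the two oldest are pruned
```

In a real script the `delete` list would be passed to `os.remove` (or an S3 delete call) after a dry-run flag check; separating the decision from the side effect keeps the logic unit-testable.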
Posted 3 weeks ago