
240 Autoscaling Jobs - Page 4

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

0 years

0 Lacs

India

On-site

*Who you are*
You're the person whose fingertips know the difference between spinning up a GPU cluster and spinning down a stale inference node. You love the "infrastructure behind the magic" of LLMs. You've built CI/CD pipelines that automatically version models, log inference metrics, and alert on drift. You've containerized GenAI services in Docker, deployed them on Kubernetes clusters (AKS or EKS), and used Terraform or ARM templates to manage infrastructure as code. You monitor cloud costs like a hawk, optimize GPU workloads, and sometimes sacrifice cost for performance, but never vice versa. You're fluent in Python and Bash, can script tests for REST endpoints, and build automated feedback loops for model retraining. You're comfortable working in Azure (OpenAI, Azure ML, Azure DevOps Pipelines) but cloud-agnostic enough to cover AWS or GCP if needed. You read MLOps/LLMOps blog posts or arXiv summaries on the weekend and implement improvements on Monday. You think of yourself as a self-driven engineer: no playbooks, no spoon-feeding, just solid automation, reliability, and a hunger to scale GenAI from prototype to production.

*What you will actually do*
You'll architect and build deployment platforms for internal LLM services, starting with containerizing models and building CI/CD pipelines for inference microservices. You'll write IaC (Terraform or ARM) to spin up clusters, endpoints, GPUs, storage, and logging infrastructure. You'll integrate Azure OpenAI and Azure ML endpoints, pushing models via pipelines, versioning them, and enabling automatic retraining triggers. You'll build monitoring and observability around latency, cost, error rates, drift, and prompt-health metrics. You'll optimize deployments (autoscaling, spot/GPU nodes, invalidation policies) to balance cost and performance. You'll set up automated QA pipelines that validate model outputs (e.g. semantic similarity, hallucination detection) before merging. You'll collaborate with ML, backend, and frontend teams to package components into release-ready backend services. You'll manage alerts and rollbacks on failure and ensure 99% uptime. You'll create reusable tooling (CI templates, deployment scripts, infra modules) to make future projects plug-and-play.

*Skills and knowledge*
Strong scripting skills in Python and Bash for automation and pipelines
Fluent in Docker and Kubernetes (especially AKS), containerizing LLM workloads
Infrastructure-as-code expertise: Terraform (Azure provider) or ARM templates
Experience with Azure DevOps or GitHub Actions for CI/CD of models and services
Knowledge of Azure OpenAI, Azure ML, or equivalent cloud LLM endpoints
Familiar with setting up monitoring (Azure Monitor, Prometheus/Grafana) to track latency, errors, drift, and costs
Cost-optimization tactics: spot nodes, autoscaling, GPU utilization tracking
Basic LLM understanding: inference latency/cost, deployment patterns, model versioning
Ability to build lightweight QA checks or integrate with QA pipelines
Cloud-agnostic awareness, with experience in AWS or GCP backup systems
Comfortable establishing production-grade Ops pipelines and automating deployments end to end
Self-starter mentality: no playbooks required, able to pick up new tools and drive infrastructure independently
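For illustration, a minimal sketch of the kind of REST-endpoint test this posting describes, in Python with requests; the endpoint URL, response schema, and latency budget are assumptions, not part of the listing:

```python
# Minimal smoke test for an LLM inference endpoint (hypothetical URL/schema).
import time
import requests

ENDPOINT = "https://example.internal/api/v1/generate"  # assumption: placeholder URL
LATENCY_BUDGET_S = 2.0

def smoke_test(prompt: str) -> None:
    start = time.monotonic()
    resp = requests.post(ENDPOINT, json={"prompt": prompt}, timeout=10)
    latency = time.monotonic() - start

    # Fail fast on HTTP errors or a blown latency budget.
    resp.raise_for_status()
    assert latency < LATENCY_BUDGET_S, f"latency {latency:.2f}s over budget"
    assert resp.json().get("text"), "empty completion returned"
    print(f"OK: {latency:.2f}s")

if __name__ == "__main__":
    smoke_test("ping")
```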

Posted 4 weeks ago

Apply

2.0 - 31.0 years

2 - 4 Lacs

Amrapali Dream Valley, Greater Noida

On-site

Key Responsibilities:
Diagnose and fix performance bottlenecks across backend services, WebSocket connections, and API response times.
Investigate issues related to high memory usage, CPU spikes, and slow query execution.
Debug and optimize database queries (PostgreSQL) and ORM (Prisma) performance.
Implement and fine-tune connection pooling strategies for PostgreSQL and Redis.
Configure and maintain Kafka brokers, producers, and consumers to ensure high throughput.
Monitor and debug WebSocket issues like connection drops, latency, and reconnection strategies.
Optimize Redis usage and troubleshoot memory leaks or blocking commands.
Set up or maintain Prometheus + Grafana for service and infrastructure monitoring.
Work on containerized infrastructure using Docker and Kubernetes, including load balancing and scaling services.
Collaborate with developers to fix memory leaks, inefficient queries, and slow endpoints.
Maintain high availability and fault tolerance across all backend components.

🧠 Requirements:
Technical Skills:
Strong proficiency in Node.js and TypeScript.
Deep knowledge of Prisma ORM and PostgreSQL optimization.
Hands-on experience with Redis (pub/sub, caching, memory tuning).
Solid understanding of WebSocket performance and reconnection handling.
Experience working with Kafka (event streaming, partitions, consumer groups).
Familiar with Docker, the container lifecycle, and multi-service orchestration.
Experience with Kubernetes (deployments, pods, autoscaling, resource limits).
Familiar with connection pooling strategies for databases and services.
Comfortable with performance monitoring tools like Prometheus, Grafana, UptimeRobot, etc.
Soft Skills:
Excellent debugging and analytical skills.
Able to work independently and solve complex issues.
Strong communication and documentation habits.

✅ Preferred Qualifications:
3+ years of experience in backend development.
Experience with CI/CD pipelines and production deployments.
Prior work with large-scale distributed systems is a plus.
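The reconnection handling mentioned above follows a common pattern regardless of stack. A minimal sketch of exponential backoff with jitter, in Python for consistency with the other examples here (the posting's stack is Node.js/TypeScript; names are illustrative):

```python
# Exponential backoff with jitter for reconnection attempts (language-agnostic
# pattern; the posting's stack is Node.js/TypeScript, sketched in Python).
import random
import time

def reconnect_with_backoff(connect, max_attempts=8, base=0.5, cap=30.0):
    """Call `connect` until it succeeds, sleeping base*2^n + jitter between tries."""
    for attempt in range(max_attempts):
        try:
            return connect()
        except ConnectionError:
            delay = min(cap, base * (2 ** attempt)) + random.uniform(0, 0.5)
            time.sleep(delay)
    raise ConnectionError("gave up after max_attempts")
```

Capping the delay and adding jitter prevents reconnection storms when many clients drop at once.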

Posted 4 weeks ago

Apply

5.0 - 12.0 years

0 Lacs

Coimbatore, Tamil Nadu, India

On-site

We are seeking a talented Lead Software Engineer with expertise in AWS and Java to join our dynamic team. This role involves working on critical application modernization projects, transforming legacy systems into cloud-native solutions, and driving innovation in security, observability, and governance. You'll collaborate with self-governing engineering teams to deliver high-impact, scalable software solutions. We are looking for candidates with strong expertise in cloud-native development, AWS, microservices architecture, and Java/J2EE, plus hands-on experience implementing CI/CD pipelines.

Responsibilities
Lead end-to-end development in Java and AWS services, ensuring high-quality deliverables
Design, develop, and implement REST APIs using AWS Lambda/API Gateway, JBoss, or Spring Boot
Utilize the AWS Java SDK to interact with various AWS services effectively
Drive deployment automation through the AWS Java CDK, CloudFormation, or Terraform
Architect containerized applications and manage orchestration via Kubernetes on AWS EKS or AWS ECS
Apply advanced microservices concepts and adhere to best practices during development
Build, test, and debug code while addressing technical setbacks effectively
Expose application functionality via APIs using Lambda and Spring Boot
Manage data formats (JSON, YAML) and handle diverse data types (strings, numbers, arrays)
Implement robust unit test cases with JUnit or equivalent testing frameworks
Oversee source code management through platforms like GitLab, GitHub, or Bitbucket
Ensure efficient application builds using Maven or Gradle
Coordinate development requirements, schedules, and other dependencies with multiple stakeholders

Requirements
5 to 12 years of experience in Java development and AWS services
Expertise in AWS services including Lambda, SQS, SNS, DynamoDB, Step Functions, and API Gateway
Proficiency in using Docker and managing container orchestration through Kubernetes on AWS EKS or ECS
Strong understanding of AWS core services such as EC2, VPC, RDS, EBS, and EFS
Competency in deployment tools like AWS CDK, Terraform, or CloudFormation
Knowledge of NoSQL databases, storage solutions, AWS ElastiCache, and DynamoDB
Understanding of AWS orchestration tools for automation and data processing
Capability to handle production workloads, automate tasks, and manage logs effectively
Experience writing scalable applications employing microservices principles

Nice to have
Proficiency with AWS core services such as Auto Scaling, load balancers, Route 53, and IAM
Skills in scripting with Linux shell, Python, or Windows PowerShell, or using Ansible/Chef/Puppet
Experience with build automation tools like Jenkins, AWS CodeBuild/CodeDeploy, or GitLab CI
Familiarity with collaboration tools like Jira and Confluence
Knowledge of deployment strategies, including blue-green and canary deployments
Demonstrated experience with the ELK (Elasticsearch, Logstash, Kibana) stack
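For illustration, the DynamoDB operations this posting names look like this via boto3 (the role itself is Java-centric; the AWS SDK for Java exposes equivalent calls). The table name and key are hypothetical:

```python
# Writing and reading an item in DynamoDB via boto3. Table name and key
# schema are assumptions for the example; assumes configured AWS credentials.
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("orders")  # assumption: existing table with pk "order_id"

table.put_item(Item={"order_id": "o-123", "status": "NEW", "total": 499})
item = table.get_item(Key={"order_id": "o-123"}).get("Item")
print(item)
```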

Posted 4 weeks ago

Apply

5.0 - 12.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

We are seeking a talented Lead Software Engineer with expertise in AWS and Java to join our dynamic team. This role involves working on critical application modernization projects, transforming legacy systems into cloud-native solutions, and driving innovation in security, observability, and governance. You'll collaborate with self-governing engineering teams to deliver high-impact, scalable software solutions. We are looking for candidates with strong expertise in cloud-native development, AWS, microservices architecture, and Java/J2EE, plus hands-on experience implementing CI/CD pipelines.

Responsibilities
Lead end-to-end development in Java and AWS services, ensuring high-quality deliverables
Design, develop, and implement REST APIs using AWS Lambda/API Gateway, JBoss, or Spring Boot
Utilize the AWS Java SDK to interact with various AWS services effectively
Drive deployment automation through the AWS Java CDK, CloudFormation, or Terraform
Architect containerized applications and manage orchestration via Kubernetes on AWS EKS or AWS ECS
Apply advanced microservices concepts and adhere to best practices during development
Build, test, and debug code while addressing technical setbacks effectively
Expose application functionality via APIs using Lambda and Spring Boot
Manage data formats (JSON, YAML) and handle diverse data types (strings, numbers, arrays)
Implement robust unit test cases with JUnit or equivalent testing frameworks
Oversee source code management through platforms like GitLab, GitHub, or Bitbucket
Ensure efficient application builds using Maven or Gradle
Coordinate development requirements, schedules, and other dependencies with multiple stakeholders

Requirements
5 to 12 years of experience in Java development and AWS services
Expertise in AWS services including Lambda, SQS, SNS, DynamoDB, Step Functions, and API Gateway
Proficiency in using Docker and managing container orchestration through Kubernetes on AWS EKS or ECS
Strong understanding of AWS core services such as EC2, VPC, RDS, EBS, and EFS
Competency in deployment tools like AWS CDK, Terraform, or CloudFormation
Knowledge of NoSQL databases, storage solutions, AWS ElastiCache, and DynamoDB
Understanding of AWS orchestration tools for automation and data processing
Capability to handle production workloads, automate tasks, and manage logs effectively
Experience writing scalable applications employing microservices principles

Nice to have
Proficiency with AWS core services such as Auto Scaling, load balancers, Route 53, and IAM
Skills in scripting with Linux shell, Python, or Windows PowerShell, or using Ansible/Chef/Puppet
Experience with build automation tools like Jenkins, AWS CodeBuild/CodeDeploy, or GitLab CI
Familiarity with collaboration tools like Jira and Confluence
Knowledge of deployment strategies, including blue-green and canary deployments
Demonstrated experience with the ELK (Elasticsearch, Logstash, Kibana) stack

Posted 4 weeks ago

Apply

5.0 - 12.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

We are seeking a talented Lead Software Engineer with expertise in AWS and Java to join our dynamic team. This role involves working on critical application modernization projects, transforming legacy systems into cloud-native solutions, and driving innovation in security, observability, and governance. You'll collaborate with self-governing engineering teams to deliver high-impact, scalable software solutions. We are looking for candidates with strong expertise in cloud-native development, AWS, microservices architecture, and Java/J2EE, plus hands-on experience implementing CI/CD pipelines.

Responsibilities
Lead end-to-end development in Java and AWS services, ensuring high-quality deliverables
Design, develop, and implement REST APIs using AWS Lambda/API Gateway, JBoss, or Spring Boot
Utilize the AWS Java SDK to interact with various AWS services effectively
Drive deployment automation through the AWS Java CDK, CloudFormation, or Terraform
Architect containerized applications and manage orchestration via Kubernetes on AWS EKS or AWS ECS
Apply advanced microservices concepts and adhere to best practices during development
Build, test, and debug code while addressing technical setbacks effectively
Expose application functionality via APIs using Lambda and Spring Boot
Manage data formats (JSON, YAML) and handle diverse data types (strings, numbers, arrays)
Implement robust unit test cases with JUnit or equivalent testing frameworks
Oversee source code management through platforms like GitLab, GitHub, or Bitbucket
Ensure efficient application builds using Maven or Gradle
Coordinate development requirements, schedules, and other dependencies with multiple stakeholders

Requirements
5 to 12 years of experience in Java development and AWS services
Expertise in AWS services including Lambda, SQS, SNS, DynamoDB, Step Functions, and API Gateway
Proficiency in using Docker and managing container orchestration through Kubernetes on AWS EKS or ECS
Strong understanding of AWS core services such as EC2, VPC, RDS, EBS, and EFS
Competency in deployment tools like AWS CDK, Terraform, or CloudFormation
Knowledge of NoSQL databases, storage solutions, AWS ElastiCache, and DynamoDB
Understanding of AWS orchestration tools for automation and data processing
Capability to handle production workloads, automate tasks, and manage logs effectively
Experience writing scalable applications employing microservices principles

Nice to have
Proficiency with AWS core services such as Auto Scaling, load balancers, Route 53, and IAM
Skills in scripting with Linux shell, Python, or Windows PowerShell, or using Ansible/Chef/Puppet
Experience with build automation tools like Jenkins, AWS CodeBuild/CodeDeploy, or GitLab CI
Familiarity with collaboration tools like Jira and Confluence
Knowledge of deployment strategies, including blue-green and canary deployments
Demonstrated experience with the ELK (Elasticsearch, Logstash, Kibana) stack

Posted 4 weeks ago

Apply

4.0 years

0 Lacs

Mumbai Metropolitan Region

On-site

Fynd is India's largest omnichannel platform and a multi-platform tech company specialising in retail technology and products in AI, ML, big data, image editing, and the learning space. It provides a unified platform for businesses to seamlessly manage online and offline sales, store operations, inventory, and customer engagement. Serving over 2,300 brands, Fynd is at the forefront of retail technology, transforming customer experiences and business processes across various industries.

We are looking for a Senior Fullstack JavaScript Developer responsible for the client side of our service. Your primary focus will be to implement a complete user interface in the form of a web app, with a focus on performance. Your primary duties will include creating modules and components and coupling them together into a functional app. You will work in a team with the back-end developers and communicate with the API using standard methods. A thorough understanding of all of the components of our platform and infrastructure is required.

What will you do at Fynd?
Build scalable and loosely coupled services to extend our platform
Build bulletproof API integrations with third-party APIs for various use cases
Evolve our infrastructure and add a few more nines to our overall availability
Have full autonomy and own your code: decide on the technologies and tools to deliver, as well as operate large-scale applications on AWS
Give back to the open-source community through contributions to code and blog posts
This is a startup, so everything can change as we experiment with more product improvements

Some Specific Requirements
At least 4 years of development experience
Prior experience developing and working on consumer-facing web/app products
Hands-on experience in JavaScript; exceptions can be made if you're really good at any other language and have experience building web/app-based tech products
Expertise in Node.js and experience in at least one of the following frameworks: Express.js, Koa.js, Socket.io (http://socket.io/)
Good knowledge of async programming using callbacks, promises, and async/await
Hands-on experience with frontend codebases using HTML, CSS, and AJAX
Working knowledge of MongoDB, Redis, MySQL
Good understanding of data structures, algorithms, and operating systems
You've worked with AWS services in the past and have experience with EC2, ELB, Auto Scaling, CloudFront, S3
Experience with the frontend stack (HTML, CSS) is an added advantage
You might not have experience with all the tools we use, but you can learn them given guidance and resources
Experience in Vue.js would be a plus

What do we offer?
Growth: growth knows no bounds, as we foster an environment that encourages creativity, embraces challenges, and cultivates a culture of continuous expansion. We are looking at new product lines, international markets, and brilliant people to grow even further. We teach, groom, and nurture our people to become leaders. You get to grow with a company that is growing exponentially.
Flex University: we help you upskill by organising in-house courses on important subjects.
Learning Wallet: you can also do an external course to upskill and grow; we reimburse it for you.
Culture: community and team-building activities; weekly, quarterly, and annual events/parties.
Wellness: Mediclaim policy for you + parents + spouse + kids; an experienced therapist for better mental health, improved productivity, and work-life balance.
We work from the office 5 days a week to promote collaboration and teamwork.
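The async programming this posting asks about (callbacks, promises, async/await) has a direct analogue in Python's asyncio, sketched here for consistency with the other examples on this page; fetch is a stand-in for a real network call:

```python
# Async/await fan-out: run several awaitables concurrently and wait for all
# of them, the asyncio analogue of JavaScript's Promise.all.
import asyncio

async def fetch(name: str) -> str:
    await asyncio.sleep(0.1)  # stand-in for a real network call
    return f"{name}: ok"

async def main() -> None:
    results = await asyncio.gather(*(fetch(n) for n in ["users", "orders", "cart"]))
    print(results)

asyncio.run(main())
```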
Join us to make an impact in an engaging, in-person environment!

Posted 4 weeks ago

Apply

4.0 years

0 Lacs

Mumbai Metropolitan Region

On-site

Fynd is India's largest omnichannel platform and a multi-platform tech company specializing in retail technology and products in AI, ML, big data, image editing, and the learning space. It provides a unified platform for businesses to seamlessly manage online and offline sales, store operations, inventory, and customer engagement. Serving over 2,300 brands, Fynd is at the forefront of retail technology, transforming customer experiences and business processes across various industries.

We're looking for a Senior Engineer to join our Commerce Team. The Commerce Engineering Team forms the backbone of our core business. We build and iterate on our core platform, which powers everything from onboarding a seller to serving finished products to end customers across different channels, with customisation and configuration. Our team consists of generalist engineers who work on building REST APIs, internal tools, and infrastructure.

What will you do at Fynd?
Build scalable and loosely coupled services to extend our platform
Build bulletproof API integrations with third-party APIs for various use cases
Evolve our infrastructure and add a few more nines to our overall availability
Have full autonomy and own your code: decide on the technologies and tools to deliver, as well as operate large-scale applications on AWS
Give back to the open-source community through contributions to code and blog posts
This is a startup, so everything can change as we experiment with more product improvements

Some Specific Requirements
At least 4 years of development experience
Prior experience developing and working on consumer-facing web/app products
Solid experience in Python, with experience building web/app-based tech products
Experience in at least one of the following frameworks: Sanic, Django, Flask, Falcon, web2py, Twisted, Tornado
Working knowledge of MySQL, MongoDB, Redis, Aerospike
Good understanding of data structures, algorithms, and operating systems
You've worked with core AWS services in the past and have experience with EC2, ELB, Auto Scaling, CloudFront, S3, ElastiCache
Understanding of Kafka, Docker, Kubernetes
Knowledge of Solr, Elasticsearch
Attention to detail
You can dabble in frontend codebases using HTML, CSS, and JavaScript
You love doing things efficiently. At Fynd, the work you do will have a disproportionate impact on the business. We believe in systems and processes that let us scale our impact to be larger than ourselves.
You might not have experience with all the tools we use, but you can learn them given guidance and resources

What do we offer?
Growth: growth knows no bounds, as we foster an environment that encourages creativity, embraces challenges, and cultivates a culture of continuous expansion. We are looking at new product lines, international markets, and brilliant people to grow even further. We teach, groom, and nurture our people to become leaders. You get to grow with a company that is growing exponentially.
Flex University: we help you upskill by organising in-house courses on important subjects.
Learning Wallet: you can also do an external course to upskill and grow; we reimburse it for you.
Culture: community and team-building activities; weekly, quarterly, and annual events/parties.
Wellness: Mediclaim policy for you + parents + spouse + kids; an experienced therapist for better mental health, improved productivity, and work-life balance.
We work from the office 5 days a week to promote collaboration and teamwork.
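For illustration, a minimal REST endpoint in Flask, one of the frameworks the posting lists; the route and payload shape are invented for the example:

```python
# Minimal REST endpoint in Flask; route and payload are illustrative only.
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/sellers", methods=["POST"])
def onboard_seller():
    data = request.get_json(force=True)
    # A real service would validate the payload and persist to MySQL/MongoDB.
    return jsonify({"id": 1, "name": data.get("name")}), 201

if __name__ == "__main__":
    app.run(port=8000)
```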
Join us to make an impact in an engaging, in-person environment!

Posted 4 weeks ago

Apply

3.0 years

0 Lacs

Mumbai Metropolitan Region

On-site

Fynd is India's largest omnichannel platform and a multi-platform tech company specialising in retail technology and products in AI, ML, big data, image editing, and the learning space. It provides a unified platform for businesses to seamlessly manage online and offline sales, store operations, inventory, and customer engagement. Serving over 2,300 brands, Fynd is at the forefront of retail technology, transforming customer experiences and business processes across various industries.

We are looking for a Fullstack JavaScript Developer responsible for the client side of our service. Your primary focus will be to implement a complete user interface in the form of a web app, with a focus on performance. Your primary duties will include creating modules and components and coupling them together into a functional app. You will work in a team with the back-end developers and communicate with the API using standard methods. A thorough understanding of all of the components of our platform and infrastructure is required.

What will you do at Fynd?
Build scalable and loosely coupled services to extend our platform
Build bulletproof API integrations with third-party APIs for various use cases
Evolve our infrastructure and add a few more nines to our overall availability
Have full autonomy and own your code: decide on the technologies and tools to deliver, as well as operate large-scale applications on AWS
Give back to the open-source community through contributions to code and blog posts
This is a startup, so everything can change as we experiment with more product improvements

Some Specific Requirements
At least 3 years of development experience
Prior experience developing and working on consumer-facing web/app products
Hands-on experience in JavaScript; exceptions can be made if you're really good at any other language and have experience building web/app-based tech products
Expertise in Node.js and experience in at least one of the following frameworks: Express.js, Koa.js, Socket.io (http://socket.io/)
Good knowledge of async programming using callbacks, promises, and async/await
Hands-on experience with frontend codebases using HTML, CSS, and AJAX
Working knowledge of MongoDB, Redis, MySQL
Good understanding of data structures, algorithms, and operating systems
You've worked with AWS services in the past and have experience with EC2, ELB, Auto Scaling, CloudFront, S3
Experience with the frontend stack (HTML, CSS) is an added advantage
You might not have experience with all the tools we use, but you can learn them given guidance and resources
Experience in Vue.js would be a plus

What do we offer?
Growth: growth knows no bounds, as we foster an environment that encourages creativity, embraces challenges, and cultivates a culture of continuous expansion. We are looking at new product lines, international markets, and brilliant people to grow even further. We teach, groom, and nurture our people to become leaders. You get to grow with a company that is growing exponentially.
Flex University: we help you upskill by organising in-house courses on important subjects.
Learning Wallet: you can also do an external course to upskill and grow; we reimburse it for you.
Culture: community and team-building activities; weekly, quarterly, and annual events/parties.
Wellness: Mediclaim policy for you + parents + spouse + kids; an experienced therapist for better mental health, improved productivity, and work-life balance.
We work from the office 5 days a week to promote collaboration and teamwork.
Join us to make an impact in an engaging, in-person environment!

Posted 4 weeks ago

Apply

2.0 years

0 Lacs

Mumbai Metropolitan Region

On-site

Fynd is India's largest omnichannel platform and a multi-platform tech company specializing in retail technology and products in AI, ML, big data, image editing, and the learning space. It provides a unified platform for businesses to seamlessly manage online and offline sales, store operations, inventory, and customer engagement. Serving over 2,300 brands, Fynd is at the forefront of retail technology, transforming customer experiences and business processes across various industries.

We're looking for an Engineer/Senior Engineer to join our Commerce Team. The Commerce Engineering Team forms the backbone of our core business. We build and iterate on our core platform, which powers everything from onboarding a seller to serving finished products to end customers across different channels, with customisation and configuration. Our team consists of generalist engineers who work on building REST APIs, internal tools, and infrastructure.

What will you do at Fynd?
Build scalable and loosely coupled services to extend our platform
Build bulletproof API integrations with third-party APIs for various use cases
Evolve our infrastructure and add a few more nines to our overall availability
Have full autonomy and own your code: decide on the technologies and tools to deliver, as well as operate large-scale applications on AWS
Give back to the open-source community through contributions to code and blog posts
This is a startup, so everything can change as we experiment with more product improvements

Some Specific Requirements
At least 2 years of development experience
Prior experience developing and working on consumer-facing web/app products
Solid experience in Python, with experience building web/app-based tech products
Experience in at least one of the following frameworks: Sanic, Django, Flask, Falcon, web2py, Twisted, Tornado
Working knowledge of MySQL, MongoDB, Redis, Aerospike
Good understanding of data structures, algorithms, and operating systems
You've worked with core AWS services in the past and have experience with EC2, ELB, Auto Scaling, CloudFront, S3, ElastiCache
Understanding of Kafka, Docker, Kubernetes
Knowledge of Solr, Elasticsearch
Attention to detail
You can dabble in frontend codebases using HTML, CSS, and JavaScript
You love doing things efficiently. At Fynd, the work you do will have a disproportionate impact on the business. We believe in systems and processes that let us scale our impact to be larger than ourselves.
You might not have experience with all the tools we use, but you can learn them given guidance and resources

What do we offer?
Growth: growth knows no bounds, as we foster an environment that encourages creativity, embraces challenges, and cultivates a culture of continuous expansion. We are looking at new product lines, international markets, and brilliant people to grow even further. We teach, groom, and nurture our people to become leaders. You get to grow with a company that is growing exponentially.
Flex University: we help you upskill by organising in-house courses on important subjects.
Learning Wallet: you can also do an external course to upskill and grow; we reimburse it for you.
Culture: community and team-building activities; weekly, quarterly, and annual events/parties.
Wellness: Mediclaim policy for you + parents + spouse + kids; an experienced therapist for better mental health, improved productivity, and work-life balance.
We work from the office 5 days a week to promote collaboration and teamwork.
Join us to make an impact in an engaging, in-person environment!

Posted 4 weeks ago

Apply

1.0 years

0 Lacs

Kochi, Kerala, India

On-site

A proactive and detail-oriented DevOps Engineer with 1 year of hands-on experience in cloud infrastructure automation, container orchestration, and CI/CD implementation. Strong practical knowledge of Linux environments, cloud-native technologies, and network security. Adept at leveraging tools like Jenkins, Ansible, and Docker to streamline deployment workflows and ensure system reliability.

Key Skills & Experience
Kubernetes (EKS): experience deploying, managing, and troubleshooting applications on AWS Elastic Kubernetes Service (EKS); familiar with Helm, autoscaling, and monitoring within Kubernetes environments.
Docker: proficient in creating, managing, and optimizing Docker containers; skilled in writing custom Dockerfiles and troubleshooting container issues.
Linux & Shell Scripting: strong expertise in Linux system administration and daily use of shell commands for automation, monitoring, and system diagnostics.
Network Security: hands-on experience configuring and managing security groups, firewalls, and IAM policies, and ensuring secure communication between services in AWS and Kubernetes environments.
Ansible: experience writing and executing playbooks for configuration management and automated provisioning of infrastructure.
Jenkins: skilled in designing and managing Jenkins pipelines for CI/CD workflows, integrating with Git, Docker, and AWS services.

AWS Cloud
EKS: core competency in managing containerized workloads.
EC2 & S3: provisioning, securing, and managing compute and storage resources.
Security Groups & IAM: implementing secure access policies and managing service-to-service communication.
Lambda: working knowledge of setting up serverless functions for event-driven automation.

Mandatory Hands-On Experience
Linux systems and advanced shell command usage
Jenkins pipeline configuration and maintenance
AWS services including EKS, EC2, S3, and Security Groups
Network security concepts and practical enforcement

A quick learner and effective problem solver, passionate about automation, scalability, and secure DevOps practices.

On-site: Kochi (Infopark). Immediate joiner required.
Maximum CTC: ₹3 LPA (three lakhs per annum)
Experience required: up to 2 years (candidates with more than 2 years of experience need not apply)
Send your resume to: hrteam@touchworldtechnology.com
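A small example of the security-group auditing this posting emphasizes, sketched with boto3 under the assumption that AWS credentials are already configured:

```python
# Flag security groups that allow SSH (port 22) from 0.0.0.0/0, a common
# network-security audit task. Assumes configured AWS credentials.
import boto3

ec2 = boto3.client("ec2")

for sg in ec2.describe_security_groups()["SecurityGroups"]:
    for perm in sg.get("IpPermissions", []):
        if perm.get("FromPort") == 22 and any(
            r.get("CidrIp") == "0.0.0.0/0" for r in perm.get("IpRanges", [])
        ):
            print(f"open SSH: {sg['GroupId']} ({sg['GroupName']})")
```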

Posted 4 weeks ago

Apply

5.0 years

0 Lacs

Ahmedabad

On-site

Join our vibrant team at Zymr as a Senior DevOps CI/CD Engineer and become a driving force behind the exciting world of continuous integration and deployment automation. We're a dynamic group dedicated to building a high-quality product while maintaining exceptional speed and efficiency. This is a fantastic opportunity to be part of our rapidly growing team.

Job Title: Sr. DevOps Engineer
Location: Ahmedabad/Pune
Experience: 5+ years
Educational Qualification: UG: BS/MS in Computer Science or other engineering/technical degree

Responsibilities:
Manage deployments to development, staging, and production:
Skillfully use GitHub to identify and resolve root causes of merge conflicts and version mismatches.
Deploy hotfixes promptly by leveraging deployment automation and scripts.
Provide guidance and approval for Ruby on Rails scripting performed by junior engineers, ensuring smooth code deployment across development environments.
Review and approve CI/CD scripting pull requests from engineers, offering valuable feedback to enhance code quality.

Ensure the smooth operation of each environment on a daily basis, promptly addressing any issues that arise:
Leverage Datadog monitoring to maintain 99.999% uptime for each development environment.
Develop strategic plans for Bash and Ruby scripting to automate health checks and enable auto-healing mechanisms when errors occur.
Implement effective autoscaling strategies to handle higher-than-usual traffic on these development environments.
Evaluate historical loads and implement autoscaling mechanisms to provide additional resources and computing power, optimizing workload performance.
Collaborate with DevOps to plan capacity and monitoring using Datadog.
Analyze developer workflows in close collaboration with team leads, attend squad standup meetings, and provide suggestions for improvement.
Harness the power of Ruby and Bash to create tools that enhance engineers' development workflow.
Script infrastructure using Terraform to facilitate the creation of infrastructure.
Leverage CI/CD to add security scanning to code pipelines.
Develop Bash and Ruby scripts to automate code deployment while incorporating robust security checks for vulnerabilities.
Enhance our CI/CD pipeline by building canary stages with CircleCI, GitHub Actions, YAML, and Bash scripting.
Integrate stress-testing mechanisms using Ruby on Rails, Python, and Bash scripting into the pipeline's stages.
Look for ways to reduce engineering toil and replace manual processes with automation!

Required: Terraform.
Nice to have: GitHub and AWS tooling (though the pipeline runs outside of AWS); Rails (other scripting languages okay).
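A minimal sketch of the health-check-and-auto-heal idea described above. The posting scripts this in Bash/Ruby; Python is used here for consistency with the other examples on this page, and the URL and service name are placeholders:

```python
# Health check with simple auto-heal: probe an endpoint a few times and
# restart the service if every probe fails. URL and unit name are placeholders.
import subprocess
import requests

HEALTH_URL = "http://localhost:3000/health"  # assumption: placeholder

def healthy() -> bool:
    try:
        return requests.get(HEALTH_URL, timeout=3).status_code == 200
    except requests.RequestException:
        return False

# Three probes, short-circuiting on the first success; heal only if all fail.
if not any(healthy() for _ in range(3)):
    subprocess.run(["systemctl", "restart", "myapp"], check=True)
```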

Posted 4 weeks ago

Apply

5.0 years

0 Lacs

Surat, Gujarat, India

On-site

Position: Technical Lead
Location: Surat, Gujarat (Onsite)

✅ Key Responsibilities

🚀 Architecture & System Design
· Define scalable, secure, and modular architectures.
· Implement high-availability patterns (circuit breakers, autoscaling, load balancing); see the sketch after this listing.
· Enforce OWASP best practices, role-based access, and GDPR/PIPL compliance.

💻 Full-Stack Development
· Oversee React Native & React.js codebases; mentor on state management (Redux/MobX).
· Architect backend services with Node.js/Express; manage real-time layers (WebSocket, Socket.io).
· Integrate third-party SDKs (streaming, ads, offerwalls, blockchain).

📈 DevOps & Reliability
· Own CI/CD pipelines and Infrastructure-as-Code (Terraform/Kubernetes).
· Drive observability (Grafana, Prometheus, ELK); implement SLOs and alerts.
· Conduct load testing, capacity planning, and performance optimization.

👥 Team Leadership & Delivery
· Mentor 5–10 engineers; lead sprint planning, code reviews, and Agile ceremonies.
· Collaborate with cross-functional teams to translate roadmaps into deliverables.
· Ensure on-time feature delivery and manage risk logs.

🔍 Innovation & Continuous Improvement
· Evaluate emerging tech (e.g., Layer-2 blockchain, edge computing).
· Improve development velocity through tools (linters, static analysis) and process optimization.

📌 What You'll Need
· 5+ years in full-stack development, 2+ years in a lead role
· Proficient in: React.js, React Native, Node.js, Express, AWS, Kubernetes
· Strong grasp of database systems (PostgreSQL, Redis, MongoDB)
· Excellent communication and problem-solving skills
· Startup or gaming experience a bonus

🎯 Bonus Skills
· Blockchain (Solidity, smart contracts), streaming protocols (RTMP/HLS)
· Experience with analytics tools (Redshift, Metabase, Looker)
· Prior exposure to monetization SDKs (PubScale, AdX)
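A minimal circuit-breaker sketch, one of the high-availability patterns listed above; thresholds and cooldown are illustrative:

```python
# Minimal circuit breaker: after `threshold` consecutive failures the circuit
# opens and calls fail fast until `cooldown` seconds pass, protecting a
# struggling downstream service from more load.
import time

class CircuitBreaker:
    def __init__(self, threshold: int = 5, cooldown: float = 30.0):
        self.threshold, self.cooldown = threshold, cooldown
        self.failures, self.opened_at = 0, None

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.cooldown:
                raise RuntimeError("circuit open: failing fast")
            self.opened_at = None  # half-open: allow one trial call
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.threshold:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0  # success closes the circuit again
        return result
```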

Posted 1 month ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Who We Are
AssetPlus is a pioneering B2B2C wealth management platform based in India. Since 2016, we have empowered mutual fund distributors to seamlessly manage retail investments across diverse financial products like mutual funds, fixed deposits, and NPS. Our innovative approach simplifies wealth management and helps our partners grow with ease.

Who We Are Looking For
We are seeking a skilled and proactive DevOps Engineer to join our growing engineering team. You will play a critical role in ensuring smooth, secure, and efficient operations across our platform. As a DevOps Engineer at AssetPlus, you will work closely with developers, product managers, and leadership to streamline deployments, manage resources, and optimize infrastructure. If you thrive on solving complex operational challenges and enabling teams to deliver at scale, this role is for you!

What Success Looks Like
You will be successful in this role if you:
Build and maintain robust monitoring and alerting systems, ensuring uptime and early detection of potential issues
Manage developer permissions securely while balancing ease of access with compliance
Drive cost and resource optimization initiatives, reducing unnecessary expenses while maintaining performance
Establish and maintain secure, scalable networking and cloud infrastructure
Create efficient, reliable deployment pipelines that minimize downtime and errors
Take ownership of MongoDB database management, ensuring optimal performance, backups, and security

Key Responsibilities
Monitoring and Alerts Setup
Design and implement monitoring systems to track application performance, health, and availability
Configure automated alerts to proactively address potential problems (see the sketch after this listing)
Permission Management for Developers
Set up and manage IAM roles, policies, and access control for developers and teams
Regularly review and audit permissions to maintain compliance and security
Cost and Resource Optimization
Analyze cloud infrastructure costs and identify areas for optimization
Implement autoscaling and right-sizing strategies for efficient resource usage
Networking and Security
Configure and maintain secure networking setups, including VPCs, subnets, and firewalls
Ensure compliance with security best practices and implement robust systems for vulnerability management
Deployment Pipelines
Build and maintain CI/CD pipelines for smooth, automated deployments
Optimize pipelines for speed, reliability, and rollback capabilities
MongoDB Management
Monitor and optimize MongoDB performance, ensuring high availability and minimal latency
Manage backups, restoration, and version upgrades securely

What We Value In You
Experience: hands-on experience in DevOps roles, with a proven track record in cloud platforms (AWS preferred)
Technical Expertise: proficiency in infrastructure as code (e.g., Terraform), CI/CD tools (e.g., Jenkins, GitHub Actions), and monitoring tools (e.g., Prometheus, Grafana)
Problem-Solving: a proactive and resourceful mindset to tackle challenges in infrastructure, deployments, and optimization
Collaboration: a team player who communicates effectively and works well with cross-functional teams
Attention to Detail: a strong commitment to security, compliance, and system integrity

What We Offer
A collaborative, innovative work environment where your contributions directly impact the success of our platform
Opportunities for growth and professional development in a fast-paced startup
Flexible work culture with the tools and resources you need to succeed

Ready to join us?
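The automated alert configuration mentioned above might look like this with boto3; the instance ID and SNS topic ARN are placeholders:

```python
# Creating a CPU-utilization alarm with boto3, matching the monitoring and
# alerting duties described above. IDs and ARNs are placeholders.
import boto3

cloudwatch = boto3.client("cloudwatch")

cloudwatch.put_metric_alarm(
    AlarmName="high-cpu-api-server",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Average",
    Period=300,               # 5-minute datapoints
    EvaluationPeriods=2,      # two breaching periods before alarming
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:ap-south-1:123456789012:ops-alerts"],
)
```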
If you are excited about building and maintaining the backbone of a high-impact wealth management platform, apply now and be a part of our journey to revolutionize the financial landscape in India.

Posted 1 month ago

Apply

5.0 years

0 Lacs

Bengaluru, Karnataka, India

Remote

🚀 What We're Building
CodeRound AI matches the top 5% of tech talent to the fastest-growing VC-funded AI startups in Silicon Valley and India. Candidates apply once and, if selected, get up to 20 remote and onsite interview opportunities. Top-tier product startups in the US, UAE, and India have hired top engineers and ML folks using CodeRound.

🧩 What You'll Do
Build and optimize our cloud infrastructure: scalable, secure, and cost-effective (mostly AWS).
Set up and manage CI/CD pipelines to ensure smooth deployment across backend, AI services, and mobile.
Containerize backend services (FastAPI, Rails) and optimize them for performance.
Implement monitoring, alerting, and logging to catch issues before users do.
Optimize database performance (Postgres, Redis) and manage backups and scaling.
Collaborate with backend, AI, and product teams to deploy new features safely and quickly.
Champion infra-as-code and automation wherever possible.

💥 Why This Is Exciting
You'll own DevOps for a high-usage, real-world AI platform, not just internal tools.
You'll work on real-time, high-stakes flows: interviews, scoring, hiring decisions.
You'll work closely with founders, ship weekly, and see the direct impact of your work.

✅ You'll Be Great At This If You
Have 2–5 years of experience as a DevOps engineer, SRE, or infrastructure engineer.
Are strong with AWS services (EC2, RDS, ECS/EKS, S3, CloudWatch).
Can write clean, reusable Terraform or CloudFormation code.
Have experience setting up CI/CD pipelines and optimizing build/release flows.
Are comfortable with Docker, Linux servers, and basic networking (VPCs, security groups).
Understand application and database scaling (horizontal/vertical).

⚡ Bonus If You
Have experience supporting AI/ML pipelines in production (fine-tuning infra, vector DBs, etc.).
Know cost-optimization tricks for cloud infra (spot instances, autoscaling groups, etc.).
Are excited to eventually build a small infra team.

Posted 1 month ago

Apply

0 years

0 Lacs

Gurugram, Haryana, India

On-site

We are seeking a highly skilled and motivated Senior Kafka Infrastructure Engineer to join our platform engineering team. This role is ideal for someone who is deeply experienced with the Apache Kafka ecosystem and passionate about building scalable, reliable, and secure streaming infrastructure.

Key Responsibilities:
Design, deploy, manage, and scale highly available Kafka clusters in production environments.
Administer Kafka components, including brokers, ZooKeeper/KRaft, topics, partitions, Schema Registry, Kafka Connect, and Kafka Streams.
Deploy Kafka on Kubernetes clusters using Strimzi operators, Helm charts, and Terraform.
Implement autoscaling, resource optimisation, network policies, and persistent storage (PVC) configurations.
Monitor Kafka health and performance using Prometheus, Grafana, JMX Exporter, and custom metrics.
Secure Kafka infrastructure with TLS, SASL, ACLs, Kubernetes secrets, and RBAC.
Automate Kafka provisioning and deployment in AWS, GCP, or Azure (preferably with EKS, GKE, or AKS).
Integrate Kafka infrastructure management into CI/CD pipelines using ArgoCD, Jenkins, etc.
Build and maintain containerised Kafka deployment workflows and release pipelines.

Required Skills and Experience:
Deep understanding of Kafka architecture and internals.
Extensive hands-on experience managing Kafka in cloud-native environments.
Proficiency with Kubernetes and container orchestration concepts.
Experience with Infrastructure as Code tools like Helm and Terraform.
Solid grasp of cloud-native security practices and authentication mechanisms.
Proven track record in automation and incident resolution.
Strong debugging, analytical, and problem-solving skills.

Soft Skills:
Proactive, ownership-driven, and automation-first mindset.
Strong verbal and written communication skills.
Comfortable working collaboratively with SREs, developers, and other cross-functional teams.
Detail-oriented and documentation-focused.
Willingness to mentor and share knowledge with peers.

Location: Hybrid (Gurgaon)
Work Hours: Aligned with USA time zones
Urgency: Must be able to join within 1 month
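For illustration, a minimal produce/consume round trip with the kafka-python client; the broker address, topic name, and consumer group are placeholders:

```python
# Producing and consuming JSON messages with kafka-python; broker, topic,
# and group names are placeholders for this sketch.
import json
from kafka import KafkaProducer, KafkaConsumer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
producer.send("events", {"type": "order_created", "id": 42})
producer.flush()

consumer = KafkaConsumer(
    "events",
    bootstrap_servers="localhost:9092",
    group_id="analytics",
    auto_offset_reset="earliest",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
)
for message in consumer:  # blocking iterator over the partition stream
    print(message.value)
    break
```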

Posted 1 month ago

Apply

5.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Role Overview
Join our dynamic team in Bangalore as a Backend + DevOps Engineer. You'll architect and scale document processing pipelines that handle thousands of financial documents daily, ensuring high availability and cost efficiency.

What You'll Do
Build scalable async processing pipelines for document classification, extraction, and validation (see the sketch after this listing)
Optimize cloud infrastructure costs while maintaining 99.9% uptime for document processing workflows
Design and implement APIs for document upload, processing status, and results retrieval
Manage Kubernetes deployments with autoscaling based on document processing load
Implement monitoring and observability for complex multistage document workflows
Optimize database performance for high-volume document metadata and processing results
Build CI/CD pipelines for safe deployment of processing algorithms and business rules

Technical Requirements
Must Have:
5+ years backend development (Python or Go)
Strong experience with async processing (Celery, Temporal, or similar)
Docker containerization and orchestration
Cloud platforms (AWS/GCP/Azure) with cost-optimization experience
API design and development (REST/GraphQL)
Database optimization (MongoDB, PostgreSQL)
Production monitoring and debugging

Nice to Have:
Kubernetes experience
Experience with document processing or ML pipelines
Infrastructure as Code (Terraform/CloudFormation)
Message queues (SQS, RabbitMQ, Kafka)
Performance optimization for high-throughput systems
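A minimal sketch of the async processing pipeline referenced above, using Celery with Redis as the broker; the task body, broker URL, and retry policy are illustrative:

```python
# Async document-processing task with Celery and automatic retries.
# Broker URL and task body are placeholders for this sketch.
from celery import Celery

app = Celery("docs", broker="redis://localhost:6379/0")

@app.task(bind=True, max_retries=3, retry_backoff=True)
def classify_document(self, doc_id: str) -> str:
    try:
        # Placeholder for the real classification/extraction step.
        return f"classified {doc_id}"
    except Exception as exc:
        # Re-enqueue with exponential backoff up to max_retries.
        raise self.retry(exc=exc)

# Enqueue from an API handler: classify_document.delay("doc-123")
```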

Posted 1 month ago

Apply

1.0 - 3.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

We are looking for Backend Engineers with 1-3 years of production experience shipping and supporting backend code. You will be part of our team, owning the real-time ingestion and analytics layer that powers customer-facing dashboards, trading tools, and research.

Responsibilities
Design, build, and scale streaming workflows that move and process multi-terabyte (and rapidly growing) datasets.
Guarantee end-to-end reliability by owning performance, fault tolerance, and cost efficiency from source to sink.
Instrument every job with tracing, structured logs, and Prometheus metrics, so every job tells you how it's doing (see the sketch after this listing).
Publish Grafana dashboards and alerts for latency, throughput, and failure rates; act on them before users notice.
Partner with DevOps to containerize workloads and automate deployments.
Automate reconciliation checks, confirm completeness, and replay late or corrected records to maintain pristine datasets.
Collaborate with stakeholders to verify data completeness, run automated reconciliation checks, and re-process late or corrected data.

Requirements
Proficient in Rust: comfortable with ownership, borrowing, async/await, cargo tooling, and profiling/optimisation.
Stream processing: have built or maintained high-throughput pipelines on NATS (ideal) or Kafka.
Deep systems engineering skills: you think about concurrency models, memory footprints, network I/O, back-pressure, and graceful degradation, and can instrument code with tracing, metrics, and logs.
ClickHouse (or similar OLAP): able to design MergeTree tables, reason about partitions and order-by keys, and optimise bulk inserts.
Cloud: have deployed containerised workloads on AWS or GCP, wired up CI/CD, and tuned instance types or autoscaling groups.
Nice to have: exposure to blockchain or high-volume financial data streams.

This job was posted by Akshay Singh from Yugen.ai.
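The metrics instrumentation referenced above, sketched with prometheus_client in Python for consistency with the other examples on this page (the posting's stack is Rust, where crates like prometheus follow the same counter/histogram pattern):

```python
# Instrumenting a pipeline stage with a counter and a latency histogram,
# exposed over HTTP for Prometheus to scrape. Metric names are illustrative.
import random
import time
from prometheus_client import Counter, Histogram, start_http_server

RECORDS = Counter("records_processed_total", "Records processed")
LATENCY = Histogram("batch_seconds", "Batch processing latency")

@LATENCY.time()  # observe wall-clock time of each batch
def process_batch(batch):
    for _ in batch:
        RECORDS.inc()

if __name__ == "__main__":
    start_http_server(8000)  # metrics exposed at :8000/metrics
    while True:
        process_batch(range(random.randint(1, 100)))
        time.sleep(1)
```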

Posted 1 month ago

Apply

2.0 years

3 - 5 Lacs

India

On-site

AWS Cloud Engineer with 2 years of extensive experience designing available, cost-efficient, fault-tolerant, and scalable distributed systems on AWS, with exposure to AWS deployment and management services. Monitors deployments across environments, debugs deployment issues, and resolves them in a timely manner to reduce downtime. Experienced with AWS Cloud and DevOps tools.

Core experience:
AWS infrastructure and services such as IAM, VPC, EC2, EBS, S3, ALB, NACLs, security groups, Auto Scaling, RDS, SNS, EFS, CloudWatch, and CloudFront; broader exposure includes CloudTrail, Route 53, SES, SQS, KMS, Lambda, ECS, EKS, ElastiCache, and Tag Editor.
Infrastructure as code with Terraform and CloudFormation.
Source code management with Git and GitHub, including branching, merging, and resolving conflicts; designing and deploying SCM processes and procedures.
CI/CD automation with Jenkins: server administration and setup, job creation, plug-in installation, parameterized builds, distributed builds, and complete build-and-delivery pipelines from pulling source code from a Git repository to deploying the end product into a Kubernetes cluster.
Configuration management with Ansible.
Docker: writing Dockerfiles to build custom images, creating containers, pulling images for deployment, and pushing images to Docker Hub.
Kubernetes: setting up clusters using EKS and kubeadm; writing YAML manifests for deployments and services for microservice applications; configuring persistent volumes (PVs) and PVCs for persistent database environments; managing Deployments, ReplicaSets, StatefulSets, and autoscaling; orchestrating the deployment, scaling, and management of Docker containers; creating networks, nodes, and pods.
Monitoring and logging: ELK for log aggregation and monitoring; implementing, maintaining, and monitoring CloudWatch alarms and SNS notifications for operational and performance metrics (CPU utilization, disk usage, etc.); Prometheus and Grafana; monitoring access and error logs in CloudWatch; setting up S3 lifecycle policies to back up data from AWS S3 and moving EC2 logs into S3.
EC2 administration: launching and configuring instances from AMIs (Linux) for development and production, managing instance images, snapshots, and volumes, increasing EBS volume capacity, creating and attaching Elastic IPs, and software installation, troubleshooting, and updates.
S3: creating and managing buckets, assigning access permissions, versioning, server access logging, and lifecycle policies on S3 buckets.
Networking: planning, building, and configuring network infrastructure within a VPC; setting up and managing VPCs and subnets and connecting different availability zones; creating and configuring Elastic Load Balancers to distribute traffic; configuring EFS for EC2 instances.
IAM: creating and managing user accounts and groups, assigning roles and policies for application access, and assigning file and directory permissions to users and groups.
Linux administration: installing and configuring Apache web server on Windows and Linux, installing packages using YUM (HTTP, HTTPS), creating files and directories, managing users and groups, and monitoring system performance (disk and CPU utilization).
Provisioning AWS resources using the AWS Management Console and the Command Line Interface (CLI); implementing and supporting cloud-based infrastructure and its solutions; managing client infrastructure on both Windows and Linux.

Technical Skills
Operating Systems: Linux, CentOS, Ubuntu, and Windows.
AWS: EC2, VPC, S3, EBS, IAM, load balancing, autoscaling, CloudFormation, CloudWatch, CloudFront, SNS, EFS, Route 53.
DevOps Tools: Git, Ansible, Chef, Docker, Jenkins, Kubernetes, Terraform.
Scripting Languages: Shell, Python.
Monitoring Tools: CloudWatch, Grafana, Prometheus.

Job Types: Full-time, Permanent, Fresher
Pay: ₹345,405.87 - ₹500,000.00 per year
Benefits: Health insurance, Provident Fund
Schedule: Day shift, Morning shift, Rotational shift
Supplemental Pay: Performance bonus, Yearly bonus
Work Location: In person
Speak with the employer: +91 8668118196
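One concrete example of the S3 lifecycle-policy work listed above, via boto3; the bucket name and transition windows are placeholders:

```python
# Setting an S3 lifecycle rule: transition objects to Glacier after 30 days
# and expire them after a year. Bucket name and windows are placeholders.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="app-logs-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-then-expire",
                "Status": "Enabled",
                "Filter": {"Prefix": "logs/"},
                "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
                "Expiration": {"Days": 365},
            }
        ]
    },
)
```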

Posted 1 month ago

Apply

40.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Role Name: Principal Data Scientist
Department Name: AI & Data Science
Role GCF: 6
Hiring Manager Name: Swaroop Suresh

ABOUT AMGEN

Amgen harnesses the best of biology and technology to fight the world’s toughest diseases and make people’s lives easier, fuller and longer. We discover, develop, manufacture and deliver innovative medicines to help millions of patients. Amgen helped establish the biotechnology industry more than 40 years ago and remains on the cutting edge of innovation, using technology and human genetic data to push beyond what’s known today.

About The Role

Role Description: We are seeking a Principal AI Platform Architect, Amgen’s most senior individual-contributor authority on building and scaling end-to-end machine-learning and generative-AI platforms. Sitting at the intersection of engineering excellence and data-science enablement, you will design the core services, infrastructure and governance controls that allow hundreds of practitioners to prototype, deploy and monitor models (classical ML, deep learning and LLMs) securely and cost-effectively. Acting as a “player-coach,” you will establish the platform strategy, define technical standards, and partner with DevOps, Security, Compliance and Product teams to deliver a frictionless, enterprise-grade AI developer experience.

Roles & Responsibilities:

Define and evangelise a multi-year AI-platform vision and reference architecture that advances Amgen’s digital-transformation, cloud-modernisation and product-delivery objectives.
Design and evolve foundational platform components (feature stores, model registry, experiment tracking, vector databases, real-time inference gateways and evaluation harnesses) using cloud-agnostic, microservice principles.
Establish modelling and algorithm-selection standards that span classical ML, tree-based ensembles, clustering, time-series, deep-learning architectures (CNNs, RNNs, transformers) and modern LLM/RAG techniques; advise product squads on choosing and operationalising the right algorithm for each use case.
Orchestrate the full delivery pipeline for AI solutions (pilot → regulated validation → production rollout → post-launch monitoring), defining stage gates, documentation and sign-off criteria that meet GxP/CSV and global privacy requirements.
Scale AI workloads globally by engineering autoscaling GPU/CPU clusters, distributed training, low-latency inference and cost-aware load balancing, maintaining <100 ms P95 latency while optimising spend.
Implement robust MLOps and release-management practices (CI/CD for models, blue-green and canary deployments, automated rollback) to ensure zero-downtime releases and auditable traceability.
Embed responsible-AI and security-by-design controls (data privacy, lineage tracking, bias monitoring, audit logging) through policy-as-code and automated guardrails.
Package reusable solution blueprints and APIs that enable product teams to consume AI capabilities consistently, cutting time-to-production by ≥50%.
Provide deep technical mentorship and architecture reviews to product squads, troubleshooting performance bottlenecks and guiding optimisation of cloud resources.
Develop TCO models and FinOps practices, negotiate enterprise contracts for cloud/AI infrastructure and deliver continuous cost-efficiency improvements.
Establish observability frameworks (metrics, distributed tracing, drift detection, SLA dashboards) to keep models performant, reliable and compliant at scale.
Track emerging technologies and regulations (serverless GPUs, confidential compute, EU AI Act) and integrate innovations that maintain Amgen’s leadership in enterprise AI.

Must-Have Skills:

5-7 years in AI/ML, data platforms or enterprise software.
Comprehensive command of machine-learning algorithms (regression, tree-based ensembles, clustering, dimensionality reduction, time-series models, deep-learning architectures such as CNNs, RNNs and transformers, and modern LLM/RAG techniques), with the judgment to choose, tune and operationalise the right method for a given business problem.
Proven track record selecting and integrating AI SaaS/PaaS offerings and building custom ML services at scale.
Expert knowledge of GenAI tooling: vector databases, RAG pipelines, prompt-engineering DSLs and agent frameworks (e.g., LangChain, Semantic Kernel).
Proficiency in Python and Java; containerisation (Docker/K8s); cloud (AWS, Azure or GCP) and modern DevOps/MLOps (GitHub Actions, Bedrock/SageMaker Pipelines).
Strong business-case skills: able to model TCO vs. NPV and present trade-offs to executives.
Exceptional stakeholder management; can translate complex technical concepts into concise, outcome-oriented narratives.

Good-to-Have Skills:

Experience in the biotechnology or pharma industry is a big plus.
Published thought leadership or conference talks on enterprise GenAI adoption.
Master’s degree in Computer Science and/or Data Science.
Familiarity with Agile methodologies and the Scaled Agile Framework (SAFe) for project delivery.

Education and Professional Certifications

Master’s degree with 10-14+ years of experience in Computer Science, IT or a related field OR Bachelor’s degree with 12-17+ years of experience in Computer Science, IT or a related field.
Certifications on GenAI/ML platforms (AWS AI, Azure AI Engineer, Google Cloud ML, etc.) are a plus.

Soft Skills:

Excellent analytical and troubleshooting skills.
Strong verbal and written communication skills.
Ability to work effectively with global, virtual teams.
High degree of initiative and self-motivation.
Ability to manage multiple priorities successfully.
Team-oriented, with a focus on achieving team goals.
Ability to learn quickly; organized and detail-oriented.
Strong presentation and public-speaking skills.

EQUAL OPPORTUNITY STATEMENT

Amgen is an Equal Opportunity employer and will consider you without regard to your race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status. We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request an accommodation.
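As a concrete flavour of the drift-detection work this role describes, here is a minimal Python sketch of a Population Stability Index (PSI) check of the kind such a platform might automate. The function name, the synthetic data and the 0.2 alert threshold are illustrative assumptions, not Amgen's actual tooling.

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """PSI between a training-time (expected) and a live (actual) sample:
    sum((a_i - e_i) * ln(a_i / e_i)) over shared histogram bins."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_counts, _ = np.histogram(expected, bins=edges)
    a_counts, _ = np.histogram(actual, bins=edges)
    eps = 1e-6  # avoids division by zero and log(0) in empty bins
    e_pct = e_counts / max(e_counts.sum(), 1) + eps
    a_pct = a_counts / max(a_counts.sum(), 1) + eps
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

if __name__ == "__main__":
    rng = np.random.default_rng(42)
    train_scores = rng.normal(0.0, 1.0, 10_000)  # stand-in for a training feature
    live_scores = rng.normal(0.3, 1.0, 10_000)   # stand-in for production traffic
    psi = population_stability_index(train_scores, live_scores)
    # A PSI above roughly 0.2 is commonly treated as material drift.
    print(f"PSI={psi:.3f}", "DRIFT ALERT" if psi > 0.2 else "ok")
```

Wired into the observability stack above, a check like this would page the owning squad or gate an automated retraining trigger.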

Posted 1 month ago

Apply

8.0 years

0 Lacs

Goregaon, Maharashtra, India

On-site

Experience: 8+ years
Location: Mumbai (Onsite)

About the Role:
We are looking for hands-on, automation-driven Cloud Engineers to join our DevOps team. You will be responsible for managing cloud infrastructure, CI/CD pipelines, and containerized deployments, and for ensuring platform stability and scalability across environments.

Key Responsibilities:
Design, build, and maintain secure and scalable infrastructure on AWS, Azure, or GCP.
Set up and manage CI/CD pipelines using tools like GitHub Actions, GitLab CI, or Jenkins.
Manage Dockerized environments and ECS, EKS, or Kubernetes clusters for microservice-based deployments.
Monitor and troubleshoot production and staging environments, ensuring uptime and performance.
Work closely with developers to streamline release cycles and automate testing, deployments, and rollback procedures.
Maintain infrastructure as code using Terraform or CloudFormation.

What We’re Looking For:
Strong knowledge of Linux system administration, networking, and cloud infrastructure (preferably AWS).
Experience with Docker, Kubernetes, Nginx, and monitoring tools like Prometheus, Grafana, or CloudWatch.
Familiarity with Git, scripting (Shell/Python), and secrets-management tools.
Ability to debug infrastructure issues, logs, and deployments across cloud-native stacks.

Bonus Points:
Certification in AWS/GCP/Azure DevOps or SysOps.
Exposure to security, cost optimization, and autoscaling setups.

Work Mode: Onsite – Mumbai

Why Join Us?
Direct ownership over production-grade infrastructure.
Build systems that support AI, web apps, APIs, and real products.
Get early visibility into architecture, security, and scalability decisions.
Clear growth track: ACE → CE → SCE → SPCE → Cloud Architect / DevOps Lead.
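To make the "automate rollback procedures" bullet concrete, here is a minimal Python sketch of a post-deploy health gate that could back an automated rollback step in a CI/CD job. The endpoint URL, check counts, and the `deployment/web` resource name are illustrative assumptions, not this team's actual stack; `kubectl rollout undo` is the standard Kubernetes rollback command.

```python
import subprocess
import sys
import time
import urllib.request

HEALTH_URL = "https://staging.example.com/healthz"  # hypothetical endpoint
CHECKS, INTERVAL_S, TIMEOUT_S = 5, 10, 3

def healthy(url: str) -> bool:
    """Treat anything other than a clean HTTP 200 as a failed probe."""
    try:
        with urllib.request.urlopen(url, timeout=TIMEOUT_S) as resp:
            return resp.status == 200
    except OSError:
        return False

def main() -> int:
    failures = sum(
        0 if healthy(HEALTH_URL) else 1
        for _ in range(CHECKS)
        if time.sleep(INTERVAL_S) is None  # pace the probes
    )
    if failures > CHECKS // 2:
        # Majority of probes failed: revert to the previous ReplicaSet.
        subprocess.run(
            ["kubectl", "rollout", "undo", "deployment/web"],  # assumed name
            check=True,
        )
        return 1
    return 0

if __name__ == "__main__":
    sys.exit(main())
```

A pipeline would run this right after `kubectl apply`, failing the stage (and alerting) whenever the rollback path fires.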

Posted 1 month ago

Apply

12.0 years

0 Lacs

India

On-site

Role Overview:
We're seeking a senior Azure Infrastructure Engineer with 8–12 years of deep hands-on experience building, deploying, and operating cloud-native infrastructure. You'll be responsible for core components such as AKS, Terraform, Docker, Helm, KEDA, HPA, Istio/service mesh, CI/CD pipelines, Azure networking, and disaster recovery.

Key Responsibilities:
Operate and troubleshoot production AKS clusters. [Primary Skill – Expertise]
Build and deploy workloads using Docker and Helm. [Secondary Skill – Need to know what to do]
Automate infrastructure provisioning with Terraform. [Primary Skill – Expertise]
Configure autoscaling using KEDA and HPA. [Primary Skill – Expertise]
Manage Istio or an equivalent service mesh (ingress, routing, mTLS). [Low-Priority Skill]
Maintain robust CI/CD pipelines (Azure DevOps/GitHub Actions). [Secondary Skill – Need to know what to do]
Handle complex Azure networking (VNet, NSG, DNS, LB, peering). [Primary Skill – Expertise]
Support and execute disaster-recovery procedures. [Primary Skill – Expertise]

Required Skills:
8–12 years in infrastructure/DevOps roles, with deep expertise in:
Azure, AKS, Docker, Terraform, Helm.
KEDA, HPA, Istio/service mesh.
CI/CD pipelines, Linux, Bash/PowerShell scripting.
Azure networking and disaster recovery.
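For context on the KEDA/HPA responsibility above: the Kubernetes Horizontal Pod Autoscaler scales on the ratio of the observed metric to its target. Below is a minimal Python sketch of the documented core formula, desired = ceil(current × observed/target), with a simple min/max clamp; real HPA behaviour adds stabilization windows and tolerances that this sketch deliberately omits.

```python
import math

def hpa_desired_replicas(current_replicas: int,
                         current_metric: float,
                         target_metric: float,
                         min_replicas: int = 1,
                         max_replicas: int = 10) -> int:
    """Core HPA rule: desired = ceil(current * current_metric / target_metric),
    clamped to the [min_replicas, max_replicas] range."""
    desired = math.ceil(current_replicas * current_metric / target_metric)
    return max(min_replicas, min(max_replicas, desired))

# Example: 4 pods averaging 90% CPU against a 50% target scale out to 8.
print(hpa_desired_replicas(4, current_metric=0.9, target_metric=0.5))  # -> 8
```

KEDA builds on this same mechanism, feeding external event-source metrics (queue depth, consumer lag, and so on) into an HPA it manages and adding scale-to-zero on top.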

Posted 1 month ago

Apply

1.0 years

25 Lacs

Kochi, Kerala, India

Remote

Experience: 1.00+ years
Salary: INR 2500000.00 / year (based on experience)
Expected Notice Period: 15 Days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-Time Permanent position (Payroll and Compliance to be managed by: Yugen AI)
(*Note: This is a requirement for one of Uplers' clients - Yugen AI)

What do you need for this opportunity?
Must-have skills required: Rust programming, AWS, GCP, Kafka, NATS, Grafana/Prometheus, Blockchain

Yugen AI is looking for:
We are looking for backend engineers with 1–3 years of production experience shipping and supporting backend code. You will be part of our team owning the real-time ingestion and analytics layer that powers customer-facing dashboards, trading tools and research.

Responsibilities
Design, build, and scale streaming workflows that move and process multi-terabyte (and rapidly growing) datasets.
Guarantee end-to-end reliability by owning performance, fault tolerance, and cost efficiency from source to sink.
Instrument every job with tracing, structured logs, and Prometheus metrics so every job tells you how it's doing.
Publish Grafana dashboards and alerts for latency, throughput, and failure rates; act on them before users notice.
Partner with DevOps to containerize workloads and automate deployments.
Collaborate with stakeholders to verify data completeness, run automated reconciliation checks, and replay late or corrected records to maintain pristine datasets.

Skills
Proficient in Rust: comfortable with ownership, borrowing, async/await, cargo tooling, and profiling/optimisation.
Stream processing: have built or maintained high-throughput pipelines on NATS (ideal) or Kafka.
Deep systems-engineering skills: you think about concurrency models, memory footprints, network I/O, back-pressure, and graceful degradation; you can instrument code with tracing, metrics, and logs.
ClickHouse (or a similar OLAP store): able to design MergeTree tables, reason about partition and order-by keys, and optimise bulk inserts.
Cloud: have deployed containerised workloads on AWS or GCP, wired up CI/CD, and tuned instance types or autoscaling groups.
Nice to have: exposure to blockchain or high-volume financial data streams.

How to apply for this opportunity?
Step 1: Click on Apply! and register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of getting shortlisted and meeting the client for the interview!

About Uplers:
Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal; depending on the assessments you clear, you can apply for them as well.)

So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
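To ground the instrumentation bullet above, here is a minimal sketch of a metrics-instrumented stream consumer. It is written in Python with the prometheus_client library purely for illustration (the role itself calls for Rust, where crates such as prometheus play the same part); the metric names, port, and the fake consume loop are assumptions.

```python
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

# Illustrative metric names; a real pipeline would namespace these per job.
RECORDS_TOTAL = Counter("ingest_records_total", "Records processed", ["outcome"])
PROCESS_SECONDS = Histogram("ingest_process_seconds", "Per-record processing time")

def process(record: bytes) -> None:
    """Stand-in for real work (parse, enrich, write to the OLAP sink)."""
    time.sleep(random.uniform(0.001, 0.01))
    if random.random() < 0.01:
        raise ValueError("corrupt record")  # simulated 1% failure rate

def consume_forever() -> None:
    while True:
        record = b"..."  # stand-in for a message pulled from NATS/Kafka
        with PROCESS_SECONDS.time():  # observes wall-clock time per record
            try:
                process(record)
                RECORDS_TOTAL.labels(outcome="ok").inc()
            except ValueError:
                RECORDS_TOTAL.labels(outcome="error").inc()

if __name__ == "__main__":
    start_http_server(9100)  # exposes /metrics for Prometheus to scrape
    consume_forever()
```

A Grafana alert on the error-outcome rate over a short window then covers the "act on them before users notice" requirement.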

Posted 1 month ago

Apply

1.0 years

25 Lacs

Greater Bhopal Area

Remote

Experience: 1.00+ years
Salary: INR 2500000.00 / year (based on experience)
Expected Notice Period: 15 Days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-Time Permanent position (Payroll and Compliance to be managed by: Yugen AI)
(*Note: This is a requirement for one of Uplers' clients - Yugen AI)

What do you need for this opportunity?
Must-have skills required: Rust programming, AWS, GCP, Kafka, NATS, Grafana/Prometheus, Blockchain

Yugen AI is looking for:
We are looking for backend engineers with 1–3 years of production experience shipping and supporting backend code. You will be part of our team owning the real-time ingestion and analytics layer that powers customer-facing dashboards, trading tools and research.

Responsibilities
Design, build, and scale streaming workflows that move and process multi-terabyte (and rapidly growing) datasets.
Guarantee end-to-end reliability by owning performance, fault tolerance, and cost efficiency from source to sink.
Instrument every job with tracing, structured logs, and Prometheus metrics so every job tells you how it's doing.
Publish Grafana dashboards and alerts for latency, throughput, and failure rates; act on them before users notice.
Partner with DevOps to containerize workloads and automate deployments.
Collaborate with stakeholders to verify data completeness, run automated reconciliation checks, and replay late or corrected records to maintain pristine datasets.

Skills
Proficient in Rust: comfortable with ownership, borrowing, async/await, cargo tooling, and profiling/optimisation.
Stream processing: have built or maintained high-throughput pipelines on NATS (ideal) or Kafka.
Deep systems-engineering skills: you think about concurrency models, memory footprints, network I/O, back-pressure, and graceful degradation; you can instrument code with tracing, metrics, and logs.
ClickHouse (or a similar OLAP store): able to design MergeTree tables, reason about partition and order-by keys, and optimise bulk inserts.
Cloud: have deployed containerised workloads on AWS or GCP, wired up CI/CD, and tuned instance types or autoscaling groups.
Nice to have: exposure to blockchain or high-volume financial data streams.

How to apply for this opportunity?
Step 1: Click on Apply! and register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of getting shortlisted and meeting the client for the interview!

About Uplers:
Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal; depending on the assessments you clear, you can apply for them as well.)

So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!

Posted 1 month ago

Apply

1.0 years

25 Lacs

Visakhapatnam, Andhra Pradesh, India

Remote

Experience: 1.00+ years
Salary: INR 2500000.00 / year (based on experience)
Expected Notice Period: 15 Days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-Time Permanent position (Payroll and Compliance to be managed by: Yugen AI)
(*Note: This is a requirement for one of Uplers' clients - Yugen AI)

What do you need for this opportunity?
Must-have skills required: Rust programming, AWS, GCP, Kafka, NATS, Grafana/Prometheus, Blockchain

Yugen AI is looking for:
We are looking for backend engineers with 1–3 years of production experience shipping and supporting backend code. You will be part of our team owning the real-time ingestion and analytics layer that powers customer-facing dashboards, trading tools and research.

Responsibilities
Design, build, and scale streaming workflows that move and process multi-terabyte (and rapidly growing) datasets.
Guarantee end-to-end reliability by owning performance, fault tolerance, and cost efficiency from source to sink.
Instrument every job with tracing, structured logs, and Prometheus metrics so every job tells you how it's doing.
Publish Grafana dashboards and alerts for latency, throughput, and failure rates; act on them before users notice.
Partner with DevOps to containerize workloads and automate deployments.
Collaborate with stakeholders to verify data completeness, run automated reconciliation checks, and replay late or corrected records to maintain pristine datasets.

Skills
Proficient in Rust: comfortable with ownership, borrowing, async/await, cargo tooling, and profiling/optimisation.
Stream processing: have built or maintained high-throughput pipelines on NATS (ideal) or Kafka.
Deep systems-engineering skills: you think about concurrency models, memory footprints, network I/O, back-pressure, and graceful degradation; you can instrument code with tracing, metrics, and logs.
ClickHouse (or a similar OLAP store): able to design MergeTree tables, reason about partition and order-by keys, and optimise bulk inserts.
Cloud: have deployed containerised workloads on AWS or GCP, wired up CI/CD, and tuned instance types or autoscaling groups.
Nice to have: exposure to blockchain or high-volume financial data streams.

How to apply for this opportunity?
Step 1: Click on Apply! and register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of getting shortlisted and meeting the client for the interview!

About Uplers:
Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal; depending on the assessments you clear, you can apply for them as well.)

So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!

Posted 1 month ago

Apply

1.0 years

25 Lacs

Indore, Madhya Pradesh, India

Remote

Experience: 1.00+ years
Salary: INR 2500000.00 / year (based on experience)
Expected Notice Period: 15 Days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-Time Permanent position (Payroll and Compliance to be managed by: Yugen AI)
(*Note: This is a requirement for one of Uplers' clients - Yugen AI)

What do you need for this opportunity?
Must-have skills required: Rust programming, AWS, GCP, Kafka, NATS, Grafana/Prometheus, Blockchain

Yugen AI is looking for:
We are looking for backend engineers with 1–3 years of production experience shipping and supporting backend code. You will be part of our team owning the real-time ingestion and analytics layer that powers customer-facing dashboards, trading tools and research.

Responsibilities
Design, build, and scale streaming workflows that move and process multi-terabyte (and rapidly growing) datasets.
Guarantee end-to-end reliability by owning performance, fault tolerance, and cost efficiency from source to sink.
Instrument every job with tracing, structured logs, and Prometheus metrics so every job tells you how it's doing.
Publish Grafana dashboards and alerts for latency, throughput, and failure rates; act on them before users notice.
Partner with DevOps to containerize workloads and automate deployments.
Collaborate with stakeholders to verify data completeness, run automated reconciliation checks, and replay late or corrected records to maintain pristine datasets.

Skills
Proficient in Rust: comfortable with ownership, borrowing, async/await, cargo tooling, and profiling/optimisation.
Stream processing: have built or maintained high-throughput pipelines on NATS (ideal) or Kafka.
Deep systems-engineering skills: you think about concurrency models, memory footprints, network I/O, back-pressure, and graceful degradation; you can instrument code with tracing, metrics, and logs.
ClickHouse (or a similar OLAP store): able to design MergeTree tables, reason about partition and order-by keys, and optimise bulk inserts.
Cloud: have deployed containerised workloads on AWS or GCP, wired up CI/CD, and tuned instance types or autoscaling groups.
Nice to have: exposure to blockchain or high-volume financial data streams.

How to apply for this opportunity?
Step 1: Click on Apply! and register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of getting shortlisted and meeting the client for the interview!

About Uplers:
Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal; depending on the assessments you clear, you can apply for them as well.)

So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!

Posted 1 month ago

Apply