
1065 Prometheus Jobs - Page 32

JobPe aggregates job listings for easy access, but applications are submitted directly on the original job portal.

10.0 - 12.0 years

30 - 37 Lacs

Bengaluru

Work from Office

We need immediate joiners or candidates serving their notice period who can join within the next 10-15 days; candidates on the bench or with an official 2-3 month notice period will not be considered. Strong working experience in design and development of RESTful APIs using Java, Spring Boot, and Spring Cloud. Technical hands-on experience to support development, automated testing, infrastructure, and operations. Fluency with relational databases or, alternatively, NoSQL databases. Excellent pull request review skills and attention to detail. Experience with streaming platforms handling real-time data at massive scale, such as Confluent Kafka. Working experience with AWS services such as EC2, ECS, RDS, and S3. Understanding of DevOps as well as experience with CI/CD pipelines. Industry experience in the Retail domain is a plus. Exposure to Agile methodology and project tools: Jira, Confluence, SharePoint. Working knowledge of Docker containers and Kubernetes. Excellent team player with the ability to work independently and as part of a team. Experience in mentoring junior developers and providing technical leadership. Familiarity with monitoring and reporting tools (Prometheus, Grafana, PagerDuty, etc.). Ability to learn, understand, and work quickly with new and emerging technologies, methodologies, and solutions in the Cloud/IT technology space. Knowledge of a front-end framework such as React or Angular and other programming languages such as JavaScript/TypeScript or Python is a plus.

Posted 1 month ago

Apply

8.0 - 13.0 years

15 - 30 Lacs

Noida, Greater Noida

Work from Office

Site & Platform Reliability Engineer. Location: Noida/Greater Noida. Organization: Tetrahed Inc. Experience: 8+ years. Work Mode: Onsite. Employment Type: Full-time. About Tetrahed Inc.: Tetrahed Inc. is a privately held IT services and consulting firm headquartered in Hyderabad with a strong global staffing presence. The company specializes in end-to-end digital transformation, offering cloud computing, AI/ML, cybersecurity, data analytics, and recruitment/staffing solutions to diverse industries worldwide. About the Role: As a Site & Platform Reliability Engineer at Tetrahed Inc., you'll be responsible for designing, automating, and operating cloud-native platforms using SRE/PRE best practices. You'll be a technical leader, engaging with clients, mentoring teams, and collaborating with major cloud and open-source ecosystems (e.g., Kubernetes, CNCF). Key Responsibilities: Technical & Architectural Leadership: Lead PoCs, architecture design, SRE kick-starts, observability, and platform modernization efforts. Engineer scalable, resilient cloud-native systems. Partner with cloud providers such as Google, AWS, Microsoft, Red Hat, and VMware. Service Delivery & Automation: Implement SRE principles, automation, infrastructure-as-code (Terraform, Ansible), and CI/CD pipelines (ArgoCD, Jenkins, Tekton). Define SLOs/SLIs, perform incident management, and ensure reliability. Coach internal and client delivery teams in reliability practices. Innovation & Thought Leadership: Contribute to open-source communities and internal knowledge sharing. Author whitepapers and blogs, or speak at industry events. Maintain hands-on technical excellence and mentor peers. Client Engagement & Trust: Conduct workshops, briefings, and strategic discussions with stakeholders. Act as a trusted advisor during modernization journeys. Mandatory Skills & Experience: Proficiency in Kubernetes (OpenShift, Tanzu, or vanilla). Strong SRE knowledge, infrastructure-as-code, and automation scripting (Python, Bash, YAML). Experience with CI/CD pipeline tools (ArgoCD, Jenkins, Tekton). Deep observability experience (Prometheus, ELK/EFK, Grafana, AppDynamics, Dynatrace). Familiarity with cloud-native networking (DNS, load balancers, reverse proxies). Expertise in microservices and container-based architectures. Excellent communication and stakeholder management. Preferred Qualifications: Bachelor's/Master's in Computer Science or Engineering. CKA certification or equivalent Kubernetes expertise. 8+ years in SI, consulting, or enterprise organizations. Familiarity with Agile/Scrum, domain-driven design, and the CNCF ecosystem. Passion for innovation, lab environments, and open source. Why Join Tetrahed? Engage with global clients and cloud hyperscalers. Drive open-source and SRE best practices. Contribute to a learning-rich, collaborative environment. Make an impact within a growing, innovative mid-size IT organization. Interested candidates, let's connect! Please share your updated CV or reach out directly: Email: manojkumar@tetrahed.com. Mobile: +91-6309124068. LinkedIn (Manoj Kumar): https://www.linkedin.com/in/manoj-kumar-54455024b/ Company Page: https://www.linkedin.com/company/tetrahedinc/
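The SLO/SLI and error-budget work this role describes can be illustrated with a minimal Python sketch; the function name and the numbers in the example are hypothetical and not part of the listing. It simply computes how much of an availability error budget remains for a rolling window.

```python
# Minimal sketch (hypothetical, not from the listing): remaining error budget
# for an availability SLO over a rolling window.

def error_budget_remaining(slo: float, total_requests: int, failed_requests: int) -> float:
    """Return the fraction of the error budget left for the window.

    slo             -- target availability, e.g. 0.999 for "three nines"
    total_requests  -- all requests observed in the window
    failed_requests -- requests counted as SLI violations
    """
    allowed_failures = (1.0 - slo) * total_requests  # budget in absolute requests
    if allowed_failures == 0:
        return 0.0
    return max(0.0, 1.0 - failed_requests / allowed_failures)


if __name__ == "__main__":
    # Example: 99.9% SLO, 1,000,000 requests, 400 failures -> 60% of budget left.
    print(f"{error_budget_remaining(0.999, 1_000_000, 400):.0%}")
```

A check like this typically feeds burn-rate alerts or a release gate: deployments slow down once the remaining budget drops below an agreed threshold.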

Posted 1 month ago

Apply

7.0 - 9.0 years

11 - 12 Lacs

Hyderabad

Work from Office

We are seeking a highly skilled DevOps Engineer to join our dynamic development team. In this role, you will be responsible for designing, developing, and maintaining both frontend and backend components of our applications using DevOps practices and associated technologies. You will collaborate with cross-functional teams to deliver robust, scalable, and high-performing software solutions that meet our business needs. The ideal candidate will have a strong background in DevOps, experience with modern frontend frameworks, and a passion for full-stack development. Requirements: Bachelor's degree in Computer Science, Engineering, or a related field. 7 to 9+ years of experience in full-stack development, with a strong focus on DevOps. DevOps with AWS Data Engineer - Roles & Responsibilities: Use AWS services such as EC2, VPC, S3, IAM, RDS, and Route 53. Automate infrastructure using Infrastructure as Code (IaC) tools such as Terraform or AWS CloudFormation. Build and maintain CI/CD pipelines using tools such as AWS CodePipeline, Jenkins, or GitLab CI/CD. Cross-functional collaboration: Automate build, test, and deployment processes for Java applications. Use Ansible, Chef, or AWS Systems Manager for managing configurations across environments. Containerize Java apps using Docker. Deploy and manage containers using Amazon ECS, EKS (Kubernetes), or Fargate. Set up monitoring and logging using Amazon CloudWatch, Prometheus + Grafana, the ELK Stack (Elasticsearch, Logstash, Kibana), and AWS X-Ray for distributed tracing. Manage access with IAM roles/policies. Use AWS Secrets Manager / Parameter Store for managing credentials. Enforce security best practices, encryption, and audits. Automate backups for databases and services using AWS Backup, RDS Snapshots, and S3 lifecycle rules. Implement Disaster Recovery (DR) strategies. Work closely with development teams to integrate DevOps practices. Document pipelines, architecture, and troubleshooting runbooks. Monitor and optimize AWS resource usage using AWS Cost Explorer, Budgets, and Savings Plans. Must-Have Skills: Experience working on Linux-based infrastructure. Excellent understanding of Ruby, Python, Perl, and Java. Configuring and managing databases such as MySQL and MongoDB. Excellent troubleshooting skills. Selecting and deploying appropriate CI/CD tools. Working knowledge of various tools, open-source technologies, and cloud services. Awareness of critical concepts in DevOps and Agile principles. Managing stakeholders and external interfaces. Setting up tools and required infrastructure. Defining and setting development, testing, release, update, and support processes for DevOps operations. Technical skills to review, verify, and validate the software code developed in the project. Interview Mode: Face-to-face for candidates residing in Hyderabad; Zoom for other states. Location: 43/A, MLA Colony, Road No. 12, Banjara Hills, 500034. Time: 2 - 4 PM.
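As an illustration of the "monitoring and logging with Prometheus + Grafana" responsibility above, here is a minimal sketch using the Python prometheus_client library; the metric names, port, and workload are illustrative assumptions, not part of the listing. It exposes custom application metrics on an HTTP endpoint that a Prometheus server can scrape and Grafana can chart.

```python
# Minimal sketch (assumes the prometheus_client package; names are illustrative):
# expose custom metrics on :8000/metrics for a Prometheus server to scrape.
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

JOBS_PROCESSED = Counter("jobs_processed", "Jobs processed by this worker")
JOB_LATENCY = Histogram("job_duration_seconds", "Time spent processing a job")

def process_job() -> None:
    with JOB_LATENCY.time():            # record the duration of each job
        time.sleep(random.uniform(0.05, 0.2))
    JOBS_PROCESSED.inc()

if __name__ == "__main__":
    start_http_server(8000)             # serves the /metrics endpoint
    while True:
        process_job()
```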

Posted 1 month ago

Apply

1.0 - 3.0 years

3 - 5 Lacs

Hyderabad

Work from Office

What you will do: In this vital role you will be responsible for developing and maintaining software applications, components, and solutions that meet business needs, and for ensuring the availability and performance of critical systems and applications. This role requires experience in, and a deep understanding of, both front-end and back-end development. The Full Stack Software Engineer will work closely with product managers, designers, and other engineers to create high-quality, scalable software solutions while automating operations, monitoring system health, and responding to incidents to minimize downtime. The Full Stack Software Engineer will also contribute to design discussions and provide guidance on technical feasibility and best practices. Roles & Responsibilities: Develop complex software projects from conception to deployment, including delivery scope, risk, and timeline. Conduct code reviews to ensure code quality and adherence to best practices. Contribute to both front-end and back-end development using cloud technology. Provide ongoing support and maintenance for the design system and applications, ensuring reliability, reuse, and scalability while meeting accessibility and quality standards. Develop innovative solutions using generative AI technologies. Create and maintain documentation on software architecture, design, deployment, disaster recovery, and operations. Identify and resolve technical challenges, software bugs, and performance issues effectively. Stay updated with the latest trends and advancements. Analyze and understand the functional and technical requirements of applications, solutions, and systems, and translate them into software architecture and design specifications. Develop and execute unit tests, integration tests, and other testing strategies to ensure the quality of the software. Work closely with cross-functional teams, including product management, stakeholders, design, and QA, to deliver high-quality software on time. Maintain detailed documentation of software designs, code, and development processes. Work on integrating with other systems and platforms to ensure seamless data flow and functionality. What we expect of you: We are all different, yet we all use our unique contributions to serve patients. Basic Qualifications: Master's degree and 1 to 3 years of experience in Computer Science, IT, or a related field; OR Bachelor's degree and 3 to 5 years of experience in Computer Science, IT, or a related field; OR Diploma and 7 to 9 years of experience in Computer Science, IT, or a related field. Must-Have Skills: Hands-on experience with various cloud services and an understanding of their pros and cons within well-architected cloud design principles. Experience with developing and maintaining design systems across teams. Hands-on experience with full-stack software development. Proficiency in programming languages such as JavaScript, Python, and SQL/NoSQL. Familiarity with frameworks such as React JS and visualization libraries. Strong problem-solving and analytical skills; ability to learn quickly; excellent communication and interpersonal skills. Experience with API integration, serverless, and microservices architecture. Experience with SQL/NoSQL databases and vector databases for large language models. Experience with website development and an understanding of website localization processes, which involve adapting content to fit cultural and linguistic contexts.
Preferred Qualifications (Good-to-Have Skills): Strong understanding of cloud platforms (e.g., AWS, GCP, Azure) and containerization technologies (e.g., Docker, Kubernetes). Experience with monitoring and logging tools (e.g., Prometheus, Grafana, Splunk). Experience with data processing tools like Hadoop, Spark, or similar. Experience with popular large language models. Experience with the LangChain or LlamaIndex frameworks for language models; experience with prompt engineering and model fine-tuning. Professional Certifications: Relevant certifications such as CISSP, AWS Developer certification, CompTIA Network+, or MCSE (preferred). Any SAFe Agile certification (preferred). Soft Skills: Excellent analytical and troubleshooting skills. Strong verbal and written communication skills. Ability to work effectively with global, virtual teams. High degree of initiative and self-motivation. Ability to manage multiple priorities successfully. Team-oriented, with a focus on achieving team goals. Strong presentation and public speaking skills.

Posted 1 month ago

Apply

6.0 - 11.0 years

20 - 25 Lacs

Hyderabad, Ahmedabad

Hybrid

Hi Aspirant, greetings from TechBlocks, a global digital product development and IT software company, Hyderabad! About us: TechBlocks is a global digital product engineering company with 16+ years of experience helping Fortune 500 enterprises and high-growth brands accelerate innovation, modernize technology, and drive digital transformation. From cloud solutions and data engineering to experience design and platform modernization, we help businesses solve complex challenges and unlock new growth opportunities. Job Title: Senior DevOps Site Reliability Engineer (SRE). Location: Hyderabad & Ahmedabad. Employment Type: Full-Time. Work Model: 3 days from office. Job Overview: We are looking for dynamic, motivated individuals to deliver exceptional solutions for the production resiliency of our systems. The role incorporates aspects of software engineering, operations, and DevOps to find efficient ways of managing and operating applications, and requires a high level of responsibility and accountability to deliver technical solutions. Summary: As a Senior SRE, you will ensure platform reliability, incident management, and performance optimization. You'll define SLIs/SLOs, contribute to robust observability practices, and drive proactive reliability engineering across services. Experience Required: 6-10 years of SRE or infrastructure engineering experience in cloud-native environments. Mandatory: Cloud: GCP (GKE, Load Balancing, VPN, IAM). Observability: Prometheus, Grafana, ELK, Datadog. Containers & Orchestration: Kubernetes, Docker. Incident Management: On-call, RCA, SLIs/SLOs. IaC: Terraform, Helm. Incident Tools: PagerDuty, OpsGenie. Nice to Have: GCP Monitoring, SkyWalking, Service Mesh, API Gateway, GCP Spanner. Scope: Drive operational excellence and platform resilience. Reduce MTTR and increase service availability. Own incident and RCA processes. Roles and Responsibilities: Define and measure Service Level Indicators (SLIs) and Service Level Objectives (SLOs), and manage error budgets across services. Lead incident management for critical production issues; drive Root Cause Analysis (RCA) and postmortems. Create and maintain runbooks and standard operating procedures for high-availability services. Design and implement observability frameworks using ELK, Prometheus, and Grafana; drive telemetry adoption. Coordinate cross-functional war-room sessions during major incidents and maintain response logs. Develop and improve automated system recovery, alert suppression, and escalation logic. Use GCP tools such as GKE, Cloud Monitoring, and Cloud Armor to improve performance and security posture. Collaborate with DevOps and Infrastructure teams to build highly available and scalable systems. Analyze performance metrics and conduct regular reliability reviews with engineering leads. Participate in capacity planning, failover testing, and resilience architecture reviews. If you are interested, please share your updated resume with kranthikt@tblocks.com. Warm Regards, Kranthi Kumar, Senior Talent Acquisition Specialist, kranthikt@tblocks.com, Contact: 8522804902. Toronto | Ahmedabad | Hyderabad | Pune. www.tblocks.com
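The SLI measurement described above can be sketched against the Prometheus HTTP API. The server URL, metric name, and PromQL query below are hypothetical assumptions used only for illustration; a real SLI would use whatever request metrics the services actually expose.

```python
# Minimal sketch (hypothetical URL and query): read an availability SLI
# from the Prometheus HTTP API, e.g. for an SLO dashboard or alert check.
import requests

PROMETHEUS_URL = "http://prometheus.example.internal:9090"  # assumed address
SLI_QUERY = (
    'sum(rate(http_requests_total{status!~"5.."}[30d]))'
    ' / sum(rate(http_requests_total[30d]))'
)

def availability_sli() -> float:
    resp = requests.get(
        f"{PROMETHEUS_URL}/api/v1/query", params={"query": SLI_QUERY}, timeout=10
    )
    resp.raise_for_status()
    result = resp.json()["data"]["result"]
    return float(result[0]["value"][1]) if result else 0.0

if __name__ == "__main__":
    print(f"30-day availability: {availability_sli():.4%}")
```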

Posted 1 month ago

Apply

12.0 - 19.0 years

17 - 30 Lacs

Hyderabad, Ahmedabad

Hybrid

Job Title: Release Manager (Tools & Infrastructure). Location: Ahmedabad & Hyderabad. Experience Level: 12+ years. Department: Engineering / DevOps. Reporting To: Head of DevOps / Engineering Director. We're looking for a hands-on Release Manager with strong DevOps and infrastructure expertise to lead software release pipelines, tooling, and automation across distributed systems. This role ensures secure, stable, and timely delivery of applications while coordinating across engineering, QA, and SRE teams. Key Responsibilities: Release & Environment Management: Plan and manage release schedules and cutovers. Oversee environment readiness, rollback strategies, and post-deployment validations. Ensure version control, CI/CD artifact management, and build integrity. Toolchain Ownership: Administer tools such as Jenkins, GitHub Actions, Bitbucket, SonarQube, Argo CD, JFrog, and Terraform. Manage Kubernetes and Helm for container orchestration. Maintain secrets via Vault and related tools. Infrastructure & Automation: Work with Cloud & DevOps teams for secure, automated deployments. Use GCP (GKE, VPC, IAM, Load Balancer, GCS) with IaC standards (Terraform, Helm). Monitoring & Stability: Implement observability tools: Prometheus, Grafana, ELK, Datadog. Monitor release health, manage incident responses, and improve via RCAs. Compliance & Coordination: Use Jira, Confluence, and ServiceNow for planning and documentation. Apply OWASP/WAF/GCP Cloud Armor standards. Align releases with Dev, QA, CloudOps, and Security teams. If interested, share your resume with: sowmya.v@acesoftlabs.com

Posted 1 month ago

Apply

6.0 - 10.0 years

8 - 12 Lacs

Pune

Remote

What You'll Do: We are looking for experienced Machine Learning Engineers with a background in software development and a deep enthusiasm for solving complex problems. You will lead a dynamic team dedicated to designing and implementing a large language model framework to power diverse applications across Avalara. Your responsibilities will span the entire development lifecycle, including conceptualization, prototyping, and delivery of the LLM platform features. You will build core agent infrastructure (A2A orchestration and MCP-driven tool discovery) so teams can launch secure, scalable agent workflows. You will report to the Senior Manager, Machine Learning. What Your Responsibilities Will Be: We are looking for engineers who can think quickly and have a background in implementation. Your responsibilities will include: Build on top of the foundational framework for supporting Large Language Model applications at Avalara. Work with LLMs such as GPT, Claude, Llama, and other Bedrock models. Leverage best practices in software development, including Continuous Integration/Continuous Deployment (CI/CD) with appropriate functional and unit testing in place. Promote innovation by researching and applying the latest technologies and methodologies in machine learning and software development. Write, review, and maintain high-quality code that meets industry standards, contributing to the project's success. Lead code review sessions, ensuring good code quality and documentation. Mentor junior engineers, encouraging a culture of collaboration. Develop and debug software, with a preference for Python, though familiarity with additional programming languages is valued and encouraged. What You'll Need to be Successful: 6+ years of experience building Machine Learning models and deploying them in production environments as part of creating solutions to complex customer problems. Proficiency working in cloud computing environments (AWS, Azure, GCP), Machine Learning frameworks, and software development best practices. Experience working with technological innovations in AI & ML (especially GenAI) and applying them. Experience with design patterns and data structures. Good analytical, design, and debugging skills. Technologies you will work with: Python, LLMs, Agents, A2A, MCP, MLflow, Docker, Kubernetes, Terraform, AWS, GitLab, Postgres, Prometheus, and Grafana.

Posted 1 month ago

Apply

5.0 - 8.0 years

6 - 9 Lacs

Pune

Remote

What You'll Do: We are looking for experienced Machine Learning Engineers with a background in software development and a deep enthusiasm for solving complex problems. You will lead a dynamic team dedicated to designing and implementing a large language model framework to power diverse applications across Avalara. Your responsibilities will span the entire development lifecycle, including conceptualization, prototyping, and delivery of the LLM platform features. You will bring a blend of technical skills in AI & Machine Learning, especially LLMs, and a deep-seated understanding of software development practices, working with a team to ensure our systems are scalable, performant, and accurate. You will report to the Senior Manager, AI/ML. What Your Responsibilities Will Be: We are looking for engineers who can think quickly and have a background in implementation. Your responsibilities will include: Build on top of the foundational framework for supporting Large Language Model applications at Avalara. Work with LLMs such as GPT, Claude, Llama, and other Bedrock models. Leverage best practices in software development, including Continuous Integration/Continuous Deployment (CI/CD) with appropriate functional and unit testing in place. Inspire creativity by researching and applying the latest technologies and methodologies in machine learning and software development. Write, review, and maintain high-quality code that meets industry standards. Lead code review sessions, ensuring good code quality and documentation. Mentor junior engineers, encouraging a culture of collaboration. Develop and debug software, with a preference for Python, though familiarity with additional programming languages is valued and encouraged. What You'll Need to be Successful: Bachelor's/Master's degree in Computer Science with 5+ years of industry experience in software development, along with experience building Machine Learning models and deploying them in production environments. Proficiency working in cloud computing environments (AWS, Azure, GCP), Machine Learning frameworks, and software development best practices. Experience working with technological innovations in AI & ML (especially GenAI). Experience with design patterns and data structures. Good analytical, design, and debugging skills. Technologies you will work with: Python, LLMs, MLflow, Docker, Kubernetes, Terraform, AWS, GitLab, Postgres, Prometheus, and Grafana.

Posted 1 month ago

Apply

5.0 - 8.0 years

0 Lacs

Noida

Work from Office

Senior Full Stack Engineer We are seeking a Senior Full Stack Engineer to design, build and scale a portfolio of cloud-native products including real-time speech-assessment tools, GenAI content services, and analytics dashboards used by customers worldwide. You will own end-to-end delivery across React/Next.js front-ends, Node/Python micro-services, and a MongoDB-centric data layer, all orchestrated in containers on Kubernetes, while championing multi-tenant SaaS best practices and modern MLOps. Role: Product & Architecture • Design multi-tenant SaaS services with isolated data planes, usage metering, and scalable tenancy patterns. • Lead MERN-driven feature work: SSR/ISR dashboards in Next.js, REST/GraphQL APIs in Node.js or FastAPI, and event-driven pipelines for AI services. • Build and integrate AI/ML & GenAI modules (speech scoring, LLM-based content generation, predictive analytics) into customer-facing workflows. DevOps & Scale • Containerise services with Docker, automate deployment via Helm/Kubernetes, and implement blue-green or canary roll-outs in CI/CD. • Establish observability for latency, throughput, model inference time, and cost-per-tenant across micro-services and ML workloads. Leadership & Collaboration • Conduct architecture reviews, mentor engineers, and promote a culture that pairs AI-generated code with rigorous human code review. • Partner with Product and Data teams to align technical designs with measurable business KPIs for AI-driven products. Required Skills & Experience • Front-End React 18, Next.js 14, TypeScript, modern CSS/Tailwind • Back-End Node 20 (Express/Nest) and Python 3.11 (FastAPI) • Databases MongoDB Atlas, aggregation pipelines, TTL/compound indexes • AI / GenAI Practical ML model integration, REST/streaming inference, prompt engineering, model fine-tuning workflows • Containerisation & Cloud Docker, Kubernetes, Helm, Terraform; production experience on AWS/GCP/Azure • SaaS at Scale Multi-tenant data isolation, per-tenant metering & rate-limits, SLA design • CI/CD & Quality GitHub Actions/GitLab CI, unit + integration testing (Jest, Pytest), E2E testing (Playwright/Cypress) Preferred Candidate Profile • Production experience with speech analytics or audio ML pipelines. • Familiarity with LLMOps (vector DBs, retrieval-augmented generation). • Terraform-driven multi-cloud deployments or FinOps optimization. • OSS contributions in MERN, Kubernetes, or AI libraries. Tech Stack & Tooling - React 18 • Next.js 14 • Node 20 • FastAPI • MongoDB Atlas • Redis • Docker • Kubernetes • Helm • Terraform • GitHub Actions • Prometheus + Grafana • OpenTelemetry • Python/Rust micro-services for ML inference
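One way to approach the per-tenant latency and cost observability mentioned above is a small FastAPI middleware that records request duration per tenant with prometheus_client. The header name, metric names, and endpoints below are assumptions for illustration, a sketch rather than the product's actual implementation.

```python
# Minimal sketch (assumed names): per-tenant request latency for a FastAPI
# service, exposed at /metrics so Prometheus/Grafana can chart cost-per-tenant.
import time

from fastapi import FastAPI, Request
from prometheus_client import Histogram, make_asgi_app

REQUEST_LATENCY = Histogram(
    "request_duration_seconds", "Request latency by tenant", ["tenant"]
)

app = FastAPI()
app.mount("/metrics", make_asgi_app())  # Prometheus scrape endpoint

@app.middleware("http")
async def record_latency(request: Request, call_next):
    tenant = request.headers.get("x-tenant-id", "unknown")  # assumed header
    start = time.perf_counter()
    response = await call_next(request)
    REQUEST_LATENCY.labels(tenant=tenant).observe(time.perf_counter() - start)
    return response

@app.get("/healthz")
async def healthz():
    return {"status": "ok"}
```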

Posted 1 month ago

Apply

1.0 - 3.0 years

3 - 7 Lacs

Thane

Work from Office

Role & responsibilities : Deploy, configure, and manage infrastructure across cloud platforms like AWS, Azure, and GCP. Automate provisioning and configuration using tools such as Terraform. Design and maintain CI/CD pipelines using Jenkins, GitLab CI, or CircleCI to streamline deployments. Build, manage, and deploy containerized applications using Docker and Kubernetes. Set up and manage monitoring systems like Prometheus and Grafana to ensure performance and reliability. Write scripts in Bash or Python to automate routine tasks and improve system efficiency. Collaborate with development and operations teams to support deployments and troubleshoot issues. Investigate and resolve technical incidents, performing root cause analysis and implementing fixes. Apply security best practices across infrastructure and deployment workflows. Maintain documentation for systems, configurations, and processes to support team collaboration. Continuously explore and adopt new tools and practices to improve DevOps workflows.
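A routine-automation script of the kind described above might, for example, check Prometheus scrape-target health via its HTTP API so unhealthy exporters surface early. The server address below is a placeholder assumption; everything else uses the standard Prometheus targets endpoint.

```python
# Minimal sketch (hypothetical address): a routine automation script that
# lists any Prometheus scrape targets currently reporting as down.
import sys

import requests

PROMETHEUS_URL = "http://prometheus.example.internal:9090"  # assumed address

def down_targets() -> list[str]:
    resp = requests.get(f"{PROMETHEUS_URL}/api/v1/targets", timeout=10)
    resp.raise_for_status()
    targets = resp.json()["data"]["activeTargets"]
    return [t["scrapeUrl"] for t in targets if t["health"] != "up"]

if __name__ == "__main__":
    down = down_targets()
    for url in down:
        print(f"DOWN: {url}")
    sys.exit(1 if down else 0)   # non-zero exit so CI or alerting can react
```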

Posted 1 month ago

Apply

2.0 - 5.0 years

1 - 6 Lacs

Noida, Hyderabad

Work from Office

We are currently seeking a GCP DevOps Engineer to join our team in Ban/Hyd/Chn/Gur/Noida, Karnataka (IN-KA), India (IN). Responsibilities: Design, implement, and manage GCP infrastructure using Infrastructure as Code (IaC) tools. Develop and maintain CI/CD pipelines to improve development workflows. Monitor system performance and ensure high availability of cloud resources. Collaborate with development teams to streamline application deployments. Maintain security best practices and compliance across the cloud environment. Automate repetitive tasks to enhance operational efficiency. Troubleshoot and resolve infrastructure-related issues in a timely manner. Document procedures, policies, and configurations for the infrastructure. Skills: Google Cloud Platform (GCP), Terraform, Ansible, CI/CD, Kubernetes, Docker, Python, Bash/Shell scripting, monitoring tools (e.g., Prometheus, Grafana), cloud security, Jenkins, Git.

Posted 1 month ago

Apply

4.0 - 7.0 years

5 - 9 Lacs

Noida

Work from Office

Proficiency in Go programming language (Golang). Solid understanding of RESTful API design and microservices architecture. Experience with SQL and NoSQL databases (e.g., PostgreSQL, MongoDB, Redis). Familiarity with container technologies (Docker, Kubernetes). Understanding of distributed systems and event-driven architecture. Version control with Git. Familiarity with CI/CD pipelines and cloud platforms (AWS, GCP, Azure). Experience with message brokers (Kafka, RabbitMQ). Knowledge of GraphQL. Exposure to performance tuning and profiling. Contributions to open-source projects or personal GitHub portfolio. Familiarity with monitoring tools (Prometheus, Grafana, ELK). Roles and Responsibilities Design, develop, and maintain backend services and APIs using Go (Golang). Write efficient, scalable, and reusable code. Collaborate with front-end developers, DevOps engineers, and product teams to deliver high-quality features. Optimize applications for performance and scalability. Develop unit and integration tests to ensure software quality. Implement security and data protection best practices. Troubleshoot and debug production issues. Participate in code reviews, architecture discussions, and continuous improvement processes.

Posted 1 month ago

Apply

1.0 - 3.0 years

10 - 15 Lacs

Pune, Bengaluru

Work from Office

Must have a minimum of 1 year of experience in SRE (CloudOps), Google Cloud Platform (GCP), and monitoring, APM, and alerting tools such as Prometheus, Grafana, ELK, New Relic, Pingdom, or PagerDuty. Hands-on experience with Kubernetes for orchestration and container management. Required Candidate Profile: Mandatory experience working in B2C product companies. Must have experience with CI/CD tools (e.g., Jenkins, GitLab CI/CD, CircleCI, TravisCI).

Posted 1 month ago

Apply

8.0 - 12.0 years

30 - 35 Lacs

Pune, Chennai

Work from Office

Mandatory Skills: SRE, DevOps, scripting (Python/Bash/Perl), automation tools (Ansible/Terraform/Puppet), AWS Cloud, Docker, Kubernetes, observability tools (Prometheus/Grafana/ELK Stack/Splunk), and CI/CD pipelines using GitLab, Jenkins, or similar tools. Please share your resume with thulasidharan.b@ltimindtree.com. Note: Only candidates with a 0-30 day notice period will be considered.

Posted 1 month ago

Apply

12.0 - 15.0 years

30 - 35 Lacs

Bengaluru

Work from Office

We are seeking a highly experienced and technically profound Cloud Application Architect to drive our cloud-first digital transformation initiatives. This pivotal role involves leading the design, development, and modernization of our enterprise application portfolio to deliver modern, scalable, secure, and business-aligned cloud-native solutions. The ideal candidate will possess a deep, hands-on technical background in application architecture, with a focus on transforming legacy systems into agile, customer-centric, and cloud-optimized experiences within either the Microsoft or Java enterprise stack. This role is critical for shaping our application landscape, ensuring robust end-to-end design, and guiding development teams through complex architectural challenges in a dynamic, cloud-first environment. Key Responsibilities As a Senior Cloud Application Architect, you will: Define Cloud-Native Application Architectures: Lead the definition, design, and implementation of comprehensive cloud-native application architectures and strategic modernization roadmaps for critical enterprise systems, primarily leveraging AWS EKS, Azure AKS, and serverless functions (e.g., AWS Lambda, Azure Functions). Own End-to-End Application Design: Hold ultimate accountability for the end-to-end application design, ensuring solutions meet stringent requirements for scalability (handling high transaction volumes), performance (low latency), robust security (integrating DevSecOps principles like SAST/DAST, Zero Trust), and high reliability (achieving stringent uptime targets). Guide Microservices/API Architecture & Containerization: Provide senior technical guidance and mentorship to multiple distributed project teams on advanced microservices and API-first design patterns, including choreography vs. orchestration, eventual consistency, and idempotent API design. Lead the adoption and implementation of Docker containerization and Kubernetes orchestration (AKS/EKS) for efficient application deployment and management. Develop Deployment & Operational Strategy: Define and enforce declarative deployment strategies (e.g., GitOps with ArgoCD/FluxCD). Design application-level disaster recovery and business continuity plans, including multi-region deployments with active-active/active-passive patterns and automated failover mechanisms. Collaborate Cross-Functionally: Collaborate extensively as a strategic partner with cross-functional teams including software developers (Java/.NET), product owners, business analysts, DevOps engineers, security specialists, and infrastructure teams. Translate complex business requirements into clear, actionable technical specifications. Lead Technical Design Sessions & Governance: Lead high-stakes technical design sessions, facilitate architecture review boards (ARB), and prepare comprehensive architectural documentation (e.g., Architecture Decision Records (ADRs), sequence diagrams, data flow diagrams) to ensure alignment, maintain architectural integrity, and govern new feature implementations. Support Build vs. Buy & Tool Selection: Actively support critical build vs. buy analyses for new functionalities. Evaluate, select, and champion various cloud services (PaaS, SaaS) and third-party tools (e.g., API Management gateways, caching solutions, message brokers) based on technical fit, business needs, and cost efficiency. Conduct and present Proof-of-Concepts (PoCs) for emerging technologies and strategic platform integrations. 
Drive DevSecOps & Observability Integration: Champion the integration of advanced DevSecOps practices, from "shift-left" security to automated CI/CD pipelines. Implement comprehensive application observability solutions (e.g., Prometheus, Grafana, Application Insights) to monitor SLOs/SLIs, diagnose performance issues, and proactively ensure system health. Optimize Application-Level Costs: Design and optimize application architectures to maximize cloud cost efficiency, leveraging serverless computing, right-sizing container workloads, and implementing intelligent autoscaling policies. Mentor & Foster Innovation: Mentor junior and mid-level developers and architects on cloud-native development best practices, application refactoring techniques, and effective utilization of cloud services. Explore and prototype the integration of emerging technologies (e.g., AI/ML, Generative AI) for intelligent features and digital workflow automation. Qualifications: Education: Bachelor's or Master's degree in Computer Science, Engineering, Information Technology, or a related field. Experience: 12+ years of progressive experience in application architecture, with a significant and demonstrable focus on cloud-native application design, digital-first transformations, and modernizing enterprise software. Application Development Background: Strong application background with hands-on experience in either the Microsoft (.NET Core, ASP.NET) or Java (Spring Boot, J2EE) enterprise/product software architecture. Cloud Platform Expertise: Proven experience delivering cloud-first solutions using public cloud platforms (AWS and Azure are preferred; GCP experience is a plus), with a deep understanding of their PaaS and IaaS offerings relevant to application development. Modern Application Design Principles: Deep knowledge and hands-on experience with microservices, API-driven development, event-driven architecture, serverless computing, and domain-driven design. Containerization & Orchestration: Expertise in Docker and Kubernetes (EKS, AKS), including deployment strategies and operational best practices for containerized applications. Agile, DevOps, & CI/CD: Strong understanding and practical experience with agile delivery models, comprehensive DevOps practices, and continuous integration/deployment (CI/CD) pipelines. Communication & Stakeholder Management: Excellent communication, presentation, and stakeholder management skills, with a proven ability to bridge technical and business perspectives and advise senior leadership. Leadership & Governance: Extensive experience in leading cross-functional development and architecture teams, managing architectural governance, and mentoring engineers in large-scale programs. Preferred Skills: Cloud Certifications: Relevant cloud certifications (e.g., AWS Certified Solutions Architect – Professional, Azure Solutions Architect Expert, Certified Kubernetes Application Developer - CKAD). Enterprise Architecture Frameworks: Knowledge of enterprise architecture frameworks (e.g., TOGAF) in the context of digital transformation. Observability Tools: Experience with comprehensive observability solutions for applications (e.g., Prometheus, Grafana, Datadog, Application Insights, distributed tracing tools like Jaeger). Security by Design: Direct experience implementing security best practices at the application architecture level (e.g., OWASP, threat modeling, secure coding standards).
AI/ML Integration: Experience with integrating analytics, personalization, and AI/ML capabilities into application architectures. Low-Code/No-Code Platforms: Exposure to low-code/no-code development tools and digital workflow automation platforms.

Posted 1 month ago

Apply

5.0 - 7.0 years

25 - 40 Lacs

Pune

Work from Office

Our world is transforming, and PTC is leading the way. Our software brings the physical and digital worlds together, enabling companies to improve operations, create better products, and empower people in all aspects of their business. Our people make all the difference in our success. Today, we are a global team of nearly 7,000, and our main objective is to create opportunities for our team members to explore, learn, and grow, all while seeing their ideas come to life and celebrating the differences that make us who we are and the work we do possible. PTC is looking for a hands-on engineer, experienced with site reliability and operations, for a leading CAD SaaS solution. As part of your job at PTC, you will: Collaborate with multiple teams to monitor and observe their cloud-deployed services. Implement automated pipelines for deployment into the cloud environment. Implement monitoring and observability solutions. Handle incidents and changes. Troubleshoot and resolve production issues. Conduct post-mortems. Handle security incidents. Job requirements: Proven experience working in Cloud DevOps and Site Reliability Engineering. Ability to develop observability solutions using Datadog, or ELK, Prometheus, and Grafana. Great communication skills, written and verbal. Strong hands-on skills to support security in a cloud environment. Experience and knowledge in cloud architecture reviews, SaaS processes, and handling security incidents. Advantage: knowledge of and experience with Azure. Why PTC? Life at PTC is about more than working with today’s most cutting-edge technologies to transform the physical world. It’s about showing up as you are and working alongside some of today’s most talented industry leaders to transform the world around you. If you share our passion for problem-solving through innovation, you’ll likely become just as passionate about the PTC experience as we are. Are you ready to explore your next career move with us? Website: https://www.ptc.com LinkedIn: https://www.linkedin.com/company/ptcinc/ Facebook Page: https://www.facebook.com/ptc.inc/ Twitter: @LifeatPTC, @PTC Instagram: ptc_inc Hashtag: #lifeatPTC We respect the privacy rights of individuals and are committed to handling Personal Information responsibly and in accordance with all applicable privacy and data protection laws. Review our Privacy Policy here.

Posted 1 month ago

Apply

5.0 - 9.0 years

11 - 12 Lacs

Hyderabad

Work from Office

We are seeking a highly skilled DevOps Engineer to join our dynamic development team. In this role, you will be responsible for designing, developing, and maintaining both frontend and backend components of our applications using DevOps practices and associated technologies. You will collaborate with cross-functional teams to deliver robust, scalable, and high-performing software solutions that meet our business needs. The ideal candidate will have a strong background in DevOps, experience with modern frontend frameworks, and a passion for full-stack development. Requirements: Bachelor's degree in Computer Science, Engineering, or a related field. 5 to 9+ years of experience in full-stack development, with a strong focus on DevOps. DevOps with AWS Data Engineer - Roles & Responsibilities: Use AWS services such as EC2, VPC, S3, IAM, RDS, and Route 53. Automate infrastructure using Infrastructure as Code (IaC) tools such as Terraform or AWS CloudFormation. Build and maintain CI/CD pipelines using tools such as AWS CodePipeline, Jenkins, or GitLab CI/CD. Cross-functional collaboration: Automate build, test, and deployment processes for Java applications. Use Ansible, Chef, or AWS Systems Manager for managing configurations across environments. Containerize Java apps using Docker. Deploy and manage containers using Amazon ECS, EKS (Kubernetes), or Fargate. Set up monitoring and logging using Amazon CloudWatch, Prometheus + Grafana, the ELK Stack (Elasticsearch, Logstash, Kibana), and AWS X-Ray for distributed tracing. Manage access with IAM roles/policies. Use AWS Secrets Manager / Parameter Store for managing credentials. Enforce security best practices, encryption, and audits. Automate backups for databases and services using AWS Backup, RDS Snapshots, and S3 lifecycle rules. Implement Disaster Recovery (DR) strategies. Work closely with development teams to integrate DevOps practices. Document pipelines, architecture, and troubleshooting runbooks. Monitor and optimize AWS resource usage using AWS Cost Explorer, Budgets, and Savings Plans. Must-Have Skills: Experience working on Linux-based infrastructure. Excellent understanding of Ruby, Python, Perl, and Java. Configuring and managing databases such as MySQL and MongoDB. Excellent troubleshooting skills. Selecting and deploying appropriate CI/CD tools. Working knowledge of various tools, open-source technologies, and cloud services. Awareness of critical concepts in DevOps and Agile principles. Managing stakeholders and external interfaces. Setting up tools and required infrastructure. Defining and setting development, testing, release, update, and support processes for DevOps operations. Technical skills to review, verify, and validate the software code developed in the project. Interview Mode: Face-to-face for candidates residing in Hyderabad; Zoom for other states. Location: 43/A, MLA Colony, Road No. 12, Banjara Hills, 500034. Time: 2 - 4 PM.

Posted 1 month ago

Apply

5.0 - 10.0 years

9 - 14 Lacs

Bengaluru

Work from Office

Key skills and expertise: 5+ years of experience working with Microsoft Power Platform and Dynamics, and with highly scalable and reliable servers. Spend considerable time on production management, including incident and problem management, capacity management, monitoring, event management, change management, and plant hygiene. Troubleshooting issues across the entire technology stack: hardware, software, application, and network. Participating in on-call rotation and periodic maintenance calls with specialists from other time zones. Proactively identifying and addressing system reliability risks. Working closely with development teams to design, build, and maintain systems from a reliability, stability, and resiliency perspective. Identifying and driving opportunities to improve automation for our platforms; scoping and creating automation for deployment, management, and visibility of our services. Representing the RPE organization in design reviews and operations readiness exercises for new and existing products/services. Technical Skills: Enterprise tools such as Prometheus, Grafana, Splunk, and Apica. UNIX/Linux support and cloud-based services. Ansible, GitHub, or any automation/configuration/release management tools. Automation experience with scripting languages such as Python, Bash, Perl, or Ruby (one of these is sufficient). Awareness of, and ability to reason about, modern software and systems architecture, including load balancing, databases, queueing, caching, distributed systems failure modes, microservices, cloud, etc. Experience with Azure networks, Service Bus, Azure Virtual Machines, and Azure SQL will be an advantage.

Posted 1 month ago

Apply

6.0 - 10.0 years

8 - 12 Lacs

Bengaluru

Work from Office

About The Role: Job Title: ELK & Grafana Architect. Design, implement, and optimize ELK solutions to meet data analytics and search requirements. Collaborate with development and operations teams to enhance logging capabilities. Implement and configure components of the Elastic Stack, including Filebeat, Metricbeat, Winlogbeat, Logstash, and Kibana. Create and maintain comprehensive documentation for Elastic Stack configurations and processes. Ensure seamless integration between various Elastic Stack components. Develop and maintain advanced Kibana dashboards and visualizations. Design and implement solutions for centralized logs, infrastructure health metrics, and distributed tracing for different applications. Implement Grafana for visualization and monitoring, including Prometheus and Loki for metrics and logs management. Build detailed technical designs related to monitoring as part of complex projects. Ensure engagement with customers and deliver business value. Requirements: 6+ years of experience as an ELK Architect/Elasticsearch Architect. Hands-on experience with Prometheus, Loki, OpenTelemetry, and Azure Monitor. Experience with data pipelines and redirecting Prometheus metrics. Proficiency in scripting and programming languages such as Python, Ansible, and Bash. Familiarity with CI/CD deployment pipelines (Ansible, Git). Strong knowledge of performance monitoring, metrics, capacity planning, and management. Excellent communication skills with the ability to articulate technical details to different audiences. Experience with application onboarding, capturing requirements, understanding data sources, and architecture diagrams. Experience with OpenTelemetry monitoring and logging solutions. Competency Building and Branding: Ensure completion of necessary trainings and certifications. Develop Proofs of Concept (PoCs), case studies, demos, etc. for new growth areas based on market and customer research. Develop and present Wipro's point of view on solution design and architecture by writing white papers, blogs, etc. Attain market referenceability and recognition through top analyst rankings, client testimonials, and partner credits. Be the voice of Wipro's thought leadership by speaking in internal and external forums. Mentor developers, designers, and junior architects in the project for their further career development and enhancement. Contribute to the architecture practice by conducting selection interviews, etc.
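Centralized logging of the kind described above often starts with services emitting structured JSON logs that a shipper such as Filebeat or Logstash can forward into Elasticsearch and Kibana. The sketch below is illustrative only, with a hypothetical service name, and uses only the Python standard library.

```python
# Minimal sketch (illustrative only): emit structured JSON logs to stdout so a
# shipper such as Filebeat/Logstash can forward them into Elasticsearch/Kibana.
import datetime
import json
import logging
import sys

class JsonFormatter(logging.Formatter):
    def format(self, record: logging.LogRecord) -> str:
        # One JSON document per line, ready for a log shipper to collect.
        return json.dumps({
            "@timestamp": datetime.datetime.fromtimestamp(record.created).isoformat(),
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        })

handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(JsonFormatter())
logging.basicConfig(level=logging.INFO, handlers=[handler])

log = logging.getLogger("checkout-service")   # hypothetical service name
log.info("order placed")
```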

Posted 1 month ago

Apply

6.0 - 10.0 years

13 - 17 Lacs

Mumbai, Pune

Work from Office

Design Containerized & cloud-native Micro services Architecture Plan & Deploy Modern Application Platforms & Cloud Native Platforms Good understanding of AGILE process & methodology Plan & Implement Solutions & best practices for Process Automation, Security, Alerting & Monitoring, and Availability solutions Should have good understanding of Infrastructure-as-code deployments Plan & design CI/CD pipelines across multiple environments Support and work alongside a cross-functional engineering team on the latest technologies Iterate on best practices to increase the quality & velocity of deployments Sustain and improve the process of knowledge sharing throughout the engineering team Keep updated on modern technologies & trends, and advocate the benefits Should possess good team management skills Ability to drive goals / milestones, while valuing & maintaining a strong attention to detail Excellent Judgement, Analytical & problem-solving skills Excellent in communication skills Experience maintaining and deploying highly-available, fault-tolerant systems at scale Practical experience with containerization and clustering (Kubernetes/OpenShift/Rancher/Tanzu/GKE/AKS/EKS etc) Version control system experience (e.g. Git, SVN) Experience implementing CI/CD (e.g. Jenkins, TravisCI) Experience with configuration management tools (e.g. Ansible, Chef) Experience with infrastructure-as-code (e.g. Terraform, Cloud formation) Expertise with AWS (e.g. IAM, EC2, VPC, ELB, ALB, Autoscaling, Lambda) Container Registry Solutions (Harbor, JFrog, Quay etc) Operational (e.g. HA/Backups) NoSQL experience (e.g. Cassandra, MongoDB, Redis) Good understanding on Kubernetes Networking & Security best practices Monitoring Tools like DataDog, or any other open source tool like Prometheus, Nagios Load Balancer Knowledge (AVI Networks, NGINX) Location: Pune / Mumbai [Work from Office]

Posted 1 month ago

Apply

5.0 - 8.0 years

7 - 10 Lacs

Pune

Work from Office

Role Purpose: The purpose of the role is to support process delivery by ensuring daily performance of the Production Specialists, resolving technical escalations, and developing technical capability within the Production Specialists. At least 3 years of experience in Java/J2EE, REST APIs, and RDBMS. At least 2 years of work experience in ReactJS. Ready for an individual contributor role and has done something similar in the last 6 months. Fair understanding of microservices and cloud (Azure preferred). Practical knowledge of Object-Oriented Programming concepts and design patterns. Experience in implementation of microservices, service-oriented architecture, and multi-tier application platforms. Good knowledge of JPA and SQL (preferably Oracle SQL). Experience working with RESTful Web Services. Hands-on experience in tracing applications in a distributed/microservices environment using modern tools (Grafana, Prometheus, Splunk, Zipkin, Elasticsearch, Kibana, Logstash, or similar). Build people capability to ensure operational excellence and maintain superior customer service levels for the existing account/client. Mentor and guide Production Specialists on improving technical knowledge. Collate trainings to be conducted as triage to bridge the skill gaps identified through interviews with the Production Specialists. Develop and conduct trainings (triages) within products for Production Specialists as per target. Inform the client about the triages being conducted. Undertake product trainings to stay current with product features, changes, and updates. Enroll in product-specific and any other trainings per client requirements/recommendations. Identify and document the most common problems and recommend appropriate resolutions to the team. Update job knowledge by participating in self-learning opportunities and maintaining personal networks. Performance Parameters: 1. Process: number of cases resolved per day, compliance to process and quality standards, meeting process-level SLAs, Pulse score, customer feedback, NSAT/ESAT. 2. Team Management: productivity, efficiency, absenteeism. 3. Capability Development: triages completed, Technical Test performance. Mandatory Skills: .NET.

Posted 1 month ago

Apply

12.0 - 17.0 years

16 - 20 Lacs

Chennai

Work from Office

As a Principal Technical Specialist, you will design, develop, implement software solutions, working with technologies such as Spring Boot, MongoDB, RabbitMQ, and Kafka. You will troubleshoot and debug existing applications, optimize performance, and collaborate with cross-functional teams to deliver high-quality solutions in IoT, Telecom, or WiFi domains. You have: Graduate or Postgraduate in Engineering stream with 12+ years of experience in high-end Java-based applications in a cloud environment. Expertise in coding with Java 8, Spring Boot, and its advanced features. Hands-on experience with MongoDB or NoSQL databases. Proficiency with Docker, Kubernetes, and cloud environments such as RHEL and OpenShift. Expertise in troubleshooting JVM issues and optimizing performance at the application level. Experience with tools like Hazelcast, Eureka, Grafana, Prometheus, and Minio for scalability and monitoring. Experience in designing and modeling application performance and scalability, particularly in distributed systems. Familiarity with emergency case handling and swift problem restoration within the SW Care process. It would be nice if you also had: Understanding of cloud-native application design principles, microservices, and service orchestration. Knowledge of Agile practices to foster collaboration and continuous integration. Design and implement software solutions, enhancing knowledge of Spring Boot, Java 8 features, and NoSQL databases like MongoDB. Improve expertise in messaging queues and distributed systems, including RabbitMQ, Kafka, and VerneMQ. Optimize application performance and scalability through troubleshooting and performance modeling. Learn advanced troubleshooting techniques for JVM, Docker, Kubernetes, and cloud environments such as RHEL/OpenShift. Work with a variety of tools, such as Hazelcast, Eureka, Grafana, and Prometheus, gaining insights into monitoring and cloud infrastructure. Collaborate with developers, product owners, and QA teams to improve communication and teamwork skills. Enhance the ability to solve complex problems by applying analytical thinking and judgment in a production environment. Stay up-to-date with emerging software trends and technologies to continuously innovate and improve solutions.

Posted 1 month ago

Apply

6.0 - 10.0 years

11 - 12 Lacs

Hyderabad

Work from Office

We are seeking a highly skilled DevOps Engineer to join our dynamic development team. In this role, you will be responsible for designing, developing, and maintaining both frontend and backend components of our applications using DevOps practices and associated technologies. You will collaborate with cross-functional teams to deliver robust, scalable, and high-performing software solutions that meet our business needs. The ideal candidate will have a strong background in DevOps, experience with modern frontend frameworks, and a passion for full-stack development. Requirements: Bachelor's degree in Computer Science, Engineering, or a related field. 6 to 10+ years of experience in full-stack development, with a strong focus on DevOps. DevOps with AWS Data Engineer - Roles & Responsibilities: Use AWS services such as EC2, VPC, S3, IAM, RDS, and Route 53. Automate infrastructure using Infrastructure as Code (IaC) tools such as Terraform or AWS CloudFormation. Build and maintain CI/CD pipelines using tools such as AWS CodePipeline, Jenkins, or GitLab CI/CD. Cross-functional collaboration: Automate build, test, and deployment processes for Java applications. Use Ansible, Chef, or AWS Systems Manager for managing configurations across environments. Containerize Java apps using Docker. Deploy and manage containers using Amazon ECS, EKS (Kubernetes), or Fargate. Set up monitoring and logging using Amazon CloudWatch, Prometheus + Grafana, the ELK Stack (Elasticsearch, Logstash, Kibana), and AWS X-Ray for distributed tracing. Manage access with IAM roles/policies. Use AWS Secrets Manager / Parameter Store for managing credentials. Enforce security best practices, encryption, and audits. Automate backups for databases and services using AWS Backup, RDS Snapshots, and S3 lifecycle rules. Implement Disaster Recovery (DR) strategies. Work closely with development teams to integrate DevOps practices. Document pipelines, architecture, and troubleshooting runbooks. Monitor and optimize AWS resource usage using AWS Cost Explorer, Budgets, and Savings Plans. Must-Have Skills: Experience working on Linux-based infrastructure. Excellent understanding of Ruby, Python, Perl, and Java. Configuring and managing databases such as MySQL and MongoDB. Excellent troubleshooting skills. Selecting and deploying appropriate CI/CD tools. Working knowledge of various tools, open-source technologies, and cloud services. Awareness of critical concepts in DevOps and Agile principles. Managing stakeholders and external interfaces. Setting up tools and required infrastructure. Defining and setting development, testing, release, update, and support processes for DevOps operations. Technical skills to review, verify, and validate the software code developed in the project. Interview Mode: Face-to-face for candidates residing in Hyderabad; Zoom for other states. Location: 43/A, MLA Colony, Road No. 12, Banjara Hills, 500034. Time: 2 - 4 PM.

Posted 1 month ago

Apply

5.0 - 10.0 years

4 - 7 Lacs

Bengaluru

Work from Office

[Qualifications] BS in Computer Science or equivalent work experience. 5+ years of relevant development experience. [Primary Skills] Experienced in Java. Experience with NoSQL, Docker, Kubernetes, Prometheus, Consul, Elasticsearch, Kibana, and other CNCF technologies. Knowledge of Linux and Bash. Skilled in crafting distributed systems. Understanding of concepts such as concurrency, parallelism, and event-driven architecture. Knowledge of web technologies including REST and gRPC. Intermediate knowledge of version control tools such as Git and GitLab. Keywords: Java, NoSQL, Docker, Kubernetes, Prometheus, Consul, Elasticsearch, Kibana. Python is nice to have.

Posted 1 month ago

Apply

12.0 - 17.0 years

10 - 15 Lacs

Hyderabad

Work from Office

The Site Reliability Engineer is a critical role in cloud-based projects. An SRE works with the development squads to build platform and infrastructure management/provisioning automation and service monitoring, using the same methods used in software development to support application development. SREs create a bridge between development and operations by applying a software engineering mindset to system administration topics. They split their time between operations/on-call duties and developing systems and software that help increase site reliability and performance. Required education: Bachelor's Degree. Preferred education: Master's Degree. Required technical and professional expertise: Overall 12+ years of experience required. Good exposure to operational aspects (monitoring, automation, remediation), with monitoring tools such as New Relic, Prometheus, ELK, distributed tracing, APM, AppDynamics, etc. Troubleshooting and documenting root cause analysis, and automating incident remediation. Understands the architecture and data model with an SRE mindset. Platform architecture and engineering: ability to design and architect a cloud platform that can meet client SLAs/NFRs such as availability and system performance. The SRE will define the environment provisioning framework, identify potential performance bottlenecks, and design a cloud platform. Preferred technical and professional experience: Effectively communicate with business and technical team members. Creative problem-solving skills and superb communication skills. Telecom domain experience is an added plus.

Posted 1 month ago

Apply