Lead AI Engineer

Experience: 5 - 10 years

Salary: 15 - 20 Lacs

Posted: 1 day ago | Platform: Naukri

Work Mode: Work from Office

Job Type: Full Time

Job Description

YOUR IMPACT

AI Systems Engineer

What The Role Offers

  • Be part of an enterprise AI transformation team shaping the future of LLM-driven applications.
  • Work with cutting-edge technologies in AI orchestration, RAG, and multi-agent systems.
  • Opportunity to architect scalable, secure, and context-aware AI systems deployed across global enterprise environments.
  • Collaborative environment fostering continuous learning and innovation in Generative AI systems engineering.
  • Architect, implement, and optimize enterprise-grade RAG pipelines covering data ingestion, embedding creation, and vector-based retrieval.
  • Design, build, and orchestrate multi-agent workflows using frameworks such as LangGraph, Crew AI, or the AI Development Kit (ADK) for collaborative task automation.
  • Engineer prompts and contextual templates to enhance LLM performance, accuracy, and domain adaptability.
  • Integrate and manage vector databases (pgvector, Milvus, Weaviate, Pinecone) for semantic search and hybrid retrieval.
  • Develop and maintain data pipelines for structured and unstructured data using SQL and NoSQL systems.
  • Expose RAG workflows through APIs using FastAPI or Flask, ensuring high reliability and performance (see the sketch after this list).
  • Containerize, deploy, and scale AI microservices using Docker, Kubernetes, and Helm within enterprise-grade environments.
  • Implement CI/CD automation pipelines via GitLab or similar tools to streamline builds, testing, and deployments.
  • Collaborate with cross-functional teams (Data, ML, DevOps, Product) to integrate retrieval, reasoning, and generation into end-to-end enterprise systems.
  • Monitor and enhance AI system observability using Prometheus, Grafana, and OpenTelemetry for real-time performance and reliability tracking.
  • Integrate LLMs with enterprise data sources and knowledge graphs to deliver contextually rich, domain-specific outputs.
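
To make the RAG-over-FastAPI responsibility concrete, here is a minimal sketch of a retrieval endpoint backed by pgvector. The table name, column size, embedding model, and connection string are illustrative assumptions, not details taken from this posting.

```python
# Minimal sketch: a FastAPI retrieval endpoint over a pgvector-backed store.
# Assumes a Postgres table `documents(id, content, embedding vector(384))`
# populated by a separate ingestion job, and a local sentence-transformers
# embedding model. All names here are illustrative, not from the posting.
import psycopg2
from fastapi import FastAPI
from pydantic import BaseModel
from sentence_transformers import SentenceTransformer

app = FastAPI()
model = SentenceTransformer("all-MiniLM-L6-v2")  # 384-dim embeddings


class Query(BaseModel):
    question: str
    top_k: int = 5


@app.post("/retrieve")
def retrieve(query: Query) -> dict:
    # Embed the question and format the vector as a pgvector literal.
    embedding = model.encode(query.question).tolist()
    vector_literal = "[" + ",".join(f"{x:.6f}" for x in embedding) + "]"

    conn = psycopg2.connect("dbname=rag user=rag password=rag host=localhost")
    try:
        with conn.cursor() as cur:
            # Cosine-distance ordering via the pgvector `<=>` operator.
            cur.execute(
                """
                SELECT id, content
                FROM documents
                ORDER BY embedding <=> %s::vector
                LIMIT %s
                """,
                (vector_literal, query.top_k),
            )
            rows = cur.fetchall()
    finally:
        conn.close()

    # A downstream generation step would stitch these chunks into the
    # LLM prompt as retrieval context.
    return {"chunks": [{"id": r[0], "content": r[1]} for r in rows]}
```

Run it with uvicorn (for example `uvicorn app:app` if the file is saved as app.py) and POST a JSON body such as {"question": "...", "top_k": 3}; in an enterprise deployment the same endpoint would typically sit behind an API gateway and feed its chunks into the generation step.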

What You Need To Succeed

  • Education: Bachelor's or Master's degree in Computer Science, Artificial Intelligence, or a related technical discipline.
  • Experience: 5 - 10 years in AI/ML system development, deployment, and optimization within enterprise or large-scale environments.
  • Deep understanding of Retrieval-Augmented Generation (RAG) architecture and hybrid retrieval mechanisms.
  • Proficiency in Python with hands-on expertise in FastAPI, Flask, and REST API design.
  • Strong experience with vector databases (pgvector, Milvus, Weaviate, Pinecone).
  • Proficiency in prompt engineering and context engineering for LLMs.
  • Hands-on experience with containerization (Docker) and orchestration (Kubernetes, Helm) in production-grade deployments.
  • Experience with CI/CD automation using GitLab, Jenkins, or equivalent tools.
  • Familiarity with LangChain, LangGraph, Google ADK, or similar frameworks for LLM-based orchestration (see the sketch after this list).
  • Knowledge of AI observability, logging, and reliability engineering principles.
  • Understanding of enterprise data governance, security, and scalability in AI systems.
  • Proven track record of building and maintaining production-grade AI applications with measurable business impact.
  • Experience in fine-tuning or parameter-efficient tuning (PEFT/LoRA) of open-source LLMs.
  • Familiarity with open-source model hosting, LLM governance frameworks, and model evaluation practices.
  • Knowledge of multi-agent system design and Agent-to-Agent (A2A) communication frameworks.
  • Exposure to LLMOps platforms such as LangSmith, Weights & Biases, or Kubeflow.
  • Experience with cloud-based AI infrastructure (AWS SageMaker, Azure OpenAI, GCP Vertex AI).
  • Working understanding of distributed systems, API gateway management, and service mesh architectures.
  • Strong analytical and problem-solving mindset with attention to detail.
  • Effective communicator with the ability to collaborate across technical and business teams.
  • Self-motivated, proactive, and capable of driving end-to-end ownership of AI system delivery.
  • Passion for innovation in LLM orchestration, retrieval systems, and enterprise AI solutions.
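
As a concrete illustration of the LangGraph-style orchestration mentioned above, here is a minimal sketch of a two-node workflow (retrieve, then generate). The state schema, node names, and placeholder retrieval/generation functions are assumptions made for illustration; a real system would plug in the vector store and LLM client described elsewhere in this posting.

```python
# Minimal LangGraph sketch: a linear retrieve -> generate workflow.
# The state schema and both node bodies are illustrative placeholders.
from typing import List, TypedDict

from langgraph.graph import END, StateGraph


class AgentState(TypedDict):
    question: str
    documents: List[str]
    answer: str


def retrieve(state: AgentState) -> dict:
    # Placeholder retrieval step; in practice this would query the
    # vector store (e.g. the pgvector endpoint sketched earlier).
    return {"documents": ["example chunk 1", "example chunk 2"]}


def generate(state: AgentState) -> dict:
    # Placeholder generation step; in practice this would call an LLM
    # with the retrieved chunks injected into the prompt as context.
    context = "\n".join(state["documents"])
    return {"answer": f"Draft answer to '{state['question']}' using:\n{context}"}


graph = StateGraph(AgentState)
graph.add_node("retrieve", retrieve)
graph.add_node("generate", generate)
graph.set_entry_point("retrieve")
graph.add_edge("retrieve", "generate")
graph.add_edge("generate", END)
workflow = graph.compile()

if __name__ == "__main__":
    result = workflow.invoke({"question": "What does the pipeline return?",
                              "documents": [], "answer": ""})
    print(result["answer"])
```

Multi-agent patterns (planner/worker/critic roles, Agent-to-Agent hand-offs) follow the same shape: more nodes, conditional edges, and shared state rather than a single linear chain.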

Opentext

Software Development

Waterloo, ON
