4 LLM Model Jobs

JobPe aggregates these listings for easy access; applications are submitted directly on the original job portal.

3.0 - 8.0 years

12 - 22 Lacs

Pune, Bengaluru

Work from Office


Gen AI + AWS
Skills: Gen AI, AWS, AI Platform, Python, TensorFlow, LangChain, LLM Model, ML Development, AWS Lambda, AWS Bedrock, Data Extraction
Experience: 3-9 years in Gen AI / AWS
Package: up to 25 LPA
Location: Bengaluru, Pune
Notice period: immediate to 30 days
Contact: Ritika, 8587970773, ritikab.imaginators@gmail.com
Required candidate profile: Gen AI + AWS mandatory. Skills: Gen AI, AWS, AI Platform, Python, TensorFlow, LangChain, LLM Model, ML Development, AWS Lambda, AWS Bedrock, AWS AI, Data Extraction, Gen AI Solution, RPA Tool.

Posted 2 weeks ago

Apply

4.0 - 9.0 years

12 - 22 Lacs

Pune, Bengaluru

Work from Office


Gen AI + AWS
Skills: Gen AI, AWS, AI Platform, Python, TensorFlow, LangChain, LLM Model, ML Development, AWS Lambda, AWS Bedrock, Data Extraction
Experience: 4-12 years in Gen AI / AWS
Package: up to 25 LPA
Location: Bengaluru, Pune
Notice period: immediate to 30 days
Contact: Ritika, 8587970773, ritikab.imaginators@gmail.com
Required candidate profile: Gen AI + AWS mandatory. Skills: Gen AI, AWS, AI Platform, Python, TensorFlow, LangChain, LLM Model, ML Development, AWS Lambda, AWS Bedrock, AWS AI, Data Extraction, Gen AI Solution, RPA Tool.

Posted 2 weeks ago

Apply

4.0 - 9.0 years

18 - 33 Lacs

Bengaluru

Hybrid


Role Brief: We are seeking a skilled and experienced MLOps Engineer to join our team and drive the operationalization of machine learning models and pipelines at scale. The ideal candidate will be responsible for automating, deploying, monitoring, and maintaining AI/ML solutions. Turning prototypes into robust, customer-ready solutions while mitigating risks such as production pipeline failures will be a primary focus. This role requires expertise in infrastructure management, CI/CD pipelines, cloud services, and model orchestration, along with collaboration with cross-functional teams to ensure seamless deployment into diverse customer environments.

Primary Responsibilities:
- Strategize and implement scalable infrastructure for ML and LLM model pipelines using tools like Kubernetes, Docker, and cloud services such as AWS (e.g., AWS Batch, Fargate, Bedrock).
- Manage auto-scaling mechanisms to handle varying workloads and ensure high availability of REST APIs.
- Automate CI/CD pipelines and Lambda functions for model testing, deployment, and updates, reducing manual errors and improving efficiency.
- Use Amazon SageMaker Pipelines for end-to-end ML workflow automation and optimize orchestration with Step Functions (a minimal pipeline sketch follows below).
- Set up reproducible workflows for data preparation, model training, and deployment.
- Provision and optimize cloud resources (e.g., GPUs, memory) to meet the computational demands of large models such as those used in RAG systems.
- Use Infrastructure-as-Code (IaC) tools like Terraform to standardize provisioning and deployments.
- Automate retraining workflows to keep models updated as data evolves.
- Work closely with data scientists, ML engineers, and DevOps teams to integrate models into production environments.
- Implement monitoring tools to track model performance and detect issues like drift or degradation in real time, including dashboards with real-time alerts for pipeline failures or performance issues, and implement model observability frameworks.

Required Skills:
- Education: any engineering degree (BE/BTech/ME/MTech).
- Minimum 4 years of experience with AWS services such as Lambda, Bedrock, Batch with Fargate, RDS (PostgreSQL), DynamoDB, SQS, CloudWatch, API Gateway, and SageMaker.
- Expertise in containerization (Docker and Kubernetes) for consistent deployments, and orchestration tools like Airflow, ArgoCD, Kubeflow, etc.
- Experience with CI/CD tools (e.g., Jenkins, GitLab CI/CD) and IaC tools like Terraform.
- Knowledge of ML frameworks (e.g., PyTorch, TensorFlow) to understand model requirements during deployment.
- Experience with REST API frameworks such as FastAPI and Flask.
- Familiarity with model observability tools like Evidently, NannyML, and Phoenix, and monitoring tools such as Grafana.
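The SageMaker Pipelines responsibility above is, at its core, about codifying the train-and-deploy workflow so it can be re-run automatically. The sketch below is illustrative only and is not part of the listing: it assumes a hypothetical training image, S3 locations, and execution role, and wires a single parameterized training step into a pipeline definition.

```python
# Illustrative sketch (not from the listing): a minimal SageMaker Pipeline
# with one parameterized training step. The image URI, S3 paths, and IAM role
# are hypothetical placeholders.
import sagemaker
from sagemaker.estimator import Estimator
from sagemaker.inputs import TrainingInput
from sagemaker.workflow.parameters import ParameterString
from sagemaker.workflow.pipeline import Pipeline
from sagemaker.workflow.steps import TrainingStep

session = sagemaker.Session()
role = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"  # hypothetical

# Pipeline parameter so the same definition can be re-run on fresh data
# (e.g., from an automated retraining trigger).
train_data = ParameterString(
    name="TrainDataS3Uri",
    default_value="s3://example-bucket/llm-finetune/train/",  # hypothetical
)

estimator = Estimator(
    image_uri="123456789012.dkr.ecr.us-east-1.amazonaws.com/train:latest",  # hypothetical
    role=role,
    instance_count=1,
    instance_type="ml.g5.2xlarge",
    output_path="s3://example-bucket/llm-finetune/artifacts/",  # hypothetical
    sagemaker_session=session,
)

train_step = TrainingStep(
    name="TrainModel",
    estimator=estimator,
    inputs={"train": TrainingInput(s3_data=train_data)},
)

pipeline = Pipeline(
    name="llm-training-pipeline",
    parameters=[train_data],
    steps=[train_step],
)
pipeline.upsert(role_arn=role)  # create or update the pipeline definition
# pipeline.start()              # start an execution, e.g. from CI/CD or EventBridge
```

In practice the upsert and start calls would typically sit behind the CI/CD pipeline the listing mentions, so a code change or retraining trigger redeploys the workflow without manual steps.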

Posted 3 weeks ago

Apply

1 - 6 years

7 - 14 Lacs

Hyderabad

Work from Office


Position - AI Engineer

As an AI Engineer, you will design, implement, and optimize machine learning models and AI systems to solve complex problems. You will work closely with cross-functional teams to integrate AI solutions into our products and services, ensuring scalability and efficiency.

Key Responsibilities:
- Application Development: Design and develop AI-powered applications using state-of-the-art LLM models and generative AI techniques. Implement scalable solutions that integrate LLM-powered tools into existing workflows or standalone products.
- Model Optimization: Fine-tune pre-trained LLM models to meet specific application requirements. Optimize model performance for real-time and high-throughput environments.
- LLMOps Implementation: Develop and maintain pipelines for model deployment, monitoring, and retraining. Set up robust systems for model performance monitoring and diagnostics. Ensure reliable operations through analytics and insights into model behavior.
- Vector Databases and Data Management: Utilize vector databases for efficient storage and retrieval of embeddings. Integrate databases with LLM applications to enhance query and recommendation systems (a minimal retrieval sketch follows below).
- Collaboration and Innovation: Work closely with cross-functional teams, including product managers, data scientists, and software engineers. Stay up-to-date with advancements in generative AI and LLM technologies to drive innovation.

Skills and Experience:
- 3+ years of experience in AI/ML development, with a focus on generative AI and LLMs.
- Proficiency in programming languages such as Python and frameworks like PyTorch or TensorFlow.
- Hands-on experience in fine-tuning and deploying LLM models (e.g., GPT, BERT).
- Familiarity with LLMOps practices, including pipeline automation, monitoring, and analytics.
- Experience with vector databases (e.g., Pinecone, Weaviate, or similar).
- Strong knowledge of natural language processing (NLP) and machine learning principles.

You should certainly apply if you have:
- Understanding of MLOps principles and cloud platforms (AWS, GCP, Azure).
- Familiarity with prompt engineering and reinforcement learning from human feedback (RLHF).
- Experience in building real-time applications powered by generative AI.
- Knowledge of distributed systems and scalable architectures.
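The vector-database responsibility above reduces to a store-and-retrieve pattern over embeddings. The sketch below is illustrative and not part of the listing: a plain NumPy cosine-similarity search stands in for a managed vector database such as Pinecone or Weaviate, and the embed function is a hypothetical placeholder for a real embedding model.

```python
# Illustrative sketch (not from the listing): embedding storage and retrieval
# with NumPy cosine similarity standing in for a vector database such as
# Pinecone or Weaviate. embed() is a hypothetical placeholder.
import hashlib
import numpy as np

def embed(text: str, dim: int = 384) -> np.ndarray:
    """Placeholder for a real embedding model; returns a unit-norm vector."""
    seed = int(hashlib.md5(text.encode()).hexdigest(), 16) % (2 ** 32)
    vec = np.random.default_rng(seed).standard_normal(dim)
    return vec / np.linalg.norm(vec)

# "Index": store normalized embeddings alongside their source documents.
documents = [
    "Refund policy: items can be returned within 30 days.",
    "Shipping: orders are dispatched within 2 business days.",
    "Support: contact us via in-app chat for account issues.",
]
index = np.vstack([embed(doc) for doc in documents])

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k documents whose embeddings are closest to the query."""
    scores = index @ embed(query)          # cosine similarity (unit-norm vectors)
    top = np.argsort(scores)[::-1][:k]
    return [documents[i] for i in top]

# Retrieved passages are then placed into the LLM prompt (RAG-style).
question = "How long do I have to return an item?"
context = "\n".join(retrieve(question))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
print(prompt)
```

Swapping the NumPy index for the vector database named in the listing changes only the storage and query calls; the embed, store, retrieve, prompt flow stays the same.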

Posted 1 month ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.
