2.0 - 4.0 years
12 - 15 Lacs
Pune
Work from Office
Lead and scale Django backend features, mentor 2 juniors, manage deployments, and ensure best practices. Expert in Django, PostgreSQL, Celery, Redis, Docker, CI/CD, and vector DBs. Own architecture, code quality, and production stability.
Posted 16 hours ago
3.0 - 7.0 years
0 Lacs
Karnataka
On-site
We are looking for a talented and passionate RAG (Retrieval-Augmented Generation) Engineer with strong Python development skills to join our AI/ML team in Bengaluru, India. You will work on cutting-edge NLP solutions that integrate information retrieval techniques with large language models (LLMs). The ideal candidate will have experience with vector databases, LLM frameworks, and Python-based backend development.

In this position, your responsibilities will include designing and implementing RAG pipelines that combine retrieval mechanisms with language models, developing efficient and scalable Python code for LLM-based applications, and integrating with vector databases such as Pinecone, FAISS, and Weaviate. You will fine-tune and evaluate the performance of LLMs using various prompt engineering and retrieval strategies, collaborating with ML engineers, data scientists, and product teams to deliver high-quality AI-powered features. Additionally, you will optimize system performance and ensure the reliability of RAG-based applications.

To excel in this role, you must possess strong proficiency in Python and experience building backend services/APIs, along with a solid understanding of NLP concepts, information retrieval, and LLMs. Hands-on experience with at least one vector database, familiarity with Hugging Face Transformers, LangChain, and LLM APIs, and experience in prompt engineering, document chunking, and embedding techniques are essential. Good knowledge of REST APIs, JSON, and data pipelines is required. Preferred qualifications include a Bachelor's or Master's degree in Computer Science, Data Science, or a related field; experience with cloud platforms such as AWS, GCP, or Azure; exposure to tools like Docker, FastAPI, or Flask; and an understanding of data security and privacy in AI applications.
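For illustration only (not part of the posting), the retrieval step of such a RAG pipeline can be sketched in a few lines of Python with FAISS; the embedding dimension, sample chunks, and the placeholder embed() function are assumptions standing in for a real embedding model and document source:

```python
import numpy as np
import faiss  # pip install faiss-cpu

EMBEDDING_DIM = 384  # assumed size; depends on the embedding model chosen


def embed(texts):
    """Placeholder: replace with a real embedding model (e.g. a sentence-transformer)."""
    rng = np.random.default_rng(0)
    return rng.random((len(texts), EMBEDDING_DIM), dtype=np.float32)


# Index a small corpus of document chunks.
chunks = [
    "Refund policy: refunds are issued within 7 days.",
    "Shipping: orders ship in 2-3 business days.",
]
index = faiss.IndexFlatL2(EMBEDDING_DIM)
index.add(embed(chunks))

# Retrieve the chunks closest to a user query; these would then be passed
# to the LLM as grounding context in the RAG prompt.
query_vec = embed(["How long do refunds take?"])
distances, ids = index.search(query_vec, 2)
retrieved = [chunks[i] for i in ids[0]]
print(retrieved)
```

With real embeddings, the same index/search calls apply; a managed store such as Pinecone or Weaviate would replace the in-memory FAISS index for larger corpora.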
Posted 23 hours ago
4.0 - 8.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
As a Python Developer within our Information Technology department, your primary responsibility will be to leverage your expertise in Artificial Intelligence (AI), Machine Learning (ML), and Generative AI. We are seeking a candidate with hands-on experience in GPT-4, transformer models, and deep learning frameworks, along with a deep understanding of model fine-tuning, deployment, and inference.

Your key responsibilities will include designing, developing, and maintaining Python applications tailored towards AI/ML and generative AI. You will build and refine transformer-based models such as GPT, BERT, and T5 for various NLP and generative tasks. Working with extensive datasets for training and evaluation will be a crucial aspect of your role. You will also implement model inference pipelines and scalable APIs using FastAPI, Flask, or similar technologies. Collaborating closely with data scientists and ML engineers will be essential in creating end-to-end AI solutions, and staying current with the latest research and advancements in generative AI and ML is imperative for this position.

From a technical standpoint, you should demonstrate strong proficiency in Python and its relevant libraries such as NumPy, Pandas, and Scikit-learn. With 7+ years of experience in AI/ML development, hands-on familiarity with transformer-based models, particularly GPT-4, LLMs, or diffusion models, is required. Experience with frameworks like Hugging Face Transformers, OpenAI API, TensorFlow, PyTorch, or JAX is highly desirable, and expertise in deploying models using Docker, Kubernetes, or cloud platforms like AWS, GCP, or Azure will be advantageous. Strong problem-solving and algorithmic thinking are crucial for this role. Familiarity with prompt engineering, fine-tuning, and reinforcement learning with human feedback (RLHF) would be a valuable asset. Contributions to open-source AI/ML projects, experience with vector databases, building AI chatbots, copilots, or creative content generators, and knowledge of MLOps and model monitoring will be considered added advantages.

In terms of educational qualifications, a Bachelor's degree in Science (B.Sc), Technology (B.Tech), or Computer Applications (BCA) is required; a Master's degree (M.Sc, M.Tech, or MCA) would be an added benefit.
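As a rough illustration of the "scalable inference API" part of this role, a minimal FastAPI endpoint might look like the sketch below; the request schema and the stubbed run_model() call are assumptions for illustration, not the employer's actual service:

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()


class GenerateRequest(BaseModel):
    prompt: str
    max_tokens: int = 128


def run_model(prompt: str, max_tokens: int) -> str:
    """Stub: in practice this would call a loaded transformer model or an LLM API."""
    return f"(generated continuation of: {prompt[:40]}...)"


@app.post("/generate")
def generate(req: GenerateRequest):
    # Inference sits behind a single endpoint so it can be scaled
    # horizontally (e.g. multiple replicas behind a load balancer).
    return {"completion": run_model(req.prompt, req.max_tokens)}
```

Assuming the file is named app.py, it can be served locally with `uvicorn app:app`.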
Posted 1 day ago
8.0 - 10.0 years
0 Lacs
Noida, Uttar Pradesh, India
Remote
Senior Manager - Senior Data Scientist (NLP & Generative AI)
Location: PAN India / Remote
Employment Type: Full-time

About the Role
We are seeking a highly experienced senior data scientist with 8+ years of expertise in machine learning, focusing on NLP, Generative AI, and advanced LLM ecosystems. This role demands leadership in designing and deploying scalable AI systems that leverage the latest advancements such as Google ADK, Agent Engine, and the Gemini LLM. You will spearhead the building of real-time inference pipelines and agentic AI solutions that power complex, multi-user applications with cutting-edge technology.

Key Responsibilities
Lead the architecture, development, and deployment of scalable machine learning and AI systems centered on real-time LLM inference for concurrent users.
Design, implement, and manage agentic AI frameworks leveraging Google ADK, LangGraph, or custom-built agents.
Integrate foundation models (GPT, LLaMA, Claude, Gemini) and fine-tune them for domain-specific intelligent applications.
Build robust MLOps pipelines for end-to-end lifecycle management of models: training, testing, deployment, and monitoring.
Collaborate with DevOps teams to deploy scalable serving infrastructures using containerization (Docker), orchestration (Kubernetes), and cloud platforms.
Drive innovation by adopting new AI capabilities and tools, such as Google Gemini, to enhance AI model performance and interaction quality.
Partner cross-functionally to understand traffic patterns and design AI systems that handle real-world scale and complexity.

Required Skills & Qualifications
Bachelor's or Master's degree in Computer Science, AI, Machine Learning, or related fields.
7+ years in ML engineering, applied AI, or senior data scientist roles.
Strong programming expertise in Python and frameworks including PyTorch, TensorFlow, and Hugging Face Transformers.
Deep experience with NLP, Transformer models, and generative AI techniques.
Practical knowledge of LLM inference scaling with tools like vLLM, Groq, Triton Inference Server, and Google ADK.
Hands-on experience deploying AI models to concurrent users with high throughput and low latency.
Skilled in cloud environments (AWS, GCP, Azure) and container orchestration (Docker, Kubernetes).
Familiarity with vector databases (FAISS, Pinecone, Weaviate) and retrieval-augmented generation (RAG).
Experience with agentic AI using ADK, LangChain, LangGraph, and Agent Engine.

Preferred Qualifications
Experience with Google Gemini and other advanced LLM innovations.
Contributions to open-source AI/ML projects or participation in applied AI research.
Knowledge of hardware acceleration and GPU/TPU-based inference optimization.
Exposure to event-driven architectures or streaming pipelines (Kafka, Redis).
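For illustration only, the experiment-tracking slice of the MLOps lifecycle described above can be sketched with MLflow; the experiment name, parameters, and metric values below are illustrative assumptions, not anything specified in the posting:

```python
import mlflow

# Track one training run: experiment name, parameters, and metrics are
# all placeholders chosen for this sketch.
mlflow.set_experiment("llm-finetune-demo")

with mlflow.start_run(run_name="lora-r8-lr2e-4"):
    mlflow.log_param("base_model", "llama-3-8b")
    mlflow.log_param("learning_rate", 2e-4)

    # Log a per-epoch training curve (values are made up for the sketch).
    for epoch, loss in enumerate([1.92, 1.41, 1.18], start=1):
        mlflow.log_metric("train_loss", loss, step=epoch)

    mlflow.log_metric("eval_accuracy", 0.87)
```

Runs logged this way can then be compared in the MLflow UI and wired into deployment and monitoring stages of a fuller pipeline.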
Posted 1 week ago
5.0 - 7.0 years
0 Lacs
Noida, Uttar Pradesh, India
Remote
Lead Assistant Manager - Data Scientist (NLP & Generative AI)
Location: PAN India / Remote
Employment Type: Full-time

About the Role
We are looking for a motivated Data Scientist with 5+ years of experience in machine learning and data science, focusing on NLP and Generative AI. You will contribute to the design, development, and deployment of AI solutions centered on Large Language Models (LLMs) and agentic AI technologies, including Google ADK, Agent Engine, and Gemini. This role involves working closely with senior leadership to build scalable, real-time inference systems and intelligent applications.

Key Responsibilities
Lead the architecture, development, and deployment of scalable machine learning and AI systems centered on real-time LLM inference for concurrent users.
Design, implement, and manage agentic AI frameworks leveraging Google ADK, LangGraph, or custom-built agents.
Integrate foundation models (GPT, LLaMA, Claude, Gemini) and fine-tune them for domain-specific intelligent applications.
Build robust MLOps pipelines for end-to-end lifecycle management of models: training, testing, deployment, and monitoring.
Collaborate with DevOps teams to deploy scalable serving infrastructures using containerization (Docker), orchestration (Kubernetes), and cloud platforms.
Drive innovation by adopting new AI capabilities and tools, such as Google Gemini, to enhance AI model performance and interaction quality.
Partner cross-functionally to understand traffic patterns and design AI systems that handle real-world scale and complexity.

Required Skills & Qualifications
Bachelor's or Master's degree in Computer Science, AI, Machine Learning, or related fields.
5+ years in ML engineering, applied AI, or data scientist roles.
Strong programming expertise in Python and frameworks including PyTorch, TensorFlow, and Hugging Face Transformers.
Deep experience with NLP, Transformer models, and generative AI techniques.
Hands-on experience deploying AI models to concurrent users with high throughput and low latency.
Skilled in cloud environments (AWS, GCP, Azure) and container orchestration (Docker, Kubernetes).
Familiarity with vector databases (FAISS, Pinecone, Weaviate) and retrieval-augmented generation (RAG).
Experience with agentic AI using ADK, LangChain, LangGraph, and Agent Engine.

Preferred Qualifications
Experience with Google Gemini and other advanced LLM innovations.
Contributions to open-source AI/ML projects or participation in applied AI research.
Knowledge of hardware acceleration and GPU/TPU-based inference optimization.
Posted 1 week ago
7.0 - 12.0 years
0 - 0 Lacs
Indore, Bengaluru
Work from Office
Required Skills & Experience:
4+ years of experience in penetration testing, red teaming, or offensive security.
1+ years working with AI/ML or LLM-based systems.
Deep familiarity with LLM architectures (e.g., GPT, Claude, Mistral, LLaMA) and pipelines (e.g., LangChain, Haystack, RAG-as-a-Service).
Strong understanding of embedding models, vector databases (Pinecone, Weaviate, FAISS), and API-based model deployments.
Experience with adversarial ML, secure inference, and data integrity in training pipelines.
Experience with red team infrastructure and tooling such as Cobalt Strike, Mythic, Sliver, Covenant, and custom payload development.
Proficient in scripting languages such as Python, PowerShell, Bash, or Go.
Posted 1 week ago
3.0 - 8.0 years
8 - 18 Lacs
Mumbai, Mumbai (All Areas)
Work from Office
About Us
We are a cutting-edge AI innovation company developing intelligent agents that transform the way businesses operate. Our mission is to push the boundaries of what's possible with large language models (LLMs), retrieval-augmented generation (RAG), and multi-agent orchestration. We work with top-tier clients across industries to deliver agentic systems that solve real-world business problems with speed, scale, and intelligence.

Role Overview
We're looking for an AI Agent Engineer to design, build, and refine advanced autonomous agents that operate across complex environments. You'll work at the intersection of technical architecture, AI-driven solution consulting, and system design, collaborating closely with business leaders, developers, and product teams to create production-ready, intelligent systems. This is an ideal role for engineers who are passionate about the future of AI, thrive on experimentation, and want to shape the next generation of automation through LLMs and multi-agent workflows.

Key Responsibilities
Core Engineering
Architect and implement end-to-end AI agents using LangGraph, AutoGen, CrewAI, and other multi-agent frameworks (e.g., MCP, A2A).
Design, iterate, and optimize prompts to support reliable and accurate agent performance in business-critical use cases.
Integrate leading LLMs (GPT-4, Claude, Gemini, open-source alternatives) into real-time workflows and internal systems.
Implement RAG architectures using vector databases such as Qdrant, Chroma, or Weaviate to ground agent responses in relevant context.
Connect agents to APIs, external tools, document stores, and operational platforms to support intelligent decision-making.

Collaboration & Consulting
Translate client and internal requirements into AI-first system designs and architecture.
Consult with product managers, sales engineers, and data teams to align AI solutions with business priorities and feasibility.
Support implementation through technical documentation, design reviews, and hands-on problem-solving.

Innovation & Enablement
Lead test-and-learn initiatives and proof-of-concepts to validate agent performance and business value.
Stay current on rapid developments in the LLM and multi-agent ecosystem and drive adoption of new capabilities.
Contribute to the internal AI platform with tools, patterns, and reusable components to accelerate development.
Provide training and support to technical and non-technical stakeholders to drive adoption and governance.

Qualifications
Must-Have:
3+ years of experience in AI/ML engineering or intelligent system development.
Strong Python programming skills, with hands-on experience in prompt engineering and LLM workflows.
Experience with frameworks such as LangGraph, CrewAI, AutoGen, LangChain, or similar agent development tools.
Proficiency in implementing RAG architectures and working with vector databases (e.g., Qdrant, Chroma, Weaviate).
Integration experience with APIs, databases, and frontend or workflow tools.
Demonstrated success in consulting, technical sales, or AI solution architecture.
Awareness of AI safety, compliance, and responsible development practices.

Nice-to-Have:
Familiarity with orchestration tools like n8n, Replit, or low-code AI automation platforms.
Experience in enterprise domains such as insurance, healthcare, legal tech, or customer service.
Exposure to multi-modal systems (text + vision) or knowledge graphs.
MLOps or AI infrastructure experience in cloud environments (AWS, Azure, GCP).

You'll Thrive in This Role If You:
Are energized by building systems that operate autonomously and adaptively in real-world scenarios.
Can quickly move from ideation to implementation with a test-and-learn mindset.
Stay at the forefront of LLM advancements and understand how to apply them to business problems.
Communicate effectively across disciplines and help bridge product, engineering, and customer value.
Thrive in a fast-paced, experimental environment that balances deep technical rigor with user impact.

Why Join Us
Work on frontier problems in AI agent design and autonomous systems.
Collaborate with top-tier clients and industry-leading experts.
Flexible work culture built around autonomy, innovation, and continuous learning.
Competitive compensation and opportunities for high-impact contributions.
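As a rough, framework-agnostic sketch of the agent pattern this listing describes (not tied to LangGraph, CrewAI, or AutoGen), a minimal tool-calling loop in plain Python could look like this; the tools and the stubbed call_llm() responses are illustrative assumptions:

```python
import json

# Registry of tools the agent may call; both are stand-ins for real API
# or database integrations.
TOOLS = {
    "lookup_order": lambda order_id: {"order_id": order_id, "status": "shipped"},
    "todays_date": lambda: "2024-01-01",
}


def call_llm(messages):
    """Stub for an LLM call. A real implementation would send `messages` to a
    hosted or open-source model and get back either a tool request (JSON) or
    a plain-text final answer."""
    if any(m["role"] == "tool" for m in messages):
        return "Your order A-42 has shipped."
    return json.dumps({"tool": "lookup_order", "args": {"order_id": "A-42"}})


def run_agent(user_message, max_steps=5):
    messages = [{"role": "user", "content": user_message}]
    for _ in range(max_steps):
        reply = call_llm(messages)
        try:
            action = json.loads(reply)        # tool request?
        except json.JSONDecodeError:
            return reply                      # plain text -> final answer
        result = TOOLS[action["tool"]](**action.get("args", {}))
        messages.append({"role": "tool", "content": json.dumps(result)})
    return "Stopped after max_steps without a final answer."


print(run_agent("Where is my order A-42?"))
```

Real agent frameworks add planning, memory, and guardrails on top of this basic observe-act loop.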
Posted 1 week ago
10.0 - 17.0 years
35 - 65 Lacs
Bengaluru
Hybrid
Role Overview:
As Principal Data Engineer, you will drive the architecture and technical direction for MontyCloud's next-generation data and knowledge platforms, enabling intelligent automation, advanced analytics, and AI-driven products for a wide range of users. You will play a pivotal role in shaping the data foundation for AI-driven systems, ensuring our platform is robust, scalable, and ready to support state-of-the-art AI workflows. You will also lead efforts to maintain stringent data security standards, safeguarding sensitive information throughout data pipelines and platforms.

Key Responsibilities:
Architect and optimize scalable data platforms that support advanced analytics, AI/ML capabilities, and unified knowledge access.
Lead the design and implementation of high-throughput data pipelines and data lakes for both batch and real-time workloads.
Set technical standards for data modeling, data quality, metadata management, and lineage tracking, with a strong focus on AI-readiness.
Design and implement secure, extensible data connectors and frameworks for integrating customer-provided data streams.
Build robust systems for processing and contextualizing data, including reconstructing event timelines and enabling higher-order intelligence.
Partner with data scientists, ML engineers, and cross-functional stakeholders to operationalize data for machine learning and AI-driven insights.
Evaluate and adopt best-in-class tools from the modern AI data stack (e.g., feature stores, orchestration frameworks, vector databases, ML pipelines).
Drive innovation and continuous improvement in data engineering practices, data governance, and automation.
Provide mentorship and technical leadership to the broader engineering team.
Champion security, compliance, and privacy best practices in multi-tenant, cloud-native environments.

Desired Skills
Must Have
Deep expertise in cloud-native data engineering (AWS preferred), including large-scale data lakes, warehouses, and event-driven/data streaming architectures.
Hands-on experience building and maintaining data pipelines with modern frameworks (e.g., Spark, Kafka, Airflow, dbt).
Strong track record of enabling AI/ML workflows, including data preparation, feature engineering, and ML pipeline operationalization.
Familiarity with modern AI/ML data stack components such as feature stores (e.g., Feast), vector databases (e.g., Pinecone, Weaviate), orchestration tools (e.g., Airflow, Prefect), and MLOps tools (e.g., MLflow, Tecton).
Experience working with modern open table formats such as Apache Iceberg, Delta Lake, or Hudi for scalable data lake and lakehouse architectures.
Experience implementing data privacy frameworks such as GDPR and supporting data anonymization for diverse use cases.
Strong understanding of data privacy, RBAC, encryption, and compliance in multi-tenant platforms.

Good to Have
Experience with metadata management, semantic layers, or knowledge-graph architectures.
Exposure to SaaS and multi-cloud environments serving both internal and external consumers.
Background in supporting AI agents or AI-driven automation in production environments.
Experience processing high-volume cloud infrastructure telemetry, including AWS CloudTrail, CloudWatch logs, and other event-driven data sources, to support real-time monitoring, anomaly detection, and operational analytics.

Experience
10+ years of experience in data engineering, distributed systems, or related fields.

Education
Bachelor's or Master's degree in Computer Science, Engineering, or a related field (preferred).
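For illustration only, the orchestration piece mentioned above (Airflow is one of the named frameworks) can be sketched as a minimal two-task DAG; the DAG id, schedule, and task logic are assumptions, and the sketch assumes a recent Airflow 2.x installation:

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    # Placeholder: pull raw events from a source system (e.g. CloudTrail exports).
    print("extracting raw events")


def transform():
    # Placeholder: clean, deduplicate, and write to the curated data lake layer.
    print("transforming and writing curated data")


with DAG(
    dag_id="events_batch_pipeline",   # illustrative name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)

    extract_task >> transform_task    # run transform only after extract succeeds
```

In a production lakehouse, the transform step would typically hand off to Spark or dbt jobs rather than run logic inline.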
Posted 1 week ago
2.0 - 7.0 years
15 - 25 Lacs
Pune
Hybrid
Key Responsibilities:
Develop, train, and fine-tune large language models and generative architectures (LLMs, VAEs, Transformers, GANs).
Integrate models with applications using LangChain, LlamaIndex, RAG, and other frameworks.
Design LLM-based agents for specific use cases: summarization, classification, scoring, Q&A, translation.
Build prompt templates and semantic memory flows using vector databases like Pinecone or FAISS.
Collaborate with backend and data teams to ingest data from PDFs, APIs, structured databases, and JSON files.
Benchmark model outputs and run experiments to optimize cost, performance, and quality.
Stay on top of AI research and rapidly implement useful techniques in production environments.
Write clear, modular, reusable code with documentation and test coverage.
Troubleshoot model-related deployment or inference issues.

Technical Skills Required:
Strong Python programming skills.
Experience with Transformers, Hugging Face, OpenAI/Anthropic APIs, Med-GEMMA, or similar foundation models.
Experience with agentic frameworks: LangChain, LlamaIndex, LangGraph, semantic RAG.
Vector database experience (Pinecone, FAISS, Weaviate, or similar).
Comfortable with prompt engineering, few-shot learning, and fine-tuning basics.
Ability to process and clean unstructured data (PDFs, notes, research papers, etc.).
Understanding of NLP metrics and model evaluation techniques.
Bonus: experience with biomedical or clinical data (PubMed, ClinicalTrials.gov, etc.).
Bonus: experience with deploying models via FastAPI, Docker, or Streamlit.

Personal Attributes:
Curiosity and willingness to learn new models and tools quickly.
Attention to detail and commitment to quality.
Ownership mindset: you care about the outcome, not just the code.
Ability to work independently and push through ambiguity.
Passion for building usable AI, not just research prototypes.
Strong communication and collaboration skills across tech and domain teams.
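For illustration, the document-chunking step that feeds a vector database like Pinecone or FAISS is often a small utility along the lines of the sketch below; the chunk size and overlap values are arbitrary assumptions:

```python
def chunk_text(text, chunk_size=500, overlap=100):
    """Split text into overlapping chunks so that sentences near a boundary
    still appear, with context, in at least one chunk before embedding."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap
    return chunks


# Stand-in for text extracted from a PDF, clinical note, or research paper.
document = "..." * 1000
pieces = chunk_text(document, chunk_size=500, overlap=100)
print(len(pieces), "chunks; first chunk length:", len(pieces[0]))
```

Each chunk would then be embedded and upserted into the vector store along with metadata (source file, page, section) to support citation in downstream answers.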
Posted 3 weeks ago
7.0 - 8.0 years
7 - 8 Lacs
Bengaluru / Bangalore, Karnataka, India
On-site
Role: Senior Developer
Experience: 7 to 8 years
Skills (GenAI experience of 1-2 years is good to have):
1. Python, with experience in AI/ML libraries such as TensorFlow, PyTorch, NumPy, and pypdf
2. GenAI skills: RAG, prompt engineering, vector DBs (Pinecone, Weaviate)
3. Familiarity with AI/ML workloads in Azure/Amazon
Posted 3 weeks ago
7.0 - 8.0 years
7 - 8 Lacs
Hyderabad / Secunderabad, Telangana, India
On-site
Role: Senior Developer
Experience: 7 to 8 years
Skills (GenAI experience of 1-2 years is good to have):
1. Python, with experience in AI/ML libraries such as TensorFlow, PyTorch, NumPy, and pypdf
2. GenAI skills: RAG, prompt engineering, vector DBs (Pinecone, Weaviate)
3. Familiarity with AI/ML workloads in Azure/Amazon
Posted 3 weeks ago
7.0 - 8.0 years
7 - 8 Lacs
Delhi, India
On-site
Role: Senior Developer
Experience: 7 to 8 years
Skills (GenAI experience of 1-2 years is good to have):
1. Python, with experience in AI/ML libraries such as TensorFlow, PyTorch, NumPy, and pypdf
2. GenAI skills: RAG, prompt engineering, vector DBs (Pinecone, Weaviate)
3. Familiarity with AI/ML workloads in Azure/Amazon
Posted 3 weeks ago
1.0 - 3.0 years
3 - 5 Lacs
New Delhi, Chennai, Bengaluru
Hybrid
Your day at NTT DATA
We are seeking an experienced Data Engineer to join our team in delivering cutting-edge Generative AI (GenAI) solutions to clients. The successful candidate will be responsible for designing, developing, and deploying data pipelines and architectures that support the training, fine-tuning, and deployment of LLMs for various industries. This role requires strong technical expertise in data engineering, problem-solving skills, and the ability to work effectively with clients and internal teams.

What you'll be doing
Key Responsibilities:
Design, develop, and manage data pipelines and architectures to support GenAI model training, fine-tuning, and deployment.
Data Ingestion and Integration: Develop data ingestion frameworks to collect data from various sources, transform it, and integrate it into a unified data platform for GenAI model training and deployment.
GenAI Model Integration: Collaborate with data scientists to integrate GenAI models into production-ready applications, ensuring seamless model deployment, monitoring, and maintenance.
Cloud Infrastructure Management: Design, implement, and manage cloud-based data infrastructure (e.g., AWS, GCP, Azure) to support large-scale GenAI workloads, ensuring cost-effectiveness, security, and compliance.
Write scalable, readable, and maintainable code using object-oriented programming concepts in languages like Python, and utilize libraries like Hugging Face Transformers, PyTorch, or TensorFlow.
Performance Optimization: Optimize data pipelines, GenAI model performance, and infrastructure for scalability, efficiency, and cost-effectiveness.
Data Security and Compliance: Ensure data security, privacy, and compliance with regulatory requirements (e.g., GDPR, HIPAA) across data pipelines and GenAI applications.
Client Collaboration: Collaborate with clients to understand their GenAI needs, design solutions, and deliver high-quality data engineering services.
Innovation and R&D: Stay up to date with the latest GenAI trends, technologies, and innovations, applying research and development skills to improve data engineering services.
Knowledge Sharing: Share knowledge, best practices, and expertise with team members, contributing to the growth and development of the team.

Requirements:
Bachelor's degree in Computer Science, Engineering, or related fields (Master's recommended).
Experience with vector databases (e.g., Pinecone, Weaviate, Faiss, Annoy) for efficient similarity search and storage of dense vectors in GenAI applications.
5+ years of experience in data engineering, with a strong emphasis on cloud environments (AWS, GCP, Azure, or cloud-native platforms).
Proficiency in programming languages like SQL, Python, and PySpark.
Strong data architecture, data modeling, and data governance skills.
Experience with big data platforms (Hadoop, Databricks, Hive, Kafka, Apache Iceberg), data warehouses (Teradata, Snowflake, BigQuery), and lakehouses (Delta Lake, Apache Hudi).
Knowledge of DevOps practices, including Git workflows and CI/CD pipelines (Azure DevOps, Jenkins, GitHub Actions).
Experience with GenAI frameworks and tools (e.g., TensorFlow, PyTorch, Keras).

Nice to have:
Experience with containerization and orchestration tools like Docker and Kubernetes.
Ability to integrate vector databases and implement similarity search techniques, with a focus on GraphRAG, is a plus.
Familiarity with API gateway and service mesh architectures.
Experience with low-latency/streaming, batch, and micro-batch processing.
Familiarity with Linux-based operating systems and REST APIs.
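For illustration, a minimal PySpark sketch of the ingestion-and-cleaning step such a pipeline starts with is shown below; the paths, column names, and cleaning rules are assumptions for the sketch, not the client's actual schema:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("genai_doc_ingest").getOrCreate()

# Read raw documents from a landing zone (path and schema are illustrative).
raw = spark.read.json("s3://example-bucket/raw/documents/")

clean = (
    raw
    .filter(F.col("text").isNotNull())            # drop empty records
    .withColumn("text", F.trim(F.col("text")))    # normalise whitespace
    .dropDuplicates(["doc_id"])                   # keep one row per document
)

# Write to the curated layer that downstream embedding / fine-tuning jobs read from.
clean.write.mode("overwrite").parquet("s3://example-bucket/curated/documents/")
```

From the curated layer, a separate job would typically chunk and embed the text into a vector store for RAG use cases.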
Posted 1 month ago
6.0 - 11.0 years
40 - 60 Lacs
Kolkata
Work from Office
We're looking for an experienced AI/ML Technical Lead to architect and drive the development of our intelligent conversation engine. You'll lead model selection, integration, training workflows (RAG/fine-tuning), and scalable deployment of natural language and voice AI components. This is a foundational hire for a technically ambitious platform.

Key Responsibilities
AI System Architecture: Design the architecture of the AI-powered agent, including LLM-based conversation workflows, voice bots, and follow-up orchestration.
Model Integration & Prompt Engineering: Leverage APIs from OpenAI or Anthropic, or deploy open models (e.g., LLaMA 3, Mistral). Implement effective prompt strategies and retrieval-augmented generation (RAG) pipelines for contextual responses.
Data Pipelines & Knowledge Management: Build secure data pipelines to ingest, embed, and serve tenant-specific knowledge bases (FAQs, scripts, product docs) using vector databases (e.g., Pinecone, Weaviate).
Voice & Text Interfaces: Implement and optimize multimodal agents (text + voice) using ASR (e.g., Whisper), TTS (e.g., Polly), and NLP for automated qualification and call handling.
Conversational Flow Orchestration: Design dynamic, stateful conversations that can take actions (e.g., book meetings, update CRM records) using tools like LangChain, Temporal, or n8n.
Platform Scalability: Ensure models and agent workflows scale across tenants with strong data isolation, caching, and secure API access.
Lead a Cross-Functional Team: Collaborate with backend, frontend, and DevOps engineers to ship intelligent, production-ready features.
Monitoring & Feedback Loops: Define and monitor conversation analytics (drop-offs, booking rates, escalation triggers), and create pipelines to continuously improve AI quality.

Qualifications
Must-Haves:
5+ years of experience in ML/AI, with at least 2 years leading conversational AI or LLM projects.
Strong background in NLP, dialog systems, or voice AI, preferably with production experience.
Experience with OpenAI or open-source LLMs (e.g., LLaMA, Mistral, Falcon) and orchestration tools (LangChain, etc.).
Proficiency with Python and ML frameworks (Hugging Face, PyTorch, TensorFlow).
Experience deploying RAG pipelines and vector DBs (e.g., Pinecone, Weaviate), and managing LLM-agent logic.
Familiarity with voice processing (ASR, TTS, IVR design).
Solid understanding of API-based integration and microservices.
Deep care for data privacy, multi-tenancy security, and ethical AI practices.

Nice-to-Haves:
Experience with CRM ecosystems (e.g., Salesforce, HubSpot) and how AI agents sync actions to CRMs.
Knowledge of sales pipelines and marketing automation tools.
Exposure to calendar integrations (Google Calendar API, Microsoft Graph).
Knowledge of Twilio APIs (SMS, Voice, WhatsApp) and channel orchestration logic.
Familiarity with Docker, Kubernetes, CI/CD, and scalable cloud infrastructure (AWS/GCP/Azure).

What We Offer
Founding team role with strong ownership and autonomy.
Opportunity to shape the future of AI-powered sales.
Flexible work environment.
Competitive salary.
Access to cutting-edge AI tools and training resources.

Send your resume and any relevant project links (GitHub, blog, portfolio) to career@sourcedeskglobal.com, and include a short note on your most interesting AI project or voicebot/conversational AI experience.
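For illustration only, the "dynamic, stateful conversations" idea can be shown with a very small state machine, independent of LangChain, Temporal, or n8n; the states, intents, and actions below are assumptions invented for the sketch:

```python
# Minimal qualification flow: each state maps a detected intent to an
# action to perform and the next conversation state.
FLOW = {
    "greeting":   {"any":         ("ask_need",          "qualifying")},
    "qualifying": {"interested":  ("propose_slots",     "booking"),
                   "not_now":     ("schedule_followup", "done")},
    "booking":    {"slot_chosen": ("book_meeting",      "done")},
}


def detect_intent(state, user_message):
    """Stub: a real system would classify the message with an LLM or NLU model."""
    return "any" if state == "greeting" else "interested"


def step(state, user_message, context):
    intent = detect_intent(state, user_message)
    action, next_state = FLOW[state].get(intent, ("clarify", state))
    # In production, `action` would trigger a CRM update, calendar booking, etc.
    context.setdefault("actions", []).append(action)
    return next_state


ctx = {}
state = "greeting"
for msg in ["hi", "yes, tell me more"]:
    state = step(state, msg, ctx)
print(state, ctx["actions"])
```

A durable orchestrator (e.g. a workflow engine) would persist this per-tenant state so conversations survive restarts and long follow-up delays.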
Posted 1 month ago
4.0 - 5.0 years
8 - 12 Lacs
Vadodara
Hybrid
Job Type: Full Time

Job Description:
We are seeking an experienced AI Engineer with 4-5 years of hands-on experience in designing and implementing AI solutions. The ideal candidate should have a strong foundation in developing AI/ML-based solutions, including expertise in Computer Vision (OpenCV), as well as proficiency in developing, fine-tuning, and deploying Large Language Models (LLMs). As an AI Engineer, the candidate will work on cutting-edge AI applications, using LLMs like GPT, LLaMA, or custom fine-tuned models to build intelligent, scalable, and impactful solutions, and will collaborate closely with the Product, Data Science, and Engineering teams to define, develop, and optimize AI/ML models for real-world business applications.

Key Responsibilities:
Research, design, and develop AI/ML solutions for real-world business applications; RAG experience is a must.
Collaborate with Product & Data Science teams to define core AI/ML platform features.
Analyze business requirements and identify pre-trained models that align with use cases.
Work with multi-agent AI frameworks like LangChain, LangGraph, and LlamaIndex.
Train and fine-tune LLMs (GPT, LLaMA, Gemini, etc.) for domain-specific tasks.
Implement Retrieval-Augmented Generation (RAG) workflows and optimize LLM inference.
Develop NLP-based GenAI applications, including chatbots, document automation, and AI agents.
Preprocess, clean, and analyze large datasets to train and improve AI models.
Optimize LLM inference speed, memory efficiency, and resource utilization.
Deploy AI models in cloud environments (AWS, Azure, GCP) or on-premises infrastructure.
Develop APIs, pipelines, and frameworks for integrating AI solutions into products.
Conduct performance evaluations and fine-tune models for accuracy, latency, and scalability.
Stay updated with advancements in AI, ML, and GenAI technologies.

Required Skills & Experience:
AI & Machine Learning: Strong experience in developing and deploying AI/ML models.
Generative AI & LLMs: Expertise in LLM pretraining, fine-tuning, and optimization.
NLP & Computer Vision: Hands-on experience with NLP, Transformers, OpenCV, YOLO, and R-CNN.
AI Agents & Multi-Agent Frameworks: Experience with LangChain, LangGraph, LlamaIndex.
Deep Learning & Frameworks: Proficiency in TensorFlow, PyTorch, Keras.
Cloud & Infrastructure: Strong knowledge of AWS, Azure, or GCP for AI deployment.
Model Optimization: Experience in LLM inference optimization for speed and memory efficiency.
Programming & Development: Proficiency in Python and experience in API development.
Statistical & ML Techniques: Knowledge of regression, classification, clustering, SVMs, decision trees, and neural networks.
Debugging & Performance Tuning: Strong skills in unit testing, debugging, and model evaluation.
Hands-on experience with vector databases (FAISS, ChromaDB, Weaviate, Pinecone).

Good to Have:
Experience with multi-modal AI (text, image, video, speech processing).
Familiarity with containerization (Docker, Kubernetes) and model serving (FastAPI, Flask, Triton).
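To show how retrieved context is typically grounded into an LLM prompt in a RAG workflow like the one this role implements, here is a small sketch; the prompt wording and the stubbed retrieve()/generate() calls are assumptions for illustration:

```python
def retrieve(query, k=3):
    """Stub: a real implementation would query a vector store (FAISS, ChromaDB, ...)."""
    return [
        "Policy doc: claims must be filed within 30 days of the incident.",
        "FAQ: claims can be filed online or through the mobile app.",
    ][:k]


def build_rag_prompt(query, passages):
    context = "\n\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    return (
        "Answer the question using only the context below. "
        "If the context is not sufficient, say you don't know.\n\n"
        f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
    )


def generate(prompt):
    """Stub: replace with a call to GPT, LLaMA, Gemini, or another model."""
    return "(model answer grounded in the retrieved passages)"


question = "How long do I have to file a claim?"
print(generate(build_rag_prompt(question, retrieve(question))))
```

The "answer only from context" instruction is one simple way to reduce hallucination; evaluation then checks whether answers actually cite the retrieved passages.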
Posted 1 month ago
8.0 - 13.0 years
14 - 24 Lacs
Pune, Ahmedabad
Hybrid
Senior Technical Architect - Machine Learning Solutions
We are looking for a Senior Technical Architect with deep expertise in Machine Learning (ML), Artificial Intelligence (AI), and scalable ML system design. This role focuses on leading the end-to-end architecture of advanced ML-driven platforms and delivering impactful, production-grade AI solutions across the enterprise.

Key Responsibilities
Lead the architecture and design of enterprise-grade ML platforms, including data pipelines, model training pipelines, model inference services, and monitoring frameworks.
Architect and optimize ML lifecycle management systems (MLOps) to support scalable, reproducible, and secure deployment of ML models in production.
Design and implement retrieval-augmented generation (RAG) systems, vector databases, semantic search, and LLM orchestration frameworks (e.g., LangChain, AutoGen).
Define and enforce best practices in model development, versioning, CI/CD pipelines, model drift detection, retraining, and rollback mechanisms.
Build robust pipelines for data ingestion, preprocessing, feature engineering, and model training at scale, using batch and real-time streaming architectures.
Architect multi-modal ML solutions involving NLP, computer vision, time-series, or structured data use cases.
Collaborate with data scientists, ML engineers, DevOps, and product teams to convert research prototypes into scalable production services.
Implement observability for ML models, including custom metrics, performance monitoring, and explainability (XAI) tooling.
Evaluate and integrate third-party LLMs (e.g., OpenAI, Claude, Cohere) or open-source models (e.g., LLaMA, Mistral) as part of intelligent application design.
Create architectural blueprints and reference implementations for LLM APIs, model hosting, fine-tuning, and embedding pipelines.
Guide the selection of compute frameworks (GPUs, TPUs), model serving frameworks (e.g., TorchServe, Triton, BentoML), and scalable inference strategies (batch, real-time, streaming).
Drive AI governance and responsible AI practices, including auditability, compliance, bias mitigation, and data protection.
Stay up to date on the latest developments in ML frameworks, foundation models, model compression, distillation, and efficient inference.
Coach and lead technical teams, fostering growth, knowledge sharing, and technical excellence in AI/ML domains.
Manage the technical roadmap and documentation for AI-powered products, ensuring timely delivery, performance optimization, and stakeholder alignment.

Required Qualifications
Bachelor's or Master's degree in Computer Science, Artificial Intelligence, Data Science, or a related field.
8+ years of experience in software architecture, with 5+ years focused specifically on machine learning systems and 2+ years leading teams.
Proven expertise in designing and deploying ML systems at scale, across cloud and hybrid environments.
Strong hands-on experience with ML frameworks (e.g., PyTorch, TensorFlow, Hugging Face, Scikit-learn).
Experience with vector databases (e.g., FAISS, Pinecone, Weaviate, Qdrant) and embedding models (e.g., SBERT, OpenAI, Cohere).
Demonstrated proficiency with MLOps tools and platforms: MLflow, Kubeflow, SageMaker, Vertex AI, Databricks, Airflow, etc.
In-depth knowledge of cloud AI/ML services on AWS, Azure, or GCP, including certification(s) in one or more platforms.
Experience with containerization and orchestration (Docker, Kubernetes) for model packaging and deployment.
Ability to design LLM-based systems, including hybrid models (open-source + proprietary), fine-tuning strategies, and prompt engineering.
Solid understanding of security, compliance, and AI risk management in ML deployments.

Preferred Skills
Experience with AutoML, hyperparameter tuning, model selection, and experiment tracking.
Knowledge of LLM tuning techniques: LoRA, PEFT, quantization, distillation, and RLHF.
Knowledge of privacy-preserving ML techniques, federated learning, and homomorphic encryption.
Familiarity with zero-shot and few-shot learning, and retrieval-enhanced inference pipelines.
Contributions to open-source ML tools or libraries.
Experience deploying AI copilots, agents, or assistants using orchestration frameworks.
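One simple way to implement the model-drift-detection item above is a two-sample statistical test on a logged feature or score distribution; the sketch below uses SciPy's Kolmogorov-Smirnov test, with synthetic data and an arbitrary alert threshold as assumptions:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Reference distribution captured at training time vs. recent production data.
# Synthetic here; in practice these would be logged feature or score values.
training_scores = rng.normal(loc=0.0, scale=1.0, size=5_000)
production_scores = rng.normal(loc=0.3, scale=1.1, size=5_000)  # shifted on purpose

statistic, p_value = stats.ks_2samp(training_scores, production_scores)

ALERT_P_VALUE = 0.01  # arbitrary illustrative threshold
if p_value < ALERT_P_VALUE:
    print(f"Drift suspected (KS statistic={statistic:.3f}, p={p_value:.2e}); "
          "consider triggering retraining or a rollback.")
else:
    print("No significant drift detected.")
```

In a full MLOps setup, a check like this would run on a schedule per feature and per model, feeding the retraining and rollback mechanisms the listing mentions.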
Posted 1 month ago
5.0 - 10.0 years
40 - 60 Lacs
Kolkata
Work from Office
We're looking for an experienced AI/ML Technical Lead to architect and drive the development of our intelligent conversation engine. You'll lead model selection, integration, training workflows (RAG/fine-tuning), and scalable deployment of natural language and voice AI components. This is a foundational hire for a technically ambitious platform.

Role & Responsibilities
AI System Architecture: Design the architecture of the AI-powered agent, including LLM-based conversation workflows, voice bots, and follow-up orchestration.
Model Integration & Prompt Engineering: Leverage APIs from OpenAI or Anthropic, or deploy open models (e.g., LLaMA 3, Mistral). Implement effective prompt strategies and retrieval-augmented generation (RAG) pipelines for contextual responses.
Data Pipelines & Knowledge Management: Build secure data pipelines to ingest, embed, and serve tenant-specific knowledge bases (FAQs, scripts, product docs) using vector databases (e.g., Pinecone, Weaviate).
Voice & Text Interfaces: Implement and optimize multimodal agents (text + voice) using ASR (e.g., Whisper), TTS (e.g., Polly), and NLP for automated qualification and call handling.
Conversational Flow Orchestration: Design dynamic, stateful conversations that can take actions (e.g., book meetings, update CRM records) using tools like LangChain, Temporal, or n8n.
Platform Scalability: Ensure models and agent workflows scale across tenants with strong data isolation, caching, and secure API access.
Lead a Cross-Functional Team: Collaborate with backend, frontend, and DevOps engineers to ship intelligent, production-ready features.
Monitoring & Feedback Loops: Define and monitor conversation analytics (drop-offs, booking rates, escalation triggers), and create pipelines to continuously improve AI quality.

Preferred Candidate Profile
Must-Haves:
5+ years of experience in ML/AI, with at least 2 years leading conversational AI or LLM projects.
Strong background in NLP, dialog systems, or voice AI, preferably with production experience.
Experience with OpenAI or open-source LLMs (e.g., LLaMA, Mistral, Falcon) and orchestration tools (LangChain, etc.).
Proficiency with Python and ML frameworks (Hugging Face, PyTorch, TensorFlow).
Experience deploying RAG pipelines and vector DBs (e.g., Pinecone, Weaviate), and managing LLM-agent logic.
Familiarity with voice processing (ASR, TTS, IVR design).
Solid understanding of API-based integration and microservices.
Deep care for data privacy, multi-tenancy security, and ethical AI practices.

Nice-to-Haves:
Experience with CRM ecosystems (e.g., Salesforce, HubSpot) and how AI agents sync actions to CRMs.
Knowledge of sales pipelines and marketing automation tools.
Exposure to calendar integrations (Google Calendar API, Microsoft Graph).
Knowledge of Twilio APIs (SMS, Voice, WhatsApp) and channel orchestration logic.
Familiarity with Docker, Kubernetes, CI/CD, and scalable cloud infrastructure (AWS/GCP/Azure).

What We Offer
Founding team role with strong ownership and autonomy.
Opportunity to shape the future of AI-powered sales.
Flexible work environment.
Competitive salary.
Access to cutting-edge AI tools and training resources.

Send your resume and any relevant project links (GitHub, blog, portfolio) to career@sourcedeskglobal.com, and include a short note on your most interesting AI project or voicebot/conversational AI experience.
Posted 1 month ago
5 - 10 years
25 - 30 Lacs
Mumbai, Navi Mumbai, Chennai
Work from Office
We are looking for an AI Engineer (Senior Software Engineer). Interested candidates can email resumes to mayura.joshi@lionbridge.com or WhatsApp 9987538863.

Responsibilities:
Design, develop, and optimize AI solutions using LLMs (e.g., GPT-4, LLaMA, Falcon) and RAG frameworks.
Implement and fine-tune models to improve response relevance and contextual accuracy.
Develop pipelines for data retrieval, indexing, and augmentation to improve knowledge grounding.
Work with vector databases (e.g., Pinecone, FAISS, Weaviate) to enhance retrieval capabilities.
Integrate AI models with enterprise applications and APIs.
Optimize model inference for performance and scalability.
Collaborate with data scientists, ML engineers, and software developers to align AI models with business objectives.
Ensure ethical AI implementation, addressing bias, explainability, and data security.
Stay updated with the latest advancements in generative AI, deep learning, and RAG techniques.

Requirements:
8+ years of experience in software development according to development standards.
Strong experience in training and deploying LLMs using frameworks like Hugging Face Transformers, the OpenAI API, or LangChain.
Proficiency in Retrieval-Augmented Generation (RAG) techniques and vector search methodologies.
Hands-on experience with vector databases such as FAISS, Pinecone, ChromaDB, or Weaviate.
Solid understanding of NLP, deep learning, and transformer architectures.
Proficiency in Python and ML libraries (TensorFlow, PyTorch, LangChain, etc.).
Experience with cloud platforms (AWS, GCP, Azure) and MLOps workflows.
Familiarity with containerization (Docker, Kubernetes) for scalable AI deployments.
Strong problem-solving and debugging skills.
Excellent communication and teamwork abilities.
Bachelor's or Master's degree in Computer Science, AI, Machine Learning, or a related field.
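As a small illustration of the vector-search methodology this listing references, cosine-similarity retrieval over an in-memory matrix of embeddings can be sketched with NumPy; the random embeddings and dimensions below are placeholders, and a real system would use an embedding model plus a vector database:

```python
import numpy as np

rng = np.random.default_rng(7)
DIM = 256  # assumed embedding dimension

# Placeholder embeddings; in practice these come from an embedding model
# and usually live in a vector database rather than in memory.
passage_vecs = rng.random((1_000, DIM))
query_vec = rng.random(DIM)


def cosine_top_k(query, matrix, k=5):
    """Return indices and scores of the k passages most similar to the query."""
    matrix_norm = matrix / np.linalg.norm(matrix, axis=1, keepdims=True)
    query_norm = query / np.linalg.norm(query)
    scores = matrix_norm @ query_norm
    order = np.argsort(scores)[::-1][:k]
    return order, scores[order]


ids, scores = cosine_top_k(query_vec, passage_vecs)
print(list(zip(ids.tolist(), np.round(scores, 3).tolist())))
```

Dedicated vector databases implement the same idea with approximate nearest-neighbour indexes so it scales to millions of vectors.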
Posted 2 months ago