6.0 - 11.0 years
16 - 31 Lacs
Hyderabad, Gurugram, Bengaluru
Work from Office
Role Overview
We are seeking a highly skilled AI Engineer with 8-12 years of experience to lead the development of Generative AI (GenAI) and machine learning solutions for internal projects. This role requires a self-driven leader with exceptional communication, strategic thinking, and expertise in data analytics and visualization to deliver innovative GenAI tools tailored to internal team needs while driving cross-functional collaboration.
Key Responsibilities
- GenAI Development: Design and develop advanced GenAI models (e.g., LLMs, DALL·E-style models) and AI agents to automate internal tasks and workflows.
- LLM Exposure: Utilize Azure OpenAI APIs; experience with models such as GPT-4o, o3, and Llama 3.
- RAG Enhancement: Enhance the existing RAG-based application; in-depth understanding of RAG stages such as chunking and retrieval.
- Cloud Deployment: Deploy and scale GenAI solutions on Azure Cloud services (e.g., Azure Function App) for optimal performance.
- ML Fundamentals: In-depth understanding of ML models such as linear regression, random forests, and decision trees, as well as clustering and supervised models.
- AI Agent Development: Build AI agents using frameworks like LangChain to streamline internal processes and boost efficiency.
- Data Analytics: Perform advanced data analytics to preprocess datasets, evaluate model performance, and derive actionable insights for GenAI solutions.
- Data Visualization: Create compelling visualizations (e.g., dashboards, charts) to communicate model outputs, performance metrics, and business insights to stakeholders.
- Stakeholder Collaboration: Partner with departments to gather requirements, align on goals, and present technical solutions and insights effectively to non-technical stakeholders.
- Model Optimization: Fine-tune GenAI models for efficiency and accuracy using techniques like prompt engineering, quantization, and RAG (Retrieval-Augmented Generation).
- LLMOps Best Practices: Implement GenAI-specific MLOps, including CI/CD pipelines (Git, Azure DevOps).
- Leadership: Guide cross-functional teams, mentor junior engineers, and drive project execution with strategic vision and ownership.
- Strategic Thinking: Develop innovative GenAI strategies to address business challenges, leveraging data insights to align solutions with organizational goals.
- Self-Driven Execution: Independently lead projects to completion with minimal supervision, proactively resolving challenges and seeking collaboration when needed.
- Continuous Learning: Stay ahead of GenAI, analytics, and visualization advancements, self-learning new techniques to enhance project outcomes.
Required Skills & Experience
- Experience: 8-12 years in AI/ML development, with at least 4 years focused on Generative AI and AI agent frameworks.
- Education: BTech/BE in Computer Science, Engineering, or equivalent (Master's or PhD in AI/ML is a plus).
- Programming: Expert-level Python proficiency, with deep expertise in GenAI libraries (e.g., LangChain, Hugging Face Transformers, PyTorch, OpenAI SDK) and data analytics libraries (e.g., Pandas, NumPy, scikit-learn).
- Data Analytics: Strong experience in data preprocessing, statistical analysis, and model evaluation to support GenAI development and business insights.
- Data Visualization: Proficiency in visualization tools (e.g., Matplotlib, Seaborn, Plotly, Power BI, or Tableau) to create dashboards and reports for stakeholders.
- Azure Cloud Expertise: Strong experience with Azure Cloud services (e.g., Azure Function App, Azure ML, serverless) for model training and deployment.
- GenAI Methodologies: Deep expertise in LLMs, AI agent frameworks, prompt engineering, and RAG for internal workflow automation.
- Deployment: Proficiency in Docker, Kubernetes, and CI/CD pipelines (e.g., Azure DevOps, GitHub Actions) for production-grade GenAI systems.
- LLMOps: Expertise in GenAI MLOps, including experiment tracking (e.g., Weights & Biases), automated evaluation metrics (e.g., BLEU, ROUGE), and monitoring.
Soft Skills:
- Communication: Exceptional verbal and written skills to articulate complex GenAI concepts, analytics, and visualizations to technical and non-technical stakeholders.
- Strategic Thinking: Ability to align AI solutions with business objectives, using data-driven insights to anticipate challenges and propose long-term strategies.
- Problem-Solving: Strong analytical skills with a proactive, self-starter mindset to independently resolve complex issues.
- Collaboration: Collaborative mindset to work effectively across departments and engage colleagues for solutions when needed.
Preferred Skills
- Experience deploying GenAI models in production environments, preferably on Azure.
- Familiarity with multi-agent systems, reinforcement learning, or distributed training (e.g., DeepSpeed).
- Knowledge of DevOps practices, including Git, CI/CD, and infrastructure-as-code.
- Advanced data analytics techniques (e.g., time-series analysis, A/B testing) for GenAI applications.
- Experience with interactive visualization frameworks (e.g., Dash, Streamlit) for real-time dashboards.
- Contributions to GenAI or data analytics open-source projects, or publications in NLP, generative modeling, or data science.
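For illustration, a minimal sketch of the two RAG stages this posting calls out (chunking and retrieval), assuming a local sentence-transformers embedding model and a placeholder file name purely as stand-ins (the role itself works against Azure OpenAI):

```python
# Minimal RAG sketch: fixed-size chunking + cosine-similarity retrieval.
# Assumes sentence-transformers is installed; model and file names are illustrative only.
import numpy as np
from sentence_transformers import SentenceTransformer

def chunk(text: str, size: int = 500, overlap: int = 50) -> list[str]:
    """Split a document into overlapping character windows."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, len(text), step)]

model = SentenceTransformer("all-MiniLM-L6-v2")

def retrieve(query: str, chunks: list[str], k: int = 3) -> list[str]:
    """Return the k chunks most similar to the query."""
    chunk_vecs = model.encode(chunks, normalize_embeddings=True)
    query_vec = model.encode([query], normalize_embeddings=True)[0]
    scores = chunk_vecs @ query_vec          # cosine similarity (vectors are unit-norm)
    top = np.argsort(scores)[::-1][:k]
    return [chunks[i] for i in top]

# Usage: the retrieved chunks would then be stuffed into the LLM prompt.
docs = chunk(open("policy_doc.txt").read())  # placeholder document
context = retrieve("What is the leave policy?", docs)
```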
Posted 1 week ago
6.0 - 10.0 years
16 - 27 Lacs
Hyderabad, Chennai, Bengaluru
Work from Office
Curious about the role?
Generative AI & NLP Development: Design, develop, and deploy advanced applications and solutions using Generative AI models (e.g., GPT, LLaMA, Mistral) and NLP algorithms to solve business challenges and unlock new opportunities for our clients.
Model Customization & Fine-Tuning: Apply state-of-the-art techniques like LoRA, PEFT, and fine-tuning of large language models to adapt solutions to specific use cases, ensuring high relevance and impact.
Innovative Problem Solving: Leverage advanced AI methodologies to tackle real-world business problems, providing creative and scalable AI-powered solutions that drive measurable results.
Data-Driven Insights: Conduct deep analysis of large datasets, uncovering insights and trends that guide decision-making, improve operational efficiencies, and fuel innovation.
Cross-Functional Collaboration: Work closely with Consulting, Engineering, and other teams to integrate AI solutions into broader business strategies, ensuring the seamless deployment of AI-powered applications.
Client Engagement: Collaborate with clients to understand their unique business needs, provide tailored AI solutions, and educate them on the potential of Generative AI to drive business transformation.
What do we expect?
Generative AI & NLP Expertise: Extensive experience in developing and deploying Generative AI applications and NLP frameworks, with hands-on knowledge of LLM fine-tuning, model customization, and AI-powered automation.
Hands-On Data Science Experience: 6+ years of experience in data science, with a proven ability to build and operationalize machine learning and NLP models in real-world environments.
AI Innovation: Deep knowledge of the latest developments in Generative AI and NLP, with a passion for experimenting with cutting-edge research and incorporating it into practical solutions.
Problem-Solving Mindset: Strong analytical skills and a solution-oriented approach to applying data science techniques to complex business problems.
Communication Skills: Exceptional ability to translate technical AI concepts into business insights and recommendations for non-technical stakeholders.
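As an illustration of the LoRA/PEFT fine-tuning mentioned above, a hedged sketch of how a LoRA adapter is typically attached with Hugging Face PEFT; the base model name and target modules are assumptions for illustration, not a prescription from the role:

```python
# Attach a LoRA adapter to a causal LM so only a small set of weights is trained.
# Model name and target_modules are illustrative assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "meta-llama/Llama-2-7b-hf"          # assumption: any causal LM works here
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

lora_cfg = LoraConfig(
    r=8,                                   # low-rank dimension of the adapter
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],   # attention projections; model-dependent
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()         # typically well under 1% of the base model
# The wrapped model then goes through a normal Trainer / SFT loop.
```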
Posted 1 week ago
4.0 - 9.0 years
7 - 17 Lacs
Bengaluru
Work from Office
In this role, you will:
Build Large Language Model (LLM) powered applications that support powerful capabilities for internal Wells Fargo users, with a focus on agentic workflows, prompt engineering, evaluations, model feedback and tuning mechanisms, and safety guardrails.
Build systems utilizing Large Language Models (LLMs) involving data ingestion pipelines, reliable service orchestration, backend application state management, and data layer implementation.
Review and analyze complex, large-scale technology solutions for tactical and strategic business objectives and technical challenges that require in-depth evaluation of multiple factors, including intangibles or unprecedented technical factors.
Provide engineering support for AI-driven solutions that enhance and streamline work processes, ensuring reliable, secure, and scalable applications aligned with business priorities.
Act as a thought leader in developing standards and enterprise best practices for engineering complex and large-scale technology solutions for technology engineering disciplines.
Design, code, test, and document solutions that align with both product-specific and enterprise-wide goals.
Lead the development of frameworks and solutions that provide both vertical support to specific business-aligned products and horizontal capabilities that can scale across multiple platforms.
Collaborate and consult with key technical experts, senior technology teams, and external industry groups to resolve complex technical issues and achieve goals.
Lead projects and teams, and serve as a peer mentor.
Stay current on emerging technologies in AI to contribute to the strategy discussion for future capability development.
Required Qualifications:
4+ years of Software Engineering experience, or equivalent demonstrated through one or a combination of the following: work experience, training, military experience, education.
4+ years of engineering experience building and supporting large-scale customer-facing products using modern technologies.
2+ years building and delivering products using AI technologies.
Desired Qualifications:
Experience in backend application software development, with the ability to quickly adapt to C# and Python code bases.
Strong understanding of Retrieval-Augmented Generation (RAG), prompt engineering, and agentic workflows.
Deep knowledge of implementing guardrails and advanced techniques for query enrichment and re-writing.
Expertise in test- or eval-driven development, including data and error analysis, ensuring robust and scalable AI software.
Experience architecting and implementing agentic frameworks for autonomous multi-step reasoning and planning.
Solid grasp of parsing, chunking, indexing, and re-ranking of multiple file formats.
Experience with Generative AI Operations and enterprise-scale AI adoption strategies.
Familiarity with enterprise-scale software systems and their integration within large organizations.
Experience in enterprise AI model lifecycle management, AI compliance, and risk mitigation strategies.
Strong understanding of human-centered AI design for workplace applications.
Proven ability to work across multiple teams and functions to drive alignment and execution.
Demonstrated success in designing for usability and delivering seamless user experiences.
Excellent collaboration, communication, and problem-solving skills.
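For the re-ranking skill listed above, a minimal sketch of cross-encoder re-ranking of retrieved passages, assuming sentence-transformers and an illustrative public model name (not a statement of the team's actual stack):

```python
# Re-rank retrieved passages with a cross-encoder before they reach the LLM.
# Model name is an illustrative assumption; any query-passage relevance model works.
from sentence_transformers import CrossEncoder

reranker = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")

def rerank(query: str, passages: list[str], k: int = 5) -> list[str]:
    """Score every (query, passage) pair and keep the top-k passages."""
    scores = reranker.predict([(query, p) for p in passages])
    ranked = sorted(zip(scores, passages), key=lambda pair: pair[0], reverse=True)
    return [p for _, p in ranked[:k]]
```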
Posted 1 week ago
5.0 - 10.0 years
22 - 37 Lacs
Noida, Hyderabad/Secunderabad, Bangalore/Bengaluru
Hybrid
Dear candidate, we found your profile suitable for our current opening. Please go through the JD below for a better understanding of the role.
Job Description:
Role: Data Scientist / Senior Data Scientist / Lead Data Scientist
Exp: 6-12 years
Mode of work: Hybrid model
Work Location: Hyderabad/Bangalore/Noida/Pune/Kolkata
Role Overview: We are seeking a highly skilled Generative AI Engineer with a strong foundation in data science and a passion for building cutting-edge AI solutions. The ideal candidate will have 6-10 years of overall experience, including at least 2-3 years of hands-on work in Generative AI and Large Language Models (LLMs). You will play a key role in developing and deploying advanced AI models to drive innovation and impact across our platform.
Responsibilities:
Design and develop advanced generative models/large language models.
Collaborate with other teams to integrate AI solutions into our marketing platform.
Manage the project life cycle from research and development to deployment and optimization.
Stay updated on and contribute to the latest advancements in AI research, applying new findings to ongoing projects.
Ensure ethical AI development practices, prioritizing fairness, transparency, and privacy.
Qualifications and Education:
2-3 years of experience in GenAI and LLMs, with 6-12 years of overall experience building advanced analytics products.
Strong proficiency in Python and AI frameworks.
Experience in conceptualizing, developing, and leading large-scale, end-to-end projects in Generative AI, MLOps, and LLMOps.
Working knowledge of generative models (GANs, VAEs) and NLP.
Solid understanding of machine learning algorithms and data pre-processing techniques.
Knowledge of cloud computing and AI deployment (Google Cloud preferred).
Excellent problem-solving, analytical, and creative thinking skills.
Bachelor's degree in computer science or an engineering discipline.
Experience with MySQL and/or PostgreSQL databases.
Strong interpersonal communication and collaboration skills.
The candidate must be able to work independently and possess strong communication skills; a good team player with a high level of personal commitment and a 'can do' attitude.
Demonstrable ownership, commitment, and willingness to learn new technologies and frameworks.
Please check the link below for organisation details: https://www.tavant.com/
If interested, please drop your resume to dasari.gowri@tavant.com
Regards,
Dasari Krishna Gowri
Associate Manager - HR
www.tavant.com
Posted 1 week ago
3.0 - 6.0 years
17 - 32 Lacs
Gurugram
Work from Office
We are looking for a Senior AI/ML Engineer to join our team and help design, build, and deploy the product.
About Us
Zonka Feedback is a fast-growing, bootstrapped SaaS company building an AI-first platform that combines machine learning, GenAI, and large-scale analytics. We are working on cutting-edge applications of AI including LLMs, NLP, unsupervised clustering, vector databases, and retrieval-augmented generation (RAG) systems.
Key Responsibilities
Design and build scalable NLP pipelines, unsupervised clustering models, and semantic search solutions.
Develop vectorization pipelines and implement RAG (Retrieval-Augmented Generation) architectures for efficient information retrieval.
Fine-tune large language models (LLMs) and craft effective prompt engineering strategies for real-world performance.
Evaluate and optimize AI/ML models using tools like LangChain, Evals, Hugging Face, and custom evaluation frameworks.
Work closely with backend and product engineering teams to integrate AI capabilities into production systems.
Continuously research and implement advancements in AI, ML, and LLMs to keep the product innovative and competitive.
Requirements
3 to 6 years of hands-on experience in Machine Learning and NLP.
Solid experience in unsupervised learning techniques (clustering, dimensionality reduction, topic modeling, etc.).
Strong understanding of and experience with vector databases (e.g., FAISS, Pinecone, ChromaDB) and RAG system design.
Experience with LLM fine-tuning, prompt engineering, and deployment of AI systems in production.
Proficiency in Python and relevant ML/AI libraries such as TensorFlow, PyTorch, Hugging Face, LangChain, and OpenAI APIs.
Strong analytical and problem-solving skills with an ability to work in a fast-paced environment.
Prior experience building end-to-end AI features used in real products is a strong plus.
If this sounds like you, we'd love to talk. Share your resume at hr@zonkafeedback.com
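To illustrate the vector database and semantic search work described above, a minimal FAISS sketch; the random vectors are placeholders standing in for real embedding-model output, and the dimension is an assumption:

```python
# Semantic search over a FAISS index; random vectors stand in for real embeddings.
import numpy as np
import faiss

dim = 384                                                 # assumption: embedding width of the chosen model
doc_vecs = np.random.rand(1000, dim).astype("float32")    # placeholder document embeddings
faiss.normalize_L2(doc_vecs)                              # normalize so inner product == cosine similarity

index = faiss.IndexFlatIP(dim)
index.add(doc_vecs)

query = np.random.rand(1, dim).astype("float32")
faiss.normalize_L2(query)
scores, ids = index.search(query, 5)                      # top-5 nearest documents
print(ids[0], scores[0])
```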
Posted 1 week ago
5.0 - 8.0 years
14 - 18 Lacs
Mumbai
Work from Office
Role & responsibilities
Develop, test, and maintain backend services using Python and frameworks such as FastAPI or Flask.
Implement and optimize LLM-based applications using models like OpenAI (GPT-4o), Gemini, LLaMA, etc.
Work on RAG implementations, including integration with vector databases and prompt engineering strategies.
Design, build, and maintain database connectivity using SQL for infrastructure-level applications.
Develop and deploy containerized applications using Docker, Git, and GitHub, and integrate with CI/CD pipelines.
Deploy and manage applications on cloud platforms (AWS, Azure, GCP).
Ensure clean code practices including unit testing, error handling, Python best practices, and design patterns.
Collaborate with cross-functional teams including Product, Data Science, and DevOps for end-to-end solution delivery.
Preferred candidate profile
Strong proficiency in Python programming.
Familiarity with FastAPI, Flask, or similar web frameworks.
Experience with SQL databases and connection management.
Hands-on with LLMs and RAG workflows (OpenAI GPT, Gemini, LLaMA, etc.).
Understanding of vector databases such as FAISS, Pinecone, or similar.
Proficiency with Git, GitHub, and CI/CD pipelines.
Experience with Docker for containerization.
Exposure to any of the major cloud platforms: AWS, Azure, or GCP.
Ability to write unit test cases, implement robust error handling, and follow design patterns.
Clear understanding of enterprise software development best practices.
Experience working with Banking or Financial Services projects/clients.
Immediate joiner or someone serving notice period.
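A minimal sketch of the kind of FastAPI-plus-LLM backend service this posting describes, assuming an OpenAI API key in the environment; the endpoint path and model name are illustrative only:

```python
# Minimal FastAPI service exposing an LLM-backed endpoint.
# Assumes OPENAI_API_KEY is set; model name and route are illustrative.
from fastapi import FastAPI
from pydantic import BaseModel
from openai import OpenAI

app = FastAPI()
client = OpenAI()

class AskRequest(BaseModel):
    question: str

@app.post("/ask")
def ask(req: AskRequest) -> dict:
    """Forward the user question to the LLM and return its answer."""
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": req.question}],
    )
    return {"answer": resp.choices[0].message.content}

# Run with: uvicorn app:app --reload
```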
Posted 1 week ago
2.0 - 6.0 years
4 - 8 Lacs
Pune
Work from Office
Responsibilities
Implement Generative AI based solutions using Infosys and industry-standard tools and platforms.
Implement prompt engineering techniques to evaluate model behavior under various input conditions, refining prompts for better accuracy and desired results.
Perform AI assurance to test AI-infused applications.
Work closely with cross-functional teams to ensure prompts align with company objectives and user experience.
Identify and report bugs, performance bottlenecks, and safety issues.
Stay up-to-date with the latest advancements in Generative AI and testing methodologies.
Technical and Professional Requirements:
Hands-on experience in Python and LangChain programming.
Experience in using traditional AI/ML models in the SDLC.
Experience in leveraging Azure AI Services in projects.
Experience in Generative AI concepts such as advanced prompt engineering, RAG, leveraging LLMs, etc.
Excellent communication and analytical skills.
Knowledge of Agentic AI frameworks and commercial GenAI platforms/tools (optional); knowledge of OpenAI models is important.
Preferred Skills:
Technology->Machine Learning->Generative AI
Technology->Machine Learning->AI/ML Solution Architecture and Design->generative ai
Educational Requirements
Bachelor of Engineering
Service Line
Infosys Quality Engineering
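To illustrate the AI-assurance idea above (evaluating model behavior under varied input conditions), a tiny prompt-testing harness; the test cases and the `generate` callable are hypothetical stand-ins for the project's own LLM call and acceptance criteria:

```python
# Tiny prompt-assurance harness: run varied inputs through the model and flag
# responses that violate simple expectations. `generate` stands in for the
# project's actual LLM call; the cases below are purely illustrative.
from typing import Callable

CASES = [
    {"input": "Reset my password", "must_contain": ["password"], "must_not_contain": ["SSN"]},
    {"input": "Summarize this contract", "must_contain": ["summary"], "must_not_contain": []},
]

def evaluate_prompt(generate: Callable[[str], str]) -> list[dict]:
    """Return one failure record per case whose output misses or leaks keywords."""
    failures = []
    for case in CASES:
        output = generate(case["input"]).lower()
        missing = [w for w in case["must_contain"] if w.lower() not in output]
        leaked = [w for w in case["must_not_contain"] if w.lower() in output]
        if missing or leaked:
            failures.append({"case": case["input"], "missing": missing, "leaked": leaked})
    return failures
```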
Posted 1 week ago
5.0 - 10.0 years
0 - 2 Lacs
Hyderabad
Hybrid
Role: ML Engineer
Exp: 5 to 10 years
Location: Hyderabad
Job Overview: We're seeking an ML Engineer / Data Scientist to architect agentic AI solutions and own the full ML lifecycle, from proof-of-concept to production. You'll operationalize LLMs, build agentic workflows, implement MLOps best practices, and design multi-agent systems for cybersecurity tasks.
Key Responsibilities:
Operationalize large language models and agentic workflows (LangChain, LangGraph, LlamaIndex) to automate security decision-making and threat response.
Design, deploy, and maintain multi-agent AI systems for log analysis, anomaly detection, and incident response.
Build proof-of-concept GenAI solutions and evolve them into production-ready components on AWS (Bedrock, SageMaker, Lambda, EKS/ECS) using reusable best practices.
Implement CI/CD pipelines for model training, validation, and deployment with GitHub Actions, Jenkins, and AWS CodePipeline.
Manage model versioning with MLflow and DVC; set up automated testing, rollback procedures, and retraining workflows.
Automate cloud infrastructure provisioning with Terraform and develop REST APIs and microservices containerized with Docker and Kubernetes.
Monitor models and infrastructure through CloudWatch, Prometheus, and Grafana; analyze performance and optimize costs and SLA compliance.
Collaborate with data scientists, application developers, and security analysts to integrate agentic AI into existing security workflows.
Qualifications:
Bachelor's or Master's in Computer Science, Data Science, AI, or a related quantitative discipline.
4+ years of software development experience, including 3+ years building and deploying LLM-based/agentic AI architectures.
In-depth knowledge of generative AI fundamentals (LLMs, embeddings, vector databases, prompt engineering, RAG).
Hands-on experience with LangChain, LangGraph, LlamaIndex, Crew.AI, or equivalent agentic frameworks.
Strong proficiency in Python and production-grade coding for data pipelines and AI workflows.
Deep MLOps knowledge: CI/CD for ML, model monitoring, automated retraining, and production-quality best practices.
Extensive AWS experience with Bedrock, SageMaker, Lambda, EKS/ECS, S3 (Athena, Glue, Snowflake preferred).
Infrastructure as Code skills with Terraform.
Experience building REST APIs, microservices, and containerization with Docker and Kubernetes.
Solid data science fundamentals: feature engineering, model evaluation, data ingestion.
Understanding of cybersecurity principles, SIEM data, and incident response.
Excellent communication skills for both technical and non-technical audiences.
Preferred Qualifications:
AWS certifications (Solutions Architect, Developer Associate).
Nice to have: experience with Model Context Protocol (MCP) and RAG integrations.
Nice to have: experience with Crew.AI.
Familiarity with workflow orchestration tools (Apache Airflow).
Experience with time series analysis, anomaly detection, and machine learning.
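For the model-versioning responsibility above, a minimal MLflow tracking sketch; the experiment name, registered model name, and synthetic data are illustrative assumptions, not details from the posting:

```python
# Track a training run and register the resulting model version with MLflow.
# Experiment/model names and the synthetic dataset are illustrative assumptions.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

mlflow.set_experiment("anomaly-detector")
with mlflow.start_run():
    model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    mlflow.log_param("n_estimators", 200)
    mlflow.log_metric("auc", auc)
    # Registering creates a new model version that downstream deploy jobs can pull.
    mlflow.sklearn.log_model(model, "model", registered_model_name="anomaly-detector")
```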
Posted 1 week ago
7.0 - 12.0 years
20 - 25 Lacs
Hyderabad
Remote
Job Title: Generative AI Specialist
Experience: Total 7+ Years | Relevant: 3 to 5 Years
Location: Hyderabad
Employment Type: Full-Time
Job Description
We are seeking a highly skilled and innovative Generative AI Engineer to join our team. The ideal candidate will have a strong background in designing, developing, and deploying AI models, particularly in the domain of generative AI and large language models (LLMs).
Key Responsibilities
Design & Development: Architect and build generative AI models, algorithms, and frameworks.
Model Implementation: Integrate AI models into existing systems and applications.
LLM Expertise: Work with tools like LangChain and Haystack, and apply prompt engineering techniques.
Data Handling: Preprocess and analyze data for model training and evaluation.
Cross-functional Collaboration: Partner with data scientists, product managers, and other stakeholders.
Testing & Deployment: Evaluate model performance and deploy models to production.
Monitoring & Optimization: Track model performance and continuously improve results.
Research & Innovation: Stay updated with the latest advancements in generative AI.
Required Skills
Proficiency in Python for AI development.
Strong understanding of Generative AI, NLP, and LLMs.
Experience with RAG pipelines and vector databases.
Familiarity with frameworks like LangChain, Haystack, and other open-source libraries.
Knowledge of prompt engineering and tokenization.
Experience in fine-tuning and integrating AI models in production.
Excellent communication and problem-solving skills.
Optional Skills
Experience with cloud platforms (GCP, AWS, Azure).
Familiarity with MLOps and DevOps practices.
Why Join Us?
Work on cutting-edge AI technologies.
Collaborate with a passionate and talented team.
Opportunity to innovate and shape the future of AI applications.
How to Apply
Interested candidates can apply directly through Naukri or send an updated resume to shilpa.shapur@Excelra.com
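On the tokenization skill listed above, a small sketch of the token-budgeting step that usually precedes a RAG prompt; the encoding name and the 8,000-token limit are illustrative assumptions:

```python
# Token-budget check before sending a RAG prompt to an LLM.
# Encoding name and the 8k limit are illustrative assumptions.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
MAX_PROMPT_TOKENS = 8000

def fits_budget(system: str, context_chunks: list[str], question: str) -> bool:
    """True if the assembled prompt stays within the model's context budget."""
    prompt = system + "\n\n" + "\n\n".join(context_chunks) + "\n\n" + question
    return len(enc.encode(prompt)) <= MAX_PROMPT_TOKENS

def trim_context(chunks: list[str], budget: int) -> list[str]:
    """Drop trailing chunks until the retrieved context fits within the token budget."""
    kept, used = [], 0
    for chunk in chunks:
        cost = len(enc.encode(chunk))
        if used + cost > budget:
            break
        kept.append(chunk)
        used += cost
    return kept
```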
Posted 1 week ago
1.0 - 5.0 years
3 - 4 Lacs
Gurugram
Work from Office
Job Title: Executive Human Resource Business Partner (HRBP) Location: Gurgaon Company: IGT Solutions Pvt. Ltd. Industry: IT & BPM – Travel, Transportation, and Hospitality Domain Company Overview: IGT Solutions Pvt. Ltd. is a global leader in IT and Business Process Management (BPM) services, dedicated to delivering innovation and operational excellence across the Travel, Transportation, and Hospitality sectors. With over 10,000+ travel industry experts and 15 state-of-the-art delivery centers worldwide, IGT offers comprehensive solutions in Application Development, Mobility, Testing, Analytics, Contact Center Services, Back Office Operations, and Consulting. IGT is committed to a diverse and inclusive workplace and provides equal employment opportunities without regard to age, gender, race, religion, disability, or other protected statuses. Job Summary: We are seeking a dynamic and experienced Executive – HR Business Partner (HRBP) to join our team in Gurgaon . This role will lead HR operations for the assigned vertical/process, support business functions, drive employee engagement and retention initiatives, and ensure policy compliance. Key Responsibilities: Employee Relations: Address and resolve employee queries and concerns efficiently; track and report resolution Turnaround Time (TAT). Compliance & Policy Adherence: Enforce labor laws, company discipline, and the Code of Conduct. Attrition Management: Maintain attrition at or below 5%. Employee Engagement: Lead engagement activities, facilitate action planning, record meeting outcomes, and ensure timely follow-ups. Performance Management: Ensure timely KRA sign-offs for new joiners and during internal movements; monitor half-yearly and annual appraisals. Training Compliance: Track training plan adherence for the assigned vertical/process. Exit Management: Conduct exit interviews, analyze survey data, and present actionable insights and trends. Retention Strategies: Implement effective strategies to enhance employee retention and workplace satisfaction. Branding & Market Intelligence: Support employer branding and monitor industry HR best practices. Policy Compliance: Ensure adherence to internal policies including Security, Privacy, Zero Tolerance, Disciplinary, and Learning Agreements. Qualifications: Education: Graduate in any field (preferably with a degree in Psychology, Industrial Relations, or Human Resource Management). Experience: Proven experience in HR Generalist or Specialist roles, especially in labor relations and employee engagement in a BPO/Call Center environment. Skills & Competencies: Proficiency in MS Office tools (Excel, PowerPoint, Word) Strong analytical and problem-solving skills Basic understanding of labor laws and HR practices Ability to multitask and manage deadlines Excellent verbal and written communication skills Strong interpersonal and conflict-resolution abilities Additional Information: Work Environment: Onsite role (Gurgaon office) Work Schedule: Flexibility to work in rotational shifts or 24/7 schedules is mandatory Why Join IGT Solutions? Become part of a global leader that drives meaningful transformation in the travel and hospitality domain. At IGT, you’ll have the opportunity to shape HR practices, contribute to a culture of excellence, and grow your career in a dynamic, people-first environment.
Posted 1 week ago
5.0 - 7.0 years
27 - 30 Lacs
Hyderabad, Chennai
Work from Office
Experience required: 7+ years Core Generative AI & LLM Skills: * 5+ years in Software Engineering, 1+ year in Generative AI. * Strong understanding of LLMs, prompt engineering, and RAG. * Experience with multi-agent system design (planning, delegation, feedback). * Hands-on with LangChain (tools, memory, callbacks) and LangGraph (multi-agent orchestration). * Proficient in using vector DBs (OpenSearch, Pinecone, FAISS, Weaviate). * Skilled in Amazon Bedrock and integrating LLMs like Claude, Titan, Llama. * Strong Python (LangChain, LangGraph, FastAPI, boto3). * Experience building MCP servers/tools. * Designed robust APIs, integrated external tools with agents. * AWS proficiency: Lambda, API Gateway, DynamoDB, S3, Neptune, Bedrock Agents * Knowledge of data privacy, output filtering, audit logging * Familiar with AWS IAM, VPCs, and KMS encryption Desired Skills: * Integration with Confluence, CRMs, knowledge bases, etc. * Observability with Langfuse, OpenTelemetry, Prompt Catalog * Understanding of model alignment & bias mitigation
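As a small illustration of the Amazon Bedrock integration skills listed above, a hedged sketch of invoking a Claude model through boto3's Converse API; the region, model ID, and prompt are placeholders and assume a recent boto3 with valid AWS credentials:

```python
# Call an Anthropic Claude model on Amazon Bedrock via the Converse API.
# Region, model ID, and prompt are illustrative; assumes recent boto3 and valid AWS credentials.
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

response = bedrock.converse(
    modelId="anthropic.claude-3-sonnet-20240229-v1:0",
    messages=[{"role": "user", "content": [{"text": "Summarize our retry policy in two sentences."}]}],
    inferenceConfig={"maxTokens": 256, "temperature": 0.2},
)
print(response["output"]["message"]["content"][0]["text"])
```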
Posted 1 week ago
7.0 - 12.0 years
18 - 25 Lacs
Hyderabad, Pune, Bengaluru
Hybrid
Role & responsibilities
Required Skills:
Strong Python programming experience, especially with pandas, NumPy, Matplotlib, and Seaborn.
Experience in building monitoring dashboards or visualizations (e.g., Plotly, Dash, Streamlit).
Understanding of ML model evaluation metrics (e.g., precision, recall, drift, AUC).
Familiarity with model risk management concepts or frameworks.
Ability to write clean, well-documented code for reproducibility and audit-readiness.
Comfortable interpreting and working with structured model output and log files.
Excellent attention to detail and communication skills.
Should have experience in the Banking domain.
Preferred candidate profile
Notice Period: Immediate to 30 Days
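As a sketch of the monitoring work described above (evaluation metrics plus a drift signal over model output logs), with hypothetical file names and column names standing in for whatever log schema the team actually uses:

```python
# Lightweight monitoring pass over scored model output: performance metrics plus
# a simple feature-drift check. File and column names are assumptions about the log schema.
import pandas as pd
from scipy.stats import ks_2samp
from sklearn.metrics import precision_score, recall_score, roc_auc_score

scores = pd.read_csv("model_output_log.csv")      # columns: y_true, y_score, feature_x (hypothetical)
baseline = pd.read_csv("training_baseline.csv")   # same schema captured at training time

y_pred = (scores["y_score"] >= 0.5).astype(int)
report = {
    "auc": roc_auc_score(scores["y_true"], scores["y_score"]),
    "precision": precision_score(scores["y_true"], y_pred),
    "recall": recall_score(scores["y_true"], y_pred),
    # Kolmogorov-Smirnov statistic as a crude drift signal on one feature
    "feature_x_drift": ks_2samp(baseline["feature_x"], scores["feature_x"]).statistic,
}
print(pd.Series(report).round(3))
```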
Posted 1 week ago
0.0 - 1.0 years
0 Lacs
Ernakulam
Hybrid
Role & responsibilities
IT developer OJT (On-the-Job Training) internships are available in AI, Web, Mobile, Android, Flutter, PHP, React-Node, Python-Django, and ML/DL.
For non-IT graduates, we will train you on live AI/IoT/Web/Mobile projects of USA startups. Full-time opportunity after successful internships.
Additional training will be provided for freshers, along with 2-year Master's degree and 3-year degree enrollments.
Preferred candidate profile
Fast and slow self-learner programs.
Good understanding of accounting, healthcare, and business applications is required.
Posted 1 week ago
0.0 years
0 Lacs
Hyderabad / Secunderabad, Telangana, India
On-site
About the Team and Our Scope We are a forward-thinking tech organization within Swiss Re, delivering transformative AI/ML solutions that redefine how businesses operate. Our mission is to build intelligent, secure, and scalable systems that deliver real-time insights, automation, and high-impact user experiences to clients globally. You'll join a high-velocity AI/ML team working closely with product managers, architects, and engineers to create next-gen enterprise-grade solutions. Our team is built on a startup mindset - bias to action, fast iterations, and ruthless focus on value delivery. We're not only shaping the future of AI in business - we're shaping the future of talent. This role is ideal for someone passionate about advanced AI engineering today and curious about evolving into a product leadership role tomorrow. You'll get exposure to customer discovery, roadmap planning, and strategic decision-making alongside your technical contributions. Role Overview As an AI/ML Engineer, you will play a pivotal role in the research, development, and deployment of next-generation GenAI and machine learning solutions . Your scope will go beyond retrieval-augmented generation (RAG) to include areas such as prompt engineering, long-context LLM orchestration, multi-modal model integration (voice, text, image, PDF), and agent-based workflows. You will help assess trade-offs between RAG and context-native strategies, explore hybrid techniques, and build intelligent pipelines that blend structured and unstructured data. You'll work with technologies such as LLMs, vector databases, orchestration frameworks, prompt chaining libraries, and embedding models, embedding intelligence into complex, business-critical systems. This role sits at the intersection of rapid GenAI prototyping and rigorous enterprise deployment, giving you hands-on influence over both the technical stack and the emerging product direction. Key Responsibilities Build Next-Gen GenAI Pipelines : Design, implement, and optimize pipelines across RAG, prompt engineering, long-context input handling, and multi-modal processing. Prototype, Validate, Deploy : Rapidly test ideas through PoCs, validate performance against real-world business use cases, and industrialize successful patterns. Ingest, Enrich, Embed: Construct ingestion workflows including OCR, chunking, embeddings, and indexing into vector databases to unlock unstructured data. Integrate Seamlessly: Embed GenAI services into critical business workflows, balancing scalability, compliance, latency, and observability. Explore Hybrid Strategies: Combine RAG with context-native models, retrieval mechanisms, and agentic reasoning to build robust hybrid architectures. Drive Impact with Product Thinking : Collaborate with product managers and UX designers to shape user-centric solutions and understand business context. Ensure Enterprise-Grade Quality: Deliver solutions that are secure, compliant (e.g., GDPR), explainable, and resilient - especially in regulated environments. What Makes You a Fit Must-Have Technical Expertise Proven experience with GenAI techniques and LLMs , including RAG, long-context inference, prompt tuning, and multi-modal integration. Strong hands-on skills with Python , embedding models, and orchestration libraries (e.g., LangChain, Semantic Kernel, or equivalents). Comfort with MLOps practices , including version control, CI/CD pipelines, model monitoring, and reproducibility. 
Ability to operate independently, deliver iteratively, and challenge assumptions with data-driven insight.
Understanding of vector search optimization and retrieval tuning.
Exposure to multi-modal models.
Nice-To-Have Qualifications
Experience building and operating AI systems in regulated industries (e.g., insurance, finance, healthcare).
Familiarity with the Azure AI ecosystem (e.g., Azure OpenAI, Azure AI Document Intelligence, Azure Cognitive Search) and deployment practices in cloud-native environments.
Experience with agentic AI architectures, tools like AutoGen, or prompt chaining frameworks.
Familiarity with data privacy and auditability principles in enterprise AI.
Bonus: You Think Like a Product Manager
While this role is technical at its core, we highly value candidates who are curious about how AI features become products. If you're excited by the idea of influencing roadmaps, shaping requirements, or owning end-to-end value delivery - we'll give you space to grow into it. This is a role where engineering and product are not silos. If you're keen to move in that direction, we'll mentor and support your evolution.
Why Join Us
You'll be part of a team that's pushing AI/ML into uncharted, high-value territory. We operate with urgency, autonomy, and deep collaboration. You'll prototype fast, deliver often, and see your work shape real-world outcomes - whether in underwriting, claims, or data orchestration. And if you're looking to transition from deep tech to product leadership, this role is a launchpad.
Swiss Re is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees.
Reference Code: 134317
Posted 1 week ago
10.0 - 20.0 years
40 - 60 Lacs
Hyderabad, Bengaluru
Hybrid
Primary Skills (must have hands-on experience): GenAI, RAG, LLMs, Agentic AI, Data Science, Python, AI/ML, API Integration
GenAI Expertise: Extensive experience in building Generative AI applications, including RAG, LLM chaining using LangChain, and prompt engineering.
• Develop Python-based backend services, APIs, and orchestrators that manage tool invocation, context handling, and task decomposition.
• Develop goal-oriented autonomous agents capable of planning, decomposing tasks, and invoking tools via APIs and actions.
• Performance Optimization: Proven ability to optimize Azure OpenAI models for enhanced performance and scalability.
• API Integration: Experience in integrating OpenAI models into software applications using OpenAI APIs.
• Documentation: Excellent documentation and communication skills.
Role & responsibilities
As a Technical Architect with expertise in OpenAI technology, you will play a pivotal role in the design, implementation, and optimization of AI applications and systems. You will collaborate closely with cross-functional teams, software engineers, data scientists, and product managers to ensure the successful deployment of OpenAI technology. Your responsibilities will include:
Primary Responsibilities:
• AI Solution Design: Lead the design of AI solutions powered by OpenAI technology, defining system architecture and components.
• Technical Leadership: Provide technical leadership and guidance to development teams in implementing OpenAI solutions.
• OpenAI Model Selection: Identify and recommend the appropriate OpenAI models, such as GPT-3, GPT-4, or specific language models, based on project requirements.
• Scalability and Performance: Ensure that AI solutions are scalable and optimized for performance, especially when dealing with large datasets.
• API Integration: Integrate OpenAI APIs and SDKs into applications and services, ensuring seamless functionality.
• Model Fine-Tuning: Collaborate with data scientists to fine-tune OpenAI models for specific use cases or domains.
• Data Processing: Oversee data preprocessing and feature engineering to prepare data for input into OpenAI models.
• Security and Compliance: Address security and compliance considerations in OpenAI-powered solutions, particularly when handling sensitive data.
• Documentation: Create and maintain technical documentation for OpenAI architectures, model usage, and best practices.
• Training and Mentorship: Mentor team members and facilitate knowledge sharing and skill development.
• Conduct code reviews and refactoring wherever possible.
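For the Azure OpenAI API integration work listed above, a minimal call sketch; the endpoint variables, API version, and deployment name are placeholders for illustration:

```python
# Call an Azure OpenAI chat deployment; endpoint, API version, and deployment name
# are placeholders for illustration.
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
)

response = client.chat.completions.create(
    model="gpt-4o-deployment",            # Azure deployment name, not the raw model name
    messages=[
        {"role": "system", "content": "You are an orchestrator that decomposes tasks."},
        {"role": "user", "content": "Break 'onboard a new vendor' into ordered sub-tasks."},
    ],
    temperature=0.2,
)
print(response.choices[0].message.content)
```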
Posted 1 week ago
5.0 - 10.0 years
5 - 15 Lacs
Hyderabad
Work from Office
Job Description:
We are seeking a talented and experienced Data Scientist to join our dynamic team. The ideal candidate will have a strong background in data analysis, machine learning, statistical modeling, and artificial intelligence. Experience with Natural Language Processing (NLP) is desirable. Experience delivering products that incorporate AI/ML and familiarity with cloud services such as AWS are highly desirable.
Key Responsibilities:
Clean, prepare, and explore data to find trends and patterns.
Build, validate, and implement AI/ML models.
Extensively document all aspects of the work, including data analysis, model development, and results.
Collaborate with other team members to incorporate AI/ML models into software applications.
Stay updated with the latest advancements in the AI/ML domain and incorporate them into day-to-day work.
Required Skills/Qualifications:
3-5 years of experience in AI/ML-related work.
Extensive experience in Python.
Familiarity with statistical models such as linear/logistic regression, Bayesian models, classification/clustering models, and time series analysis.
Experience with deep learning models such as CNNs, RNNs, LSTMs, and Transformers.
Experience with machine learning frameworks such as TensorFlow, PyTorch, Scikit-learn, and Keras.
Experience with GenAI, LLMs, and RAG architecture would be a plus.
Familiarity with cloud services such as AWS and Azure.
Familiarity with version control systems (e.g., Git), JIRA, and Confluence.
Familiarity with MLOps concepts and AI/ML pipeline tooling such as Kedro.
Knowledge of CI/CD pipelines and DevOps practices.
Experience delivering customer-facing AI solutions as SaaS would be a plus.
Bachelor's degree in Computer Science, Engineering, or a related field, or equivalent practical experience.
Strong problem-solving skills and attention to detail.
Excellent verbal and written communication and teamwork skills.
Benefits:
Competitive salary and benefits package.
Opportunity to work on cutting-edge technologies and innovative projects.
Collaborative and inclusive work environment.
Professional development and growth opportunities.
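As an illustration of the statistical modeling and evaluation skills listed above, a minimal scikit-learn baseline; the synthetic dataset is purely a stand-in for real project data:

```python
# Baseline classification workflow: preprocess, fit, cross-validate.
# The synthetic dataset is a stand-in for real project data.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=1000, n_features=15, random_state=42)

pipeline = Pipeline([
    ("scale", StandardScaler()),            # standardize features before the linear model
    ("clf", LogisticRegression(max_iter=1000)),
])
scores = cross_val_score(pipeline, X, y, cv=5, scoring="roc_auc")
print(f"Mean AUC: {scores.mean():.3f} +/- {scores.std():.3f}")
```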
Posted 1 week ago
3.0 - 8.0 years
10 - 15 Lacs
Gurugram, Bengaluru, Delhi / NCR
Work from Office
Role & Responsibility Develop and maintain microservice architecture and API management solutions using REST and gRPC for seamless deployment of AI solutions. • Collaborate with cross-functional teams, including data scientists and product managers, to acquire, process, and manage data for AI/ML model integration and optimization. • Design and implement robust, scalable, and enterprise-grade data pipelines to support state-of-the-art AI/ML models. • Debug, optimize, and enhance machine learning models, ensuring quality assurance and performance improvements. • Familiarity with tools like Terraform, CloudFormation, and Pulumi for efficient infrastructure management. • Create and manage CI/CD pipelines using Git-based platforms (e.g., GitHub Actions, Jenkins) to ensure streamlined development workflows. • Operate container orchestration platforms like Kubernetes, with advanced configurations and service mesh implementations, for scalable ML workload deployments. • Design and build scalable LLM inference architectures, employing GPU memory optimization techniques and model quantization for efficient deployment. • Engage in advanced prompt engineering and fine-tuning of large language models (LLMs), focusing on semantic retrieval and chatbot development. • Document model architectures, hyperparameter optimization experiments, and validation results using version control and experiment tracking tools like MLflow or DVC. • Research and implement cutting-edge LLM optimization techniques, such as quantization and knowledge distillation, ensuring efficient model performance and reduced computational costs. • Collaborate closely with stakeholders to develop innovative and effective natural language processing solutions, specializing in text classification, sentiment analysis, and topic modeling. • Design and execute rigorous A/B tests for machine learning models, analyzing results to drive strategic improvements and decisions. • Stay up-to-date with industry trends and advancements in AI technologies, integrating new methodologies and frameworks to continually enhance the AI engineering function. • Contribute to creating specialized AI solutions in healthcare, leveraging domain-specific knowledge for task adaptation and deployment. Technical Skills: • Advanced proficiency in Python with expertise in data science libraries (NumPy, Pandas, scikit-learn) and deep learning frameworks (PyTorch, TensorFlow) • Extensive experience with LLM frameworks (Hugging Face Transformers, LangChain) and prompt engineering techniques • Experience with big data processing using Spark for large-scale data analytics • Version control and experiment tracking using Git and MLflow • Software Engineering & Development: Advanced proficiency in Python, familiarity with Go or Rust, expertise in microservices, test-driven development, and concurrency processing. • DevOps & Infrastructure: Experience with Infrastructure as Code (Terraform, CloudFormation), CI/CD pipelines (GitHub Actions, Jenkins), and container orchestration (Kubernetes) with Helm and service mesh implementations. • LLM Infrastructure & Deployment: Proficiency in LLM serving platforms such as vLLM and FastAPI, model quantization techniques, and vector database management. • MLOps & Deployment: Utilization of containerization strategies for ML workloads, experience with model serving tools like TorchServe or TF Serving, and automated model retraining. 
• Cloud & Infrastructure: Strong grasp of advanced cloud services (AWS, GCP, Azure) and network security for ML systems. • LLM Project Experience: Expertise in developing chatbots, recommendation systems, translation services, and optimizing LLMs for performance and security. • General Skills: Python, SQL, knowledge of machine learning frameworks (Hugging Face, TensorFlow, PyTorch), and experience with cloud platforms like AWS or GCP. • Experience in creating LLDs for the provided architecture. • Experience working in a microservices-based architecture.
Posted 1 week ago
3.0 - 8.0 years
14 - 16 Lacs
Gurugram, Bengaluru
Hybrid
Roles and Responsibilities Develop and maintain Microservice architecture and API management solutions using REST and gRPC for seamless deployment of AI solutions. Collaborate with cross-functional teams, including data scientists and product managers, to acquire, process, and manage data for AI/ML model integration and optimization. Design and implement robust, scalable, and enterprise-grade data pipelines to support state-of-the-art AI/ML models. Debug, optimize, and enhance machine learning models, ensuring quality assurance and performance improvements. Familiarity with tools like Terraform, CloudFormation, and Pulumi for efficient infrastructure management. Create and manage CI/CD pipelines using Git-based platforms (e.g., GitHub Actions, Jenkins) to ensure streamlined development workflows. Operate container orchestration platforms like Kubernetes, with advanced configurations and service mesh implementations, for scalable ML workload deployments. Design and build scalable LLM inference architectures, employing GPU memory optimization techniques and model quantization for efficient deployment. Engage in advanced prompt engineering and fine-tuning of large language models (LLMs), focusing on semantic retrieval and chatbot development. Document model architectures, hyperparameter optimization experiments, and validation results using version control and experiment tracking tools like MLflow or DVC. Research and implement cutting-edge LLM optimization techniques, such as quantization and knowledge distillation, ensuring efficient model performance and reduced computational costs. Collaborate closely with stakeholders to develop innovative and effective natural language processing solutions, specializing in text classification, sentiment analysis, and topic modeling. Design and execute rigorous A/B tests for machine learning models, analyzing results to drive strategic improvements and decisions. Stay up-to-date with industry trends and advancements in AI technologies, integrating new methodologies and frameworks to continually enhance the AI engineering function. Contribute to creating specialized AI solutions in healthcare, leveraging domain-specific knowledge for task adaptation and deployment. Technical Skills: Advanced proficiency in Python . Extensive experience with LLM frameworks (Hugging Face Transformers, LangChain) and prompt engineering techniques Experience with big data processing using Spark for large-scale data analytics Version control and experiment tracking using Git and MLflow Software Engineering & Development: Advanced proficiency in Python, familiarity with Go or Rust, expertise in microservices, test-driven development, and concurrency processing. DevOps & Infrastructure: Experience with Infrastructure as Code (Terraform, CloudFormation), CI/CD pipelines (GitHub Actions, Jenkins), and container orchestration (Kubernetes) with Helm and service mesh implementations. LLM Infrastructure & Deployment: Proficiency in LLM serving platforms such as vLLM and FastAPI, model quantization techniques, and vector database management. MLOps & Deployment: Utilization of containerization strategies for ML workloads, experience with model serving tools like TorchServe or TF Serving, and automated model retraining. Cloud & Infrastructure: Strong grasp of advanced cloud services (AWS, GCP, Azure) and network security for ML systems. LLM Project Experience: Expertise in developing chatbots, recommendation systems, translation services, and optimizing LLMs for performance and security. 
General Skills: Python, SQL, knowledge of machine learning frameworks (Hugging Face, TensorFlow, PyTorch), and experience with cloud platforms like AWS or GCP. Experience in creating LLD for the provided architecture. Experience working in microservices based architecture. Domain Expertise: Deep understanding of ML and LLM development lifecycle, including fine-tuning and evaluation Expertise in feature engineering, embedding optimization, and dimensionality reduction Advanced knowledge of A/B testing, experimental design, and statistical hypothesis testing Experience with RAG systems, vector databases, and semantic search implementation Proficiency in LLM optimization techniques including quantization and knowledge distillation Understanding of MLOps practices for model deployment and monitoring Professional Competencies: Strong analytical thinking with ability to solve complex ML challenges Excellent communication skills for presenting technical findings to diverse audiences Experience translating business requirements into data science solutions Project management skills for coordinating ML experiments and deployments Strong collaboration abilities for working with cross-functional teams Dedication to staying current with latest ML research and best practices Ability to mentor and share knowledge with team members
Posted 1 week ago
0.0 - 4.0 years
10 - 20 Lacs
Pune
Hybrid
About the role
PubMatic is looking for engineers with expertise in Generative AI and AI agent development. You will be responsible for building and optimizing advanced AI agents that leverage the latest technologies in Retrieval-Augmented Generation (RAG), vector databases, and large language models (LLMs). You will work on developing state-of-the-art solutions that enhance Generative AI capabilities and enable our platform to handle complex information retrieval, contextual generation, and adaptive interactions.
What You'll Do
Collaborate with engineers, architects, product managers, and UX designers to develop innovative AI solutions for new customer use-cases. Work independently and iterate quickly based on customer feedback to make product tweaks. Implement and optimize LLMs for specific use cases, including fine-tuning models, deploying pre-trained models, and evaluating their performance. Develop AI agents powered by RAG systems, integrating external knowledge sources to improve the accuracy and relevance of generated content. Design, implement, and optimize vector databases (e.g., FAISS, Pinecone, Weaviate) for efficient and scalable vector search, and work on various vector indexing algorithms. Create sophisticated prompts and fine-tune them to improve the performance of LLMs in generating precise and contextually relevant responses. Utilize evaluation frameworks and metrics (e.g., Evals) to assess and improve the performance of generative models and AI systems. Work with data scientists, engineers, and product teams to integrate AI-driven capabilities into customer-facing products and internal tools. Stay up to date with the latest research and trends in LLMs, RAG, and generative AI technologies to drive innovation in the company's offerings. Continuously monitor and optimize models to improve their performance, scalability, and cost efficiency.
We'd Love for You to Have
Must Have
Strong understanding of large language models (GPT, BERT, T5, etc.) and their underlying principles, including transformer architecture and attention mechanisms. Proven experience building AI agents with Retrieval-Augmented Generation to enhance model performance using external data sources (documents, databases). In-depth knowledge of vector databases, vector indexing algorithms, and experience with technologies like FAISS, Pinecone, Weaviate, or Milvus. Ability to craft complex prompts to guide the output of LLMs for specific use cases, enhancing model understanding and contextuality. Familiarity with Evals and other performance evaluation tools for measuring model quality, relevance, and efficiency. Proficiency in Python and experience with machine learning libraries such as TensorFlow, PyTorch, and Hugging Face Transformers. Experience with data preprocessing, vectorization, and handling large-scale datasets. Ability to present complex technical ideas and results to both technical and non-technical stakeholders.
Nice-to-Have
Experience in building AI agents using graph-based architectures, including knowledge graph embeddings and graph neural networks (GNNs). Experience with training small base models using custom data, including data collection, pre-processing, and fine-tuning models to specific domains or tasks. Familiarity with deploying AI models on cloud platforms (AWS, GCP, Azure) and containerization technologies (Docker, Kubernetes). Publication or contributions to research in AI, LLMs, or related fields.
Qualifications
Should have a bachelor's degree in engineering (CS/IT) or an equivalent degree from a well-known institute/university.
If interested, please visit the link below to apply: https://pubmatic.com/job/?gh_jid=4615220008
Posted 1 week ago
5.0 - 8.0 years
20 - 32 Lacs
Pune
Hybrid
About the role
PubMatic is looking for engineers with expertise in Generative AI and AI agent development. You will be responsible for building and optimizing advanced AI agents that leverage the latest technologies in Retrieval-Augmented Generation (RAG), vector databases, and large language models (LLMs). You will work on developing state-of-the-art solutions that enhance Generative AI capabilities and enable our platform to handle complex information retrieval, contextual generation, and adaptive interactions.
What You'll Do
Provide technical leadership and mentorship to engineering teams while collaborating with architects, product managers, and UX designers to create innovative AI solutions that address complex customer challenges. Lead the design, development, and deployment of AI-driven features. Drive end-to-end ownership, from feasibility analysis and design specifications to execution and release, while ensuring quick iterations based on customer feedback in a fast-paced Agile environment. Spearhead technical design meetings and produce detailed design documents that outline scalable, secure, and robust AI architectures. Ensure that the solutions are aligned with long-term product strategy and technical roadmaps. Implement and optimize LLMs for specific use cases, including fine-tuning models, deploying pre-trained models, and evaluating their performance. Develop AI agents powered by RAG systems, integrating external knowledge sources to improve the accuracy and relevance of generated content. Design, implement, and optimize vector databases (e.g., FAISS, Pinecone, Weaviate) for efficient and scalable vector search, and work on various vector indexing algorithms. Create sophisticated prompts and fine-tune them to improve the performance of LLMs in generating precise and contextually relevant responses. Utilize evaluation frameworks and metrics (e.g., Evals) to assess and improve the performance of generative models and AI systems. Work with data scientists, engineers, and product teams to integrate AI-driven capabilities into customer-facing products and internal tools. Stay up to date with the latest research and trends in LLMs, RAG, and generative AI technologies to drive innovation in the company's offerings. Continuously monitor and optimize models to improve their performance, scalability, and cost efficiency.
We'd Love for You to Have
Must Have
Strong understanding of large language models (GPT, BERT, T5, etc.) and their underlying principles, including transformer architecture and attention mechanisms. Proven experience building AI agents with Retrieval-Augmented Generation to enhance model performance using external data sources (documents, databases). In-depth knowledge of vector databases, vector indexing algorithms, and experience with technologies like FAISS, Pinecone, Weaviate, or Milvus. Ability to craft complex prompts to guide the output of LLMs for specific use cases, enhancing model understanding and contextuality. Familiarity with Evals and other performance evaluation tools for measuring model quality, relevance, and efficiency. Proficiency in Python and experience with machine learning libraries such as TensorFlow, PyTorch, and Hugging Face Transformers. Experience with data preprocessing, vectorization, and handling large-scale datasets. Ability to present complex technical ideas and results to both technical and non-technical stakeholders.
Nice-to-Have
Experience in building AI agents using graph-based architectures, including knowledge graph embeddings and graph neural networks (GNNs). Experience with training small base models using custom data, including data collection, pre-processing, and fine-tuning models to specific domains or tasks. Familiarity with deploying AI models on cloud platforms (AWS, GCP, Azure) and containerization technologies (Docker, Kubernetes). Publication or contributions to research in AI, LLMs, or related fields.
Qualifications
Should have a bachelor's degree in engineering (CS/IT) or an equivalent degree from a well-known institute/university.
If interested, please visit the link below to apply: https://pubmatic.com/job/?gh_jid=4615235008
Posted 1 week ago
3.0 - 8.0 years
10 - 20 Lacs
Chennai
Hybrid
JD:
Responsibilities:
- Strong understanding of ML algorithms, techniques, and best practices.
- Strong understanding of Databricks, Azure AI services, other ML platforms, cloud computing platforms (e.g., AWS, Azure, GCP), and frameworks (e.g., TensorFlow, PyTorch, scikit-learn).
- Strong understanding of MLflow or Kubeflow frameworks (see the MLflow tracking sketch after this posting).
- Strong programming skills in Python and data analytics expertise.
- Experience building GenAI-based solutions such as chatbots using RAG approaches.
- Expertise in any of the GenAI frameworks such as LangChain/LangGraph, AutoGen, CrewAI, etc.
Requirements:
- Proven experience as a Machine Learning Engineer, Data Scientist, or similar role, with a focus on product matching, image matching, and LLMs.
- Solid understanding of machine learning algorithms and frameworks (e.g., TensorFlow, PyTorch, scikit-learn).
- Hands-on experience with product matching algorithms and image recognition techniques.
- Experience with natural language processing and large language models (LLMs) such as GPT, BERT, or similar architectures.
- Optimize and fine-tune models for performance and scalability.
- Collaborate with cross-functional teams to integrate ML solutions into products.
- Stay updated with the latest advancements in AI and machine learning.
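Since the posting asks for MLflow or Kubeflow experience, a minimal MLflow tracking sketch is shown below; the dataset, model, and hyperparameters are illustrative and not specific to this role.

```python
# Minimal sketch of MLflow experiment tracking around a scikit-learn model.
import mlflow
import mlflow.sklearn
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

with mlflow.start_run(run_name="rf-baseline"):
    params = {"n_estimators": 200, "max_depth": 6}
    model = RandomForestClassifier(**params, random_state=42).fit(X_train, y_train)

    accuracy = accuracy_score(y_test, model.predict(X_test))
    mlflow.log_params(params)                 # hyperparameters for this run
    mlflow.log_metric("accuracy", accuracy)   # metric for comparing runs
    mlflow.sklearn.log_model(model, artifact_path="model")  # serialized model artifact
```

Runs logged this way can be compared in the MLflow UI and promoted through the model registry, a workflow Databricks also exposes as managed MLflow.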
Posted 1 week ago
3.0 - 8.0 years
3 - 18 Lacs
Noida, Uttar Pradesh, India
On-site
Key Responsibilities:
- LLM Integration & Development: Build and fine-tune LLMs for task-specific applications using techniques like prompt engineering, retrieval-augmented generation (RAG), fine-tuning, and model adaptation.
- AI Agent Engineering: Design, develop, and orchestrate AI agents capable of reasoning, planning, tool use (e.g., APIs, plugins), and autonomous execution for user-defined goals (see the tool-calling sketch after this posting).
- GenAI Use Case Implementation: Deliver GenAI-powered solutions such as chatbots, summarizers, document Q&A systems, assistants, and co-pilot tools using frameworks like LangChain or LlamaIndex.
- System Integration: Connect LLM-based agents to external tools, APIs, databases, and knowledge sources for real-time, contextualized task execution.
- Performance Tuning: Optimize model performance, cost efficiency, safety, and latency using caching, batching, evaluation tools, and monitoring systems.
- Collaboration & Documentation: Work closely with AI researchers, product teams, and engineers to iterate quickly. Maintain well-structured, reusable, and documented codebases.
Required Qualifications:
- 3-5 years of experience in AI/ML, with at least 1-2 years hands-on with GenAI or LLMs.
- Strong Python development skills and experience with ML frameworks (e.g., Hugging Face, LangChain, OpenAI API, Transformers).
- Familiarity with LLM orchestration, vector databases (e.g., FAISS, Pinecone, Weaviate), and embedding models.
- Understanding of prompt engineering, agent architectures, and conversational AI flows.
- Bachelor's or Master's degree in Computer Science, Artificial Intelligence, or a related field.
Preferred Qualifications:
- Experience deploying AI systems in cloud environments (AWS/GCP/Azure) or with containerized setups (Docker/Kubernetes).
- Familiarity with open-source LLMs (LLaMA, Mistral, Mixtral, etc.) and open-weight tuning methods (LoRA, QLoRA).
- Exposure to RAG pipelines, autonomous agents (e.g., Auto-GPT, BabyAGI), and multi-agent systems.
- Knowledge of model safety, evaluation, and compliance standards in GenAI.
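To make the agent responsibilities above concrete, here is a minimal tool-calling loop sketched against the OpenAI chat completions API; the model name, the order-status tool, and the prompt are illustrative assumptions, and a production agent would add planning, retries, and guardrails.

```python
# Minimal sketch of an LLM agent exposing one tool through the OpenAI
# chat-completions tool-calling interface.
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def get_order_status(order_id: str) -> str:
    """Hypothetical internal lookup; a real agent would call a service or API here."""
    return json.dumps({"order_id": order_id, "status": "shipped"})

tools = [{
    "type": "function",
    "function": {
        "name": "get_order_status",
        "description": "Look up the shipping status of an order.",
        "parameters": {
            "type": "object",
            "properties": {"order_id": {"type": "string"}},
            "required": ["order_id"],
        },
    },
}]

messages = [{"role": "user", "content": "Where is order 1234?"}]
first = client.chat.completions.create(model="gpt-4o-mini", messages=messages, tools=tools)

# Assumes the model decided to call the tool; production code would branch on this.
call = first.choices[0].message.tool_calls[0]
result = get_order_status(**json.loads(call.function.arguments))

# Return the tool output to the model so it can compose the final answer.
messages.append(first.choices[0].message)
messages.append({"role": "tool", "tool_call_id": call.id, "content": result})
final = client.chat.completions.create(model="gpt-4o-mini", messages=messages, tools=tools)
print(final.choices[0].message.content)
```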
Posted 1 week ago
3.0 - 8.0 years
3 - 18 Lacs
Thane, Maharashtra, India
On-site
Key Responsibilities:
- LLM Integration & Development: Build and fine-tune LLMs for task-specific applications using techniques like prompt engineering, retrieval-augmented generation (RAG), fine-tuning, and model adaptation.
- AI Agent Engineering: Design, develop, and orchestrate AI agents capable of reasoning, planning, tool use (e.g., APIs, plugins), and autonomous execution for user-defined goals.
- GenAI Use Case Implementation: Deliver GenAI-powered solutions such as chatbots, summarizers, document Q&A systems, assistants, and co-pilot tools using frameworks like LangChain or LlamaIndex.
- System Integration: Connect LLM-based agents to external tools, APIs, databases, and knowledge sources for real-time, contextualized task execution.
- Performance Tuning: Optimize model performance, cost efficiency, safety, and latency using caching, batching, evaluation tools, and monitoring systems.
- Collaboration & Documentation: Work closely with AI researchers, product teams, and engineers to iterate quickly. Maintain well-structured, reusable, and documented codebases.
Required Qualifications:
- 3-5 years of experience in AI/ML, with at least 1-2 years hands-on with GenAI or LLMs.
- Strong Python development skills and experience with ML frameworks (e.g., Hugging Face, LangChain, OpenAI API, Transformers).
- Familiarity with LLM orchestration, vector databases (e.g., FAISS, Pinecone, Weaviate), and embedding models.
- Understanding of prompt engineering, agent architectures, and conversational AI flows.
- Bachelor's or Master's degree in Computer Science, Artificial Intelligence, or a related field.
Preferred Qualifications:
- Experience deploying AI systems in cloud environments (AWS/GCP/Azure) or with containerized setups (Docker/Kubernetes).
- Familiarity with open-source LLMs (LLaMA, Mistral, Mixtral, etc.) and open-weight tuning methods (LoRA, QLoRA) (see the LoRA sketch after this posting).
- Exposure to RAG pipelines, autonomous agents (e.g., Auto-GPT, BabyAGI), and multi-agent systems.
- Knowledge of model safety, evaluation, and compliance standards in GenAI.
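One of the preferred skills above is open-weight tuning with LoRA/QLoRA. The sketch below shows how a LoRA adapter can be attached to a small open-weight model with the PEFT library; the base model id, target modules, and hyperparameters are illustrative assumptions rather than a tuned recipe.

```python
# Minimal sketch of attaching a LoRA adapter to an open-weight causal LM with PEFT.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, AutoTokenizer

base_model_id = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"  # illustrative open-weight base model
tokenizer = AutoTokenizer.from_pretrained(base_model_id)
model = AutoModelForCausalLM.from_pretrained(base_model_id)

lora_config = LoraConfig(
    r=8,                                   # rank of the low-rank update matrices
    lora_alpha=16,                         # scaling applied to the update
    target_modules=["q_proj", "v_proj"],   # attention projections to adapt
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
# Fine-tuning would then proceed with a standard Trainer / SFT loop on domain data.
```

Because only the adapter matrices are trainable, this kind of fine-tuning fits on far smaller GPUs than full-parameter training; QLoRA pushes this further by quantizing the frozen base weights to 4 bits.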
Posted 1 week ago
3.0 - 8.0 years
3 - 18 Lacs
Gurgaon / Gurugram, Haryana, India
On-site
Key Responsibilities:
- LLM Integration & Development: Build and fine-tune LLMs for task-specific applications using techniques like prompt engineering, retrieval-augmented generation (RAG), fine-tuning, and model adaptation.
- AI Agent Engineering: Design, develop, and orchestrate AI agents capable of reasoning, planning, tool use (e.g., APIs, plugins), and autonomous execution for user-defined goals.
- GenAI Use Case Implementation: Deliver GenAI-powered solutions such as chatbots, summarizers, document Q&A systems, assistants, and co-pilot tools using frameworks like LangChain or LlamaIndex.
- System Integration: Connect LLM-based agents to external tools, APIs, databases, and knowledge sources for real-time, contextualized task execution.
- Performance Tuning: Optimize model performance, cost efficiency, safety, and latency using caching, batching, evaluation tools, and monitoring systems (a lightweight evaluation sketch follows this posting).
- Collaboration & Documentation: Work closely with AI researchers, product teams, and engineers to iterate quickly. Maintain well-structured, reusable, and documented codebases.
Required Qualifications:
- 3-5 years of experience in AI/ML, with at least 1-2 years hands-on with GenAI or LLMs.
- Strong Python development skills and experience with ML frameworks (e.g., Hugging Face, LangChain, OpenAI API, Transformers).
- Familiarity with LLM orchestration, vector databases (e.g., FAISS, Pinecone, Weaviate), and embedding models.
- Understanding of prompt engineering, agent architectures, and conversational AI flows.
- Bachelor's or Master's degree in Computer Science, Artificial Intelligence, or a related field.
Preferred Qualifications:
- Experience deploying AI systems in cloud environments (AWS/GCP/Azure) or with containerized setups (Docker/Kubernetes).
- Familiarity with open-source LLMs (LLaMA, Mistral, Mixtral, etc.) and open-weight tuning methods (LoRA, QLoRA).
- Exposure to RAG pipelines, autonomous agents (e.g., Auto-GPT, BabyAGI), and multi-agent systems.
- Knowledge of model safety, evaluation, and compliance standards in GenAI.
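The evaluation expectations above can be grounded with even a very small harness; the sketch below scores generated answers by keyword recall over hand-written test cases. The cases, the scoring rule, and the stubbed answer function are illustrative assumptions, and a real system would add LLM-as-judge or framework-based evals.

```python
# Minimal sketch of a lightweight evaluation harness for a GenAI system.
from dataclasses import dataclass
from typing import Callable

@dataclass
class EvalCase:
    question: str
    expected_keywords: list[str]

def keyword_recall(answer: str, expected: list[str]) -> float:
    """Fraction of expected keywords that appear in the generated answer."""
    answer_lower = answer.lower()
    hits = sum(1 for kw in expected if kw.lower() in answer_lower)
    return hits / len(expected)

def run_eval(answer_fn: Callable[[str], str], cases: list[EvalCase]) -> float:
    scores = [keyword_recall(answer_fn(c.question), c.expected_keywords) for c in cases]
    return sum(scores) / len(scores)

if __name__ == "__main__":
    cases = [
        EvalCase("What does RAG add to an LLM?", ["retrieval", "context"]),
        EvalCase("Name a vector database.", ["faiss"]),
    ]
    # A real harness would call the deployed agent; a canned stub stands in here.
    stub = lambda q: "RAG adds a retrieval step that injects context from FAISS."
    print(f"mean keyword recall: {run_eval(stub, cases):.2f}")
```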
Posted 1 week ago
0.0 years
0 Lacs
Bengaluru / Bangalore, Karnataka, India
On-site
Ready to shape the future of work? At Genpact, we don't just adapt to change; we drive it. AI and digital innovation are redefining industries, and we're leading the charge. Genpact's industry-first accelerator is an example of how we're scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. Our breakthrough solutions, from large-scale models onward, tackle companies' most complex challenges. If you thrive in a fast-moving, tech-driven environment, love solving real-world problems, and want to be part of a team that's shaping the future, this is your moment.
Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions, we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today.
Inviting applications for the role of Lead Consultant - Data Scientist with AI and Generative Model experience!
We are currently looking for a talented and experienced Data Scientist with a strong background in AI, specifically in building generative AI models using large language models, to join our team. This individual will play a crucial role in developing and implementing data-driven solutions, AI-powered applications, and generative models that will help us stay ahead of the competition and achieve our ambitious goals.
Responsibilities
- Collaborate with cross-functional teams to identify, analyze, and interpret complex datasets to develop actionable insights and drive data-driven decision-making.
- Design, develop, and implement advanced statistical models, machine learning algorithms, AI applications, and generative models using large language models such as GPT-3 and BERT, as well as frameworks like RAG and Knowledge Graphs.
- Communicate findings and insights to both technical and non-technical stakeholders through clear and concise presentations, reports, and visualizations.
- Continuously monitor and assess the performance of AI models, generative models, and data-driven solutions, refining and optimizing them as needed.
- Stay up to date with the latest industry trends, tools, and technologies in data science, AI, and generative models, and apply this knowledge to improve existing solutions and develop new ones.
- Mentor and guide junior team members, helping to develop their skills and contribute to their professional growth.
Qualifications we seek in you:
Minimum Qualifications
- Bachelor's or Master's degree in Data Science, Computer Science, Statistics, or a related field.
- Experience in data science, machine learning, AI applications, and generative AI modelling.
- Strong expertise in Python, R, or other programming languages commonly used in data science and AI, with experience in implementing large language models and generative AI frameworks.
- Proficiency in statistical modelling, machine learning techniques, AI algorithms, and generative model development using large language models such as GPT-3 and BERT, or frameworks like RAG and Knowledge Graphs.
- Experience working with large datasets and using various data storage and processing technologies such as SQL, NoSQL, Hadoop, and Spark (a minimal PySpark sketch follows this posting).
- Strong analytical, problem-solving, and critical thinking skills, with the ability to draw insights from complex data and develop actionable recommendations.
- Excellent communication and collaboration skills, with the ability to work effectively with cross-functional teams and explain complex concepts to non-technical stakeholders.
Preferred Qualifications/Skills
- Experience deploying AI models, generative models, and applications in a production environment using cloud platforms such as AWS, Azure, or GCP.
- Knowledge of industry-specific data sources, challenges, and opportunities relevant to insurance.
- Demonstrated experience leading data science projects from inception to completion, including project management and team collaboration skills.
Why join Genpact
- Be a transformation leader: work at the cutting edge of AI, automation, and digital innovation.
- Make an impact: drive change for global enterprises and solve business challenges that matter.
- Accelerate your career: get hands-on experience, mentorship, and continuous learning opportunities.
- Work with the best: join 140,000+ bold thinkers and problem-solvers who push boundaries every day.
- Thrive in a values-driven culture: our courage, curiosity, and incisiveness, built on a foundation of integrity and inclusion, allow your ideas to fuel progress.
Come join the tech shapers and growth makers at Genpact and take your career in the only direction that matters: up. Let's build tomorrow together.
Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability, or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Please note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.
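For the large-dataset processing listed in the minimum qualifications, a minimal PySpark aggregation sketch follows; the storage path, schema, and column names are illustrative assumptions (an insurance-flavoured example to match the preferred domain), not Genpact data.

```python
# Minimal sketch of profiling a large dataset with PySpark.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("claims-profiling").getOrCreate()

# Assumed dataset location and columns; a real pipeline would read governed tables.
claims = spark.read.parquet("s3://example-bucket/insurance/claims/")

summary = (
    claims
    .filter(F.col("claim_amount") > 0)
    .groupBy("policy_type")
    .agg(
        F.count("*").alias("n_claims"),
        F.avg("claim_amount").alias("avg_claim_amount"),
    )
    .orderBy(F.desc("n_claims"))
)
summary.show()
```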
Posted 1 week ago