2.0 - 6.0 years
0 Lacs
karnataka
On-site
You will be joining Ushur as an AI Application Engineer, contributing to the development of AI-powered language intelligence, conversational automation, and document intelligence stacks. In this role, you will collaborate with a high-impact team to create ML applications, work with AI/ML stacks, and engage with customers regularly. Your responsibilities will include designing, developing, and supporting AI/ML solutions for enterprises using Ushur's ExperienceOS micro-engagement Platform. Working closely with the Ushur AI Lab team, you will design and implement AI agents focused on intelligent document processing and conversational AI applications. You will collaborate with a diverse team of engineers, architects, developers, testers, and machine learning engineers to deliver innovative digital experiences at scale. Additionally, you will engage with clients to understand their requirements, translate them into detailed solutions, and leverage ML models within the Ushur LLM stack.

To excel in this role, you must hold a Bachelor's or Master's degree in Computer Science, Statistics, or a related technical field, with at least 2 years of experience in Natural Language Processing (NLP) and direct customer interaction. Proficiency in Python or Java, along with experience in contemporary ML frameworks such as PyTorch and TensorFlow, is required. Strong verbal and written communication skills are essential for engaging effectively with customers and internal teams. While not mandatory, a Master's degree in Applied Math/Computer Science or prior experience training customers on developed features would be advantageous.

Ushur offers a vibrant company culture that values respect, inclusion, and collaboration, providing an environment where individuals can thrive and make a meaningful impact. We support work-life balance with flexible paid time off, comprehensive health benefits, and opportunities for professional growth and development. Join us at Ushur and be part of a dynamic team that celebrates diversity, encourages innovation, and fosters personal and professional growth.
Posted 2 days ago
1.0 - 5.0 years
0 Lacs
noida, uttar pradesh
On-site
As a Product Manager - Enterprise AI, you will play a crucial role in leading the transformation by defining and delivering AI tools that enhance voice intelligence, agent copilots, decision support systems, and risk engines. Your focus will be on supercharging our teams and establishing a new benchmark in fintech productivity.

Your responsibilities will include defining the product vision and roadmap for our internal GenAI suite, which comprises voice copilots, risk intelligence agents, ops automation tools, regulatory AI assistants, and more. You will lead cross-functional teams to develop AI-powered workflows that reduce turnaround time, increase productivity, and enhance accuracy. Additionally, you will be responsible for making build-vs-buy decisions involving vendors such as OpenAI, Whisper, AssemblyAI, and LangChain.

In this role, you will work hands-on with AI technologies such as LLMs, embeddings, RAG pipelines, vector databases, and prompt tuning. It will be essential to design with explainability, fallback logic, and hallucination control as defaults. Monitoring success metrics like adoption rates, efficiency gains, and time saved will be crucial, and you will iterate quickly based on feedback to drive continuous improvement.

Ensuring full alignment with regulatory bodies and internal compliance when deploying GenAI systems in critical workflows will be a key aspect of your role. You will also be responsible for maintaining cybersecurity standards as an integral part of product development, ensuring that products are built with security in mind from the outset.

To be successful in this role, you should have at least 4 years of product management experience, with at least 1 year in GenAI/ML/NLP-focused roles. A background in fintech, particularly in lending, collections, KYC, or regulatory operations, will be advantageous. A strong understanding of GenAI architecture, a technical education background (in CS, Engineering, Data Science, or Applied Math), and experience working with data scientists, backend engineers, and legal/compliance teams are essential. An analytical mindset, the ability to define product KPIs and track outcomes, and excellent communication and stakeholder management skills are also crucial for excelling in this role.
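The posting's call to "design with explainability, fallback logic, and hallucination control as defaults" can be illustrated with a small sketch. Everything below (the function name, the injected callables, the grounding checker) is hypothetical and vendor-agnostic; a real system would plug an actual LLM call and a retrieval-grounded verifier into the two callables.

```python
from typing import Callable

def answer_with_fallback(
    question: str,
    generate: Callable[[str], str],
    is_grounded: Callable[[str, str], bool],
    fallback_message: str = "I'm not confident enough to answer; routing to a human agent.",
) -> str:
    """Wrap a model call with a grounding check and a safe fallback.

    `generate` (the LLM call) and `is_grounded` (the hallucination check)
    are injected so the wrapper stays vendor-agnostic.
    """
    try:
        draft = generate(question)
    except Exception:
        # Provider errors degrade to the safe default instead of crashing.
        return fallback_message
    # Hallucination control: only surface answers the checker accepts.
    if draft and is_grounded(question, draft):
        return draft
    return fallback_message
```

In production the fallback branch would typically log the failure and escalate to a human queue; the resulting fallback rate then feeds the adoption and accuracy metrics the posting mentions.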
Posted 1 week ago
10.0 - 14.0 years
0 Lacs
delhi
On-site
You are a highly experienced Lead Machine Learning Engineer specializing in Speech AI, Natural Language Processing (NLP), and Generative AI (GenAI). Your role is crucial in designing and expanding a production-grade speech-based virtual assistant powered by Large Language Models (LLMs), advanced audio signal processing, and multimodal intelligence. Collaborating closely with product, research, and DevOps teams, you will lead a group of ML engineers to create and implement cutting-edge AI solutions.

In this role, your responsibilities include architecting and implementing advanced machine learning models for speech recognition (ASR), text-to-speech (TTS), NLP, and multimodal tasks. You will lead the development and fine-tuning of Transformer-based LLMs, including encoder-decoder architectures for audio and text tasks. Additionally, you will build custom audio-LLM interaction frameworks incorporating modality fusion, speech understanding, and language generation techniques.

Your duties also involve designing and deploying LLM-powered virtual assistants with real-time speech interfaces for dialog, voice commands, and assistive technologies. You will integrate speech models with backend NLP pipelines to handle complex user intents, contextual understanding, and response generation effectively.

Furthermore, you will design and implement end-to-end ML pipelines covering data ingestion, preprocessing, feature extraction, model training, evaluation, and deployment. Developing reproducible and scalable training pipelines using MLOps tools such as MLflow, Kubeflow, and Airflow with robust monitoring and model versioning will be part of your responsibilities. You will also drive CI/CD for ML workflows, model containerization (Docker), and orchestration using Kubernetes or serverless infrastructure.

To stay updated with the latest advancements in Speech AI, LLMs, and GenAI, you will evaluate and drive the adoption of novel techniques. Experimenting with self-supervised learning, prompt tuning, parameter-efficient fine-tuning (PEFT), and zero-shot/multilingual speech models will be essential for innovation and progress.

The required technical skills for this role include:
- 10+ years of hands-on experience in machine learning, with a deep focus on audio (speech) and NLP applications.
- Expertise in Automatic Speech Recognition (ASR) and Text-to-Speech (TTS) systems, including tools like Wav2Vec, Whisper, Tacotron, and FastSpeech.
- Strong knowledge of Transformer architectures such as BERT, GPT, T5, and encoder-decoder LLM variants, including training/fine-tuning at scale.
- Proficiency in Python programming and deep learning frameworks like PyTorch and TensorFlow.
- In-depth understanding of audio signal processing concepts such as MFCCs, spectrograms, wavelets, sampling, and filtering.
- Experience with multimodal machine learning, including the fusion of speech, text, and contextual signals.
- Deployment of ML services with Docker and Kubernetes, and experience with distributed training setups on GPU clusters or cloud platforms (AWS, GCP, Azure).
- Proven experience in building production-grade MLOps frameworks and maintaining model lifecycle management.
- Experience with real-time inference, latency optimization, and efficient decoding techniques for audio/NLP systems.

Preferred qualifications for this role include:
- Master's or Ph.D. in Computer Science, Machine Learning, Signal Processing, or a related technical discipline.
- Publications or open-source contributions in speech, NLP, or GenAI.
- Familiarity with LLM alignment techniques, RLHF, prompt engineering, and fine-tuning using LoRA, QLoRA, or adapters.
- Previous experience in deploying voice-based conversational AI products at scale.
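The audio front end the requirements mention (spectrograms, MFCCs) boils down to windowed short-time FFTs. As a minimal sketch, assuming plain NumPy, a mono 16 kHz waveform at least one frame long, and illustrative frame/hop sizes (not values from the posting):

```python
import numpy as np

def spectrogram(signal: np.ndarray, frame_len: int = 400, hop: int = 160) -> np.ndarray:
    """Frame the waveform, apply a Hann window, and take the magnitude of the
    real FFT per frame -- the front end that MFCC/log-mel features (and models
    like Wav2Vec or Whisper) build on. Assumes len(signal) >= frame_len.
    Defaults correspond to 25 ms frames with a 10 ms hop at 16 kHz.
    """
    n_frames = 1 + max(0, (len(signal) - frame_len) // hop)
    window = np.hanning(frame_len)
    frames = np.stack([
        signal[i * hop : i * hop + frame_len] * window
        for i in range(n_frames)
    ])
    # Shape: (n_frames, frame_len // 2 + 1) magnitude bins.
    return np.abs(np.fft.rfft(frames, axis=1))
```

From here, an MFCC pipeline would map the magnitude bins through a mel filterbank, take logs, and apply a DCT; deep ASR models often consume the log-mel stage directly.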
Posted 3 weeks ago
3.0 - 5.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Description: We are looking for a Lead Generative AI Engineer with 3-5 years of experience to spearhead development of cutting-edge AI systems involving Large Language Models (LLMs), Vision-Language Models (VLMs), and Computer Vision (CV). You will lead model development, fine-tuning, and optimization for text, image, and multi-modal use cases. This is a hands-on leadership role that requires a deep understanding of transformer architectures, generative model fine-tuning, prompt engineering, and deployment in production environments.

Roles and Responsibilities:
- Lead the design, development, and fine-tuning of LLMs for tasks such as text generation, summarization, classification, Q&A, and dialogue systems.
- Develop and apply Vision-Language Models (VLMs) for tasks like image captioning, VQA, multi-modal retrieval, and grounding.
- Work on Computer Vision tasks including image generation, detection, segmentation, and manipulation using SOTA deep learning techniques.
- Leverage frameworks like Transformers, Diffusion Models, and CLIP to build and fine-tune multi-modal models.
- Fine-tune open-source LLMs and VLMs (e.g., LLaMA, Mistral, Gemma, Qwen, MiniGPT, Kosmos) using task-specific or domain-specific datasets.
- Design data pipelines, model training loops, and evaluation metrics for generative and multi-modal AI tasks.
- Optimize model performance for inference using techniques like quantization, LoRA, and efficient transformer variants.
- Collaborate cross-functionally with product, backend, and MLOps teams to ship models into production.
- Stay current with the latest research and incorporate emerging techniques into product pipelines.

Requirements:
- Bachelor's or Master's degree in Computer Science, Artificial Intelligence, Machine Learning, or a related field.
- 3-5 years of hands-on experience in building, training, and deploying deep learning models, especially in the LLM, VLM, and/or CV domains.
- Strong proficiency with Python, PyTorch (or TensorFlow), and libraries like Hugging Face Transformers, OpenCV, Datasets, and LangChain.
- Deep understanding of transformer architecture, self-attention mechanisms, tokenization, embeddings, and diffusion models.
- Experience with LoRA, PEFT, RLHF, prompt tuning, and transfer learning techniques.
- Experience with multi-modal datasets and fine-tuning vision-language models (e.g., BLIP, Flamingo, MiniGPT, Kosmos).
- Familiarity with MLOps tools, containerization (Docker), and model deployment workflows (e.g., Triton Inference Server, TorchServe).
- Strong problem-solving, architectural thinking, and team mentorship skills.
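The LoRA technique listed in the requirements replaces full fine-tuning with a trainable low-rank update on top of a frozen weight. A minimal NumPy sketch of the idea follows; the class name, dimensions, and hyperparameters are illustrative, and libraries such as Hugging Face PEFT implement this for real models.

```python
import numpy as np

class LoRALinear:
    """Low-Rank Adaptation sketch: a frozen weight W (d_out x d_in) is
    augmented by a trainable update (B @ A) * (alpha / r), so fine-tuning
    touches only r * (d_in + d_out) parameters instead of d_in * d_out."""

    def __init__(self, W: np.ndarray, r: int = 4, alpha: float = 8.0, seed: int = 0):
        rng = np.random.default_rng(seed)
        d_out, d_in = W.shape
        self.W = W                               # frozen pretrained weight
        self.A = rng.normal(0, 0.01, (r, d_in))  # trainable down-projection
        self.B = np.zeros((d_out, r))            # trainable up-projection, starts at 0
        self.scale = alpha / r

    def forward(self, x: np.ndarray) -> np.ndarray:
        # Because B starts at zero, the adapted layer initially matches the
        # base layer exactly; training then moves only A and B.
        return x @ (self.W + self.scale * (self.B @ self.A)).T
```

The zero-initialized `B` is the detail that makes LoRA safe to bolt onto a pretrained model: at step zero the network's behavior is unchanged.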
Posted 3 weeks ago
3.0 - 7.0 years
0 Lacs
delhi
On-site
As a Prompt Engineer in the Artificial Intelligence/R&D department, you will play a vital role in building, optimizing, and testing natural language prompts to enhance the performance of large language models such as OpenAI GPT, Claude, and Gemini. Your primary responsibilities will involve designing, writing, and optimizing natural language prompts to address specific business or product challenges. Working alongside data scientists, product managers, and engineering teams, you will contribute to the development of high-quality AI prompt systems for real-world applications.

Your role will require you to analyze and test the impact of different prompts on model output quality, collaborating closely with product and development teams to create deployable prompt templates and interface logic. Additionally, you will help establish prompt libraries and prompt engineering frameworks to enhance prompt reusability and scalability. Staying updated on the latest prompt engineering technologies and research papers will be crucial, as you transform these insights into practical applications.

To excel in this role, you must demonstrate proficiency with mainstream large language models such as GPT-4, Claude, or Gemini. Strong language expression and logical thinking skills, along with expertise in prompt tuning, few-shot learning, and chain-of-thought prompt engineering methods, are essential. A bachelor's degree in a relevant field (computer science, linguistics, cognitive science) is required, with a preference for candidates with strong English reading abilities to understand model documentation and research papers. Candidates with experience in actual Large Language Model (LLM) application projects, mastery of Python programming for rapid prototyping, and familiarity with dialogue systems, search engines, knowledge graphs, and RAG technologies will be given preference. Understanding of model fine-tuning, LoRA, and system prompt structure, as well as familiarity with tool chains like LangChain, LlamaIndex, and OpenAI Function Calling, will be advantageous.

In return, we offer a competitive salary and bonus system, the chance to work with cutting-edge AI technology, flexible working hours, and remote work options. Join our team of technology enthusiasts and innovators to embark on a rewarding career journey.
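The few-shot and chain-of-thought prompting methods the posting names can be captured in a small template builder. The layout and labels below are one common convention, not a standard, and the function name is hypothetical:

```python
def build_few_shot_prompt(
    task: str,
    examples: list[tuple[str, str]],
    query: str,
    chain_of_thought: bool = False,
) -> str:
    """Assemble a reusable few-shot prompt: a task instruction, worked
    input/output examples, then the open query. With chain_of_thought=True
    the instruction also asks the model to reason step by step first."""
    lines = [f"Task: {task}"]
    if chain_of_thought:
        lines.append("Think step by step, then give the final answer.")
    for example_input, example_output in examples:
        lines += [f"Input: {example_input}", f"Output: {example_output}"]
    # The trailing bare "Output:" invites the model to complete the pattern.
    lines += [f"Input: {query}", "Output:"]
    return "\n".join(lines)
```

A prompt library of the kind the posting describes is essentially a versioned collection of such builders, each tested against held-out cases so changes to wording can be evaluated rather than guessed at.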
Posted 1 month ago
1.0 - 3.0 years
3 - 6 Lacs
Bengaluru
Remote
We're hiring a passionate Data Scientist / GenAI Engineer to join our AI-first team working on LLMs, RAG pipelines, NLP features, and GenAI use cases like chatbots, recommendation engines, and smart automation.
Posted 1 month ago
5.0 - 10.0 years
10 - 20 Lacs
Hyderabad, Bengaluru
Hybrid
Job Description: Python GenAI Developer

Seeking an experienced Python Developer with expertise in Generative AI to join our AI & ML Engineering team. The ideal candidate will work on building intelligent applications leveraging large language models (LLMs) and deep learning to enhance customer experience and streamline business processes.

Key Responsibilities:
- Design, develop, and deploy GenAI solutions using Python and modern AI frameworks.
- Work with LLMs (e.g., GPT, LLaMA, Claude) and fine-tune them for enterprise use cases.
- Develop prompt engineering strategies and support prompt optimization.
- Integrate AI models with backend services and data pipelines.
- Implement scalable and secure APIs and services for GenAI use cases.
- Collaborate with data scientists, ML engineers, and product teams.
- Ensure compliance with enterprise security, data governance, and regulatory standards.

Required Skills:
- 5+ years of experience with Python in enterprise application development.
- Strong knowledge of GenAI tools (OpenAI, Hugging Face, LangChain, etc.).
- Experience with LLMs, prompt tuning, and Retrieval Augmented Generation (RAG).
- Experience with Flask/FastAPI or similar frameworks.
- Familiarity with vector databases (e.g., FAISS, Pinecone).
- Understanding of NLP and transformer models.
- Good knowledge of Git, Docker, and CI/CD.

Nice to Have:
- Knowledge of Azure OpenAI or AWS Bedrock.
- Experience working in the finance or banking domain.
- Exposure to Kubernetes and model deployment pipelines.
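The retrieval half of RAG, listed in the required skills, can be sketched with a toy bag-of-words "embedding" and cosine similarity. This is illustrative only: a real stack would use a learned embedding model and a vector database such as FAISS or Pinecone instead of the in-memory scan below, and the function names are invented for the sketch.

```python
import numpy as np

def build_vocab(texts: list[str]) -> dict[str, int]:
    """Assign each distinct lowercase token an index (toy vocabulary)."""
    vocab: dict[str, int] = {}
    for text in texts:
        for tok in text.lower().split():
            vocab.setdefault(tok, len(vocab))
    return vocab

def embed(text: str, vocab: dict[str, int]) -> np.ndarray:
    """L2-normalized token-count vector -- a stand-in for a real embedding."""
    v = np.zeros(len(vocab))
    for tok in text.lower().split():
        if tok in vocab:
            v[vocab[tok]] += 1.0
    norm = np.linalg.norm(v)
    return v / norm if norm else v

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank documents by cosine similarity to the query and return the top k.
    In RAG, these passages would then be inserted into the LLM prompt."""
    vocab = build_vocab(docs + [query])
    q = embed(query, vocab)
    sims = [float(q @ embed(doc, vocab)) for doc in docs]
    order = sorted(range(len(docs)), key=lambda i: -sims[i])
    return [docs[i] for i in order[:k]]
```

Because the vectors are normalized, the dot product in `retrieve` is exactly cosine similarity, which is also the default metric in most vector databases.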
Posted 2 months ago
0.0 years
0 Lacs
Hyderabad / Secunderabad, Telangana, Telangana, India
On-site
About the Team and Our Scope

We are a forward-thinking tech organization within Swiss Re, delivering transformative AI/ML solutions that redefine how businesses operate. Our mission is to build intelligent, secure, and scalable systems that deliver real-time insights, automation, and high-impact user experiences to clients globally. You'll join a high-velocity AI/ML team working closely with product managers, architects, and engineers to create next-gen enterprise-grade solutions. Our team is built on a startup mindset: bias to action, fast iterations, and a ruthless focus on value delivery.

We're not only shaping the future of AI in business; we're shaping the future of talent. This role is ideal for someone passionate about advanced AI engineering today and curious about evolving into a product leadership role tomorrow. You'll get exposure to customer discovery, roadmap planning, and strategic decision-making alongside your technical contributions.

Role Overview

As an AI/ML Engineer, you will play a pivotal role in the research, development, and deployment of next-generation GenAI and machine learning solutions. Your scope will go beyond retrieval-augmented generation (RAG) to include areas such as prompt engineering, long-context LLM orchestration, multi-modal model integration (voice, text, image, PDF), and agent-based workflows. You will help assess trade-offs between RAG and context-native strategies, explore hybrid techniques, and build intelligent pipelines that blend structured and unstructured data. You'll work with technologies such as LLMs, vector databases, orchestration frameworks, prompt chaining libraries, and embedding models, embedding intelligence into complex, business-critical systems. This role sits at the intersection of rapid GenAI prototyping and rigorous enterprise deployment, giving you hands-on influence over both the technical stack and the emerging product direction.
Key Responsibilities
- Build Next-Gen GenAI Pipelines: Design, implement, and optimize pipelines across RAG, prompt engineering, long-context input handling, and multi-modal processing.
- Prototype, Validate, Deploy: Rapidly test ideas through PoCs, validate performance against real-world business use cases, and industrialize successful patterns.
- Ingest, Enrich, Embed: Construct ingestion workflows including OCR, chunking, embeddings, and indexing into vector databases to unlock unstructured data.
- Integrate Seamlessly: Embed GenAI services into critical business workflows, balancing scalability, compliance, latency, and observability.
- Explore Hybrid Strategies: Combine RAG with context-native models, retrieval mechanisms, and agentic reasoning to build robust hybrid architectures.
- Drive Impact with Product Thinking: Collaborate with product managers and UX designers to shape user-centric solutions and understand business context.
- Ensure Enterprise-Grade Quality: Deliver solutions that are secure, compliant (e.g., GDPR), explainable, and resilient, especially in regulated environments.

What Makes You a Fit

Must-Have Technical Expertise
- Proven experience with GenAI techniques and LLMs, including RAG, long-context inference, prompt tuning, and multi-modal integration.
- Strong hands-on skills with Python, embedding models, and orchestration libraries (e.g., LangChain, Semantic Kernel, or equivalents).
- Comfort with MLOps practices, including version control, CI/CD pipelines, model monitoring, and reproducibility.
- Ability to operate independently, deliver iteratively, and challenge assumptions with data-driven insight.
- Understanding of vector search optimization and retrieval tuning.
- Exposure to multi-modal models.

Nice-To-Have Qualifications
- Experience building and operating AI systems in regulated industries (e.g., insurance, finance, healthcare).
- Familiarity with the Azure AI ecosystem (e.g., Azure OpenAI, Azure AI Document Intelligence, Azure Cognitive Search) and deployment practices in cloud-native environments.
- Experience with agentic AI architectures and tools like AutoGen or prompt chaining frameworks.
- Familiarity with data privacy and auditability principles in enterprise AI.

Bonus: You Think Like a Product Manager

While this role is technical at its core, we highly value candidates who are curious about how AI features become products. If you're excited by the idea of influencing roadmaps, shaping requirements, or owning end-to-end value delivery, we'll give you space to grow into it. This is a role where engineering and product are not silos. If you're keen to move in that direction, we'll mentor and support your evolution.

Why Join Us

You'll be part of a team that's pushing AI/ML into uncharted, high-value territory. We operate with urgency, autonomy, and deep collaboration. You'll prototype fast, deliver often, and see your work shape real-world outcomes, whether in underwriting, claims, or data orchestration. And if you're looking to transition from deep tech to product leadership, this role is a launchpad.

Swiss Re is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees.

Reference Code: 134317
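The ingestion step this posting describes (chunking documents before embedding and indexing into a vector database) can be sketched as overlapping word windows. Real pipelines would chunk by model-tokenizer tokens or by document structure; whitespace splitting and the default sizes below are simplifying assumptions for the sketch.

```python
def chunk_text(text: str, max_tokens: int = 50, overlap: int = 10) -> list[str]:
    """Split a document into overlapping word-window chunks, the step that
    precedes embedding and indexing. The overlap preserves context that
    would otherwise be cut at chunk boundaries."""
    tokens = text.split()
    if not tokens:
        return []
    step = max(1, max_tokens - overlap)  # guard against overlap >= max_tokens
    chunks = []
    for start in range(0, len(tokens), step):
        chunks.append(" ".join(tokens[start : start + max_tokens]))
        if start + max_tokens >= len(tokens):
            break  # the final window already reaches the end of the document
    return chunks
```

Each chunk would then be embedded and written to the vector store along with metadata (source document, offset) so retrieved passages can be traced back, which matters for the explainability and auditability requirements above.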
Posted 2 months ago
5.0 - 8.0 years
0 - 2 Lacs
Hyderabad, Chennai, Bengaluru
Hybrid
We are seeking an AI/ML engineer or GenAI developer who will leverage Amazon Bedrock to build and optimize intelligent systems for automated email organization, with a strong emphasis on prompt engineering, collaboration, testing, and adherence to best practices.

Responsibilities:
- Develops and fine-tunes Bedrock-based GenAI models for email categorization
- Implements prompt engineering techniques for accurate email processing
- Collaborates with data engineers to optimize email categorization workflows
- Conducts testing to refine prompt structures and improve model responses
- Enables compliance with AI governance and security best practices
- Works with business users to align AI-generated responses with organizational needs
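The prompt-engineering emphasis above can be illustrated with a categorization prompt plus a guarded parser. The category taxonomy, function names, and fallback choice below are all hypothetical, and the actual Amazon Bedrock call (via the boto3 `bedrock-runtime` client's `invoke_model` or `converse` APIs) is deliberately omitted.

```python
import json

# Example taxonomy for the sketch -- not from the posting.
CATEGORIES = ["billing", "support", "sales", "spam"]

def build_categorization_prompt(email_body: str) -> str:
    """Constrain the model to a fixed label set and a JSON-only reply so
    responses can be parsed reliably -- a core prompt-engineering tactic
    for classification tasks."""
    return (
        "You are an email triage assistant.\n"
        f"Classify the email into exactly one of: {', '.join(CATEGORIES)}.\n"
        'Reply with JSON only, e.g. {"category": "support"}.\n\n'
        f"Email:\n{email_body}"
    )

def parse_category(model_reply: str) -> str:
    """Validate the model's reply against the taxonomy. Malformed or
    out-of-taxonomy output falls back to 'support' (an arbitrary choice
    here, standing in for a human-review queue)."""
    try:
        payload = json.loads(model_reply)
        category = payload.get("category", "") if isinstance(payload, dict) else ""
    except json.JSONDecodeError:
        return "support"
    return category if category in CATEGORIES else "support"
```

In a Bedrock deployment, the testing responsibility above would amount to running a labeled set of emails through `build_categorization_prompt`, scoring `parse_category` output against the labels, and iterating on the prompt wording.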
Posted 2 months ago