0 years
0 Lacs
Bhuvanagiri, Tamil Nadu, India
On-site
Job Description

Compensation Note: The budget for this role is fixed at INR 50–55 lakhs per annum (non-negotiable). Please ensure this aligns with your expectations before applying.
Work Setup: This is a hybrid role, requiring 3 days per week onsite at the office in Hyderabad, India.
Interview Process: The process consists of 6 stages, including a technical assessment, code review, code discussion, and panel interviews.

Company Description: Blend is a premier AI services provider, committed to co-creating meaningful impact for its clients through the power of data science, AI, technology, and people. With a mission to fuel bold visions, Blend tackles significant challenges by seamlessly aligning human expertise with artificial intelligence. The company is dedicated to unlocking value and fostering innovation for its clients by harnessing world-class people and data-driven strategy. We believe that the power of people and AI can have a meaningful impact on your world, creating more fulfilling work and projects for our people and clients.

Job Description: We are looking for an AI Engineer with experience in Speech-to-Text and Text Generation to solve a Conversational AI challenge for our client based in EMEA. The focus of this project is to transcribe conversations and leverage generative AI-powered text analytics to drive better engagement strategies and decision-making. The ideal candidate will have deep expertise in Speech-to-Text (STT), Natural Language Processing (NLP), Large Language Models (LLMs), and Conversational AI systems. This role involves working on real-time transcription, intent analysis, sentiment analysis, summarization, and decision-support tools.

Key Responsibilities:
Conversational AI & Call Transcription Development: Develop and fine-tune automatic speech recognition (ASR) models. Implement language model fine-tuning for industry-specific language. Develop speaker diarization techniques to distinguish speakers in multi-speaker conversations.
NLP & Generative AI Applications: Build summarization models to extract key insights from conversations. Implement Named Entity Recognition (NER) to identify key topics. Apply LLMs for conversation analytics and context-aware recommendations. Design custom RAG (Retrieval-Augmented Generation) pipelines to enrich call summaries with external knowledge.
Sentiment Analysis & Decision Support: Develop sentiment and intent classification models. Create predictive models that suggest next-best actions based on call content, engagement levels, and historical data.
AI Deployment & Scalability: Deploy AI models using tools like AWS, GCP, and Azure AI, ensuring scalability and real-time processing. Optimize inference pipelines using ONNX, TensorRT, or Triton for cost-effective model serving. Implement MLOps workflows to continuously improve model performance with new call data.

Qualifications:
Technical Skills: Strong experience in Speech-to-Text (ASR), NLP, and Conversational AI. Hands-on expertise with tools like Whisper, DeepSpeech, Kaldi, AWS Transcribe, and Google Speech-to-Text. Proficiency in Python, PyTorch, TensorFlow, and Hugging Face Transformers. Experience with LLM fine-tuning, RAG-based architectures, and LangChain. Hands-on experience with vector databases (FAISS, Pinecone, Weaviate, ChromaDB) for knowledge retrieval. Experience deploying AI models using Docker, Kubernetes, FastAPI, and Flask.
Soft Skills: Ability to translate AI insights into business impact. Strong problem-solving skills and ability to work in a fast-paced AI-first environment. Excellent communication skills to collaborate with cross-functional teams, including data scientists, engineers, and client stakeholders.
Preferred Qualifications: Experience in healthcare, pharma, or life sciences NLP use cases. Background in knowledge graphs, prompt engineering, and multimodal AI. Experience with Reinforcement Learning from Human Feedback (RLHF) for improving conversation models.
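The ASR work this posting describes can be prototyped with an off-the-shelf model before any fine-tuning. Below is a minimal sketch using the open-source openai-whisper package; the checkpoint size and the file name "call.wav" are illustrative assumptions, not requirements of the role.

```python
# Minimal speech-to-text sketch with the open-source Whisper package.
# Assumes `pip install openai-whisper` and an example file "call.wav".
import whisper

# "base" trades accuracy for speed; larger checkpoints ("medium", "large")
# are usually needed for the domain-tuned quality a production role demands.
model = whisper.load_model("base")

result = model.transcribe("call.wav")
print(result["text"])            # full transcript
for seg in result["segments"]:   # per-segment timestamps for downstream diarization/NLP
    print(f'{seg["start"]:7.2f}s - {seg["end"]:7.2f}s  {seg["text"]}')
```

The per-segment timestamps are what later stages (diarization, summarization, intent tagging) typically consume.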
Posted 3 days ago
3.0 years
0 Lacs
India
Remote
Job Title: Voice Processing Specialist
Location: Remote / Jaipur
Job Type: Full-time / Contract
Experience: 3+ years of expertise in voice cloning, transformation, and synthesis technologies

Job Summary: We are seeking a talented and motivated Voice Processing Specialist to join our team and lead the development of innovative voice technologies. The ideal candidate will have a deep understanding of speech synthesis, voice cloning, and transformation techniques. You will play a critical role in designing, implementing, and deploying state-of-the-art voice models that enhance the naturalness, personalization, and flexibility of speech in AI-powered applications. This role is perfect for someone passionate about advancing human-computer voice interaction and creating lifelike, adaptive voice systems.

Key Responsibilities: Design, develop, and optimize advanced deep learning models for voice cloning, text-to-speech (TTS), voice conversion, and real-time voice transformation. Implement speaker embedding and voice identity preservation techniques to support accurate and high-fidelity voice replication. Work with large-scale and diverse audio datasets, including preprocessing, segmentation, normalization, and data augmentation to improve model generalization and robustness. Collaborate closely with data scientists, ML engineers, and product teams to integrate developed voice models into production pipelines. Fine-tune neural vocoders and synthesis architectures for better voice naturalness and emotional range. Stay current with the latest advancements in speech processing, AI voice synthesis, and deep generative models through academic literature and open-source projects. Contribute to the development of tools and APIs for deploying models on cloud and edge environments with high efficiency and low latency.

Required Skills: Strong understanding of speech signal processing, speech synthesis, and automatic speech recognition (ASR) systems. Hands-on experience with voice cloning frameworks such as Descript Overdub, Coqui TTS, SV2TTS, Tacotron, FastSpeech, or similar. Proficiency in Python and deep learning frameworks like PyTorch or TensorFlow. Experience working with speech libraries and toolkits such as ESPnet, Kaldi, Librosa, or SpeechBrain. In-depth knowledge of mel spectrograms, vocoder architectures (e.g., WaveNet, HiFi-GAN, WaveGlow), and their role in speech synthesis. Familiarity with REST APIs, model deployment, and cloud-based inference systems using platforms like AWS, Azure, or GCP. Ability to optimize models for performance in real-time or low-latency environments.

Preferred Qualifications: Experience in real-time voice transformation, including pitch shifting, timing modification, or emotion modulation. Exposure to emotion-aware speech synthesis, multilingual voice models, or prosody modeling. Background in audio DSP (Digital Signal Processing) and speech analysis techniques. Previous contributions to open-source speech AI projects or publications in relevant domains.

Why Join Us: You will be part of a fast-moving, collaborative team working at the forefront of voice AI innovation. This role offers the opportunity to make a significant impact on products that reach millions of users, helping to shape the future of interactive voice experiences.
Skills: automatic speech recognition (ASR), vocoder architectures, voice cloning, voice processing, data, real-time voice transformation, speech synthesis, PyTorch, TensorFlow, voice conversion, speech signal processing, audio DSP, REST APIs, Python, cloud deployment, transformation, mel spectrograms, deep learning
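Mel spectrograms, named above as core knowledge for this role, are the front-end feature most TTS and vocoder stacks consume. A minimal extraction sketch with librosa follows; the sample rate, window settings, and the file "speech.wav" are illustrative assumptions.

```python
# Mel-spectrogram extraction sketch with librosa, the kind of feature that
# Tacotron-style acoustic models and neural vocoders (HiFi-GAN, WaveGlow) take as input.
# Assumes `pip install librosa soundfile` and a sample file "speech.wav".
import librosa
import numpy as np

y, sr = librosa.load("speech.wav", sr=22050)           # resample to a common TTS rate
mel = librosa.feature.melspectrogram(
    y=y, sr=sr, n_fft=1024, hop_length=256, n_mels=80  # 80 mel bands is a typical vocoder input
)
log_mel = librosa.power_to_db(mel, ref=np.max)          # log compression stabilizes training
print(log_mel.shape)                                    # (n_mels, frames)
```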
Posted 4 days ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Description

Compensation Note: The budget for this role is fixed at INR 50–55 lakhs per annum (non-negotiable). Please ensure this aligns with your expectations before applying.
Work Setup: This is a hybrid role, requiring 3 days per week onsite at the office in Hyderabad, India.
Interview Process: The process consists of 6 stages, including a technical assessment, code review, code discussion, and panel interviews.

Company Description: Blend is a premier AI services provider, committed to co-creating meaningful impact for its clients through the power of data science, AI, technology, and people. With a mission to fuel bold visions, Blend tackles significant challenges by seamlessly aligning human expertise with artificial intelligence. The company is dedicated to unlocking value and fostering innovation for its clients by harnessing world-class people and data-driven strategy. We believe that the power of people and AI can have a meaningful impact on your world, creating more fulfilling work and projects for our people and clients.

Job Description: We are looking for an AI Engineer with experience in Speech-to-Text and Text Generation to solve a Conversational AI challenge for our client based in EMEA. The focus of this project is to transcribe conversations and leverage generative AI-powered text analytics to drive better engagement strategies and decision-making. The ideal candidate will have deep expertise in Speech-to-Text (STT), Natural Language Processing (NLP), Large Language Models (LLMs), and Conversational AI systems. This role involves working on real-time transcription, intent analysis, sentiment analysis, summarization, and decision-support tools.

Key Responsibilities:
Conversational AI & Call Transcription Development: Develop and fine-tune automatic speech recognition (ASR) models. Implement language model fine-tuning for industry-specific language. Develop speaker diarization techniques to distinguish speakers in multi-speaker conversations.
NLP & Generative AI Applications: Build summarization models to extract key insights from conversations. Implement Named Entity Recognition (NER) to identify key topics. Apply LLMs for conversation analytics and context-aware recommendations. Design custom RAG (Retrieval-Augmented Generation) pipelines to enrich call summaries with external knowledge.
Sentiment Analysis & Decision Support: Develop sentiment and intent classification models. Create predictive models that suggest next-best actions based on call content, engagement levels, and historical data.
AI Deployment & Scalability: Deploy AI models using tools like AWS, GCP, and Azure AI, ensuring scalability and real-time processing. Optimize inference pipelines using ONNX, TensorRT, or Triton for cost-effective model serving. Implement MLOps workflows to continuously improve model performance with new call data.

Qualifications:
Technical Skills: Strong experience in Speech-to-Text (ASR), NLP, and Conversational AI. Hands-on expertise with tools like Whisper, DeepSpeech, Kaldi, AWS Transcribe, and Google Speech-to-Text. Proficiency in Python, PyTorch, TensorFlow, and Hugging Face Transformers. Experience with LLM fine-tuning, RAG-based architectures, and LangChain. Hands-on experience with vector databases (FAISS, Pinecone, Weaviate, ChromaDB) for knowledge retrieval. Experience deploying AI models using Docker, Kubernetes, FastAPI, and Flask.
Soft Skills: Ability to translate AI insights into business impact. Strong problem-solving skills and ability to work in a fast-paced AI-first environment. Excellent communication skills to collaborate with cross-functional teams, including data scientists, engineers, and client stakeholders.
Preferred Qualifications: Experience in healthcare, pharma, or life sciences NLP use cases. Background in knowledge graphs, prompt engineering, and multimodal AI. Experience with Reinforcement Learning from Human Feedback (RLHF) for improving conversation models.
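The sentiment and intent classification work this posting mentions can be prototyped with an off-the-shelf classifier before any custom training. Below is a minimal sketch using the Hugging Face transformers pipeline API; the checkpoint name is an illustrative default, not one mandated by the posting.

```python
# Minimal sentiment scoring for call-transcript segments with the `transformers` pipeline.
# Assumes `pip install transformers torch`.
from transformers import pipeline

sentiment = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",  # illustrative default checkpoint
)

segments = [
    "I've been waiting two weeks for a response, this is unacceptable.",
    "Thanks, that summary was exactly what I needed.",
]
for text, result in zip(segments, sentiment(segments)):
    print(f'{result["label"]:>8}  {result["score"]:.2f}  {text}')
```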
Posted 5 days ago
2.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Job Title: AI Engineer
Company: Darwix AI
Location: Gurgaon (On-site)
Type: Full-Time
Experience: 2-6 Years
Level: Senior Level

About Darwix AI: Darwix AI is one of India's fastest-growing GenAI startups, revolutionizing the future of enterprise sales and customer engagement with real-time conversational intelligence. We are building a GenAI-powered agent-assist and pitch intelligence suite that captures, analyzes, and enhances every customer interaction, across voice, video, and chat, in real time. We serve leading enterprise clients across India, the UAE, and Southeast Asia and are backed by global VCs, top operators from Google, Salesforce, and McKinsey, and CXOs from the industry. This is your opportunity to join a high-caliber founding tech team solving frontier problems in real-time voice AI, multilingual transcription, retrieval-augmented generation (RAG), and fine-tuned LLMs at scale.

Role Overview: As the AI Engineer, you will drive the development, deployment, and optimization of AI systems that power Darwix AI's real-time conversation intelligence platform. This includes voice-to-text transcription, speaker diarization, GenAI summarization, prompt engineering, knowledge retrieval, and real-time nudge delivery. You will lead a team of AI engineers and work closely with product managers, software architects, and data teams to ensure technical excellence, scalable architecture, and rapid iteration cycles. This is a high-ownership, hands-on leadership role where you will code, architect, and lead simultaneously.

Key Responsibilities:
1. AI Architecture & Model Development: Architect end-to-end AI pipelines for transcription, real-time inference, LLM integration, and vector-based retrieval. Build, fine-tune, and deploy STT models (Whisper, Wav2Vec 2.0) and diarization systems for speaker separation. Implement GenAI pipelines using OpenAI, Gemini, LLaMA, Mistral, and other LLM APIs or open-source models.
2. Real-Time Voice AI System Development: Design low-latency pipelines for capturing and processing audio in real time across multilingual environments. Work on WebSocket-based bi-directional audio streaming, chunked inference, and result caching. Develop asynchronous, event-driven architectures for voice processing and decision-making.
3. RAG & Knowledge Graph Pipelines: Create retrieval-augmented generation (RAG) systems that pull from structured and unstructured knowledge bases. Build vector DB architectures (e.g., FAISS, Pinecone, Weaviate) and connect them to LangChain/LlamaIndex workflows. Own chunking, indexing, and embedding strategies (OpenAI, Cohere, Hugging Face embeddings).
4. Fine-Tuning & Prompt Engineering: Fine-tune LLMs and foundational models using RLHF, SFT, and PEFT (e.g., LoRA) as needed. Optimize prompts for summarization, categorization, tone analysis, objection handling, etc. Perform few-shot and zero-shot evaluations for quality benchmarking.
5. Pipeline Optimization & MLOps: Ensure high availability and robustness of AI pipelines using CI/CD tools, Docker, Kubernetes, and GitHub Actions. Work with data engineering to streamline data ingestion, labeling, augmentation, and evaluation. Build internal tools to benchmark latency, accuracy, and relevance for production-grade AI features.
6. Team Leadership & Cross-Functional Collaboration: Lead, mentor, and grow a high-performing AI engineering team. Collaborate with backend, frontend, and product teams to build scalable production systems. Participate in architectural and design decisions across AI, backend, and data workflows.

Key Technologies & Tools:
Languages & Frameworks: Python, FastAPI, Flask, LangChain, PyTorch, TensorFlow, Hugging Face Transformers
Voice & Audio: Whisper, Wav2Vec 2.0, DeepSpeech, pyannote.audio, AssemblyAI, Kaldi, Mozilla TTS
Vector DBs & RAG: FAISS, Pinecone, Weaviate, ChromaDB, LlamaIndex, LangGraph
LLMs & GenAI APIs: OpenAI GPT-4/3.5, Gemini, Claude, Mistral, Meta LLaMA 2/3
DevOps & Deployment: Docker, GitHub Actions, CI/CD, Redis, Kafka, Kubernetes, AWS (EC2, Lambda, S3)
Databases: MongoDB, Postgres, MySQL, Pinecone, TimescaleDB
Monitoring & Logging: Prometheus, Grafana, Sentry, Elastic Stack (ELK)

Requirements & Qualifications:
Experience: 2-6 years of experience in building and deploying AI/ML systems, with at least 2 years in NLP or voice technologies. Proven track record of production deployment of ASR, STT, NLP, or GenAI models. Hands-on experience building systems involving vector databases, real-time pipelines, or LLM integrations.
Educational Background: Bachelor's or Master's in Computer Science, Artificial Intelligence, Machine Learning, or a related field. Tier 1 institute preferred (IITs, BITS, IIITs, NITs, or global top 100 universities).
Technical Skills: Strong coding experience in Python and familiarity with FastAPI/Django. Understanding of distributed architectures, memory management, and latency optimization. Familiarity with transformer-based model architectures, training techniques, and data pipeline design.
Bonus Experience: Worked on multilingual speech recognition and translation. Experience deploying AI models on edge devices or browsers. Built or contributed to open-source ML/NLP projects. Published papers or patents in voice, NLP, or deep learning domains.

What Success Looks Like in 6 Months: Lead the deployment of a real-time STT + diarization system for at least one enterprise client. Deliver a high-accuracy nudge-generation pipeline using RAG and summarization models. Build an in-house knowledge indexing + vector DB framework integrated into the product. Mentor 2-3 AI engineers and own execution across multiple modules. Achieve <1 second latency on the real-time voice-to-nudge pipeline from capture to recommendation.

What We Offer:
Compensation: Competitive fixed salary + equity + performance-based bonuses
Impact: Ownership of key AI modules powering thousands of live enterprise conversations
Learning: Access to high-compute GPUs, API credits, research tools, and conference sponsorships
Culture: High-trust, outcome-first environment that celebrates execution and learning
Mentorship: Work directly with founders, ex-Microsoft, IIT-IIM-BITS alums, and top AI engineers
Scale: Opportunity to scale an AI product from 10 clients to 100+ globally within 12 months

This Role is NOT for Everyone: If you're looking for a slow, abstract research role, this is NOT for you. If you're used to months of ideation before shipping, you won't enjoy our speed. If you're not comfortable being hands-on and diving into scrappy builds, you may struggle. But if you're a builder, architect, and visionary who loves solving hard technical problems and delivering real-time AI at scale, we want to talk to you.
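The chunk-embed-index-retrieve loop behind the RAG responsibilities above can be illustrated in a few lines. This is a minimal sketch, assuming sentence-transformers and FAISS; the model name and the sample documents are placeholders, not Darwix AI's actual stack.

```python
# Sketch of the embed -> index -> retrieve core of a basic RAG pipeline.
# Assumes `pip install faiss-cpu sentence-transformers`.
import faiss
import numpy as np
from sentence_transformers import SentenceTransformer

docs = [
    "Pricing objections are usually handled by offering the annual plan.",
    "Enterprise onboarding takes two weeks and includes a dedicated CSM.",
    "Refunds are processed within five business days.",
]

encoder = SentenceTransformer("all-MiniLM-L6-v2")
vectors = encoder.encode(docs, normalize_embeddings=True)   # unit vectors -> inner product = cosine

index = faiss.IndexFlatIP(vectors.shape[1])                 # exact inner-product search
index.add(np.asarray(vectors, dtype="float32"))

query = encoder.encode(["customer is unhappy about the price"], normalize_embeddings=True)
scores, ids = index.search(np.asarray(query, dtype="float32"), k=2)
for score, i in zip(scores[0], ids[0]):
    print(f"{score:.2f}  {docs[i]}")                        # top chunks to place in the LLM prompt
```

In production the flat index would typically be swapped for an approximate index (or a managed store such as Pinecone or Weaviate), but the retrieval contract stays the same.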
How to Apply: Send your CV, GitHub/portfolio, and a brief note on "Why AI at Darwix?" to careers@cur8.in. Subject line: Application - AI Engineer - [Your Name]. Include links to: any relevant open-source contributions, LLM/STT models you've fine-tuned or deployed, and RAG pipelines you've worked on.
Final Thought: This is not just a job. This is your opportunity to build the world's most scalable AI sales intelligence platform, from India, for the world.
Posted 2 weeks ago
2.0 years
0 Lacs
Guindy, Tamil Nadu, India
On-site
Company Description: Bytezera is a data services provider that specialises in AI and data solutions to help businesses maximise their data potential. With expertise in data-driven solution design, machine learning, AI, data engineering, and analytics, we empower organizations to make informed decisions and drive innovation. Our focus is on using data to achieve competitive advantage and transformation.

About the Role: We are seeking a highly skilled and hands-on AI Engineer to drive the development of cutting-edge AI applications using the latest in computer vision, STT, Large Language Models (LLMs), agentic frameworks, and Generative AI technologies. This role covers the full AI development lifecycle, from data preparation and model training to deployment and optimization, with a strong focus on NLP and open-source foundation models. You will be directly involved in building and deploying goal-driven, autonomous AI agents and scalable AI systems for real-world use cases.

Key Responsibilities:
Computer Vision Development: Design and implement advanced computer vision models for object detection, image segmentation, tracking, facial recognition, OCR, and video analysis. Fine-tune and deploy vision models using frameworks like PyTorch, TensorFlow, OpenCV, Detectron2, YOLO, MMDetection, etc. Optimize inference pipelines for real-time vision processing across edge devices, GPUs, or cloud-based systems.
Speech-to-Text (STT) System Development: Build and fine-tune ASR (Automatic Speech Recognition) models using toolkits such as Whisper, NVIDIA NeMo, DeepSpeech, Kaldi, or wav2vec 2.0. Develop multilingual and domain-specific STT pipelines optimized for real-time transcription and high accuracy. Integrate STT into downstream NLP pipelines or agentic systems for transcription, summarization, or intent recognition.
LLM and Agentic AI Design & Development: Build and deploy advanced LLM-based AI agents using frameworks such as LangGraph, CrewAI, AutoGen, and OpenAgents. Fine-tune and optimize open-source LLMs (e.g., GPT-4, LLaMA 3, Mistral, T5) for domain-specific applications. Design and implement retrieval-augmented generation (RAG) pipelines with vector databases like FAISS, Weaviate, or Pinecone. Develop NLP pipelines using Hugging Face Transformers, spaCy, and LangChain for various text understanding and generation tasks. Leverage Python with PyTorch and TensorFlow for training, fine-tuning, and evaluating models. Prepare and manage high-quality datasets for model training and evaluation.

Experience & Qualifications: 2+ years of hands-on experience in AI engineering, machine learning, or data science roles. Proven track record in building and deploying computer vision and STT AI applications. Experience with agentic workflows or autonomous AI agents is highly desirable.

Technical Skills:
Languages & Libraries: Python, PyTorch, TensorFlow, Hugging Face Transformers, LangChain, spaCy
LLMs & Generative AI: GPT, LLaMA 3, Mistral, T5, Claude, and other open-source or commercial models
Agentic Tooling: LangGraph, CrewAI, AutoGen, OpenAgents
Vector Databases: Pinecone or ChromaDB
DevOps & Deployment: Docker, Kubernetes, AWS (SageMaker, Lambda, Bedrock, S3)
Core ML Skills: Data preprocessing, feature engineering, model evaluation, and optimization

Education: Bachelor's or Master's degree in Computer Science, Data Science, AI/ML, or a related field.
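The object-detection work listed under Computer Vision Development can be prototyped with a pretrained detector. Here is a minimal sketch using the Ultralytics YOLO API, one of the stacks the posting names; the checkpoint and image path are illustrative assumptions.

```python
# Object-detection sketch with a pretrained YOLO checkpoint.
# Assumes `pip install ultralytics` and a local image "street.jpg".
from ultralytics import YOLO

model = YOLO("yolov8n.pt")        # small pretrained COCO checkpoint
results = model("street.jpg")     # run inference on a single image

for r in results:
    for box in r.boxes:
        cls_name = model.names[int(box.cls)]   # class label
        conf = float(box.conf)                 # detection confidence
        print(f"{cls_name:>12}  {conf:.2f}  {box.xyxy.tolist()}")  # bounding box (x1, y1, x2, y2)
```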
Posted 3 weeks ago
2.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Job description

Job Title: AI Engineer
Company: Darwix AI
Location: Gurgaon (On-site)
Type: Full-Time
Experience: 2-6 Years
Level: Senior Level

About Darwix AI: Darwix AI is one of India's fastest-growing GenAI startups, revolutionizing the future of enterprise sales and customer engagement with real-time conversational intelligence. We are building a GenAI-powered agent-assist and pitch intelligence suite that captures, analyzes, and enhances every customer interaction, across voice, video, and chat, in real time. We serve leading enterprise clients across India, the UAE, and Southeast Asia and are backed by global VCs, top operators from Google, Salesforce, and McKinsey, and CXOs from the industry. This is your opportunity to join a high-caliber founding tech team solving frontier problems in real-time voice AI, multilingual transcription, retrieval-augmented generation (RAG), and fine-tuned LLMs at scale.

Role Overview: As the AI Engineer, you will drive the development, deployment, and optimization of AI systems that power Darwix AI's real-time conversation intelligence platform. This includes voice-to-text transcription, speaker diarization, GenAI summarization, prompt engineering, knowledge retrieval, and real-time nudge delivery. You will lead a team of AI engineers and work closely with product managers, software architects, and data teams to ensure technical excellence, scalable architecture, and rapid iteration cycles. This is a high-ownership, hands-on leadership role where you will code, architect, and lead simultaneously.

Key Responsibilities:
1. AI Architecture & Model Development: Architect end-to-end AI pipelines for transcription, real-time inference, LLM integration, and vector-based retrieval. Build, fine-tune, and deploy STT models (Whisper, Wav2Vec 2.0) and diarization systems for speaker separation. Implement GenAI pipelines using OpenAI, Gemini, LLaMA, Mistral, and other LLM APIs or open-source models.
2. Real-Time Voice AI System Development: Design low-latency pipelines for capturing and processing audio in real time across multilingual environments. Work on WebSocket-based bi-directional audio streaming, chunked inference, and result caching. Develop asynchronous, event-driven architectures for voice processing and decision-making.
3. RAG & Knowledge Graph Pipelines: Create retrieval-augmented generation (RAG) systems that pull from structured and unstructured knowledge bases. Build vector DB architectures (e.g., FAISS, Pinecone, Weaviate) and connect them to LangChain/LlamaIndex workflows. Own chunking, indexing, and embedding strategies (OpenAI, Cohere, Hugging Face embeddings).
4. Fine-Tuning & Prompt Engineering: Fine-tune LLMs and foundational models using RLHF, SFT, and PEFT (e.g., LoRA) as needed. Optimize prompts for summarization, categorization, tone analysis, objection handling, etc. Perform few-shot and zero-shot evaluations for quality benchmarking.
5. Pipeline Optimization & MLOps: Ensure high availability and robustness of AI pipelines using CI/CD tools, Docker, Kubernetes, and GitHub Actions. Work with data engineering to streamline data ingestion, labeling, augmentation, and evaluation. Build internal tools to benchmark latency, accuracy, and relevance for production-grade AI features.
6. Team Leadership & Cross-Functional Collaboration: Lead, mentor, and grow a high-performing AI engineering team. Collaborate with backend, frontend, and product teams to build scalable production systems. Participate in architectural and design decisions across AI, backend, and data workflows.

Key Technologies & Tools:
Languages & Frameworks: Python, FastAPI, Flask, LangChain, PyTorch, TensorFlow, Hugging Face Transformers
Voice & Audio: Whisper, Wav2Vec 2.0, DeepSpeech, pyannote.audio, AssemblyAI, Kaldi, Mozilla TTS
Vector DBs & RAG: FAISS, Pinecone, Weaviate, ChromaDB, LlamaIndex, LangGraph
LLMs & GenAI APIs: OpenAI GPT-4/3.5, Gemini, Claude, Mistral, Meta LLaMA 2/3
DevOps & Deployment: Docker, GitHub Actions, CI/CD, Redis, Kafka, Kubernetes, AWS (EC2, Lambda, S3)
Databases: MongoDB, Postgres, MySQL, Pinecone, TimescaleDB
Monitoring & Logging: Prometheus, Grafana, Sentry, Elastic Stack (ELK)

Requirements & Qualifications:
Experience: 2-6 years of experience in building and deploying AI/ML systems, with at least 2 years in NLP or voice technologies. Proven track record of production deployment of ASR, STT, NLP, or GenAI models. Hands-on experience building systems involving vector databases, real-time pipelines, or LLM integrations.
Educational Background: Bachelor's or Master's in Computer Science, Artificial Intelligence, Machine Learning, or a related field. Tier 1 institute preferred (IITs, BITS, IIITs, NITs, or global top 100 universities).
Technical Skills: Strong coding experience in Python and familiarity with FastAPI/Django. Understanding of distributed architectures, memory management, and latency optimization. Familiarity with transformer-based model architectures, training techniques, and data pipeline design.
Bonus Experience: Worked on multilingual speech recognition and translation. Experience deploying AI models on edge devices or browsers. Built or contributed to open-source ML/NLP projects. Published papers or patents in voice, NLP, or deep learning domains.

What Success Looks Like in 6 Months: Lead the deployment of a real-time STT + diarization system for at least one enterprise client. Deliver a high-accuracy nudge-generation pipeline using RAG and summarization models. Build an in-house knowledge indexing + vector DB framework integrated into the product. Mentor 2-3 AI engineers and own execution across multiple modules. Achieve <1 second latency on the real-time voice-to-nudge pipeline from capture to recommendation.

What We Offer:
Compensation: Competitive fixed salary + equity + performance-based bonuses
Impact: Ownership of key AI modules powering thousands of live enterprise conversations
Learning: Access to high-compute GPUs, API credits, research tools, and conference sponsorships
Culture: High-trust, outcome-first environment that celebrates execution and learning
Mentorship: Work directly with founders, ex-Microsoft, IIT-IIM-BITS alums, and top AI engineers
Scale: Opportunity to scale an AI product from 10 clients to 100+ globally within 12 months

This Role is NOT for Everyone: If you're looking for a slow, abstract research role, this is NOT for you. If you're used to months of ideation before shipping, you won't enjoy our speed. If you're not comfortable being hands-on and diving into scrappy builds, you may struggle. But if you're a builder, architect, and visionary who loves solving hard technical problems and delivering real-time AI at scale, we want to talk to you.
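The WebSocket-based bi-directional audio streaming mentioned above can be sketched as a small FastAPI endpoint: the client pushes raw audio chunks, the server pushes back partial results. This is a skeleton under stated assumptions; transcribe_chunk() is a placeholder for a real ASR call, not an existing function.

```python
# Skeleton of a bi-directional audio streaming endpoint with FastAPI WebSockets.
# Assumes `pip install fastapi uvicorn`; run with `uvicorn app:app`.
import asyncio
from fastapi import FastAPI, WebSocket, WebSocketDisconnect

app = FastAPI()

async def transcribe_chunk(chunk: bytes) -> str:
    # Placeholder: a real system would hand the PCM chunk to an ASR model.
    await asyncio.sleep(0)                          # keep the event loop responsive
    return f"[{len(chunk)} bytes transcribed]"

@app.websocket("/ws/audio")
async def audio_stream(ws: WebSocket):
    await ws.accept()
    try:
        while True:
            chunk = await ws.receive_bytes()                  # client pushes raw audio frames
            partial = await transcribe_chunk(chunk)
            await ws.send_json({"partial": partial})          # server pushes partial transcript
    except WebSocketDisconnect:
        pass                                                   # client ended the stream
```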
How to Apply: Send your CV, GitHub/portfolio, and a brief note on "Why AI at Darwix?" to careers@cur8.in. Subject line: Application - AI Engineer - [Your Name]. Include links to: any relevant open-source contributions, LLM/STT models you've fine-tuned or deployed, and RAG pipelines you've worked on.
Final Thought: This is not just a job. This is your opportunity to build the world's most scalable AI sales intelligence platform, from India, for the world.
Posted 3 weeks ago
2.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Job description

Job Title: ML Engineer
Company: Darwix AI
Location: Gurgaon (On-site)
Type: Full-Time
Experience: 2-6 Years
Level: Senior Level

About Darwix AI: Darwix AI is one of India's fastest-growing GenAI startups, revolutionizing the future of enterprise sales and customer engagement with real-time conversational intelligence. We are building a GenAI-powered agent-assist and pitch intelligence suite that captures, analyzes, and enhances every customer interaction, across voice, video, and chat, in real time. We serve leading enterprise clients across India, the UAE, and Southeast Asia and are backed by global VCs, top operators from Google, Salesforce, and McKinsey, and CXOs from the industry. This is your opportunity to join a high-caliber founding tech team solving frontier problems in real-time voice AI, multilingual transcription, retrieval-augmented generation (RAG), and fine-tuned LLMs at scale.

Role Overview: As the ML Engineer, you will drive the development, deployment, and optimization of AI systems that power Darwix AI's real-time conversation intelligence platform. This includes voice-to-text transcription, speaker diarization, GenAI summarization, prompt engineering, knowledge retrieval, and real-time nudge delivery. You will lead a team of AI engineers and work closely with product managers, software architects, and data teams to ensure technical excellence, scalable architecture, and rapid iteration cycles. This is a high-ownership, hands-on leadership role where you will code, architect, and lead simultaneously.

Key Responsibilities:
1. AI Architecture & Model Development: Architect end-to-end AI pipelines for transcription, real-time inference, LLM integration, and vector-based retrieval. Build, fine-tune, and deploy STT models (Whisper, Wav2Vec 2.0) and diarization systems for speaker separation. Implement GenAI pipelines using OpenAI, Gemini, LLaMA, Mistral, and other LLM APIs or open-source models.
2. Real-Time Voice AI System Development: Design low-latency pipelines for capturing and processing audio in real time across multilingual environments. Work on WebSocket-based bi-directional audio streaming, chunked inference, and result caching. Develop asynchronous, event-driven architectures for voice processing and decision-making.
3. RAG & Knowledge Graph Pipelines: Create retrieval-augmented generation (RAG) systems that pull from structured and unstructured knowledge bases. Build vector DB architectures (e.g., FAISS, Pinecone, Weaviate) and connect them to LangChain/LlamaIndex workflows. Own chunking, indexing, and embedding strategies (OpenAI, Cohere, Hugging Face embeddings).
4. Fine-Tuning & Prompt Engineering: Fine-tune LLMs and foundational models using RLHF, SFT, and PEFT (e.g., LoRA) as needed. Optimize prompts for summarization, categorization, tone analysis, objection handling, etc. Perform few-shot and zero-shot evaluations for quality benchmarking.
5. Pipeline Optimization & MLOps: Ensure high availability and robustness of AI pipelines using CI/CD tools, Docker, Kubernetes, and GitHub Actions. Work with data engineering to streamline data ingestion, labeling, augmentation, and evaluation. Build internal tools to benchmark latency, accuracy, and relevance for production-grade AI features.
6. Team Leadership & Cross-Functional Collaboration: Lead, mentor, and grow a high-performing AI engineering team. Collaborate with backend, frontend, and product teams to build scalable production systems. Participate in architectural and design decisions across AI, backend, and data workflows.

Key Technologies & Tools:
Languages & Frameworks: Python, FastAPI, Flask, LangChain, PyTorch, TensorFlow, Hugging Face Transformers
Voice & Audio: Whisper, Wav2Vec 2.0, DeepSpeech, pyannote.audio, AssemblyAI, Kaldi, Mozilla TTS
Vector DBs & RAG: FAISS, Pinecone, Weaviate, ChromaDB, LlamaIndex, LangGraph
LLMs & GenAI APIs: OpenAI GPT-4/3.5, Gemini, Claude, Mistral, Meta LLaMA 2/3
DevOps & Deployment: Docker, GitHub Actions, CI/CD, Redis, Kafka, Kubernetes, AWS (EC2, Lambda, S3)
Databases: MongoDB, Postgres, MySQL, Pinecone, TimescaleDB
Monitoring & Logging: Prometheus, Grafana, Sentry, Elastic Stack (ELK)

Requirements & Qualifications:
Experience: 2-6 years of experience in building and deploying AI/ML systems, with at least 2 years in NLP or voice technologies. Proven track record of production deployment of ASR, STT, NLP, or GenAI models. Hands-on experience building systems involving vector databases, real-time pipelines, or LLM integrations.
Educational Background: Bachelor's or Master's in Computer Science, Artificial Intelligence, Machine Learning, or a related field. Tier 1 institute preferred (IITs, BITS, IIITs, NITs, or global top 100 universities).
Technical Skills: Strong coding experience in Python and familiarity with FastAPI/Django. Understanding of distributed architectures, memory management, and latency optimization. Familiarity with transformer-based model architectures, training techniques, and data pipeline design.
Bonus Experience: Worked on multilingual speech recognition and translation. Experience deploying AI models on edge devices or browsers. Built or contributed to open-source ML/NLP projects. Published papers or patents in voice, NLP, or deep learning domains.

What Success Looks Like in 6 Months: Lead the deployment of a real-time STT + diarization system for at least one enterprise client. Deliver a high-accuracy nudge-generation pipeline using RAG and summarization models. Build an in-house knowledge indexing + vector DB framework integrated into the product. Mentor 2-3 AI engineers and own execution across multiple modules. Achieve <1 second latency on the real-time voice-to-nudge pipeline from capture to recommendation.

What We Offer:
Compensation: Competitive fixed salary + equity + performance-based bonuses
Impact: Ownership of key AI modules powering thousands of live enterprise conversations
Learning: Access to high-compute GPUs, API credits, research tools, and conference sponsorships
Culture: High-trust, outcome-first environment that celebrates execution and learning
Mentorship: Work directly with founders, ex-Microsoft, IIT-IIM-BITS alums, and top AI engineers
Scale: Opportunity to scale an AI product from 10 clients to 100+ globally within 12 months

This Role is NOT for Everyone: If you're looking for a slow, abstract research role, this is NOT for you. If you're used to months of ideation before shipping, you won't enjoy our speed. If you're not comfortable being hands-on and diving into scrappy builds, you may struggle. But if you're a builder, architect, and visionary who loves solving hard technical problems and delivering real-time AI at scale, we want to talk to you.
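The PEFT/LoRA fine-tuning mentioned under Fine-Tuning & Prompt Engineering typically amounts to wrapping a base model with an adapter before running a standard SFT loop. A minimal sketch with the Hugging Face peft library is below; the base checkpoint and hyperparameters are illustrative assumptions, not values prescribed by the role.

```python
# Sketch of attaching a LoRA adapter to a causal LM with Hugging Face `peft`.
# Assumes `pip install transformers peft torch`.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"      # small model so the sketch fits on one GPU
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

lora = LoraConfig(
    r=8, lora_alpha=16, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],          # attention projections typical for LLaMA-style models
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()                # only the adapter weights are trainable

# The wrapped model then plugs into a standard transformers Trainer / SFT loop.
```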
How to Apply: Send your CV, GitHub/portfolio, and a brief note on "Why AI at Darwix?" to careers@cur8.in / vishnu.sethi@cur8.in. Subject line: Application - ML Engineer - [Your Name]. Include links to: any relevant open-source contributions, LLM/STT models you've fine-tuned or deployed, and RAG pipelines you've worked on.
Final Thought: This is not just a job. This is your opportunity to build the world's most scalable AI sales intelligence platform, from India, for the world.
Posted 3 weeks ago
8.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
We're on the lookout for a Data Science Manager with deep expertise in Speech-to-Text (STT), Natural Language Processing (NLP), and Generative AI to lead a high-impact Conversational AI initiative for one of our premier EMEA-based clients. You'll not only guide a team of data scientists and ML engineers but also work hands-on to build cutting-edge systems for real-time transcription, sentiment analysis, summarization, and intelligent decision-making. Your solutions will enable smarter engagement strategies, unlock valuable insights, and directly impact client success.

What You'll Do:
Strategic Leadership & Delivery: Lead the end-to-end delivery of AI solutions for transcription and conversation analytics. Collaborate with client stakeholders to understand business problems and translate them into AI strategies. Provide mentorship to team members, foster best practices, and ensure high-quality technical delivery.
Conversational AI Development: Oversee development and tuning of ASR models using tools like Whisper, DeepSpeech, Kaldi, and AWS/GCP STT. Guide implementation of speaker diarization for multi-speaker conversations. Ensure solutions are domain-tuned and accurate in real-world conditions.
Generative AI & NLP Applications: Architect LLM-based pipelines for summarization, topic extraction, and conversation analytics. Design and implement custom RAG pipelines to enrich conversational insights using external knowledge bases. Apply prompt engineering and NER techniques for context-aware interactions.
Decision Intelligence & Sentiment Analysis: Drive the development of models for sentiment detection, intent classification, and predictive recommendations. Enable intelligent workflows that suggest next-best actions and enhance customer experiences.
AI at Scale: Oversee deployment pipelines using Docker, Kubernetes, FastAPI, and cloud-native tools (AWS/GCP/Azure AI). Champion cost-effective model serving using ONNX, TensorRT, or Triton. Implement and monitor MLOps workflows to support continuous learning and model evolution.

What You'll Bring to the Table:
Technical Excellence: 8+ years of proven experience leading teams in the Speech-to-Text, NLP, LLM, and Conversational AI domains. Strong Python skills and experience with PyTorch, TensorFlow, Hugging Face, and LangChain. Deep understanding of RAG architectures, vector DBs (FAISS, Pinecone, Weaviate), and cloud deployment practices. Hands-on experience with real-time applications and inference optimization.
Leadership & Communication: Ability to balance strategic thinking with hands-on execution. Strong mentorship and team management skills. Exceptional communication and stakeholder engagement capabilities. A passion for transforming business needs into scalable AI systems.
Bonus Points For: Experience in healthcare, pharma, or life sciences conversational use cases. Exposure to knowledge graphs, RLHF, or multimodal AI. Demonstrated impact through cross-functional leadership and client-facing solutioning.

What do you get in return?
Competitive Salary: Your skills and contributions are highly valued here, and we make sure your salary reflects that, rewarding you fairly for the knowledge and experience you bring to the table.
Dynamic Career Growth: Our vibrant environment offers you the opportunity to grow rapidly, providing the right tools, mentorship, and experiences to fast-track your career.
Idea Tanks: Innovation lives here. Our "Idea Tanks" are your playground to pitch, experiment, and collaborate on ideas that can shape the future.
Growth Chats: Dive into our casual "Growth Chats", where you can learn from the best, whether it's over lunch or during a laid-back session with peers; it's the perfect space to grow your skills.
Snack Zone: Stay fueled and inspired! In our Snack Zone, you'll find a variety of snacks to keep your energy high and ideas flowing.
Recognition & Rewards: We believe great work deserves to be recognized. Expect regular Hive-Fives, shoutouts, and the chance to see your ideas come to life as part of our reward program.
Fuel Your Growth Journey with Certifications: We're all about your growth groove! Level up your skills with our support as we cover the cost of your certifications.
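The "AI at Scale" item above mentions cost-effective serving via ONNX, TensorRT, or Triton. A minimal sketch of the ONNX export-and-serve path is shown below, assuming PyTorch and ONNX Runtime; the toy two-layer model stands in for a real classifier head.

```python
# Export a PyTorch module to ONNX, then run it with ONNX Runtime.
# Assumes `pip install torch onnx onnxruntime`.
import torch
import torch.nn as nn
import onnxruntime as ort

model = nn.Sequential(nn.Linear(16, 8), nn.ReLU(), nn.Linear(8, 3)).eval()
dummy = torch.randn(1, 16)                       # example input that fixes the graph shapes

torch.onnx.export(model, dummy, "classifier.onnx",
                  input_names=["features"], output_names=["logits"])

session = ort.InferenceSession("classifier.onnx", providers=["CPUExecutionProvider"])
logits = session.run(["logits"], {"features": dummy.numpy()})[0]
print(logits.shape)                              # (1, 3)
```

The same exported graph can later be fed to TensorRT or served behind Triton without touching the training code, which is what makes this path attractive for cost-sensitive inference.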
Posted 1 month ago
2.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Job Title: Senior AI Engineer
Company: Darwix AI
Location: Gurgaon (On-site)
Type: Full-Time
Experience: 2-6 Years
Level: Senior Level

About Darwix AI: Darwix AI is one of India's fastest-growing GenAI startups, revolutionizing the future of enterprise sales and customer engagement with real-time conversational intelligence. We are building a GenAI-powered agent-assist and pitch intelligence suite that captures, analyzes, and enhances every customer interaction, across voice, video, and chat, in real time. We serve leading enterprise clients across India, the UAE, and Southeast Asia and are backed by global VCs, top operators from Google, Salesforce, and McKinsey, and CXOs from the industry. This is your opportunity to join a high-caliber founding tech team solving frontier problems in real-time voice AI, multilingual transcription, retrieval-augmented generation (RAG), and fine-tuned LLMs at scale.

Role Overview: As the Senior AI Engineer, you will drive the development, deployment, and optimization of AI systems that power Darwix AI's real-time conversation intelligence platform. This includes voice-to-text transcription, speaker diarization, GenAI summarization, prompt engineering, knowledge retrieval, and real-time nudge delivery. You will lead a team of AI engineers and work closely with product managers, software architects, and data teams to ensure technical excellence, scalable architecture, and rapid iteration cycles. This is a high-ownership, hands-on leadership role where you will code, architect, and lead simultaneously.

Key Responsibilities:
1. AI Architecture & Model Development: Architect end-to-end AI pipelines for transcription, real-time inference, LLM integration, and vector-based retrieval. Build, fine-tune, and deploy STT models (Whisper, Wav2Vec 2.0) and diarization systems for speaker separation. Implement GenAI pipelines using OpenAI, Gemini, LLaMA, Mistral, and other LLM APIs or open-source models.
2. Real-Time Voice AI System Development: Design low-latency pipelines for capturing and processing audio in real time across multilingual environments. Work on WebSocket-based bi-directional audio streaming, chunked inference, and result caching. Develop asynchronous, event-driven architectures for voice processing and decision-making.
3. RAG & Knowledge Graph Pipelines: Create retrieval-augmented generation (RAG) systems that pull from structured and unstructured knowledge bases. Build vector DB architectures (e.g., FAISS, Pinecone, Weaviate) and connect them to LangChain/LlamaIndex workflows. Own chunking, indexing, and embedding strategies (OpenAI, Cohere, Hugging Face embeddings).
4. Fine-Tuning & Prompt Engineering: Fine-tune LLMs and foundational models using RLHF, SFT, and PEFT (e.g., LoRA) as needed. Optimize prompts for summarization, categorization, tone analysis, objection handling, etc. Perform few-shot and zero-shot evaluations for quality benchmarking.
5. Pipeline Optimization & MLOps: Ensure high availability and robustness of AI pipelines using CI/CD tools, Docker, Kubernetes, and GitHub Actions. Work with data engineering to streamline data ingestion, labeling, augmentation, and evaluation. Build internal tools to benchmark latency, accuracy, and relevance for production-grade AI features.
6. Team Leadership & Cross-Functional Collaboration: Lead, mentor, and grow a high-performing AI engineering team. Collaborate with backend, frontend, and product teams to build scalable production systems. Participate in architectural and design decisions across AI, backend, and data workflows.

Key Technologies & Tools:
Languages & Frameworks: Python, FastAPI, Flask, LangChain, PyTorch, TensorFlow, Hugging Face Transformers
Voice & Audio: Whisper, Wav2Vec 2.0, DeepSpeech, pyannote.audio, AssemblyAI, Kaldi, Mozilla TTS
Vector DBs & RAG: FAISS, Pinecone, Weaviate, ChromaDB, LlamaIndex, LangGraph
LLMs & GenAI APIs: OpenAI GPT-4/3.5, Gemini, Claude, Mistral, Meta LLaMA 2/3
DevOps & Deployment: Docker, GitHub Actions, CI/CD, Redis, Kafka, Kubernetes, AWS (EC2, Lambda, S3)
Databases: MongoDB, Postgres, MySQL, Pinecone, TimescaleDB
Monitoring & Logging: Prometheus, Grafana, Sentry, Elastic Stack (ELK)

Requirements & Qualifications:
Experience: 2-4 years of experience in building and deploying AI/ML systems, with at least 1-2 years in NLP or voice technologies. Proven track record of production deployment of ASR, STT, NLP, or GenAI models. Hands-on experience building systems involving vector databases, real-time pipelines, or LLM integrations.
Educational Background: Bachelor's or Master's in Computer Science, Artificial Intelligence, Machine Learning, or a related field. Tier 1 institute preferred (IITs, BITS, IIITs, NITs, or global top 100 universities).
Technical Skills: Strong coding experience in Python and familiarity with FastAPI/Django. Understanding of distributed architectures, memory management, and latency optimization. Familiarity with transformer-based model architectures, training techniques, and data pipeline design.
Bonus Experience: Worked on multilingual speech recognition and translation. Experience deploying AI models on edge devices or browsers. Built or contributed to open-source ML/NLP projects. Published papers or patents in voice, NLP, or deep learning domains.

What Success Looks Like in 6 Months: Lead the deployment of a real-time STT + diarization system for at least one enterprise client. Deliver a high-accuracy nudge-generation pipeline using RAG and summarization models. Build an in-house knowledge indexing + vector DB framework integrated into the product. Mentor 2-3 AI engineers and own execution across multiple modules. Achieve <1 second latency on the real-time voice-to-nudge pipeline from capture to recommendation.

What We Offer:
Compensation: Competitive fixed salary + equity + performance-based bonuses
Impact: Ownership of key AI modules powering thousands of live enterprise conversations
Learning: Access to high-compute GPUs, API credits, research tools, and conference sponsorships
Culture: High-trust, outcome-first environment that celebrates execution and learning
Mentorship: Work directly with founders, ex-Microsoft, IIT-IIM-BITS alums, and top AI engineers
Scale: Opportunity to scale an AI product from 10 clients to 100+ globally within 12 months

This Role is NOT for Everyone: If you're looking for a slow, abstract research role, this is NOT for you. If you're used to months of ideation before shipping, you won't enjoy our speed. If you're not comfortable being hands-on and diving into scrappy builds, you may struggle. But if you're a builder, architect, and visionary who loves solving hard technical problems and delivering real-time AI at scale, we want to talk to you.
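The speaker-separation work described above is usually handled with a diarization pipeline whose output is merged with the ASR transcript. A minimal sketch with pyannote.audio follows; the gated checkpoint name, the access token, and the file "call.wav" are assumptions you would swap for your own.

```python
# Speaker diarization sketch with pyannote.audio: "who spoke when".
# Assumes `pip install pyannote.audio`, a Hugging Face access token, and
# acceptance of the model's gated-access terms.
from pyannote.audio import Pipeline

pipeline = Pipeline.from_pretrained(
    "pyannote/speaker-diarization-3.1",   # illustrative checkpoint name
    use_auth_token="hf_xxx",              # placeholder token
)

diarization = pipeline("call.wav")
for turn, _, speaker in diarization.itertracks(yield_label=True):
    print(f"{turn.start:7.2f}s - {turn.end:7.2f}s  {speaker}")
```

Aligning these speaker turns with timestamped ASR segments is what yields the per-speaker transcript that downstream summarization and nudge logic consume.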
How to Apply: Send your CV, GitHub/portfolio, and a brief note on "Why AI at Darwix?" to careers@cur8.in. Subject line: Application - Senior AI Engineer - [Your Name]. Include links to: any relevant open-source contributions, LLM/STT models you've fine-tuned or deployed, and RAG pipelines you've worked on.
Final Thought: This is not just a job. This is your opportunity to build the world's most scalable AI sales intelligence platform, from India, for the world.
Posted 1 month ago
2.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Job Title: Lead AI Engineer
Company: Darwix AI
Location: Gurgaon (On-site)
Type: Full-Time
Experience: 2-6 Years
Level: Senior Level

About Darwix AI: Darwix AI is one of India's fastest-growing GenAI startups, revolutionizing the future of enterprise sales and customer engagement with real-time conversational intelligence. We are building a GenAI-powered agent-assist and pitch intelligence suite that captures, analyzes, and enhances every customer interaction, across voice, video, and chat, in real time. We serve leading enterprise clients across India, the UAE, and Southeast Asia and are backed by global VCs, top operators from Google, Salesforce, and McKinsey, and CXOs from the industry. This is your opportunity to join a high-caliber founding tech team solving frontier problems in real-time voice AI, multilingual transcription, retrieval-augmented generation (RAG), and fine-tuned LLMs at scale.

Role Overview: As the Lead AI Engineer, you will drive the development, deployment, and optimization of AI systems that power Darwix AI's real-time conversation intelligence platform. This includes voice-to-text transcription, speaker diarization, GenAI summarization, prompt engineering, knowledge retrieval, and real-time nudge delivery. You will lead a team of AI engineers and work closely with product managers, software architects, and data teams to ensure technical excellence, scalable architecture, and rapid iteration cycles. This is a high-ownership, hands-on leadership role where you will code, architect, and lead simultaneously.

Key Responsibilities:
1. AI Architecture & Model Development: Architect end-to-end AI pipelines for transcription, real-time inference, LLM integration, and vector-based retrieval. Build, fine-tune, and deploy STT models (Whisper, Wav2Vec 2.0) and diarization systems for speaker separation. Implement GenAI pipelines using OpenAI, Gemini, LLaMA, Mistral, and other LLM APIs or open-source models.
2. Real-Time Voice AI System Development: Design low-latency pipelines for capturing and processing audio in real time across multilingual environments. Work on WebSocket-based bi-directional audio streaming, chunked inference, and result caching. Develop asynchronous, event-driven architectures for voice processing and decision-making.
3. RAG & Knowledge Graph Pipelines: Create retrieval-augmented generation (RAG) systems that pull from structured and unstructured knowledge bases. Build vector DB architectures (e.g., FAISS, Pinecone, Weaviate) and connect them to LangChain/LlamaIndex workflows. Own chunking, indexing, and embedding strategies (OpenAI, Cohere, Hugging Face embeddings).
4. Fine-Tuning & Prompt Engineering: Fine-tune LLMs and foundational models using RLHF, SFT, and PEFT (e.g., LoRA) as needed. Optimize prompts for summarization, categorization, tone analysis, objection handling, etc. Perform few-shot and zero-shot evaluations for quality benchmarking.
5. Pipeline Optimization & MLOps: Ensure high availability and robustness of AI pipelines using CI/CD tools, Docker, Kubernetes, and GitHub Actions. Work with data engineering to streamline data ingestion, labeling, augmentation, and evaluation. Build internal tools to benchmark latency, accuracy, and relevance for production-grade AI features.
6. Team Leadership & Cross-Functional Collaboration: Lead, mentor, and grow a high-performing AI engineering team. Collaborate with backend, frontend, and product teams to build scalable production systems. Participate in architectural and design decisions across AI, backend, and data workflows.

Key Technologies & Tools:
Languages & Frameworks: Python, FastAPI, Flask, LangChain, PyTorch, TensorFlow, Hugging Face Transformers
Voice & Audio: Whisper, Wav2Vec 2.0, DeepSpeech, pyannote.audio, AssemblyAI, Kaldi, Mozilla TTS
Vector DBs & RAG: FAISS, Pinecone, Weaviate, ChromaDB, LlamaIndex, LangGraph
LLMs & GenAI APIs: OpenAI GPT-4/3.5, Gemini, Claude, Mistral, Meta LLaMA 2/3
DevOps & Deployment: Docker, GitHub Actions, CI/CD, Redis, Kafka, Kubernetes, AWS (EC2, Lambda, S3)
Databases: MongoDB, Postgres, MySQL, Pinecone, TimescaleDB
Monitoring & Logging: Prometheus, Grafana, Sentry, Elastic Stack (ELK)

Requirements & Qualifications:
Experience: 2-6 years of experience in building and deploying AI/ML systems, with at least 2 years in NLP or voice technologies. Proven track record of production deployment of ASR, STT, NLP, or GenAI models. Hands-on experience building systems involving vector databases, real-time pipelines, or LLM integrations.
Educational Background: Bachelor's or Master's in Computer Science, Artificial Intelligence, Machine Learning, or a related field. Tier 1 institute preferred (IITs, BITS, IIITs, NITs, or global top 100 universities).
Technical Skills: Strong coding experience in Python and familiarity with FastAPI/Django. Understanding of distributed architectures, memory management, and latency optimization. Familiarity with transformer-based model architectures, training techniques, and data pipeline design.
Bonus Experience: Worked on multilingual speech recognition and translation. Experience deploying AI models on edge devices or browsers. Built or contributed to open-source ML/NLP projects. Published papers or patents in voice, NLP, or deep learning domains.

What Success Looks Like in 6 Months: Lead the deployment of a real-time STT + diarization system for at least one enterprise client. Deliver a high-accuracy nudge-generation pipeline using RAG and summarization models. Build an in-house knowledge indexing + vector DB framework integrated into the product. Mentor 2-3 AI engineers and own execution across multiple modules. Achieve <1 second latency on the real-time voice-to-nudge pipeline from capture to recommendation.

What We Offer:
Compensation: Competitive fixed salary + equity + performance-based bonuses
Impact: Ownership of key AI modules powering thousands of live enterprise conversations
Learning: Access to high-compute GPUs, API credits, research tools, and conference sponsorships
Culture: High-trust, outcome-first environment that celebrates execution and learning
Mentorship: Work directly with founders, ex-Microsoft, IIT-IIM-BITS alums, and top AI engineers
Scale: Opportunity to scale an AI product from 10 clients to 100+ globally within 12 months

This Role is NOT for Everyone: If you're looking for a slow, abstract research role, this is NOT for you. If you're used to months of ideation before shipping, you won't enjoy our speed. If you're not comfortable being hands-on and diving into scrappy builds, you may struggle. But if you're a builder, architect, and visionary who loves solving hard technical problems and delivering real-time AI at scale, we want to talk to you.
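The GenAI summarization and nudge-generation work described above ultimately reduces to a prompted LLM call over a diarized transcript. Here is a minimal sketch using the OpenAI Python SDK (v1 style); the model name and prompt wording are assumptions, since the posting only names the GPT-4/3.5 family as options.

```python
# Call-summary prompt sketch with the OpenAI Python SDK (v1 interface).
# Assumes `pip install openai` and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

transcript = "Agent: ...\nCustomer: ...\n"  # diarized transcript goes here

response = client.chat.completions.create(
    model="gpt-4o-mini",   # illustrative model choice
    messages=[
        {"role": "system", "content": "Summarize the sales call in 3 bullet points "
                                      "and list one next-best action for the agent."},
        {"role": "user", "content": transcript},
    ],
    temperature=0.2,        # keep summaries stable across runs
)
print(response.choices[0].message.content)
```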
How to Apply: Send your CV, GitHub/portfolio, and a brief note on "Why AI at Darwix?" to careers@cur8.in. Subject line: Application - Lead AI Engineer - [Your Name]. Include links to: any relevant open-source contributions, LLM/STT models you've fine-tuned or deployed, and RAG pipelines you've worked on.
Final Thought: This is not just a job. This is your opportunity to build the world's most scalable AI sales intelligence platform, from India, for the world.
Posted 1 month ago
2 years
0 Lacs
Gurugram, Haryana, India
On-site
Job description ๐ Job Title: Lead AI Engineer Company : Darwix AI Location : Gurgaon (On-site) Type : Full-Time Experience : 2-6 Years Level : Senior Level ๐ About Darwix AI Darwix AI is one of Indiaโs fastest-growing GenAI startups, revolutionizing the future of enterprise sales and customer engagement with real-time conversational intelligence. We are building a GenAI-powered agent-assist and pitch intelligence suite that captures, analyzes, and enhances every customer interactionโacross voice, video, and chatโin real time. We serve leading enterprise clients across India, the UAE, and Southeast Asia and are backed by global VCs, top operators from Google, Salesforce, and McKinsey, and CXOs from the industry. This is your opportunity to join a high-caliber founding tech team solving frontier problems in real-time voice AI, multilingual transcription, retrieval-augmented generation (RAG), and fine-tuned LLMs at scale. ๐ง Role Overview As the Lead AI Engineer , you will drive the development, deployment, and optimization of AI systems that power Darwix AI's real-time conversation intelligence platform. This includes voice-to-text transcription, speaker diarization, GenAI summarization, prompt engineering, knowledge retrieval, and real-time nudge delivery. You will lead a team of AI engineers and work closely with product managers, software architects, and data teams to ensure technical excellence, scalable architecture, and rapid iteration cycles. This is a high-ownership, hands-on leadership role where you will code, architect, and lead simultaneously. ๐ง Key Responsibilities 1. AI Architecture & Model Development Architect end-to-end AI pipelines for transcription, real-time inference, LLM integration, and vector-based retrieval. Build, fine-tune, and deploy STT models (Whisper, Wav2Vec2.0) and diarization systems for speaker separation. Implement GenAI pipelines using OpenAI, Gemini, LLaMA, Mistral, and other LLM APIs or open-source models. 2. Real-Time Voice AI System Development Design low-latency pipelines for capturing and processing audio in real-time across multi-lingual environments. Work on WebSocket-based bi-directional audio streaming, chunked inference, and result caching. Develop asynchronous, event-driven architectures for voice processing and decision-making. 3. RAG & Knowledge Graph Pipelines Create retrieval-augmented generation (RAG) systems that pull from structured and unstructured knowledge bases. Build vector DB architectures (e.g., FAISS, Pinecone, Weaviate) and connect to LangChain/LlamaIndex workflows. Own chunking, indexing, and embedding strategies (OpenAI, Cohere, Hugging Face embeddings). 4. Fine-Tuning & Prompt Engineering Fine-tune LLMs and foundational models using RLHF, SFT, PEFT (e.g., LoRA) as needed. Optimize prompts for summarization, categorization, tone analysis, objection handling, etc. Perform few-shot and zero-shot evaluations for quality benchmarking. 5. Pipeline Optimization & MLOps Ensure high availability and robustness of AI pipelines using CI/CD tools, Docker, Kubernetes, and GitHub Actions. Work with data engineering to streamline data ingestion, labeling, augmentation, and evaluation. Build internal tools to benchmark latency, accuracy, and relevance for production-grade AI features. 6. Team Leadership & Cross-Functional Collaboration Lead, mentor, and grow a high-performing AI engineering team. Collaborate with backend, frontend, and product teams to build scalable production systems. 
Key Technologies & Tools
Languages & Frameworks: Python, FastAPI, Flask, LangChain, PyTorch, TensorFlow, Hugging Face Transformers
Voice & Audio: Whisper, Wav2Vec 2.0, DeepSpeech, pyannote.audio, AssemblyAI, Kaldi, Mozilla TTS
Vector DBs & RAG: FAISS, Pinecone, Weaviate, ChromaDB, LlamaIndex, LangGraph
LLMs & GenAI APIs: OpenAI GPT-4/3.5, Gemini, Claude, Mistral, Meta LLaMA 2/3
DevOps & Deployment: Docker, GitHub Actions, CI/CD, Redis, Kafka, Kubernetes, AWS (EC2, Lambda, S3)
Databases: MongoDB, Postgres, MySQL, Pinecone, TimescaleDB
Monitoring & Logging: Prometheus, Grafana, Sentry, Elastic Stack (ELK)

Requirements & Qualifications
Experience
2-6 years of experience in building and deploying AI/ML systems, with at least 2+ years in NLP or voice technologies.
Proven track record of production deployment of ASR, STT, NLP, or GenAI models.
Hands-on experience building systems involving vector databases, real-time pipelines, or LLM integrations.
Educational Background
Bachelor's or Master's in Computer Science, Artificial Intelligence, Machine Learning, or a related field.
Tier 1 institute preferred (IITs, BITS, IIITs, NITs, or global top 100 universities).
Technical Skills
Strong coding experience in Python and familiarity with FastAPI/Django.
Understanding of distributed architectures, memory management, and latency optimization.
Familiarity with transformer-based model architectures, training techniques, and data pipeline design.
Bonus Experience
Worked on multilingual speech recognition and translation.
Experience deploying AI models on edge devices or in browsers.
Built or contributed to open-source ML/NLP projects.
Published papers or patents in voice, NLP, or deep learning domains.

What Success Looks Like in 6 Months
Lead the deployment of a real-time STT + diarization system for at least one enterprise client (see the transcription sketch below this list).
Deliver a high-accuracy nudge generation pipeline using RAG and summarization models.
Build an in-house knowledge indexing + vector DB framework integrated into the product.
Mentor 2-3 AI engineers and own execution across multiple modules.
Achieve <1 sec latency on the real-time voice-to-nudge pipeline, from capture to recommendation.

What We Offer
Compensation: Competitive fixed salary + equity + performance-based bonuses
Impact: Ownership of key AI modules powering thousands of live enterprise conversations
Learning: Access to high-compute GPUs, API credits, research tools, and conference sponsorships
Culture: High-trust, outcome-first environment that celebrates execution and learning
Mentorship: Work directly with founders, ex-Microsoft, IIT-IIM-BITS alums, and top AI engineers
Scale: Opportunity to scale an AI product from 10 clients to 100+ globally within 12 months

This Role is NOT for Everyone
If you're looking for a slow, abstract research role, this is not for you.
If you're used to months of ideation before shipping, you won't enjoy our speed.
If you're not comfortable being hands-on and diving into scrappy builds, you may struggle.
But if you're a builder, architect, and visionary who loves solving hard technical problems and delivering real-time AI at scale, we want to talk to you.
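For orientation only, here is a minimal offline transcription sketch using the open-source openai-whisper package; it is not the production pipeline described above. A real-time deployment would stream chunked audio over WebSockets and add pyannote.audio for diarization, but the basic transcribe-and-timestamp step looks roughly like this. The file path and model size are placeholder assumptions.

```python
# Minimal illustrative sketch, not the real-time pipeline described in the posting.
# Assumes: pip install openai-whisper (ffmpeg must be available on PATH).
# "call_recording.wav" is a hypothetical placeholder file.
import time

import whisper

model = whisper.load_model("small")  # larger checkpoints trade latency for accuracy

start = time.perf_counter()
result = model.transcribe("call_recording.wav", language="en")
elapsed = time.perf_counter() - start

# Each segment carries start/end timestamps, which downstream diarization
# and nudge-generation steps can be aligned against.
for seg in result["segments"]:
    print(f"[{seg['start']:6.2f}s - {seg['end']:6.2f}s] {seg['text'].strip()}")

print(f"Transcribed in {elapsed:.1f}s")
```

A low-latency variant would replace the single file call with chunked streaming inference (e.g., faster-whisper or a WebSocket front end) so partial transcripts arrive while the call is still in progress.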
How to Apply
Send your CV, GitHub/portfolio, and a brief note on "Why AI at Darwix?" to: careers@cur8.in
Subject Line: Application - Lead AI Engineer - [Your Name]
Include links to:
Any relevant open-source contributions
LLM/STT models you've fine-tuned or deployed
RAG pipelines you've worked on

Final Thought
This is not just a job. This is your opportunity to build the world's most scalable AI sales intelligence platform, from India, for the world.
Posted 1 month ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Description
We are looking for an exceptional Data Scientist with deep expertise in speech technologies, advanced NLP, and LLM fine-tuning to join our cutting-edge AI research team. In this pivotal role, you will be responsible for building and optimizing state-of-the-art machine learning pipelines that drive intelligent audio and language-based products. Your work will directly contribute to the development of next-generation AI solutions that are privacy-focused, high-performance, and built for scale.

Key Responsibilities
Develop and deploy real-time ASR pipelines, leveraging models like Whisper, wav2vec2, or custom speech models.
Design and implement robust intent detection and entity extraction systems, utilizing transcribed speech, keyword spotting, and semantic pattern recognition.
Fine-tune LLMs and transformer architectures (BERT, RoBERTa, etc.) for tasks including intent classification, entity recognition, and contextual comprehension.
Optimize end-to-end pipelines for mobile and on-device inference, employing tools like TFLite, ONNX, quantization, and pruning to achieve low-latency performance (see the quantization sketch at the end of this posting).
Collaborate closely with AI product teams and MLOps engineers to ensure seamless deployment, continuous iteration, and performance monitoring.

Required Technical Skills
Hands-on experience with ASR models (Whisper, wav2vec2, DeepSpeech, Kaldi, Silero), with a focus on fine-tuning for Indian languages and multilingual scenarios.
Strong command of NLP techniques such as keyword spotting, sequence labeling, masked token prediction, and rule-based classification.
Proven track record in LLM and transformer fine-tuning for NER, intent detection, and domain-specific adaptation.
Expertise in speech metadata extraction, feature engineering, and signal enrichment.
Proficiency in model optimization methods like quantization-aware training (QAT), pruning, and efficient runtime deployment for edge devices.
Excellent Python skills with proficiency in PyTorch or TensorFlow, along with solid experience in NumPy, pandas, and real-time data processing frameworks.

Qualifications
Bachelor's or Master's degree in Computer Science, Electrical Engineering, Data Science, or a related technical field.
Academic or industry background in speech processing, ASR, telecom analytics, or applied NLP is highly desirable.
A portfolio showcasing real-world speech/NLP projects, open-source contributions, or published research will be a strong advantage.

Experience
3 to 6+ years of applied experience in speech AI, NLP for intent detection, or machine learning model development.
Proven success in building, deploying, and optimizing ML models for real-time, low-latency environments.
Contributions to leading open-source projects like openai/whisper, mozilla/DeepSpeech, or facebook/wav2vec2 are highly valued.
(ref:hirist.tech)
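As a rough illustration of the on-device optimization work this posting describes (and not this employer's actual pipeline), the sketch below applies post-training dynamic quantization to a Hugging Face sequence classifier and exports the float model to ONNX. The checkpoint name, label count, and opset version are placeholder assumptions; a real intent model would be fine-tuned first.

```python
# Illustrative sketch: shrink an intent classifier for low-latency / edge inference.
# Assumes: pip install torch transformers
# "bert-base-uncased" and num_labels=5 are placeholders, not a real deployment.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "bert-base-uncased"  # a fine-tuned intent checkpoint would go here
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=5)
model.eval()

# 1) Post-training dynamic quantization: Linear weights become int8,
#    cutting model size and speeding up CPU inference with minimal code change.
quantized = torch.quantization.quantize_dynamic(model, {torch.nn.Linear}, dtype=torch.qint8)

dummy = tokenizer("where is my refund", return_tensors="pt")
with torch.no_grad():
    logits = quantized(**dummy).logits
print("Predicted intent id:", int(logits.argmax(dim=-1)))

# 2) Export the original float model to ONNX for ONNX Runtime or mobile runtimes.
model.config.return_dict = False  # tuple outputs trace more cleanly for export
torch.onnx.export(
    model,
    (dummy["input_ids"], dummy["attention_mask"]),
    "intent_classifier.onnx",
    input_names=["input_ids", "attention_mask"],
    output_names=["logits"],
    dynamic_axes={"input_ids": {0: "batch", 1: "seq"},
                  "attention_mask": {0: "batch", 1: "seq"}},
    opset_version=17,
)
```

Quantization-aware training, pruning, and TFLite conversion (all mentioned above) would typically follow the same pattern of optimizing the trained model before packaging it for the target runtime.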
Posted 1 month ago
2 - 5 years
4 - 9 Lacs
Bengaluru
Work from Office
Role & responsibilities
Build audio ML/DL models for a given requirement.
Build models from scratch wherever required.
Must be able to implement relevant research papers independently.
Tune models to improve accuracy and latency.
Support the deployment requirements of the product.
Apply transfer learning wherever required.
Training, fine-tuning, and optimization of different flavors of transformer models.
Hands-on experience with LLMs, including fine-tuning and optimization.
Audio AI modelling and tuning experience.
Audio AI concepts, with ML/DL working experience in the PyTorch/TensorFlow/Keras frameworks.
Variations of CNN, RNN, attention mechanisms, Transformers, and large models such as wav2vec and Whisper.
Proficiency in machine learning and audio pre-processing libraries/frameworks such as Librosa, Kaldi, scikit-learn, python_speech_features, NumPy, SciPy, pandas, and Matplotlib (a short feature-extraction sketch follows this posting).
Signal processing fundamentals, understanding data characteristics and waveform signals.
Transfer learning.
Experience in acoustic models for speech recognition systems.
Experience in voice activity detection with noise analysis and denoising.
Experience in wake-up word detection and ASR.
Working experience of LLM fine-tuning, optimization, and performance improvement.
Audio AI or audio-related certification.
Tools for creating synthetic data.
Experience working on edge devices.
Experience in noise augmentation and mixing techniques.
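To ground the audio pre-processing requirements above, here is a small, generic sketch (not tied to this employer's stack) that loads a clip with Librosa, extracts log-mel and MFCC features, and applies a naive energy-threshold voice activity check. The file name, frame parameters, and threshold are placeholder assumptions; production systems use trained VAD models rather than a fixed energy cutoff.

```python
# Generic audio pre-processing sketch; file name and thresholds are placeholders.
# Assumes: pip install librosa numpy
import librosa
import numpy as np

# Load audio resampled to 16 kHz mono, a common rate for ASR front ends.
y, sr = librosa.load("sample_clip.wav", sr=16000, mono=True)

# Log-mel spectrogram and MFCCs: typical inputs for CNN/transformer audio models.
mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=80, n_fft=400, hop_length=160)
log_mel = librosa.power_to_db(mel, ref=np.max)
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)

print("log-mel shape:", log_mel.shape, "mfcc shape:", mfcc.shape)

# Naive energy-based voice activity detection: flag frames whose RMS energy
# exceeds a fraction of the clip's maximum.
rms = librosa.feature.rms(y=y, frame_length=400, hop_length=160)[0]
voiced = rms > 0.1 * rms.max()
print(f"voiced frames: {voiced.sum()} / {len(voiced)}")
```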
Posted 1 month ago