
4760 Latency Jobs - Page 8

JobPe aggregates these listings for easy access; you apply directly on the original job portal.

12.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Software Architect

Location: Sector 63, Gurgaon (On-site)
Working Days: Monday to Saturday (2nd and 4th Saturdays are working)
Working Hours: 10:30 AM – 8:00 PM
Experience: 8–12 years in scalable software design, with significant ownership of cloud-native, data-intensive, or AI-driven products
Apply: careers@darwix.ai
Subject Line: Application – Software Architect – [Your Name]

About Darwix AI
Darwix AI is a GenAI SaaS platform transforming how enterprise sales, service, and field teams operate across India, MENA, and Southeast Asia. Our products—Transform+, Sherpa.ai, and Store Intel—deliver multilingual speech-to-text pipelines, live agent coaching, behavioural scoring, and computer-vision insights for brands such as IndiaMart, Wakefit, Bank Dofar, Sobha, and GIVA. Backed by leading VCs and built by alumni from IIT, IIM, and BITS, we are scaling rapidly and require a robust architectural foundation for continued global growth.

Role Overview
The Software Architect will own the end-to-end technical architecture of Darwix AI's platform—spanning real-time voice processing, generative-AI services, analytics dashboards, and enterprise integrations. You will define reference architectures, lead critical design reviews, and partner with engineering, AI research, DevOps, and product teams to ensure the platform meets stringent requirements on scalability, latency, security, and maintainability.

Key Responsibilities

Architectural Ownership
- Define and evolve the platform architecture covering microservices, API gateways, data pipelines, real-time streaming, and storage strategies.
- Establish design patterns for integrating LLMs, speech-to-text engines, vector databases, and retrieval-augmented generation pipelines.
- Create and maintain architecture artefacts—logical and physical diagrams, interface contracts, data flow maps, threat models.

Scalability & Reliability
- Specify non-functional requirements (latency, throughput, availability, observability) and drive their implementation.
- Guide decisions on sharding, caching, queueing, and auto-scaling to handle spikes in concurrent calls and AI inference workloads.
- Collaborate with DevOps on HA/DR strategies, cost-optimised cloud deployments, and CI/CD best practices.

Technical Leadership
- Lead design reviews, code reviews, and proofs of concept for complex modules (speech pipelines, RAG services, dashboard analytics).
- Mentor senior and mid-level engineers on clean architecture, domain-driven design, and testability.
- Evaluate new tools, frameworks, and open-source components; build decision matrices for adoption.

Security & Compliance
- Set architectural guardrails for authentication, authorisation, encryption, and data residency.
- Support infosec questionnaires and client audits by providing architecture and data-flow evidence.
- Ensure alignment with industry standards (SOC 2, GDPR where applicable) in design and implementation.

Cross-Functional Collaboration
- Work with Product and AI leadership to translate business requirements into well-scoped, feasible technical solutions.
- Engage with customer-facing solution architects to map client environments to Darwix AI components.
- Drive architectural alignment across multiple engineering pods to avoid duplication and technical debt.

Required Skills & Qualifications
- 8–12 years in backend or full-stack engineering, with at least 3 years in an architecture or principal engineer role.
- Deep expertise in Python/Node.js, microservices, REST/gRPC APIs, and event-driven architectures (Kafka/Redis Streams).
- Strong knowledge of cloud platforms (AWS or GCP), container orchestration (Docker/Kubernetes), and IaC tools.
- Experience designing data platforms with PostgreSQL, MongoDB, Redis, S3, and vector databases (FAISS/Pinecone).
- Proven ability to optimise for high-concurrency, low-latency audio or data-streaming workloads.
- Demonstrated track record of guiding teams through major refactors, migrations, or greenfield platform builds.

Preferred Qualifications
- Familiarity with speech processing stacks (Whisper, Deepgram), LLM orchestration (LangChain), and GPU inference serving.
- Exposure to enterprise integrations with CRMs (Salesforce, Zoho), telephony (Twilio, Exotel), and messaging APIs (WhatsApp).
- Prior experience in a high-growth SaaS or AI startup serving international enterprise clients.
- Bachelor's or Master's degree in Computer Science or related discipline from a Tier 1 institution.

Success Metrics (First 12 Months)
- Architectural blueprints ratified and adopted across all engineering squads.
- Achieve target latency and uptime SLAs (≥ 99.99%) for real-time AI services.
- Reduction of production incidents attributable to architectural debt or design gaps.
- Completion of at least one major scalability initiative (e.g., regional multitenancy, GPU inference pool, streaming upgrade).
- Positive feedback from engineering teams on clarity and usability of architectural guidelines.

Who You Are
- A systems thinker who balances immediate product deadlines with long-term platform health.
- A pragmatic technologist: you know when to refactor, when to extend, and when to build net-new.
- A persuasive communicator comfortable explaining complex designs to engineers, product managers, and clients.
- A mentor and collaborator who raises the technical bar through example and feedback.
- Motivated by building resilient architectures that power real-world AI products at scale.

How to Apply
Send your résumé and (optionally) architectural portfolio links to careers@darwix.ai with the subject line "Application – Software Architect – [Your Name]".

Join Darwix AI to architect the next generation of real-time, multilingual conversational intelligence platforms and leave a lasting impact on how global enterprises drive revenue with AI.

Posted 4 days ago

Apply

10.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Senior AI Research Scientist

Location: Sector 63, Gurgaon (On-site)
Working Days: Monday to Saturday (2nd and 4th Saturdays are working)
Working Hours: 10:30 AM – 8:00 PM
Experience: 6–10 years in applied AI/ML research, with multiple publications or patents and demonstrable product impact
Apply: careers@darwix.ai
Subject Line: Application – Senior AI Research Scientist – [Your Name]

About Darwix AI
Darwix AI is a GenAI SaaS platform that powers real-time conversation intelligence, multilingual coaching, and behavioural analytics for large revenue and service teams. Our products—Transform+, Sherpa.ai, and Store Intel—integrate speech-to-text, LLM-driven analysis, real-time nudging, and computer vision to improve performance across BFSI, real estate, retail, and healthcare enterprises such as IndiaMart, Wakefit, Bank Dofar, GIVA, and Sobha.

Role Overview
The Senior AI Research Scientist will own the end-to-end research agenda that advances Darwix AI's core capabilities in speech, natural-language understanding, and generative AI. You will design novel algorithms, convert them into deployable prototypes, and collaborate with engineering to ship production-grade features that directly influence enterprise revenue outcomes.

Key Responsibilities

Research Leadership
- Formulate and drive a 12- to 24-month research roadmap covering multilingual speech recognition, conversation summarisation, LLM prompt optimisation, retrieval-augmented generation (RAG), and behavioural scoring.
- Publish internal white papers and, where strategic, peer-reviewed papers or patents to establish technological leadership.

Model Development & Prototyping
- Design and train advanced models (e.g., Whisper fine-tunes, Conformer-RNN hybrids, transformer-based diarisation, LLM fine-tuning with LoRA/QLoRA).
- Build rapid prototypes in PyTorch or TensorFlow; benchmark against latency, accuracy, and compute cost targets relevant to real-time use cases.

Production Transfer
- Work closely with backend and MLOps teams to convert research code into containerised, scalable inference micro-services.
- Define evaluation harnesses (WER, BLEU, ROUGE, accuracy, latency) and automate regression tests before every release.

Data Strategy
- Lead data-curation efforts: multilingual audio corpora, domain-specific fine-tuning datasets, and synthetic data pipelines for low-resource languages.
- Establish annotation guidelines, active-learning loops, and data quality metrics.

Cross-Functional Collaboration
- Act as the principal technical advisor in customer POCs involving custom language models, domain-specific ontologies, or privacy-sensitive deployments.
- Mentor junior researchers and collaborate with product managers on feasibility assessments and success metrics for AI-driven features.

Required Qualifications
- 6–10 years of hands-on research in ASR, NLP, or multimodal AI, including at least three years in a senior or lead capacity.
- Strong publication record (top conferences such as ACL, INTERSPEECH, NeurIPS, ICLR, EMNLP) or patents showing applied innovation.
- Expert-level Python and deep-learning fluency (PyTorch or TensorFlow); comfort with Hugging Face, OpenAI APIs, and distributed training.
- Proven experience delivering research outputs into production systems with measurable business impact.
- Solid grasp of advanced topics: sequence-to-sequence modelling, attention mechanisms, LLM alignment, speaker diarisation, vector search, on-device optimisation.

Preferred Qualifications
- Experience with Indic or Arabic speech/NLP, code-switching, or low-resource language modelling.
- Familiarity with GPU orchestration, Triton inference servers, TorchServe, or ONNX runtime optimisation.
- Prior work on enterprise call-centre datasets, sales enablement analytics, or real-time speech pipelines.
- Doctorate (PhD) in Computer Science, Electrical Engineering, or a closely related field from a Tier 1 institution.

Success Metrics
- Reduction of transcription error rate and/or inference latency by agreed percentage targets within 12 months.
- Successful deployment of at least two novel AI modules into production with adoption across Tier-1 client accounts.
- Internal citation and reuse of developed components in other product lines.
- Peer-recognised technical leadership through mentoring, documentation, and knowledge sharing.

Application Process
Send your résumé (and publication list, if separate) to careers@darwix.ai with the subject line indicated above. Optionally, include a one-page summary of a research project you transitioned from lab to production, detailing the problem, approach, and measured impact.

Joining Darwix AI as a Senior AI Research Scientist means shaping the next generation of real-time, multilingual conversational intelligence for enterprise revenue teams worldwide. If you are passionate about applied research that moves the business needle, we look forward to hearing from you.

Posted 4 days ago

Apply

7.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Applied Machine Learning Scientist – Voice AI, NLP & GenAI Applications

Location: Sector 63, Gurugram, Haryana – 100% In-Office
Working Days: Monday to Friday, with 2nd and 4th Saturdays off
Working Hours: 10:30 AM – 8:00 PM
Experience: 3–7 years in applied ML, with at least 2 years focused on voice, NLP, or GenAI deployments
Function: AI/ML Research & Engineering | Conversational Intelligence | Real-time Model Deployment
Apply: careers@darwix.ai
Subject Line: "Application – Applied ML Scientist – [Your Name]"

About Darwix AI
Darwix AI is a GenAI-powered platform transforming how enterprise sales, support, and credit teams engage with customers. Our proprietary AI stack ingests data across calls, chat, email, and CCTV streams to generate:
- Real-time nudges for agents and reps
- Conversational analytics and scoring to drive performance
- CCTV-based behavior insights to boost in-store conversion
We're live across leading enterprises in India and MENA, including IndiaMart, Wakefit, Emaar, GIVA, Bank Dofar, and others. We're backed by top-tier operators and venture investors and scaling rapidly across multiple verticals and geographies.

Role Overview
We are looking for a hands-on, impact-driven Applied Machine Learning Scientist to build, optimize, and productionize AI models across ASR, NLP, and LLM-driven intelligence layers. This is a core role in our AI/ML team where you'll be responsible for building the foundational ML capabilities that drive our real-time sales intelligence platform. You will work on large-scale multilingual voice-to-text pipelines, transformer-based intent detection, and retrieval-augmented generation systems used in live enterprise deployments.

Key Responsibilities

Voice-to-Text (ASR) Engineering
- Deploy and fine-tune ASR models such as WhisperX, wav2vec 2.0, or DeepSpeech for Indian and GCC languages (a generic serving sketch appears after this listing)
- Integrate diarization and punctuation recovery pipelines
- Benchmark and improve transcription accuracy across noisy call environments
- Optimize ASR latency for real-time and batch processing modes

NLP & Conversational Intelligence
- Train and deploy NLP models for sentence classification, intent tagging, sentiment, emotion, and behavioral scoring
- Build call scoring logic aligned to domain-specific taxonomies (sales pitch, empathy, CTA, etc.)
- Fine-tune transformers (BERT, RoBERTa, etc.) for multilingual performance
- Contribute to real-time inference APIs for NLP outputs in live dashboards

GenAI & LLM Systems
- Design and test GenAI prompts for summarization, coaching, and feedback generation
- Integrate retrieval-augmented generation (RAG) using OpenAI, HuggingFace, or open-source LLMs
- Collaborate with product and engineering teams to deliver LLM-based features with measurable accuracy and latency metrics
- Implement prompt tuning, caching, and fallback strategies to ensure system reliability

Experimentation & Deployment
- Own the model lifecycle: data preparation, training, evaluation, deployment, monitoring
- Build reproducible training pipelines using MLflow, DVC, or similar tools
- Write efficient, well-structured, production-ready code for inference APIs
- Document experiments and share insights with cross-functional teams

Required Qualifications
- Bachelor's or Master's degree in Computer Science, AI, Data Science, or related fields
- 3–7 years of experience applying ML in production, including NLP and/or speech
- Experience with transformer-based architectures for text or audio (e.g., BERT, Wav2Vec, Whisper)
- Strong Python skills with experience in PyTorch or TensorFlow
- Experience with REST APIs, model packaging (FastAPI, Flask, etc.), and containerization (Docker)
- Familiarity with audio pre-processing, signal enhancement, or feature extraction (MFCC, spectrograms)
- Knowledge of MLOps tools for experiment tracking, monitoring, and reproducibility
- Ability to work collaboratively in a fast-paced startup environment

Preferred Skills
- Prior experience working with multilingual datasets (Hindi, Arabic, Tamil, etc.)
- Knowledge of diarization and speaker separation algorithms
- Experience with LLM APIs (OpenAI, Cohere, Mistral, LLaMA) and RAG pipelines
- Familiarity with inference optimization techniques (quantization, ONNX, TorchScript)
- Contribution to open-source ASR or NLP projects
- Working knowledge of AWS/GCP/Azure cloud platforms

What Success Looks Like
- Transcription accuracy improvement ≥ 85% across core languages
- NLP pipelines used in ≥ 80% of Darwix AI's daily analyzed calls
- 3–5 LLM-driven product features delivered in the first year
- Inference latency reduced by 30–50% through model and infra optimization
- AI features embedded across all Tier 1 customer accounts within 12 months

Life at Darwix AI
You will be working in a high-velocity product organization where AI is core to our value proposition. You'll collaborate directly with the founding team and cross-functional leads, have access to enterprise datasets, and work on ML systems that impact large-scale, real-time operations. We value rigor, ownership, and speed. Model ideas become experiments in days, and successful experiments become deployed product features in weeks.

Compensation & Perks
- Competitive fixed salary based on experience
- Quarterly/annual performance-linked bonuses
- ESOP eligibility post 12 months
- Compute credits and model experimentation environment
- Health insurance, mental wellness stipend
- Premium tools and GPU access for model development
- Learning wallet for certifications, courses, and AI research access

Career Path
- Year 1: Deliver production-grade ASR/NLP/LLM systems for high-usage product modules
- Year 2: Transition into Senior Applied Scientist or Tech Lead for conversation intelligence
- Year 3: Grow into Head of Applied AI or Architect-level roles across vertical product lines

How to Apply
Email the following to careers@darwix.ai:
- Updated resume (PDF)
- A short write-up (200 words max): "How would you design and optimize a multilingual voice-to-text and NLP pipeline for noisy call center data in Hindi and English?"
- Optional: GitHub or portfolio links demonstrating your work
Subject Line: "Application – Applied Machine Learning Scientist – [Your Name]"
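
For context on the ASR serving work this listing describes, here is a minimal, hypothetical sketch of a speech-to-text endpoint. It assumes the open-source openai-whisper package and FastAPI; the model size, route name, and response shape are illustrative placeholders, not Darwix AI's actual stack.

import tempfile

import whisper  # pip install openai-whisper (also requires ffmpeg on PATH)
from fastapi import FastAPI, UploadFile

app = FastAPI()
model = whisper.load_model("small")  # larger checkpoints trade latency for accuracy


@app.post("/transcribe")
async def transcribe(file: UploadFile, language: str | None = None):
    # Persist the uploaded audio; Whisper reads from a file path via ffmpeg.
    with tempfile.NamedTemporaryFile(suffix=".wav", delete=False) as tmp:
        tmp.write(await file.read())
        path = tmp.name
    # language=None lets Whisper auto-detect, useful for Hindi/English code-switching.
    result = model.transcribe(path, language=language)
    return {
        "text": result["text"],
        "segments": [
            {"start": s["start"], "end": s["end"], "text": s["text"]}
            for s in result["segments"]
        ],
    }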

Posted 4 days ago

Apply

8.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Founding AI Engineer

Location: Sector 63, Gurgaon (On-site)
Working Days: Monday to Saturday (2nd and 4th Saturdays are working)
Working Hours: 10:30 AM – 8:00 PM
Experience: 4–8 years of hands-on AI/ML engineering in production environments
Apply: careers@darwix.ai
Subject Line: Application – Founding AI Engineer – [Your Name]

About Darwix AI
Darwix AI is a GenAI SaaS platform transforming how enterprise revenue and service teams operate. Our products—Transform+, Sherpa.ai, and Store Intel—deliver multilingual speech-to-text, live coaching nudges, behavioural scoring, and computer-vision insights for clients such as IndiaMart, Wakefit, Bank Dofar, Sobha, and GIVA. Backed by leading investors and built by IIT/IIM/BITS alumni, we are expanding rapidly across India, MENA, and Southeast Asia.

Role Overview
As the Founding AI Engineer, you will own the design, development, and deployment of Darwix AI's core machine-learning and generative-AI systems from the ground up. You will work directly with the CTO and founders to convert ambitious product ideas into scalable, low-latency services powering thousands of live customer interactions daily. This is a zero-to-one, high-ownership role that shapes the technical backbone—and the culture—of our AI organisation.

Key Responsibilities

End-to-End Model Build & Deployment
- Architect, train, and fine-tune multilingual speech-to-text, diarisation, NER, summarisation, and scoring models (Whisper, wav2vec 2.0, transformer-based NLP).
- Design RAG pipelines and prompt-engineering frameworks with commercial and open-source LLMs (OpenAI, Mistral, Llama 2).
- Build GPU/CPU-optimised inference micro-services in Python/FastAPI with strict latency budgets.

Production Engineering
- Implement asynchronous processing, message queues, caching, and load balancing for high-concurrency voice and text streams.
- Establish CI/CD, model versioning, A/B testing, and automated rollback for ML APIs.

Data Strategy & Tooling
- Define data-collection, labelling, and active-learning loops; build pipelines for continuous model improvement.
- Create evaluation harnesses (WER, ROUGE, AUROC, latency) and automate nightly regression tests (an illustrative WER check appears after this listing).

Security & Compliance
- Implement role-based access, encryption-at-rest/in-transit, and audit logging for all AI endpoints.
- Ensure adherence to enterprise infosec requirements and regional data-privacy standards.

Cross-Functional Collaboration
- Partner with product managers to translate customer pain points into technical requirements and success metrics.
- Work with backend, DevOps, and frontend teams to expose AI outputs via dashboards, APIs, and real-time agent assist overlays.

Technical Leadership
- Establish coding standards, documentation templates, and peer-review culture for the AI team.
- Mentor junior engineers as the team scales; influence hiring and tech-stack decisions.

Required Skills & Qualifications
- 4–8 years building and deploying ML systems in production (audio, NLP, or LLM focus).
- Expert-level Python; strong grasp of PyTorch (or TensorFlow), Hugging Face Transformers, and data-processing libraries.
- Proven record of optimising inference pipelines for sub-second latency at scale.
- Hands-on experience with cloud infrastructure (AWS or GCP), Docker/Kubernetes, and CI/CD for ML.
- Deep understanding of REST/gRPC APIs, security best practices, and high-availability architectures.
- Ability to articulate trade-offs and align technical decisions with business outcomes.

Preferred Experience
- Prior work on Indic or Arabic speech/NLP, code-switching, or low-resource language modelling.
- Familiarity with vector databases (Pinecone, FAISS), Redis Streams/Kafka, and GPU orchestration (Triton, TorchServe).
- Exposure to sales-tech, call-centre analytics, or real-time coaching platforms.
- Contributions to open-source AI projects or relevant peer-reviewed publications.

Success Metrics (First 12 Months)
- ≥ 25% reduction in transcription error rate or latency across core languages.
- Two net-new AI modules shipped to production and adopted by Tier-1 clients.
- Robust CI/CD and monitoring pipelines in place with < 1% model downtime.
- Documentation and onboarding playbooks enabling AI team headcount to double without quality loss.

Who You Are
- A builder who takes ideas from whiteboard to production with minimal supervision.
- A systems thinker who balances algorithmic innovation with engineering pragmatism.
- A hands-on leader who codes, mentors, and sets the technical bar through example.
- A product-centric technologist who obsesses over user impact, not benchmark vanity.
- A lifelong learner who follows the bleeding edge of GenAI and applies it wisely.

How to Apply
Email your résumé to careers@darwix.ai with the subject line specified above. Optionally, include a brief note detailing an AI system you have designed and deployed, the challenges faced, and the measurable impact achieved.

Joining Darwix AI as the Founding AI Engineer means taking ownership of the platform that will redefine how revenue teams worldwide leverage real-time intelligence. If you are ready to build, scale, and lead at the frontier of GenAI, we look forward to hearing from you.
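
As a rough illustration of the evaluation-harness idea mentioned in this listing, the snippet below computes word error rate (WER) for a small batch of transcripts with the jiwer library and asserts a quality budget. The sample sentences and the 0.15 threshold are made-up assumptions, not company targets.

from jiwer import wer  # pip install jiwer

# Reference (human) transcripts paired with ASR hypotheses.
references = [
    "please share the updated price list with the customer",
    "the emi starts from the fifth of next month",
]
hypotheses = [
    "please share the updated price list with the customer",
    "the emi starts from fifth of next month",
]

score = wer(references, hypotheses)  # aggregate WER across the batch
print(f"WER: {score:.3f}")

# A nightly regression test might simply assert the metric stays within budget.
assert score < 0.15, "transcription quality regressed beyond the assumed budget"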

Posted 4 days ago

Apply

15.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Head of AI & ML Platforms
Focus: Voice AI, NLP, Conversation Intelligence for Omnichannel Enterprise Sales

Location: Sector 63, Gurugram, Haryana — Full-time, 100% In-Office
Work Hours: 10:30 AM – 8:00 PM, Monday to Friday (2nd and 4th Saturdays off)
Experience Required: 8–15 years in AI/ML, with 3+ years leading teams in voice, NLP, or conversation platforms
Apply: careers@darwix.ai
Subject Line: "Application – Head of AI & ML Platforms – [Your Name]"

About Darwix AI
Darwix AI is a GenAI-powered platform for enterprise revenue teams across sales, support, credit, and retail. Our proprietary AI stack ingests multimodal inputs—voice calls, chat logs, emails, and CCTV streams—and delivers contextual nudges, conversation scoring, and performance analytics in real time. Our suite of products includes:
- Transform+: Real-time conversational intelligence for contact centers and field sales
- Sherpa.ai: A multilingual GenAI assistant that provides in-the-moment coaching, summaries, and objection handling support
- Store Intel: A computer vision solution that transforms CCTV feeds into actionable insights for physical retail spaces
Darwix AI is trusted by large enterprises such as IndiaMart, Wakefit, Emaar, GIVA, Bank Dofar, and Sobha Realty, and is backed by leading institutional and operator investors. We are expanding rapidly across India, the Middle East, and Southeast Asia.

Role Overview
We are seeking a highly experienced and technically strong Head of AI & ML Platforms to architect and lead the end-to-end AI systems powering our voice intelligence, NLP, and GenAI solutions. This is a leadership role that blends research depth with applied engineering execution. The ideal candidate will have deep experience in building and deploying voice-to-text pipelines, multilingual NLP systems, and production-grade inference workflows. The individual will be responsible for model design, accuracy benchmarking, latency optimization, infrastructure orchestration, and integration across our product suite. This is a critical leadership role with direct influence over product velocity, enterprise client outcomes, and future platform scalability.

Key Responsibilities

Voice-to-Text (ASR) Architecture
- Lead the design and optimization of large-scale automatic speech recognition (ASR) pipelines using open-source and commercial frameworks (e.g., WhisperX, Deepgram, AWS Transcribe)
- Enhance speaker diarization, custom vocabulary accuracy, and latency performance for real-time streaming scenarios
- Build fallback ASR workflows for offline and batch mode processing
- Implement multilingual and domain-specific tuning, especially for Indian and GCC languages

Natural Language Processing and Conversation Analysis
- Build NLP models for conversation segmentation, intent detection, tone/sentiment analysis, and call scoring
- Implement multilingual support (Hindi, Arabic, Tamil, etc.) with fallback strategies for mixed-language and dialectal inputs
- Develop robust algorithms for real-time classification of sales behaviors (e.g., probing, pitching, objection handling)
- Train and fine-tune transformer-based models (e.g., BERT, RoBERTa, DeBERTa) and sentence embedding models for text analytics

GenAI and LLM Integration
- Design modular GenAI pipelines for nudging, summarization, and response generation using tools like LangChain, LlamaIndex, and OpenAI APIs
- Implement retrieval-augmented generation (RAG) architectures for contextual, accurate, and hallucination-resistant outputs (a generic retrieval sketch follows this listing)
- Build prompt orchestration frameworks that support real-time sales coaching across channels
- Ensure safety, reliability, and performance of LLM-driven outputs across use cases

Infrastructure and Deployment
- Lead the development of scalable, secure, and low-latency AI services deployed via FastAPI, TorchServe, or similar frameworks
- Oversee model versioning, monitoring, and retraining workflows using MLflow, DVC, or other MLOps tools
- Build hybrid inference systems for batch, real-time, and edge scenarios depending on product usage
- Optimize inference pipelines for GPU/CPU balance, resource scheduling, and runtime efficiency

Team Leadership and Cross-functional Collaboration
- Recruit, manage, and mentor a team of machine learning engineers and research scientists
- Collaborate closely with Product, Engineering, and Customer Success to translate product requirements into AI features
- Own AI roadmap planning, sprint delivery, and KPI measurement
- Serve as the subject-matter expert for AI-related client discussions, sales demos, and enterprise implementation roadmaps

Required Qualifications
- 8+ years of experience in AI/ML with a minimum of 3 years in voice AI, NLP, or conversational platforms
- Proven experience delivering production-grade ASR or NLP systems at scale
- Deep familiarity with Python, PyTorch, HuggingFace, FastAPI, and containerized environments (Docker/Kubernetes)
- Expertise in fine-tuning LLMs and building multi-language, multi-modal intelligence stacks
- Demonstrated experience with tools such as WhisperX, Deepgram, Azure Speech, LangChain, MLflow, or Triton Inference Server
- Experience deploying real-time or near real-time inference models at enterprise scale
- Strong architectural thinking with the ability to design modular, reusable, and scalable ML services
- Track record of building and leading high-performing ML teams

Preferred Skills
- Background in telecom, contact center AI, conversational analytics, or field sales optimization
- Familiarity with GPU deployment, model quantization, and inference optimization
- Experience with low-resource languages and multilingual data augmentation
- Understanding of sales enablement workflows and domain-specific ontology development
- Experience integrating AI models into customer-facing SaaS dashboards and APIs

Success Metrics
- Transcription accuracy improvement by ≥15% across core languages within 6 months
- End-to-end voice-to-nudge latency reduced below 5 seconds
- GenAI assistant adoption across 70%+ of eligible conversations
- AI-driven call scoring rolled out across 100% of Tier 1 clients within 9 months
- Model deployment velocity (dev to prod) reduced by ≥40% through tooling and process improvements

Culture at Darwix AI
At Darwix AI, we operate at the intersection of engineering velocity and product clarity. We move fast, prioritize outcomes over optics, and expect leaders to drive hands-on impact. You will work directly with the founding team and senior leaders across engineering, product, and GTM functions. Expect ownership, direct communication, and a culture that values builders who scale systems, people, and strategy.

Compensation and Benefits
- Competitive fixed compensation
- Performance-based bonuses and growth-linked incentives
- ESOP eligibility for leadership candidates
- Access to GPU/compute credits and model experimentation infrastructure
- Comprehensive medical insurance and wellness programs
- Dedicated learning and development budget for technical and leadership upskilling
- MacBook Pro, premium workstation, and access to industry tooling licenses

Career Progression
- 12-month roadmap: Build and stabilize the AI platform across all product lines
- 18–24-month horizon: Elevate to VP of AI or Chief AI Officer as platform scale increases globally
- Future leadership role in enabling new verticals (e.g., healthcare, finance, logistics) with domain-specific GenAI solutions

How to Apply
Send the following to careers@darwix.ai:
- Updated CV (PDF format)
- A short statement (200 words max) on: "How would you design a multilingual voice-to-text pipeline optimized for low-resource Indic languages, with real-time nudge delivery?"
- Links to any relevant GitHub repos, publications, or deployed projects (optional)
Subject Line: "Application – Head of AI & ML Platforms – [Your Name]"
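
Since this listing leans heavily on retrieval-augmented generation, here is a compressed, generic sketch of the retrieval half of such a pipeline. It assumes the sentence-transformers library and an in-memory list standing in for a vector database such as FAISS or Pinecone; the snippets and model choice are illustrative only.

import numpy as np
from sentence_transformers import SentenceTransformer  # pip install sentence-transformers

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # small, CPU-friendly embedding model

knowledge_base = [
    "Objection handling: acknowledge the concern, then restate value in the customer's terms.",
    "Pricing questions are answered from the current rate card, never with ad hoc discounts.",
    "Escalate compliance-sensitive queries to a supervisor before responding.",
]
kb_vectors = encoder.encode(knowledge_base, normalize_embeddings=True)


def retrieve(query: str, top_k: int = 2) -> list[str]:
    """Return the top_k snippets most similar to the query (cosine similarity)."""
    q = encoder.encode([query], normalize_embeddings=True)[0]
    scores = kb_vectors @ q  # dot product of normalized vectors equals cosine similarity
    best = np.argsort(scores)[::-1][:top_k]
    return [knowledge_base[i] for i in best]


context = retrieve("The customer says the price is too high")
prompt = "Answer using only this context:\n" + "\n".join(context)
print(prompt)  # in a full pipeline this grounded prompt would be sent to the LLM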

Posted 4 days ago

Apply

2.0 years

0 Lacs

Bengaluru, Karnataka

On-site

Basic Qualifications
- 3+ years of non-internship professional software development experience
- 2+ years of non-internship design or architecture (design patterns, reliability and scaling) of new and existing systems experience
- 3+ years of Video Games Industry (supporting title Development, Release, or Live Ops) experience
- Experience programming with at least one software programming language

Are you eager to shape the future of video entertainment, from movies and TV shows to live sports events? Do you enjoy tackling complex challenges within large-scale systems? If so, we invite you to help build the future of entertainment with us at Prime Video!

Since its launch in 2007, Prime Video has been transforming the traditional television and movie industry. With a rapidly expanding library of high-quality content and availability in over 240 countries and territories, Prime Video is now a key strategic priority for Amazon. We invest heavily in acquiring, producing, and programming a diverse range of TV shows, movies, and live events. Our exclusive titles include "The Lord of the Rings," "The Boys," "The Wheel of Time," and "The Marvelous Mrs. Maisel," among others. Beyond our extensive On-Demand catalog, Prime Video offers a wide selection of Linear Channels and Live Sports, featuring major events like Thursday Night Football, the English Premier League, NBA, MLB, ATP tennis, French Open, and the UEFA Champions League. If this sparks your interest, read on!

Key job responsibilities
- Design, develop, and maintain highly scalable systems that power Prime Video's streaming services.
- Define the architecture for our software, utilizing a broad range of technologies, programming languages, and systems.
- Implement and operate APIs that manage billions of playback start requests, ensuring seamless, low-latency content delivery.
- Collaborate with cross-functional teams to influence the overall technical strategy and contribute to the team's long-term roadmap.
- Communicate effectively across teams and stakeholders, ensuring alignment with technical goals and project timelines.
- Apply creative problem-solving skills to address complex challenges.
- Stay ahead of industry trends and new technologies, integrating them where appropriate.
- Mentor and guide other engineers, contributing to the overall growth and success of the engineering team.

A day in the life
As a software engineer, I design and build the APIs that power every title on Prime Video — ensuring seamless, low-latency streaming at massive scale. My day blends deep coding sessions, architectural decisions, and collaboration with cross-functional teams to shape the platform's future. I tackle challenges using AWS services like EC2, S3, DynamoDB, and Kinesis, and find real joy in seeing my work in action when friends stream their favorite shows. I also mentor junior engineers and explore new ideas that push the boundaries. Knowing my work impacts millions of viewers makes each day deeply rewarding.

About the team
The Play Starts team is at the core of Prime Video's streaming experience. We manage essential services that handle customer interactions from the moment they press play. Our mission is to ensure seamless content delivery, personalizing each viewer's experience based on their subscriptions and device specifications. We handle content manifest delivery, synchronize play states across devices, and manage concurrent streaming, all while securing the content. The services we operate are critical to both Live and Video-on-Demand playback, and we handle billions of playback start requests every month.

Preferred Qualifications
- 3+ years of full software development life cycle, including coding standards, code reviews, source control management, build processes, testing, and operations experience
- Bachelor's degree in computer science or equivalent

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.

Posted 4 days ago

Apply

0 years

0 Lacs

Delhi, India

On-site

We are seeking a highly skilled and experienced Senior Python Developer to join our dynamic team. The ideal candidate will have a strong background in designing, developing, and deploying robust and scalable Python applications. You will be responsible for leading complex projects, mentoring junior developers, and contributing to architectural decisions. This role requires a deep understanding of Python best practices, excellent problem-solving skills, and a passion for building high-quality software.

Key Responsibilities

Development and Architecture
- Design, develop, and maintain high-performance, scalable, and secure Python applications.
- Lead the development of complex features and modules, ensuring adherence to architectural best practices.
- Contribute to architectural discussions and decisions, providing technical leadership and guidance.
- Implement and optimize microservices architecture.
- Provide technical expertise in the selection and implementation of appropriate technologies and frameworks.

API Development and Integration
- Design and implement RESTful or GraphQL APIs for seamless integration with frontend and other backend systems.
- Integrate with third-party APIs and services.
- Ensure API security.

Database Design and Management
- Design and implement database schemas (SQL and NoSQL) to support application requirements.
- Write efficient SQL queries and optimize database performance.
- Integrate applications with various database systems (e.g., PostgreSQL, MySQL, MongoDB).

Performance Optimization and Troubleshooting
- Identify and resolve performance bottlenecks and latency issues (a generic caching sketch follows this listing).
- Debug and troubleshoot complex Python systems.
- Conduct thorough performance testing and optimization.
- Implement robust monitoring and logging solutions.

Collaboration and Mentorship
- Collaborate with product managers, designers, and other developers to define and implement software solutions.
- Mentor and guide junior developers, providing technical guidance and support.
- Participate in code reviews, providing constructive feedback and ensuring code quality.
- Contribute to technical documentation and knowledge sharing.

Software Development Lifecycle
- Participate in agile development processes, including sprint planning, daily stand-ups, and retrospectives.
- Ensure proper documentation and maintain software development standards.
- Stay updated with the latest Python technologies and industry trends.

Deployment and DevOps
- Deploy and maintain applications on cloud platforms (e.g., AWS, Azure, GCP).
- Implement CI/CD pipelines for automated testing and deployment.
- Work with containerization technologies (Docker, Kubernetes).

Required Technical Skills

Python Proficiency
- Extensive experience in Python 3.
- Strong understanding of Python design patterns and best practices.
- Experience with asynchronous programming (e.g., asyncio).

Backend Frameworks
- Proficiency in at least one Python web framework (e.g., Django, Flask, FastAPI).

API Development
- Strong understanding of RESTful or GraphQL API design principles and best practices.
- Experience with API documentation tools (e.g., Swagger, OpenAPI).

Databases
- Proficiency in SQL and NoSQL databases.
- Experience with database ORMs (e.g., SQLAlchemy, Django ORM).

Cloud Platforms
- Experience with cloud platforms such as AWS, Azure, or GCP.
- Experience with deploying and managing Python applications in cloud environments.

Version Control
- Proficiency in Git version control.

Testing
- Experience with unit testing, integration testing, and end-to-end testing.
- Proficiency in testing frameworks such as pytest or unittest.

DevOps
- Familiarity with CI/CD pipelines and tools (e.g., Jenkins, GitLab CI).
- Experience with containerization technologies (e.g., Docker).

Additional Skills
- Experience with message queues (e.g., RabbitMQ, Kafka).
- Knowledge of serverless architecture.
- Experience with caching mechanisms (e.g., Redis, Memcached).
- Experience with performance monitoring and profiling tools.
- Knowledge of security best practices for Python applications.
- Experience with data science libraries (e.g., NumPy, Pandas).

Soft Skills
- Strong problem-solving and analytical skills.
- Excellent communication and interpersonal skills.
- Ability to work independently and as part of a team.
- Strong leadership and mentoring abilities.
- Ability to learn and adapt to new technologies.
- Strong attention to detail and a commitment to quality.

Education
Bachelor's or Master's degree in Computer Science, Engineering, or a related field.

(ref:hirist.tech)
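
To make the asynchronous-programming and caching requirements above concrete, here is a small hypothetical sketch: an async FastAPI endpoint that caches an expensive lookup in Redis to cut latency. The route, cache-key scheme, and 60-second TTL are illustrative assumptions rather than a prescribed design.

import asyncio
import json

import redis.asyncio as redis  # redis-py >= 4.2
from fastapi import FastAPI

app = FastAPI()
cache = redis.from_url("redis://localhost:6379/0", decode_responses=True)


async def fetch_report(report_id: int) -> dict:
    # Stand-in for a slow database query or downstream API call.
    await asyncio.sleep(1)
    return {"report_id": report_id, "status": "ready"}


@app.get("/reports/{report_id}")
async def get_report(report_id: int):
    key = f"report:{report_id}"
    cached = await cache.get(key)
    if cached is not None:
        return json.loads(cached)           # cache hit: skip the slow path
    report = await fetch_report(report_id)  # cache miss: compute, then store
    await cache.set(key, json.dumps(report), ex=60)
    return report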

Posted 4 days ago

Apply

5.0 years

0 Lacs

Navi Mumbai, Maharashtra, India

On-site

Job Description
The primary role of the Tech Architect will be to lead the technical architecture of highly scalable, data-computation-heavy SaaS products for AI-powered dynamic price optimization, including but not limited to:
- Product performance: Ensure applications are scalable, fault tolerant, highly available, handle a high degree of concurrency, deliver low latency, and perform AI tasks at massive scale (over 1 billion data points processed per day).
- Tech architecture design: Lead end-to-end tech design of SaaS applications.
- Project management: Work with other software and full stack developers and oversee that developments adhere to the tech design, following software development and documentation standards consistently at the highest quality.
- Plan integrations: Plan API-based integrations with third-party tech platforms (e.g. GDSs operating in the Travel or Hospitality industries; seller services in marketplaces like Amazon and Flipkart; public apps in Shopify and WooCommerce; sales enablement platforms like Salesforce, Microsoft Dynamics, etc.).
- Oversee data engineering: Build a complete understanding of the data architecture; write data processing scripts to prepare data for feeding to the AI engine(s).
- AI and ML modules: Ensure seamless integration of AI/ML modules at production scale.
- App performance optimization: Enhance the data processing and tech architecture to deliver superior computation performance and scalability of the apps.
- Product enhancements: Interact with client and business side teams to develop a roadmap for product enhancements.

Candidate Profile
- Entrepreneurial mindset with a positive attitude is a must.
- Prior experience (minimum 5 years) in tech architecture design of multi-tenant SaaS full stack application development.
- Expert knowledge of new-generation technologies is a must (e.g. message queues, MSA architecture, PWA architecture, React, NoSQL, serverless, Redis, Python, Django, SQL, Docker/containers/Kubernetes, etc.).
- Strong acumen in algorithmic problem solving.
- Self-learner: highly curious, a self-starter who can work with minimum supervision and guidance.
- Track record of excellence in academic or non-academic areas, with significant accomplishments.
- Excellent written and oral communication and interpersonal skills, with a high degree of comfort working in teams and making teams successful.

Qualifications
- BTech / BE / BS / MTech / MCA / ME / MS in Computer Science/Engineering, IT or a related field.
- 5–7 years of work experience.

(ref:hirist.tech)

Posted 4 days ago

Apply

5.0 years

0 Lacs

Coimbatore, Tamil Nadu, India

On-site

🚀 Senior C++ Full Stack Developer
📍 Location: Coimbatore | 🕒 Full-Time | On-Site
🏢 Company: Xtreme Next

Company Description
Xtreme Next is a leading provider of all-in-one multi-asset trading platforms for exchanges, brokers, hedge funds, and financial institutions. Our ecosystem includes a branded mobile app, CRM, liquidity hub, order matching engine, PAMM/MAM, social trading, algo trading tools, backtesting systems, multi-chart layout, FIX engine, FIX API, and customizable trading solutions. We focus on delivering ultra-low-latency, high-performance, and scalable infrastructure for global financial markets.

Role Description
We're looking for a skilled and performance-driven Senior C++ Developer to join our core engineering team in Coimbatore. You will work on building the backbone of our trading infrastructure — developing low-latency modules, order management systems, and trading APIs that power real-time financial transactions. This is a hands-on role for someone who thrives in a high-performance environment and has strong experience in C++ system-level development, preferably within trading or fintech domains.

🛠️ What You Will Do
- Design and develop low-latency, high-throughput C++ components for trading platforms
- Build core modules such as order matching engine, FIX engine, market data parsers, and execution gateways
- Write clean, efficient, and well-tested code for real-time applications
- Optimize performance through multithreading, memory management, and system profiling
- Integrate external APIs, market data feeds, and connectivity layers
- Work closely with team leads and QA for implementation, debugging, and performance tuning
- Contribute to system architecture discussions and product evolution
- Maintain code quality, participate in code reviews, and follow best practices
- Troubleshoot production issues and deliver high-reliability solutions under pressure

✅ What You Bring
- 5+ years of experience in C++ software development
- Strong experience with C++11/14/17, STL, Boost, and system-level programming
- Solid knowledge of multithreading, network programming (TCP/UDP), and memory optimization
- Experience in low-latency and real-time system development
- Familiarity with Linux, gdb, valgrind, and performance profiling tools
- Exposure to financial protocols like FIX, FAST, or proprietary market data protocols
- Experience with message queues, shared memory, or high-performance IPC mechanisms is a plus
- Strong debugging, problem-solving, and analytical skills
- Bachelor's or Master's degree in Computer Science, Engineering, or related field
- Experience in trading, market data systems, or financial exchanges is a major plus

💼 Why Join Xtreme Next?
- Build systems that run at the core of global financial markets
- Work on real-world high-performance, low-latency infrastructure
- Collaborate with a team of passionate engineers and fintech experts
- Competitive salary, growth opportunities, and cutting-edge tech stack
- Be part of a fast-scaling product company with global clients

Posted 4 days ago

Apply

8.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

🚀 Head of Development (Tech Lead) – C++
📍 Location: Mumbai | 🕒 Full-Time | On-Site
🏢 Company: Xtreme Next

Company Description
Xtreme Next is a leading provider of all-in-one multi-asset trading platforms for exchanges, financial institutions, brokers, and hedge funds. Our platform ecosystem includes a branded mobile app, CRM, liquidity hub, order matching engine, PAMM/MAM, social trading, algo trading, strategy builder, backtesting tools, multi-chart layout, FIX engine, FIX API, and fully customizable trading solutions. We are committed to delivering high-performance, low-latency solutions for the global financial trading industry.

Role Description
We're seeking an experienced Head of Development (Tech Lead) with deep expertise in C++ and high-performance systems development to lead the technical direction of our core trading infrastructure. This full-time, on-site position in Mumbai will oversee the engineering team, drive technical architecture, and ensure successful delivery of scalable trading technologies.

🛠️ What You Will Do
- Lead development of performance-critical components in C++ (e.g., order matching engine, FIX engine, market data handlers)
- Architect scalable systems and low-latency, high-throughput trading systems
- Manage a growing team of C++ developers and software engineers
- Collaborate with product, QA, and business teams to align tech with strategic goals
- Review architecture, code, and system design to ensure scalability and reliability
- Own the full development lifecycle, from planning to deployment and support
- Optimize system performance and minimize latency through advanced techniques
- Ensure secure and fault-tolerant design across all modules
- Lead R&D efforts for new trading technologies and platform enhancements

✅ What You Bring
- 8+ years of experience in software engineering with a strong C++ background
- 2+ years in a leadership or tech lead role
- Proven experience building and scaling real-time trading systems
- Strong understanding of multi-threading, networking, and memory management
- Familiarity with financial protocols like FIX, FAST, or binary protocols
- Experience with low-latency databases and message queues is a plus
- Solid understanding of Linux systems, debugging tools, and profiling
- Strong grasp of software architecture, performance tuning, and design patterns
- Excellent communication and leadership skills
- Bachelor's or Master's in Computer Science, Engineering, or related field
- Experience in the financial trading industry is highly preferred

💼 Why Join Xtreme Next?
- Work on cutting-edge trading technologies used by global financial institutions
- Lead a talented, growing team in a fast-paced and innovation-driven environment
- Opportunities for career advancement and continuous learning
- Be part of a high-impact company shaping the future of multi-asset trading platforms

Posted 4 days ago

Apply

4.0 years

0 Lacs

Bangalore Urban, Karnataka, India

On-site

Yubi, formerly known as CredAvenue, is re-defining global debt markets by freeing the flow of finance between borrowers, lenders, and investors. We are the world's possibility platform for the discovery, investment, fulfillment, and collection of any debt solution. At Yubi, opportunities are plenty and we equip you with the tools to seize them.

In March 2022, we became India's fastest fintech and most impactful startup to join the unicorn club with a Series B fundraising round of $137 million. In 2020, we began our journey with a vision of transforming and deepening the global institutional debt market through technology. Our two-sided debt marketplace helps institutional and HNI investors find the widest network of corporate borrowers and debt products on one side, and helps corporates discover investors and access debt capital efficiently on the other. Switching between platforms is easy, which means investors can lend, invest, and trade bonds - all in one place. All of our platforms shake up the traditional debt ecosystem and offer new ways of digital finance.

- Yubi Credit Marketplace - With the largest selection of lenders on one platform, our credit marketplace helps enterprises partner with lenders of their choice for any and all capital requirements.
- Yubi Invest - Fixed income securities platform for wealth managers and financial advisors to channel client investments in fixed income.
- Financial Services Platform - Designed for financial institutions to manage co-lending partnerships and asset-based securitization.
- Spocto - Debt recovery and risk mitigation platform.
- Corpository - Dedicated SaaS solutions platform powered by decision-grade data, analytics, pattern identification, early warning signals, and predictions for lenders, investors, and business enterprises.

So far, we have on-boarded over 17,000+ enterprises, 6,200+ investors and lenders, and have facilitated debt volumes of over INR 1,40,000 crore. Backed by marquee investors like Insight Partners, B Capital Group, Dragoneer, Sequoia Capital, LightSpeed and Lightrock, we are the only-of-its-kind debt platform globally, revolutionizing the segment.

At Yubi, people are at the core of the business and our most valuable assets. Yubi is constantly growing, with 1000+ like-minded individuals today who are changing the way people perceive debt. We are a fun bunch who are highly motivated and driven to create a purposeful impact. Come, join the club to be a part of our epic growth story.

Key Responsibilities:
- Develop and maintain APIs using Node.js and FastAPI for bot orchestration and deployment.
- Implement and optimize CI/CD pipelines (GitHub, Docker, AWS ECR) for automated bot deployments.
- Manage and operate Kubernetes clusters (AWS EKS or K3S) for voice bot hosting and scaling.
- Integrate voice bots with ASR, TTS, and telephony systems (e.g., AWS Connect).
- Implement real-time monitoring and alerting for bot performance, latency, and system health.
- Collaborate on ensuring high availability and fault tolerance for 10M+ daily user interactions.
- Work with the Campaign Management Platform to schedule and execute outbound voice campaigns.

Required Qualifications:
- 4-5 years of experience in MLOps, DevOps, or a related software engineering role.
- Strong proficiency in Node.js and Python (FastAPI) for backend development.
- Strong proficiency in Redis, NATS, or Dragonfly for context and cache management.
- Strong proficiency in MongoDB, SQLite, or any database.
- Solid experience with Docker and containerization.
- Hands-on experience with Kubernetes (EKS or K3S) for deployment and operations.
- Practical experience with AWS cloud services (EC2, EKS, ECR, S3, CloudWatch).
- Experience with CI/CD pipelines.

Preferred Qualifications (Added Advantages):
- Exposure to frontend development (e.g., React).
- Familiarity with Voice AI architecture (ASR/TTS/LLM) or telephony systems.
- Experience with LLM serving frameworks.
- Exposure to campaign management or outbound dialer systems.

Posted 4 days ago

Apply

10.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Principal Software Engineer – AI

Location: Gurgaon (In-Office)
Working Days: Monday to Saturday (2nd and 4th Saturdays are working)
Working Hours: 10:30 AM – 8:00 PM
Experience: 6–10 years of hands-on development in AI/ML systems, with deep experience in shipping production-grade AI products
Apply at: careers@darwix.ai
Subject Line: Application – Principal Software Engineer – AI – [Your Name]

About Darwix AI
Darwix AI is India's fastest-growing GenAI SaaS platform transforming how large sales and CX teams operate across India, MENA, and Southeast Asia. We build deeply integrated conversational intelligence and agent assist tools that enable:
- Multilingual speech-to-text pipelines
- Real-time agent coaching
- AI-powered sales scoring
- Predictive analytics and nudges
- CRM and telephony integrations
Our clients include leading enterprises like IndiaMart, Bank Dofar, Wakefit, GIVA, and Sobha, and our product is deeply embedded in the daily workflows of field agents, telecallers, and enterprise sales teams. We are backed by top VCs and built by alumni from IIT, IIM, and BITS with deep expertise in real-time AI, enterprise SaaS, and automation.

Role Overview
We are hiring a Principal Software Engineer – AI to lead the development of advanced AI features in our conversational intelligence suite. This is a high-ownership role that combines software engineering, system design, and AI/ML application delivery. You will work across our GenAI stack—including Whisper, LangChain, LLMs, audio streaming, transcript processing, NLP pipelines, and scoring models—to build robust, scalable, and low-latency AI modules that power real-time user experiences. This is not a research role. You will be building, deploying, and optimizing production-grade AI features used daily by thousands of sales agents and managers across industries.

Key Responsibilities

1. AI System Architecture & Development
- Design, build, and optimize core AI modules such as:
  - Multilingual speech-to-text (Whisper, Deepgram, Google STT)
  - Prompt-based LLM workflows (OpenAI, open-source LLMs) (a generic scoring-prompt sketch follows this listing)
  - Transcript post-processing: punctuation, speaker diarization, timestamping
  - Real-time trigger logic for call nudges and scoring
- Build resilient pipelines using Python, FastAPI, Redis, Kafka, and vector databases

2. Production-Grade Deployment
- Implement GPU/CPU-optimized inference services for latency-sensitive workflows
- Use caching, batching, asynchronous processing, and message queues to scale real-time use cases
- Monitor system health, fallback workflows, and logging for ML APIs in live environments

3. ML Workflow Engineering
- Work with the Head of AI to fine-tune, benchmark, and deploy custom models for:
  - Call scoring (tone, compliance, product pitch)
  - Intent recognition and sentiment classification
  - Text summarization and cue generation
- Build modular services to plug models into end-to-end workflows

4. Integrations with Product Modules
- Collaborate with frontend, dashboard, and platform teams to serve AI output to users
- Ensure transcript mapping, trigger visualization, and scoring feedback appear in real-time in the UI
- Build APIs and event triggers to interface AI systems with CRMs, telephony, WhatsApp, and analytics modules

5. Performance Tuning & Optimization
- Profile latency and throughput of AI modules under production loads
- Implement GPU-aware batching, model distillation, or quantization where required
- Define and track key performance metrics (latency, accuracy, dropout rates)

6. Tech Leadership
- Mentor junior engineers and review AI system architecture, code, and deployment pipelines
- Set engineering standards and documentation practices for AI workflows
- Contribute to planning, retrospectives, and roadmap prioritization

What We're Looking For

Technical Skills
- 6–10 years of backend or AI-focused engineering experience in fast-paced product environments
- Strong Python fundamentals with experience in FastAPI, Flask, or similar frameworks
- Proficiency in PyTorch, Transformers, and OpenAI API/LangChain
- Deep understanding of speech/text pipelines, NLP, and real-time inference
- Experience deploying LLMs and AI models in production at scale
- Comfort with PostgreSQL, MongoDB, Redis, Kafka, S3, and Docker/Kubernetes

System Design Experience
- Ability to design and deploy distributed AI microservices
- Proven track record of latency optimization, throughput scaling, and high-availability setups
- Familiarity with GPU orchestration, containerization, CI/CD (GitHub Actions/Jenkins), and monitoring tools

Bonus Skills
- Experience working with multilingual STT models and Indic languages
- Knowledge of Hugging Face, Weaviate, Pinecone, or vector search infrastructure
- Prior work on conversational AI, recommendation engines, or real-time coaching systems
- Exposure to sales/CX intelligence platforms or enterprise B2B SaaS

Who You Are
- A pragmatic builder—you don't chase perfection but deliver what scales
- A systems thinker—you see across data flows, bottlenecks, and trade-offs
- A hands-on leader—you mentor while still writing meaningful code
- A performance optimizer—you love shaving off latency and memory bottlenecks
- A product-focused technologist—you think about UX, edge cases, and real-world impact

What You'll Impact
- Every nudge shown to a sales agent during a live customer call
- Every transcript that powers a manager's coaching decision
- Every scorecard that enables better hiring and training at scale
- Every dashboard that shows what drives revenue growth for CXOs
This role puts you at the intersection of AI, revenue, and impact—what you build is used daily by teams closing millions in sales across India and the Middle East.

How to Apply
Send your resume to careers@darwix.ai
Subject Line: Application – Principal Software Engineer – AI – [Your Name]
(Optional): Include a brief note describing one AI system you've built for production—what problem it solved, what stack it used, and what challenges you overcame.

If you're ready to lead the AI backbone of enterprise sales, build world-class systems, and drive real-time intelligence at scale—Darwix AI is where you belong.
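
As a generic illustration of the prompt-based call-scoring workflows this listing refers to, the sketch below sends a transcript excerpt to an LLM and asks for a structured score. The model name, rubric, and JSON shape are placeholder assumptions; they do not describe Darwix AI's actual prompts or models.

import json

from openai import OpenAI  # pip install openai; reads OPENAI_API_KEY from the environment

client = OpenAI()

RUBRIC = (
    "Score the sales call excerpt from 1 to 5 on: greeting, product_pitch, objection_handling. "
    'Reply as JSON, e.g. {"greeting": 4, "product_pitch": 3, "objection_handling": 2, "notes": "..."}'
)


def score_transcript(chunk: str) -> dict:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        response_format={"type": "json_object"},  # request machine-parseable output
        messages=[
            {"role": "system", "content": RUBRIC},
            {"role": "user", "content": chunk},
        ],
    )
    return json.loads(resp.choices[0].message.content)


print(score_transcript("Agent: Good morning! ... Customer: I am not sure about the price."))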

Posted 4 days ago


15.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Vice President – Engineering Location : Gurgaon (In-Office) Working Days : Monday to Saturday (2nd and 4th Saturdays are working) Working Hours : 10:30 AM – 8:00 PM Experience : 10–15 years in full-stack engineering, with at least 4+ years in a senior leadership role in SaaS/AI platforms Apply at : careers@darwix.ai Subject Line : Application – VP Engineering – [Your Name] About Darwix AI Darwix AI is one of India’s fastest-growing GenAI SaaS platforms, transforming enterprise revenue teams with real-time conversational intelligence and AI-powered sales enablement tools. Our core product suite powers: Multilingual speech-to-text Real-time agent coaching & nudges Automated scoring of sales calls CRM/LOS integrations Analytics dashboards and more Our clients include leading brands in BFSI, real estate, retail, and healthcare , such as IndiaMart, Wakefit, Bank Dofar, Sobha, and GIVA . We operate across India, MENA, and Southeast Asia, and are backed by marquee investors, industry CXOs, and IIT/IIM/BITS alumni. As we scale to become the backbone of revenue intelligence globally, engineering excellence is the cornerstone of our journey. Role Overview We are looking for a visionary and hands-on Vice President – Engineering who will take end-to-end ownership of our technology stack, engineering roadmap, and team. You will be responsible for architecting scalable GenAI-first systems , driving high-velocity product shipping, ensuring enterprise-grade reliability, and building a high-performing engineering culture. This role requires strategic thinking, hands-on execution, deep experience with AI/ML or data-intensive platforms, and the ability to attract and retain top-tier engineering talent. You will work closely with the CEO, Head of AI/ML, Product, and Founder's Office to deliver robust, secure, and intelligent experiences across web and real-time systems. Key Responsibilities 1. Technology Leadership Define and drive the technology vision, architecture, and roadmap across AI, platform, data, and frontend systems Lead system architecture design for speech processing, real-time audio streaming, LLM-based agents, and multilingual interfaces Ensure engineering best practices: microservices design, scalability, uptime SLAs, and test automation 2. Team Building & People Leadership Build, mentor, and scale a high-performance engineering team including backend, frontend, AI, DevOps, QA, and integrations Instill a culture of ownership, fast iteration, performance reviews, and deep collaboration across teams Define career paths, recruit top-tier talent, and drive technical onboarding and coaching 3. Product Execution & Delivery Work with Product, Founders, and Sales Engineering to translate business needs into scalable technical implementations Own timelines, sprint planning, delivery schedules, and development velocity metrics Ensure security, latency, fault tolerance, and compliance across all releases 4. Platform & Infra Ownership Oversee cloud infrastructure on AWS (or GCP), containerization, CI/CD pipelines, observability, and cost optimization Own DevOps, uptime, disaster recovery, and horizontal scaling for high concurrency needs Manage data engineering and database architecture across PostgreSQL, MongoDB, S3, Redis, and vector DBs 5. 
AI/ML & GenAI Integration Work closely with Head of AI/ML on integrating Whisper, LangChain, custom RAG pipelines, and multilingual inference stacks into production systems Translate AI/ML research into performant APIs, stream pipelines, and real-time inference microservices Ensure optimized GPU/CPU utilization, caching, and latency control on real-time AI modules 6. Client-Facing & Sales Support Support enterprise pre-sales and post-sales engineering for large deals (in BFSI, real estate, etc.) Handle security reviews, solution architecture walkthroughs, and platform deep dives with client CTO teams Create system documentation, architecture diagrams, and technical solution notes Qualifications & Skills Must-Have 10–15 years of total experience in software engineering, with at least 4–5 years in senior leadership Proven experience scaling a B2B SaaS platform across geographies Deep backend expertise in Python, Node.js, PostgreSQL, MongoDB , microservices Strong understanding of cloud-native development (AWS/GCP), DevOps, CI/CD , and real-time systems Experience managing large-scale, concurrent user bases and low-latency applications Strong understanding of security protocols, enterprise authentication (SAML/OAuth), and compliance Good to Have Exposure to speech-to-text , audio processing, NLP, or GenAI-based inference systems Experience working with AI frameworks (Torch, HuggingFace, LangChain) and GPU inference optimization Familiarity with Flutter , Angular/React , or mobile deployment is a plus Hands-on experience with vector databases, Redis, WebSockets , and scalable caching mechanisms Who You Are You’ve built high-growth engineering teams from scratch or scaled them through 10x growth phases You’ve architected and shipped world-class SaaS systems, preferably with real-time or AI components You balance technical depth with strategic business alignment You lead by influence, feedback, and deep respect—not just authority You’re obsessed with metrics, uptime, shipping velocity, and end-user outcomes You’re energized by fast-paced environments, ambiguous challenges, and clear impact What Success Looks Like in 12 Months A rock-solid engineering team shipping weekly with full CI/CD and 99.99% uptime Near real-time AI inference systems powering multilingual speech and coaching Engineering velocity has doubled, without sacrificing stability Architecture is mature, modular, secure, and scalable across geographies You’ve built strong tech documentation and institutional knowledge systems Product features are built fast, secure, and adopted well by enterprise clients Engineering culture is resilient, hungry, and ready for global scale How to Apply Send your resume to careers@darwix.ai Subject: Application – VP Engineering – [Your Name] (Optional): Include a brief note outlining a high-impact engineering initiative you’ve led and the measurable impact it created. This is not just an engineering leadership role. It’s a rare opportunity to build a generational AI company from the ground up —as the tech backbone for sales teams globally. If you’ve built systems that scale, teams that thrive, and products that change behavior— we’d love to speak with you.
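Since the uptime and latency commitments described above ultimately rest on measurement, the sketch below shows one common way to instrument a FastAPI service with a Prometheus latency histogram, the raw signal behind SLO dashboards and alerts. It is a hedged, generic example; the metric name, buckets, and ports are assumptions rather than anything specific to Darwix AI.

```python
import time

import uvicorn
from fastapi import FastAPI, Request
from prometheus_client import Histogram, start_http_server

# Request latency in seconds, labelled by route and status code.
REQUEST_LATENCY = Histogram(
    "http_request_duration_seconds",
    "HTTP request latency",
    ["path", "status"],
    buckets=(0.05, 0.1, 0.25, 0.5, 1.0, 2.5),  # example buckets tuned to a sub-second SLO
)

app = FastAPI()


@app.middleware("http")
async def record_latency(request: Request, call_next):
    start = time.perf_counter()
    response = await call_next(request)
    REQUEST_LATENCY.labels(
        path=request.url.path, status=str(response.status_code)
    ).observe(time.perf_counter() - start)
    return response


@app.get("/health")
async def health():
    return {"status": "ok"}


if __name__ == "__main__":
    start_http_server(9100)  # exposes /metrics on a side port for Prometheus to scrape
    uvicorn.run(app, host="0.0.0.0", port=8000)
```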

Posted 4 days ago


6.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

HCLTech is looking for a highly talented and self-motivated Principal Software Engineer to join us in advancing the technological world through innovation and creativity. Job Title: Principal Software Engineer Role/Responsibilities Gen AI capability development - Design, fine-tune, and optimize LLMs, retrieval-augmented generation (RAG), and reinforcement learning models for IT automation. Experiment with cutting-edge AI techniques, including multi-agent architectures, prompt tuning, and continual learning. 6+ years of experience in a similar role, with a strong focus on developing and deploying machine learning models in production environments. Improve model accuracy, latency, and efficiency, ensuring optimal performance for deployed ML applications. Design and implement CI/CD pipelines for deploying machine learning models to production environments. Automate the processes involved in training, testing, deploying, and monitoring models to ensure smooth and continuous operations. Develop tools and frameworks for monitoring model performance and stability in production, ensuring models remain accurate and effective over time. Work closely with data scientists and data engineers to integrate machine learning models with existing data infrastructure. Manage cloud-based machine learning environments, ensuring scalability, security, and cost-efficiency. Optimize models for production, focusing on performance, scalability, and resource utilization. Create and maintain comprehensive documentation for processes, pipelines, and system architecture. Proficiency in Python and demonstrable expertise in at least one machine learning framework (e.g., TensorFlow, PyTorch, Scikit-learn). Experience with LLMs and fine-tuning models. Experience with CI/CD tools (e.g., Jenkins, GitLab CI, GitHub Actions) and containerization (e.g., Docker, Kubernetes). Hands-on knowledge of cloud platforms (e.g., AWS, GCP, Azure, Databricks) and cloud-native tools for MLOps. Experience with monitoring and logging tools (e.g., Prometheus, Grafana, ELK Stack). Experience with infrastructure-as-code tools (e.g., Terraform, Ansible). Familiarity with data engineering workflows and ETL processes. Knowledge of model explainability and fairness considerations in production. Working knowledge of Databricks is a huge plus. How You’ll Grow At HCLTech, we offer continuous opportunities for you to find your spark and grow with us. We want you to be happy and satisfied with your role and to really learn what type of work sparks your brilliance the best. Throughout your time with us, we offer transparent communication with senior-level employees, learning and career development programs at every level, and opportunities to experiment in different roles or even pivot industries. We believe that you should be in control of your career with unlimited opportunities to find the role that fits you best.
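One practical reading of the "automate training, testing, deploying, and monitoring" responsibility is a promotion gate in the CI/CD pipeline: a script that evaluates a candidate model and fails the build if accuracy or latency regress. The sketch below is a generic illustration using scikit-learn; the dataset, thresholds, and artifact name are placeholders, not anything HCLTech-specific.

```python
import json
import sys
import time

import joblib
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

ACCURACY_FLOOR = 0.90        # example threshold
LATENCY_CEILING_MS = 10.0    # example per-prediction budget


def main() -> int:
    # Stand-in data and model; a real gate would load the candidate model and a held-out set.
    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_train, y_train)

    # Accuracy gate.
    accuracy = accuracy_score(y_test, model.predict(X_test))

    # Crude latency gate: average wall-clock time per single-row prediction.
    start = time.perf_counter()
    for row in X_test:
        model.predict([row])
    latency_ms = (time.perf_counter() - start) / len(X_test) * 1000

    print(json.dumps({"accuracy": accuracy, "latency_ms": latency_ms}, indent=2))

    if accuracy < ACCURACY_FLOOR or latency_ms > LATENCY_CEILING_MS:
        print("Candidate model failed the promotion gate; blocking deployment.")
        return 1

    joblib.dump(model, "candidate_model.joblib")  # artifact picked up by the deploy stage
    return 0


if __name__ == "__main__":
    sys.exit(main())
```

Wired into Jenkins, GitLab CI, or GitHub Actions, a non-zero exit code from a script like this stops the deploy stage automatically.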

Posted 4 days ago


12.0 years

0 Lacs

Gurugram, Haryana, India

Remote

🧠 Job Title: Engineering Manager Company: Darwix AI Location: Gurgaon (On-site) Type: Full-Time Experience Required: 7–12 Years Compensation: Competitive salary + ESOPs + Performance-based bonuses 🌐 About Darwix AI Darwix AI is one of India’s fastest-growing AI-first startups, building next-gen conversational intelligence and real-time agent assist tools for sales teams globally. We’re transforming how enterprise sales happens across industries like BFSI, real estate, retail, and telecom with a GenAI-powered platform that combines multilingual transcription, NLP, real-time nudges, knowledge base integration, and performance analytics—all in one. Our clients include some of the biggest names in India, MENA, and SEA. We’re backed by marquee venture capitalists, 30+ angel investors, and operators from top AI, SaaS, and B2B companies. Our founding team comes from IITs, IIMs, BITS Pilani, and global enterprise AI firms. Now, we’re looking for a high-caliber Engineering Manager to help lead the next phase of our engineering evolution. If you’ve ever wanted to build and scale real-world AI systems for global use cases—this is your shot. 🎯 Role Overview As Engineering Manager at Darwix AI, you will be responsible for leading and managing a high-performing team of backend, frontend, and DevOps engineers. You will directly oversee the design, development, testing, and deployment of new features and system enhancements across Darwix’s AI-powered product suite. This is a hands-on technical leadership role , requiring the ability to code when needed, conduct architecture reviews, resolve blockers, and manage the overall engineering execution. You’ll work closely with product managers, data scientists, QA teams, and the founders to deliver on roadmap priorities with speed and precision. You’ll also be responsible for building team culture, mentoring developers, improving engineering processes, and helping the organization scale its tech platform and engineering capacity. 🔧 Key Responsibilities1. Team Leadership & Delivery Lead a team of 6–12 software engineers (across Python, PHP, frontend, and DevOps). Own sprint planning, execution, review, and release cycles. Ensure timely and high-quality delivery of key product features and platform improvements. Solve execution bottlenecks and ensure clarity across JIRA boards, product documentation, and sprint reviews. 2. Architecture & Technical Oversight Review and refine high-level and low-level designs proposed by the team. Provide guidance on scalable architectures, microservices design, performance tuning, and database optimization. Drive migration of legacy PHP code into scalable Python-based microservices. Maintain technical excellence across deployments, containerization, CI/CD, and codebase quality. 3. Hiring, Coaching & Career Development Own the hiring and onboarding process for engineers in your pod. Coach team members through 1:1s, OKRs, performance cycles, and continuous feedback. Foster a culture of ownership, transparency, and high-velocity delivery. 4. Process Design & Automation Drive adoption of agile development practices—daily stand-ups, retrospectives, sprint planning, documentation. Ensure production-grade observability, incident tracking, root cause analysis, and rollback strategies. Introduce quality metrics like test coverage, code review velocity, time-to-deploy, bug frequency, etc. 5. Cross-functional Collaboration Work closely with the product team to translate high-level product requirements into granular engineering plans. 
Liaise with QA, AI/ML, Data, and Infra teams to coordinate implementation across the board. Collaborate with customer success and client engineering for debugging and field escalations. 🔍 Technical Skills & Stack🔹 Primary Languages & Frameworks: Python (FastAPI, Flask, Django) PHP (legacy services; transitioning to Python) TypeScript, JavaScript, HTML5, CSS3 Mustache templates (preferred), React/Next.js (optional) 🔹 Databases & Storage: MySQL (primary), PostgreSQL MongoDB, Redis Vector DBs: Pinecone, FAISS, Weaviate (RAG pipelines) 🔹 AI/ML Integration: OpenAI APIs, Whisper, Wav2Vec, Deepgram Langchain, HuggingFace, LlamaIndex, LangGraph 🔹 DevOps & Infra: AWS EC2, S3, Lambda, CloudWatch Docker, GitHub Actions, Nginx Git (GitHub/GitLab), Jenkins (optional) 🔹 Monitoring & Testing: Prometheus, Grafana, Sentry PyTest, Selenium, Postman ✅ Candidate Profile👨‍💻 Experience: 7–12 years of total engineering experience in high-growth product companies or startups. At least 2 years of experience managing teams as a tech lead or engineering manager. Experience working on real-time data systems, microservices architecture, and SaaS platforms. 🎓 Education: Bachelor’s or Master’s degree in Computer Science or related field. Preferred background from Tier 1 institutions (IITs, BITS, NITs, IIITs). 💼 Traits We Love: You lead with clarity, ownership, and high attention to detail. You believe in building systems—not just shipping features. You are pragmatic and prioritize team delivery velocity over theoretical perfection. You obsess over latency, clean interfaces, and secure deployments. You want to build a high-performing tech org that scales globally. 🌟 What You’ll Get Leadership role in one of India’s top GenAI startups Competitive fixed compensation with performance bonuses Significant ESOPs tied to company milestones Transparent performance evaluation and promotion framework A high-speed environment where builders thrive Access to investor and client demos, roadshows, GTM huddles, and more Annual learning allowance and access to internal AI/ML bootcamps Founding-team-level visibility in engineering decisions and product innovation 🛠️ Projects You’ll Work On Real-time speech-to-text engine in 11 Indian languages AI-powered live nudges and agent assistance in B2B sales Conversation summarization and analytics for 100,000+ minutes/month Automated call scoring and custom AI model integration Multimodal input processing: audio, text, CRM, chat Custom knowledge graph integrations across BFSI, real estate, retail 📢 Why This Role Matters This is not just an Engineering Manager role. At Darwix AI, every engineering decision feeds directly into how real sales teams close deals. You’ll see your work powering real-time customer calls, nudging field reps in remote towns, helping CXOs make hiring decisions, and making a measurable impact on enterprise revenue. You’ll help shape the core technology platform of a company that’s redefining how humans and machines interact in sales. 📩 How to Apply Email your resume, GitHub/portfolio (if any), and a few lines on why this role excites you to: 📧 people@darwix.ai Subject: Application – Engineering Manager – [Your Name] If you’re a technical leader who thrives on velocity, takes pride in mentoring developers, and wants to ship mission-critical AI systems that power revenue growth across industries, this is your stage . Join Darwix AI. Let’s build something that lasts.
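The legacy-PHP-to-Python migration mentioned in the responsibilities above is often handled with a strangler-fig approach: new Python microservices take over routes one at a time while a thin gateway forwards everything else to the old service. The sketch below illustrates that idea with FastAPI and httpx; the route, response payload, and legacy URL are hypothetical.

```python
import httpx
from fastapi import FastAPI, Request, Response

LEGACY_PHP_BASE = "http://legacy-php.internal"  # hypothetical URL of the legacy PHP service

app = FastAPI()
client = httpx.AsyncClient(base_url=LEGACY_PHP_BASE, timeout=10.0)


# Routes that have already been migrated are served natively by the Python service.
@app.get("/api/v2/call-scores/{call_id}")
async def call_scores(call_id: str):
    return {"call_id": call_id, "score": 0.0, "source": "python-service"}  # placeholder logic


# Everything else is transparently proxied to the legacy monolith until it is migrated.
@app.api_route("/{path:path}", methods=["GET", "POST", "PUT", "DELETE"])
async def legacy_proxy(path: str, request: Request):
    upstream = await client.request(
        request.method,
        f"/{path}",
        params=dict(request.query_params),
        content=await request.body(),
        headers={k: v for k, v in request.headers.items() if k.lower() != "host"},
    )
    return Response(
        content=upstream.content,
        status_code=upstream.status_code,
        media_type=upstream.headers.get("content-type"),
    )
```

Route by route, the catch-all shrinks until the PHP service can be retired.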

Posted 4 days ago


8.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Job Title: Senior Python Developer – Backend Engineering Company: Darwix AI Location: Gurgaon (On-site) Type: Full-Time Experience Required: 4–8 Years About Darwix AI Darwix AI is building India’s most advanced GenAI-powered platform for enterprise sales teams. We combine speech recognition, LLMs, vector databases, real-time analytics, and multilingual intelligence to power customer conversations across India, the Middle East, and Southeast Asia. We’re solving complex backend problems across speech-to-text pipelines , agent assist systems , AI-based real-time decisioning , and scalable SaaS delivery . Our engineering team sits at the core of our product and works closely with AI research, product, and client delivery to build the future of revenue enablement. Backed by top-tier VCs, AI advisors, and enterprise clients, this is a chance to build something foundational. Role Overview We are hiring a Senior Python Developer to architect, implement, and optimize high-performance backend systems that power our AI platform. You will take ownership of key backend services—from core REST APIs and data pipelines to complex integrations with AI/ML modules. This role is for builders. You’ll work closely with product, AI, and infra teams, write production-grade Python code, lead critical decisions on architecture, and help shape engineering best practices. Key Responsibilities 1. Backend API Development Design and implement scalable, secure RESTful APIs using FastAPI , Flask , or Django REST Framework Architect modular services and microservices to support AI, transcription, real-time analytics, and reporting Optimize API performance with proper indexing, pagination, caching, and load management strategies Integrate with frontend systems, mobile clients, and third-party systems through clean, well-documented endpoints 2. AI Integrations & Inference Orchestration Work closely with AI engineers to integrate GenAI/LLM APIs (OpenAI, Llama, Gemini), transcription models (Whisper, Deepgram), and retrieval-augmented generation (RAG) workflows Build services to manage prompt templates, chaining logic, and LangChain flows Deploy and manage vector database integrations (e.g., FAISS , Pinecone , Weaviate ) for real-time search and recommendation pipelines 3. Database Design & Optimization Model and maintain relational databases using MySQL or PostgreSQL ; experience with MongoDB is a plus Optimize SQL queries, schema design, and indexes to support low-latency data access Set up background jobs for session archiving, transcript cleanup, and audio-data binding 4. System Architecture & Deployment Own backend deployments using GitHub Actions , Docker , and AWS EC2 Ensure high availability of services through containerization, horizontal scaling, and health monitoring Manage staging and production environments, including DB backups, server health checks, and rollback systems 5. Security, Auth & Access Control Implement robust authentication (JWT, OAuth), rate limiting , and input validation Build role-based access controls (RBAC) and audit logging into backend workflows Maintain compliance-ready architecture for enterprise clients (data encryption, PII masking) 6. 
Code Quality, Documentation & Collaboration Write clean, modular, extensible Python code with meaningful comments and documentation Build test coverage (unit, integration) using PyTest, unittest, or Postman/Newman Participate in pull requests, code reviews, sprint planning, and retrospectives with the engineering team Required Skills & Qualifications Technical Expertise 3–8 years of experience in backend development with Python and PHP Strong experience with FastAPI, Flask, or Django (at least one in production-scale systems) Deep understanding of RESTful APIs, microservice architecture, and asynchronous Python patterns Strong hands-on experience with MySQL (joins, views, stored procedures); bonus if familiar with MongoDB, Redis, or Elasticsearch Experience with containerized deployment using Docker and cloud platforms like AWS or GCP Familiarity with Git, GitHub, CI/CD pipelines, and Linux-based server environments Plus Points Experience working on audio processing, speech-to-text (STT) pipelines, or RAG architectures Hands-on experience with vector databases, LangChain, or LangGraph Exposure to real-time systems, WebSockets, and stream processing Basic understanding of frontend integration workflows (e.g., with HTML/CSS/JS interfaces)
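To ground the API-design and authentication responsibilities above, here is a minimal sketch of a paginated, JWT-protected FastAPI endpoint using the PyJWT package. The secret key, token claims, and in-memory "table" are placeholders for illustration only.

```python
import jwt  # PyJWT
from fastapi import Depends, FastAPI, HTTPException, Query
from fastapi.security import HTTPAuthorizationCredentials, HTTPBearer

SECRET_KEY = "change-me"  # placeholder; a real deployment loads this from a secret store
app = FastAPI()
bearer = HTTPBearer()

# Stand-in for a real table of call transcripts.
TRANSCRIPTS = [{"id": i, "summary": f"call {i}"} for i in range(1, 501)]


def current_user(credentials: HTTPAuthorizationCredentials = Depends(bearer)) -> dict:
    try:
        # Verifies signature and expiry; raises on tampered or expired tokens.
        return jwt.decode(credentials.credentials, SECRET_KEY, algorithms=["HS256"])
    except jwt.PyJWTError:
        raise HTTPException(status_code=401, detail="Invalid or expired token")


@app.get("/transcripts")
def list_transcripts(
    user: dict = Depends(current_user),
    limit: int = Query(20, ge=1, le=100),  # page size with sane bounds
    offset: int = Query(0, ge=0),
):
    page = TRANSCRIPTS[offset : offset + limit]
    return {"items": page, "limit": limit, "offset": offset, "total": len(TRANSCRIPTS)}
```

A client would obtain a token from a separate login endpoint (for example, jwt.encode({"sub": "agent-42"}, SECRET_KEY, algorithm="HS256")) and send it in the Authorization: Bearer header.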

Posted 4 days ago


2.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Job description 🚀 Job Title: AI Engineer Company : Darwix AI Location : Gurgaon (On-site) Type : Full-Time Experience : 2-6 Years Level : Senior Level 🌐 About Darwix AI Darwix AI is one of India’s fastest-growing GenAI startups, revolutionizing the future of enterprise sales and customer engagement with real-time conversational intelligence. We are building a GenAI-powered agent-assist and pitch intelligence suite that captures, analyzes, and enhances every customer interaction—across voice, video, and chat—in real time. We serve leading enterprise clients across India, the UAE, and Southeast Asia and are backed by global VCs, top operators from Google, Salesforce, and McKinsey, and CXOs from the industry. This is your opportunity to join a high-caliber founding tech team solving frontier problems in real-time voice AI, multilingual transcription, retrieval-augmented generation (RAG), and fine-tuned LLMs at scale. 🧠 Role Overview As the AI Engineer , you will drive the development, deployment, and optimization of AI systems that power Darwix AI's real-time conversation intelligence platform. This includes voice-to-text transcription, speaker diarization, GenAI summarization, prompt engineering, knowledge retrieval, and real-time nudge delivery. You will lead a team of AI engineers and work closely with product managers, software architects, and data teams to ensure technical excellence, scalable architecture, and rapid iteration cycles. This is a high-ownership, hands-on leadership role where you will code, architect, and lead simultaneously. 🔧 Key Responsibilities 1. AI Architecture & Model Development Architect end-to-end AI pipelines for transcription, real-time inference, LLM integration, and vector-based retrieval. Build, fine-tune, and deploy STT models (Whisper, Wav2Vec2.0) and diarization systems for speaker separation. Implement GenAI pipelines using OpenAI, Gemini, LLaMA, Mistral, and other LLM APIs or open-source models. 2. Real-Time Voice AI System Development Design low-latency pipelines for capturing and processing audio in real-time across multi-lingual environments. Work on WebSocket-based bi-directional audio streaming, chunked inference, and result caching. Develop asynchronous, event-driven architectures for voice processing and decision-making. 3. RAG & Knowledge Graph Pipelines Create retrieval-augmented generation (RAG) systems that pull from structured and unstructured knowledge bases. Build vector DB architectures (e.g., FAISS, Pinecone, Weaviate) and connect to LangChain/LlamaIndex workflows. Own chunking, indexing, and embedding strategies (OpenAI, Cohere, Hugging Face embeddings). 4. Fine-Tuning & Prompt Engineering Fine-tune LLMs and foundational models using RLHF, SFT, PEFT (e.g., LoRA) as needed. Optimize prompts for summarization, categorization, tone analysis, objection handling, etc. Perform few-shot and zero-shot evaluations for quality benchmarking. 5. Pipeline Optimization & MLOps Ensure high availability and robustness of AI pipelines using CI/CD tools, Docker, Kubernetes, and GitHub Actions. Work with data engineering to streamline data ingestion, labeling, augmentation, and evaluation. Build internal tools to benchmark latency, accuracy, and relevance for production-grade AI features. 6. Team Leadership & Cross-Functional Collaboration Lead, mentor, and grow a high-performing AI engineering team. Collaborate with backend, frontend, and product teams to build scalable production systems. 
Participate in architectural and design decisions across AI, backend, and data workflows. 🛠️ Key Technologies & Tools Languages & Frameworks : Python, FastAPI, Flask, LangChain, PyTorch, TensorFlow, HuggingFace Transformers Voice & Audio : Whisper, Wav2Vec2.0, DeepSpeech, pyannote.audio, AssemblyAI, Kaldi, Mozilla TTS Vector DBs & RAG : FAISS, Pinecone, Weaviate, ChromaDB, LlamaIndex, LangGraph LLMs & GenAI APIs : OpenAI GPT-4/3.5, Gemini, Claude, Mistral, Meta LLaMA 2/3 DevOps & Deployment : Docker, GitHub Actions, CI/CD, Redis, Kafka, Kubernetes, AWS (EC2, Lambda, S3) Databases : MongoDB, Postgres, MySQL, Pinecone, TimescaleDB Monitoring & Logging : Prometheus, Grafana, Sentry, Elastic Stack (ELK) 🎯 Requirements & Qualifications 👨‍💻 Experience 2-6 years of experience in building and deploying AI/ML systems, with at least 2+ years in NLP or voice technologies. Proven track record of production deployment of ASR, STT, NLP, or GenAI models. Hands-on experience building systems involving vector databases, real-time pipelines, or LLM integrations. 📚 Educational Background Bachelor's or Master's in Computer Science, Artificial Intelligence, Machine Learning, or a related field. Tier 1 institute preferred (IITs, BITS, IIITs, NITs, or global top 100 universities). ⚙️ Technical Skills Strong coding experience in Python and familiarity with FastAPI/Django. Understanding of distributed architectures, memory management, and latency optimization. Familiarity with transformer-based model architectures, training techniques, and data pipeline design. 💡 Bonus Experience Worked on multilingual speech recognition and translation. Experience deploying AI models on edge devices or browsers. Built or contributed to open-source ML/NLP projects. Published papers or patents in voice, NLP, or deep learning domains. 🚀 What Success Looks Like in 6 Months Lead the deployment of a real-time STT + diarization system for at least 1 enterprise client. Deliver high-accuracy nudge generation pipeline using RAG and summarization models. Build an in-house knowledge indexing + vector DB framework integrated into the product. Mentor 2–3 AI engineers and own execution across multiple modules. Achieve <1 sec latency on real-time voice-to-nudge pipeline from capture to recommendation. 💼 What We Offer Compensation : Competitive fixed salary + equity + performance-based bonuses Impact : Ownership of key AI modules powering thousands of live enterprise conversations Learning : Access to high-compute GPUs, API credits, research tools, and conference sponsorships Culture : High-trust, outcome-first environment that celebrates execution and learning Mentorship : Work directly with founders, ex-Microsoft, IIT-IIM-BITS alums, and top AI engineers Scale : Opportunity to scale an AI product from 10 clients to 100+ globally within 12 months ⚠️ This Role is NOT for Everyone 🚫 If you're looking for a slow, abstract research role—this is NOT for you. 🚫 If you're used to months of ideation before shipping—you won't enjoy our speed. 🚫 If you're not comfortable being hands-on and diving into scrappy builds—you may struggle. ✅ But if you’re a builder , architect , and visionary —who loves solving hard technical problems and delivering real-time AI at scale, we want to talk to you. 
📩 How to Apply Send your CV, GitHub/portfolio, and a brief note on “Why AI at Darwix?” to: 📧 careers@cur8.in Subject Line: Application – AI Engineer – [Your Name] Include links to: Any relevant open-source contributions LLM/STT models you've fine-tuned or deployed RAG pipelines you've worked on 🔍 Final Thought This is not just a job. This is your opportunity to build the world’s most scalable AI sales intelligence platform —from India, for the world.
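Because this role (and several of the roles below) centres on retrieval-augmented generation, a compact sketch of the retrieval half may help: embed knowledge-base chunks, index them in FAISS, and fetch the nearest chunks for a query before prompting an LLM. The embedding model, chunk texts, and top-k value are illustrative assumptions, not the production pipeline.

```python
import faiss
from sentence_transformers import SentenceTransformer

# Illustrative knowledge-base chunks; in practice these come from chunked documents.
CHUNKS = [
    "Refund requests above INR 10,000 need manager approval.",
    "The premium plan includes a dedicated onboarding specialist.",
    "EMI options are available on orders above INR 5,000.",
]

embedder = SentenceTransformer("all-MiniLM-L6-v2")  # example open-source embedding model

# Normalise embeddings so inner-product search behaves like cosine similarity.
chunk_vecs = embedder.encode(CHUNKS, normalize_embeddings=True).astype("float32")
index = faiss.IndexFlatIP(chunk_vecs.shape[1])
index.add(chunk_vecs)


def retrieve(query: str, k: int = 2) -> list[str]:
    query_vec = embedder.encode([query], normalize_embeddings=True).astype("float32")
    _scores, ids = index.search(query_vec, k)
    return [CHUNKS[i] for i in ids[0] if i != -1]


if __name__ == "__main__":
    # The retrieved chunks would be concatenated into the LLM prompt to ground its answer.
    print("\n".join(retrieve("Can the customer pay in installments?")))
```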

Posted 4 days ago


2.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Job description 🚀 Job Title: ML Engineer Company : Darwix AI Location : Gurgaon (On-site) Type : Full-Time Experience : 2-6 Years Level : Senior Level 🌐 About Darwix AI Darwix AI is one of India’s fastest-growing GenAI startups, revolutionizing the future of enterprise sales and customer engagement with real-time conversational intelligence. We are building a GenAI-powered agent-assist and pitch intelligence suite that captures, analyzes, and enhances every customer interaction—across voice, video, and chat—in real time. We serve leading enterprise clients across India, the UAE, and Southeast Asia and are backed by global VCs, top operators from Google, Salesforce, and McKinsey, and CXOs from the industry. This is your opportunity to join a high-caliber founding tech team solving frontier problems in real-time voice AI, multilingual transcription, retrieval-augmented generation (RAG), and fine-tuned LLMs at scale. 🧠 Role Overview As the ML Engineer , you will drive the development, deployment, and optimization of AI systems that power Darwix AI's real-time conversation intelligence platform. This includes voice-to-text transcription, speaker diarization, GenAI summarization, prompt engineering, knowledge retrieval, and real-time nudge delivery. You will lead a team of AI engineers and work closely with product managers, software architects, and data teams to ensure technical excellence, scalable architecture, and rapid iteration cycles. This is a high-ownership, hands-on leadership role where you will code, architect, and lead simultaneously. 🔧 Key Responsibilities 1. AI Architecture & Model Development Architect end-to-end AI pipelines for transcription, real-time inference, LLM integration, and vector-based retrieval. Build, fine-tune, and deploy STT models (Whisper, Wav2Vec2.0) and diarization systems for speaker separation. Implement GenAI pipelines using OpenAI, Gemini, LLaMA, Mistral, and other LLM APIs or open-source models. 2. Real-Time Voice AI System Development Design low-latency pipelines for capturing and processing audio in real-time across multi-lingual environments. Work on WebSocket-based bi-directional audio streaming, chunked inference, and result caching. Develop asynchronous, event-driven architectures for voice processing and decision-making. 3. RAG & Knowledge Graph Pipelines Create retrieval-augmented generation (RAG) systems that pull from structured and unstructured knowledge bases. Build vector DB architectures (e.g., FAISS, Pinecone, Weaviate) and connect to LangChain/LlamaIndex workflows. Own chunking, indexing, and embedding strategies (OpenAI, Cohere, Hugging Face embeddings). 4. Fine-Tuning & Prompt Engineering Fine-tune LLMs and foundational models using RLHF, SFT, PEFT (e.g., LoRA) as needed. Optimize prompts for summarization, categorization, tone analysis, objection handling, etc. Perform few-shot and zero-shot evaluations for quality benchmarking. 5. Pipeline Optimization & MLOps Ensure high availability and robustness of AI pipelines using CI/CD tools, Docker, Kubernetes, and GitHub Actions. Work with data engineering to streamline data ingestion, labeling, augmentation, and evaluation. Build internal tools to benchmark latency, accuracy, and relevance for production-grade AI features. 6. Team Leadership & Cross-Functional Collaboration Lead, mentor, and grow a high-performing AI engineering team. Collaborate with backend, frontend, and product teams to build scalable production systems. 
Participate in architectural and design decisions across AI, backend, and data workflows. 🛠️ Key Technologies & Tools Languages & Frameworks : Python, FastAPI, Flask, LangChain, PyTorch, TensorFlow, HuggingFace Transformers Voice & Audio : Whisper, Wav2Vec2.0, DeepSpeech, pyannote.audio, AssemblyAI, Kaldi, Mozilla TTS Vector DBs & RAG : FAISS, Pinecone, Weaviate, ChromaDB, LlamaIndex, LangGraph LLMs & GenAI APIs : OpenAI GPT-4/3.5, Gemini, Claude, Mistral, Meta LLaMA 2/3 DevOps & Deployment : Docker, GitHub Actions, CI/CD, Redis, Kafka, Kubernetes, AWS (EC2, Lambda, S3) Databases : MongoDB, Postgres, MySQL, Pinecone, TimescaleDB Monitoring & Logging : Prometheus, Grafana, Sentry, Elastic Stack (ELK) 🎯 Requirements & Qualifications 👨‍💻 Experience 2-6 years of experience in building and deploying AI/ML systems, with at least 2+ years in NLP or voice technologies. Proven track record of production deployment of ASR, STT, NLP, or GenAI models. Hands-on experience building systems involving vector databases, real-time pipelines, or LLM integrations. 📚 Educational Background Bachelor's or Master's in Computer Science, Artificial Intelligence, Machine Learning, or a related field. Tier 1 institute preferred (IITs, BITS, IIITs, NITs, or global top 100 universities). ⚙️ Technical Skills Strong coding experience in Python and familiarity with FastAPI/Django. Understanding of distributed architectures, memory management, and latency optimization. Familiarity with transformer-based model architectures, training techniques, and data pipeline design. 💡 Bonus Experience Worked on multilingual speech recognition and translation. Experience deploying AI models on edge devices or browsers. Built or contributed to open-source ML/NLP projects. Published papers or patents in voice, NLP, or deep learning domains. 🚀 What Success Looks Like in 6 Months Lead the deployment of a real-time STT + diarization system for at least 1 enterprise client. Deliver high-accuracy nudge generation pipeline using RAG and summarization models. Build an in-house knowledge indexing + vector DB framework integrated into the product. Mentor 2–3 AI engineers and own execution across multiple modules. Achieve <1 sec latency on real-time voice-to-nudge pipeline from capture to recommendation. 💼 What We Offer Compensation : Competitive fixed salary + equity + performance-based bonuses Impact : Ownership of key AI modules powering thousands of live enterprise conversations Learning : Access to high-compute GPUs, API credits, research tools, and conference sponsorships Culture : High-trust, outcome-first environment that celebrates execution and learning Mentorship : Work directly with founders, ex-Microsoft, IIT-IIM-BITS alums, and top AI engineers Scale : Opportunity to scale an AI product from 10 clients to 100+ globally within 12 months ⚠️ This Role is NOT for Everyone 🚫 If you're looking for a slow, abstract research role—this is NOT for you. 🚫 If you're used to months of ideation before shipping—you won't enjoy our speed. 🚫 If you're not comfortable being hands-on and diving into scrappy builds—you may struggle. ✅ But if you’re a builder , architect , and visionary —who loves solving hard technical problems and delivering real-time AI at scale, we want to talk to you. 
📩 How to Apply Send your CV, GitHub/portfolio, and a brief note on “Why AI at Darwix?” to: 📧 careers@cur8.in / vishnu.sethi@cur8.in Subject Line: Application – ML Engineer – [Your Name] Include links to: Any relevant open-source contributions LLM/STT models you've fine-tuned or deployed RAG pipelines you've worked on 🔍 Final Thought This is not just a job. This is your opportunity to build the world’s most scalable AI sales intelligence platform —from India, for the world.
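The WebSocket-based bi-directional audio streaming mentioned above can be sketched in a few lines of FastAPI: the client streams audio chunks, and the server buffers them and periodically returns interim results. The transcribe_chunk helper is a hypothetical stand-in for a real streaming STT model, and the 64 KB flush threshold and message format are assumptions for illustration.

```python
from fastapi import FastAPI, WebSocket, WebSocketDisconnect

app = FastAPI()


def transcribe_chunk(audio: bytes) -> str:
    # Hypothetical stand-in: a real implementation would call a streaming STT model here.
    return f"[partial transcript for {len(audio)} bytes of audio]"


@app.websocket("/ws/transcribe")
async def transcribe_stream(websocket: WebSocket):
    await websocket.accept()
    buffer = b""
    try:
        while True:
            # The client streams raw audio; roughly every 64 KB we emit an interim result.
            buffer += await websocket.receive_bytes()
            if len(buffer) >= 64_000:
                await websocket.send_json({"type": "partial", "text": transcribe_chunk(buffer)})
                buffer = b""
    except WebSocketDisconnect:
        # The call has ended; a real pipeline would flush, persist, and score the final transcript here.
        if buffer:
            transcribe_chunk(buffer)
```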

Posted 4 days ago


12.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Job Description: Director of Engineering Company: Darwix AI Location: Gurgaon (On-site) Type: Full-Time Experience: 8–12 Years About Darwix AI Darwix AI is a next-generation Gen-AI-powered sales enablement platform that empowers enterprise sales teams with intelligent nudges, real-time insights, and AI-driven conversation analytics. By combining AI, automation, and contextual intelligence, we are redefining how sales teams engage, close, and scale. Backed by leading VCs and industry leaders, Darwix AI is one of the fastest-growing AI startups in India, with an expanding presence across MENA, India, and the US. Role Overview We are seeking a highly experienced and technically proficient Director of Engineering to lead and scale our engineering team. In this role, you will be responsible for managing backend development, DevOps, and infrastructure initiatives. The ideal candidate will be a hands-on technical leader with a strong architectural foundation and proven experience scaling engineering teams and systems in high-growth environments. You will work closely with the Vice President of Engineering and Founders to drive engineering excellence, ensure timely delivery, and lead technical decision-making aligned with business goals. Key Responsibilities Engineering Leadership Lead engineering execution across backend services (Python, PHP), infrastructure, and DevOps. Define technical strategy and ensure alignment with product and organizational goals. Own the delivery roadmap and ensure timely and high-quality outputs. System Architecture and Scalability Oversee backend architecture to ensure reliability, scalability, and performance. Guide the implementation of microservices, RESTful APIs, and scalable cloud-based infrastructure. Design and review systems with considerations for high-volume data ingestion and low-latency processing. Team Management and Development Build, lead, and mentor a high-performing engineering team. Implement best practices for code quality, testing, deployment, and team collaboration. Foster a strong engineering culture focused on learning, execution, and accountability. Cross-Functional Collaboration Collaborate with product, AI/ML, sales, and design teams to align engineering deliverables with business priorities. Translate product requirements into structured engineering plans and milestones. Cloud Infrastructure and DevOps Work closely with the DevOps function to manage AWS cloud infrastructure, CI/CD pipelines, and security protocols. Ensure system uptime, data integrity, and disaster recovery preparedness. AI Infrastructure Support Support integration with LLMs and AI/ML models. Lead initiatives involving vector databases (e.g., FAISS, Pinecone, Weaviate) and retrieval-augmented generation pipelines. Qualifications Education Bachelor's or Master’s degree in Computer Science, Engineering, or a related technical discipline. Candidates from premier institutions (IITs, BITS, NITs) will be preferred. Experience 8–12 years of progressive engineering experience with at least 3 years in a leadership role. Proven experience in building scalable backend systems and managing high-performing engineering teams. Strong exposure to Python and PHP/NodeJS in production-grade systems. Experience in designing and managing infrastructure on AWS or equivalent cloud platforms. Familiarity with containerization (Docker, Kubernetes) and CI/CD systems. Desirable Skills Experience working with vector databases, embeddings, and AI/ML deployment in production. 
Deep understanding of microservices architecture, event-driven systems, and RESTful API design. Strong communication and stakeholder management skills. What We Offer Leadership role in a fast-scaling, venture-backed AI technology firm. Opportunity to work on large-scale AI applications in real-world enterprise environments. Competitive compensation including fixed salary, ESOPs, and performance-based bonuses. A high-performance culture that encourages ownership, innovation, and continuous learning. Direct collaboration with senior leadership on strategic initiatives. Application Note: This role demands both technical expertise and strategic foresight. We are looking for leaders who are comfortable building systems hands-on, mentoring teams, and ensuring consistent execution in a high-growth, high-ownership environment.
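As a concrete counterpart to the "high-volume data ingestion and low-latency processing" responsibility, here is a hedged sketch of a batched Kafka consumer using the kafka-python client; the topic name, consumer group, and batch handling are illustrative assumptions rather than a description of Darwix AI's pipeline.

```python
import json

from kafka import KafkaConsumer  # kafka-python client

consumer = KafkaConsumer(
    "call-events",                      # hypothetical topic of call/transcript events
    bootstrap_servers=["localhost:9092"],
    group_id="ingestion-workers",
    enable_auto_commit=False,           # commit only after a batch is safely processed
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
)


def process_batch(events: list) -> None:
    # Placeholder: real code would validate, enrich, and write to the warehouse or vector store.
    print(f"processed {len(events)} events")


while True:
    # Pull up to 500 records or wait at most one second, whichever comes first.
    records = consumer.poll(timeout_ms=1000, max_records=500)
    batch = [msg.value for msgs in records.values() for msg in msgs]
    if batch:
        process_batch(batch)
        consumer.commit()  # at-least-once delivery: offsets advance only after processing
```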

Posted 4 days ago


3.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Job Description Who We Are At Goldman Sachs, we connect people, capital and ideas to help solve problems for our clients. We are a leading global financial services firm providing investment banking, securities and investment management services to a substantial and diversified client base that includes corporations, financial institutions, governments and individuals. How Will You Fulfill Your Potential Work with a global team of highly motivated platform engineers and software developers building integrated architectures for secure, scalable infrastructure services serving a diverse set of use cases. Partner with colleagues from across technology and risk to ensure an outstanding platform is delivered. Help to provide frictionless integration with the firm’s runtime, deployment and SDLC technologies. Collaborate on feature design and problem solving. Help to ensure reliability, define, measure, and meet service level objectives. Quality coding & integration, testing, release, and demise of software products supporting AWM functions. Engage in quality assurance and production troubleshooting. Help to communicate and promote best practices for software engineering across the Asset Management tech stack. Basic Qualifications A strong grounding in software engineering concepts and implementation of architecture design patterns. A good understanding of multiple aspects of software development in microservices architecture, full stack development experience, identity/access management and technology risk. Sound SDLC practices and tooling experience: version control, CI/CD and configuration management tools. Ability to communicate technical concepts effectively, both in writing and orally, as well as interpersonal skills required to collaborate effectively with colleagues across diverse technology teams. Experience meeting demands for high availability and scalable system requirements. Ability to reason about performance, security, and process interactions in complex distributed systems. Ability to understand and effectively debug both new and existing software. Experience with metrics and monitoring tooling, including the ability to use metrics to rationally derive system health and availability information. Experience in auditing and supporting software based on sound SRE principles. Preferred Qualifications 3+ years of experience using and/or supporting Java-based frameworks and SQL/NoSQL data stores. Experience with deploying software to containerized environments (Kubernetes/Docker). Scripting skills in Python, shell, or Bash. Experience with Terraform or similar infrastructure-as-code platforms. Experience building services using public cloud providers such as AWS, Azure or GCP. Goldman Sachs Engineering Culture At Goldman Sachs, our Engineers don’t just make things – we make things possible. Change the world by connecting people and capital with ideas. Solve the most challenging and pressing engineering problems for our clients. Join our engineering teams that build massively scalable software and systems, architect low latency infrastructure solutions, proactively guard against cyber threats, and leverage machine learning alongside financial engineering to continuously turn data into action. Create new businesses, transform finance, and explore a world of opportunity at the speed of markets. Engineering is at the critical center of our business, and our dynamic environment requires innovative strategic thinking and immediate, real solutions.
Want to push the limit of digital possibilities? Start here! © The Goldman Sachs Group, Inc., 2025. All rights reserved. Goldman Sachs is an equal employment/affirmative action employer Female/Minority/Disability/Veteran/Sexual Orientation/Gender Identity.

Posted 4 days ago


2.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Job Description What We Do At Goldman Sachs, our Engineers don’t just make things – we make things possible. Change the world by connecting people and capital with ideas. Solve the most challenging and pressing engineering problems for our clients. Join our engineering teams that build massively scalable software and systems, architect low latency infrastructure solutions, proactively guard against cyber threats, and leverage machine learning alongside financial engineering to continuously turn data into action. Create new businesses, transform finance, and explore a world of opportunity at the speed of markets. Engineering, which is comprised of our Technology Division and global strategists groups, is at the critical center of our business, and our dynamic environment requires innovative strategic thinking and immediate, real solutions. Want to push the limit of digital possibilities? Start here. Who We Look For Goldman Sachs Engineers are innovators and problem-solvers, building solutions in risk management, big data, mobile and more. We look for creative collaborators who evolve, adapt to change and thrive in a fast-paced global environment. The Project. We have embarked on a highly ambitious, visible and impactful project which wholly reimagines the functional architecture needed to support the firm’s trading business and to empower the next two decades of growth by developing an extensible and scalable platform which also delivers operational efficiencies. This multi-year effort is based around an engineering-principles-first approach and dovetails with the firm’s core technology strategy. The Role. We are looking for engineers to work on both the infrastructure side of the project as well as on developing the core business model and the services around it. On the Infrastructure side of the project, the work consists of establishing the capabilities of the platform, as well as developing the development environment which will form the basis for other engineers’ experience with the platform. On the Core Business side of the project, the work consists of establishing an extensible model that can easily and seamlessly represent all of the firm’s business; of developing core services for that business model and collaborating with engineers in the business teams to develop their services on top of the core services. Your Impact As you build an innate understanding of the firm’s businesses, you will be responsible for developing core models and services, and deep collaboration with engineers both in the team and in other teams across the firm. By taking a principled approach to that development, you will deliver a constellation of services that can be both maintained as well as extended at minimal cost. You will fulfil your potential by Building software services and libraries to provide business and/or platform functionality with security and maintainability built-in at the core Partnering with other engineers and firm experts to understand and develop models for representing the firm’s business Innovating creative solutions to complex business problems, and… Influencing broadly across teams to challenge entrenched practices Managing the full lifecycle of software components from requirements through design, testing, development, release and demise. Engaging in production troubleshooting, mitigation and remediation Basic Qualifications Java proficiency. 
2+ years of experience Experience with distributed systems Sound SDLC practices and tooling experience: version control, CI/CD The ability to understand and effectively debug both new and old software The ability to communicate technical concepts effectively, both in writing and orally Strong teamworking and collaboration skills required to be effective with diverse and geographically distributed teams Preferred Qualifications Cloud technologies, specifically GCP Containerization, specifically Kubernetes Experience with open source Experience monitoring, measuring, auditing and supporting software About Goldman Sachs At Goldman Sachs, we commit our people, capital and ideas to help our clients, shareholders and the communities we serve to grow. Founded in 1869, we are a leading global investment banking, securities and investment management firm. Headquartered in New York, we maintain offices around the world. We believe who you are makes you better at what you do. We're committed to fostering and advancing diversity and inclusion in our own workplace and beyond by ensuring every individual within our firm has a number of opportunities to grow professionally and personally, from our training and development opportunities and firmwide networks to benefits, wellness and personal finance offerings and mindfulness programs. Learn more about our culture, benefits, and people at GS.com/careers. We’re committed to finding reasonable accommodations for candidates with special needs or disabilities during our recruiting process. Learn more: https://www.goldmansachs.com/careers/footer/disability-statement.html © The Goldman Sachs Group, Inc., 2023. All rights reserved. Goldman Sachs is an equal employment/affirmative action employer Female/Minority/Disability/Veteran/Sexual Orientation/Gender Identity

Posted 4 days ago


0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Job Description What We Do At Goldman Sachs, our Engineers don’t just make things – we make things possible. Change the world by connecting people and capital with ideas. Solve the most challenging and pressing engineering problems for our clients. Join our engineering teams that build massively scalable software and systems, architect low latency infrastructure solutions, proactively guard against cyber threats, and leverage machine learning alongside financial engineering to continuously turn data into action. Create new businesses, transform finance, and explore a world of opportunity at the speed of markets. Engineering, which is comprised of our Technology Division and global strategists groups, is at the critical center of our business, and our dynamic environment requires innovative strategic thinking and immediate, real solutions. Want to push the limit of digital possibilities? Start here. Who We Look For Goldman Sachs Engineers are innovators and problem-solvers, building solutions in risk management, big data, mobile and more. We look for creative collaborators who evolve, adapt to change and thrive in a fast-paced global environment. About Goldman Sachs At Goldman Sachs, we commit our people, capital and ideas to help our clients, shareholders and the communities we serve to grow. Founded in 1869, we are a leading global investment banking, securities and investment management firm. Headquartered in New York, we maintain offices around the world. We believe who you are makes you better at what you do. We're committed to fostering and advancing diversity and inclusion in our own workplace and beyond by ensuring every individual within our firm has a number of opportunities to grow professionally and personally, from our training and development opportunities and firmwide networks to benefits, wellness and personal finance offerings and mindfulness programs. Learn more about our culture, benefits, and people at GS.com/careers. We’re committed to finding reasonable accommodations for candidates with special needs or disabilities during our recruiting process. Learn more: https://www.goldmansachs.com/careers/footer/disability-statement.html © The Goldman Sachs Group, Inc., 2023. All rights reserved. Goldman Sachs is an equal opportunity employer and does not discriminate on the basis of race, color, religion, sex, national origin, age, veterans status, disability, or any other characteristic protected by applicable law.

Posted 4 days ago


10.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Description Job Purpose At Intercontinental Exchange (ICE), we engineer technology, exchanges and clearing houses that connect companies around the world to global capital and derivative markets. With a leading-edge approach to developing technology platforms, we have built market infrastructure in all major trading centers, offering customers the ability to manage risk and make informed decisions globally. By leveraging our core strengths in technology, we continue to identify new ways to serve our customers and transform global markets. We're looking for motivated, results-oriented people to join our team. As a Lead Java Developer, you will be mentoring a team, responsible for contributing to the design, development, maintenance and support of high-volume enterprise applications. This is an excellent opportunity for a technologist to further develop their problem-solving skills and learn hands on from a small and experienced team. The ideal candidate must be results-oriented, self-motivated and can thrive in a fast-paced environment. This role requires frequent interactions with project and product managers, developers, quality assurance and other stakeholders, to ensure delivery of a world class application to our users. Responsibilities Create an inspiring team environment with an open communication culture Motivating the team to achieve organizational goals. Developing and implementing a timeline to achieve targets. Delegating tasks to team members. Identifying training needs of team members to maximize their potential and provide coaching. Empowering team members with skills to improve their confidence, product knowledge, and communication skills. Design and implement software solutions based on standard design and architecture patterns for user requirements. Accurately document the design and implementation steps, review with business analysts, development, and QA teams Collaborate with product, project management, and QA team in requirements analysis, solution design, providing development work estimates and project status. Assist to develop and ensure complete functional and non-functional specifications. Collaborate with other internal teams to translate business requirements into technical implementation for the automation of existing processes and the development of new applications. Understand complex business logic in existing systems and transition it to new technologies and systems. Work with system operations, database administration and systems engineering teams in production support and defining system recovery procedures. Identify root causes and develop solutions for program failures. Plan and execute unit tests to ensure the developed code is free of functional defects. Work closely with Performance Test team to identify performance hotspots and in providing timely resolution during load tests. Work in an agile and continuous integration environment with a command of SDLC tools . Knowledge And Experience Bachelor’s degree in Computer Science or Information technology. 10+ years of experience developing low latency, high-performance transactional software systems and components using standard Enterprise Integration Patterns and design principles. 2+ years of experience in leading team and technical management activities. 
A deep knowledge of: Java 8+ OOD, Design Patterns Distributed messaging, JMS Spring and its frameworks like Spring Boot, Spring MVC, Spring Data Multi-threaded server-side development Strong experience with Oracle PL/SQL and database technologies Experience applying continuous improvement tools and agile development methods to enhance and evolve complex systems driven by business needs. Strong written and verbal communication skills Ability to multitask and work independently on multiple projects. Demonstrable skills in production support and root cause analysis Openness and willingness to participate in development using new frameworks and programming languages. Good to Have Knowledge of React tools including React.js, TypeScript and JavaScript ES6, Webpack, Enzyme, Redux, and Flux. In-depth knowledge of Java, JavaScript, CSS, HTML, and front-end languages. Experience with user interface design. Experience with AWS Amplify, RDS, EventBridge, SNS, SQS, and SES Preferred Experience developing data processing pipelines using distributed compute principles and open-source frameworks. Experience in developing microservices on container-based Kubernetes platforms (OpenShift, Tanzu) Experience developing web UIs using JavaScript-based frameworks like React Exposure to the financial services technologies domain, particularly in futures and options Working knowledge of shell scripts and the Linux CLI

Posted 4 days ago


2.0 years

0 Lacs

Gurugram, Haryana, India

On-site

🚀 Job Title: Lead AI Engineer Company : Darwix AI Location : Gurgaon (On-site) Type : Full-Time Experience : 2-6 Years Level : Senior Level 🌐 About Darwix AI Darwix AI is one of India’s fastest-growing GenAI startups, revolutionizing the future of enterprise sales and customer engagement with real-time conversational intelligence. We are building a GenAI-powered agent-assist and pitch intelligence suite that captures, analyzes, and enhances every customer interaction—across voice, video, and chat—in real time. We serve leading enterprise clients across India, the UAE, and Southeast Asia and are backed by global VCs, top operators from Google, Salesforce, and McKinsey, and CXOs from the industry. This is your opportunity to join a high-caliber founding tech team solving frontier problems in real-time voice AI, multilingual transcription, retrieval-augmented generation (RAG), and fine-tuned LLMs at scale. 🧠 Role Overview As the Lead AI Engineer , you will drive the development, deployment, and optimization of AI systems that power Darwix AI's real-time conversation intelligence platform. This includes voice-to-text transcription, speaker diarization, GenAI summarization, prompt engineering, knowledge retrieval, and real-time nudge delivery. You will lead a team of AI engineers and work closely with product managers, software architects, and data teams to ensure technical excellence, scalable architecture, and rapid iteration cycles. This is a high-ownership, hands-on leadership role where you will code, architect, and lead simultaneously. 🔧 Key Responsibilities 1. AI Architecture & Model Development Architect end-to-end AI pipelines for transcription, real-time inference, LLM integration, and vector-based retrieval. Build, fine-tune, and deploy STT models (Whisper, Wav2Vec2.0) and diarization systems for speaker separation. Implement GenAI pipelines using OpenAI, Gemini, LLaMA, Mistral, and other LLM APIs or open-source models. 2. Real-Time Voice AI System Development Design low-latency pipelines for capturing and processing audio in real-time across multi-lingual environments. Work on WebSocket-based bi-directional audio streaming, chunked inference, and result caching. Develop asynchronous, event-driven architectures for voice processing and decision-making. 3. RAG & Knowledge Graph Pipelines Create retrieval-augmented generation (RAG) systems that pull from structured and unstructured knowledge bases. Build vector DB architectures (e.g., FAISS, Pinecone, Weaviate) and connect to LangChain/LlamaIndex workflows. Own chunking, indexing, and embedding strategies (OpenAI, Cohere, Hugging Face embeddings). 4. Fine-Tuning & Prompt Engineering Fine-tune LLMs and foundational models using RLHF, SFT, PEFT (e.g., LoRA) as needed. Optimize prompts for summarization, categorization, tone analysis, objection handling, etc. Perform few-shot and zero-shot evaluations for quality benchmarking. 5. Pipeline Optimization & MLOps Ensure high availability and robustness of AI pipelines using CI/CD tools, Docker, Kubernetes, and GitHub Actions. Work with data engineering to streamline data ingestion, labeling, augmentation, and evaluation. Build internal tools to benchmark latency, accuracy, and relevance for production-grade AI features. 6. Team Leadership & Cross-Functional Collaboration Lead, mentor, and grow a high-performing AI engineering team. Collaborate with backend, frontend, and product teams to build scalable production systems. 
Participate in architectural and design decisions across AI, backend, and data workflows.

🛠️ Key Technologies & Tools
Languages & Frameworks: Python, FastAPI, Flask, LangChain, PyTorch, TensorFlow, HuggingFace Transformers
Voice & Audio: Whisper, Wav2Vec2.0, DeepSpeech, pyannote.audio, AssemblyAI, Kaldi, Mozilla TTS
Vector DBs & RAG: FAISS, Pinecone, Weaviate, ChromaDB, LlamaIndex, LangGraph
LLMs & GenAI APIs: OpenAI GPT-4/3.5, Gemini, Claude, Mistral, Meta LLaMA 2/3
DevOps & Deployment: Docker, GitHub Actions, CI/CD, Redis, Kafka, Kubernetes, AWS (EC2, Lambda, S3)
Databases: MongoDB, Postgres, MySQL, Pinecone, TimescaleDB
Monitoring & Logging: Prometheus, Grafana, Sentry, Elastic Stack (ELK)

🎯 Requirements & Qualifications
👨‍💻 Experience
2–6 years of experience building and deploying AI/ML systems, with at least 2 years in NLP or voice technologies.
Proven track record of production deployment of ASR, STT, NLP, or GenAI models.
Hands-on experience building systems involving vector databases, real-time pipelines, or LLM integrations.
📚 Educational Background
Bachelor’s or Master’s in Computer Science, Artificial Intelligence, Machine Learning, or a related field.
Tier 1 institute preferred (IITs, BITS, IIITs, NITs, or global top 100 universities).
⚙️ Technical Skills
Strong coding experience in Python and familiarity with FastAPI/Django.
Understanding of distributed architectures, memory management, and latency optimization.
Familiarity with transformer-based model architectures, training techniques, and data pipeline design.
💡 Bonus Experience
Worked on multilingual speech recognition and translation.
Experience deploying AI models on edge devices or in browsers.
Built or contributed to open-source ML/NLP projects.
Published papers or patents in voice, NLP, or deep learning domains.

🚀 What Success Looks Like in 6 Months
Lead the deployment of a real-time STT + diarization system for at least one enterprise client.
Deliver a high-accuracy nudge generation pipeline using RAG and summarization models.
Build an in-house knowledge indexing + vector DB framework integrated into the product (a minimal indexing sketch appears below).
Mentor 2–3 AI engineers and own execution across multiple modules.
Achieve <1 sec latency on the real-time voice-to-nudge pipeline, from capture to recommendation.

💼 What We Offer
Compensation: Competitive fixed salary + equity + performance-based bonuses
Impact: Ownership of key AI modules powering thousands of live enterprise conversations
Learning: Access to high-compute GPUs, API credits, research tools, and conference sponsorships
Culture: High-trust, outcome-first environment that celebrates execution and learning
Mentorship: Work directly with founders, ex-Microsoft, IIT-IIM-BITS alums, and top AI engineers
Scale: Opportunity to scale an AI product from 10 clients to 100+ globally within 12 months

⚠️ This Role is NOT for Everyone
🚫 If you're looking for a slow, abstract research role—this is NOT for you.
🚫 If you're used to months of ideation before shipping—you won't enjoy our speed.
🚫 If you're not comfortable being hands-on and diving into scrappy builds—you may struggle.
✅ But if you’re a builder, architect, and visionary who loves solving hard technical problems and delivering real-time AI at scale, we want to talk to you.
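As a rough sketch of the in-house knowledge indexing and vector DB idea mentioned above, assuming sentence-transformers and FAISS are available; the model name and sample documents are placeholders.

```python
# Minimal sketch: embed documents, index them in FAISS, and retrieve the top match.
# The embedding model name and example documents are illustrative placeholders.
import faiss
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model
docs = ["Refund policy: 30 days.", "Premium plan includes onboarding support."]

# Normalized embeddings let inner product behave like cosine similarity.
embeddings = model.encode(docs, normalize_embeddings=True).astype("float32")
index = faiss.IndexFlatIP(int(embeddings.shape[1]))
index.add(embeddings)

query = model.encode(["what is the refund window?"], normalize_embeddings=True).astype("float32")
scores, ids = index.search(query, 1)
print(docs[ids[0][0]], scores[0][0])
```

In a production pipeline the flat index would typically be swapped for an approximate index (or a managed store such as Pinecone) once the corpus grows, but the chunk → embed → index → search flow stays the same.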
📩 How to Apply
Send your CV, GitHub/portfolio, and a brief note on “Why AI at Darwix?” to:
📧 careers@cur8.in
Subject Line: Application – Lead AI Engineer – [Your Name]
Include links to:
Any relevant open-source contributions
LLM/STT models you've fine-tuned or deployed
RAG pipelines you've worked on

🔍 Final Thought
This is not just a job. This is your opportunity to build the world’s most scalable AI sales intelligence platform—from India, for the world.

Posted 4 days ago

Apply

10.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Job Title: Lead Backend Developer (Python & Microservices)
Company: Darwix AI
Location: Gurgaon (On-site)
Type: Full-Time
Experience Required: 6–10 years

About Darwix AI
Darwix AI is at the forefront of building the future of revenue enablement through a GenAI-powered conversational intelligence and real-time agent assist platform. Our mission is to empower global sales teams to close better, faster, and smarter by harnessing the transformative power of Generative AI, real-time speech recognition, multilingual insights, and next-gen sales analytics.
Backed by top venture capitalists and industry leaders, Darwix AI is scaling rapidly across India, MENA, and US markets. With a leadership team from IIT, IIM, and BITS, we are building enterprise-grade SaaS solutions that are poised to redefine how organizations engage with customers.
If you are looking for a role where your work directly powers mission-critical AI applications used globally, this is your moment.

Role Overview
We are seeking a Lead Backend Developer (Python & Microservices) to drive the architecture, scalability, and performance of our GenAI platform's core backend services. You will own the responsibility of designing, building, and leading backend systems that are real-time, distributed, and capable of supporting AI-powered applications at scale.
You will mentor engineers, set technical direction, collaborate across AI, Product, and Frontend teams, and ensure that the backend infrastructure is robust, secure, and future-proof.
This is a high-ownership, high-impact role for individuals who are passionate about building world-class systems that are production-ready, scalable, and designed for rapid innovation.

Key Responsibilities
🔹 Backend Architecture and Development
Architect and lead the development of highly scalable, modular, and event-driven backend systems using Python.
Build and maintain RESTful APIs and microservices that power real-time, multilingual conversational intelligence platforms.
Design systems with a strong focus on scalability, fault tolerance, high availability, and security.
Implement API gateways, service registries, authentication/authorization layers, and caching mechanisms.
🔹 Microservices Strategy
Champion microservices best practices: service decomposition, asynchronous communication, event-driven workflows.
Manage service orchestration, containerization, and scaling using Docker and Kubernetes (preferred).
Implement robust service monitoring, logging, and alerting frameworks for proactive system health management.
🔹 Real-time Data Processing
Build real-time data ingestion and processing pipelines using tools like Kafka, Redis Streams, and WebSockets (see the sketch below).
Integrate real-time speech-to-text (STT) engines and AI/NLP pipelines into backend flows.
Optimize performance to achieve low-latency processing suitable for real-time agent assist experiences.
🔹 Database and Storage Management
Design and optimize relational (PostgreSQL/MySQL) and non-relational (MongoDB, Redis) database systems.
Implement data sharding, replication, and backup strategies for resilience and scalability.
Integrate vector databases (FAISS, Pinecone, Chroma) to support AI retrieval and embedding-based search.
🔹 DevOps and Infrastructure
Collaborate with DevOps teams to deploy scalable and reliable services on AWS (EC2, S3, Lambda, EKS).
Implement CI/CD pipelines, containerization strategies, and blue-green deployment models.
Ensure security compliance across all backend services (API security, encryption, RBAC).
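To make the Redis Streams ingestion pattern above concrete, here is a minimal sketch using redis-py; the stream name call_events and the local Redis instance are assumptions made for illustration.

```python
# Minimal sketch: publish and consume call events over a Redis Stream (redis-py).
# Stream/field names are illustrative; a local Redis instance is assumed.
import json
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)


def publish_call_event(call_id: str, transcript_chunk: str) -> str:
    # XADD appends the event; downstream consumers pick it up asynchronously.
    payload = json.dumps({"text": transcript_chunk})
    return r.xadd("call_events", {"call_id": call_id, "payload": payload})


def consume_call_events(last_id: str = "0-0"):
    # XREAD blocks up to 5 seconds waiting for entries newer than last_id.
    entries = r.xread({"call_events": last_id}, count=10, block=5000)
    for _stream, messages in entries:
        for message_id, fields in messages:
            yield message_id, fields
```

For multiple workers sharing the load, the same idea extends naturally to consumer groups (XGROUP/XREADGROUP) or to Kafka topics, which the posting also lists.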
🔹 Technical Leadership
Mentor junior and mid-level backend engineers.
Define and enforce coding standards, architectural patterns, and best practices.
Conduct design reviews and code reviews, and ensure high engineering quality across the backend team.
🔹 Collaboration
Work closely with AI scientists, Product Managers, Frontend Engineers, and Customer Success teams to deliver delightful product experiences.
Translate business needs into technical requirements and backend system designs.
Drive sprint planning, estimation, and delivery for backend engineering sprints.

Core Requirements
Technical Skills
6–10 years of hands-on backend engineering experience.
Expert-level proficiency in Python.
Strong experience building scalable REST APIs and microservices.
Deep understanding of FastAPI (preferred) or Flask/Django frameworks.
In-depth knowledge of relational (PostgreSQL, MySQL) and NoSQL (MongoDB, Redis) databases.
Experience with event-driven architectures: Kafka, RabbitMQ, Redis Streams.
Proficiency in containerization and orchestration: Docker, Kubernetes.
Familiarity with real-time communication protocols: WebSockets, gRPC.
Strong understanding of cloud platforms (AWS preferred) and serverless architectures.
Good experience with DevOps tools: GitHub Actions, Jenkins, Terraform (optional).
Bonus Skills
Exposure to integrating AI/ML models (especially LLMs, STT, and diarization models) into backend systems.
Familiarity with vector search databases and RAG-based architectures.
Knowledge of GraphQL API development (optional).
Experience in multilingual platform scaling (support for Indic languages is a plus).

Preferred Qualifications
Bachelor’s or Master’s degree in Computer Science, Engineering, or a related technical field.
Experience working in product startups, SaaS platforms, AI-based systems, or high-growth technology companies.
Proven track record of owning backend architecture at scale (millions of users or real-time systems).
Strong understanding of software design principles (SOLID, DRY, KISS) and scalable system architecture.

What You’ll Get
Ownership: Lead backend engineering at one of India's fastest-growing GenAI startups.
Impact: Build systems that directly power the world's next-generation enterprise sales platforms.
Learning: Work with an elite founding team and top engineers from IIT, IIM, BITS, and top tech companies.
Growth: Fast-track your career into senior technology leadership roles.
Compensation: Competitive salary + ESOPs + performance bonuses.
Culture: High-trust, high-ownership, no-bureaucracy environment focused on speed and innovation.
Vision: Be a part of a once-in-a-decade opportunity building from India for the world.

About the Tech Stack You’ll Work On
Languages: Python 3.x
Frameworks: FastAPI (primary), Flask/Django (secondary)
Data Stores: PostgreSQL, MongoDB, Redis, FAISS, Pinecone
Messaging Systems: Kafka, Redis Streams
Cloud Platforms: AWS (EC2, S3, Lambda, EKS)
DevOps: Docker, Kubernetes, GitHub Actions
Others: WebSockets, OAuth 2.0, JWT, Microservices Patterns (a minimal JWT sketch appears below)

Application Process
Submit your updated resume and GitHub/portfolio links (if available).
Shortlisted candidates will have a technical discussion and coding assessment.
Technical interview rounds covering system design, backend architecture, and problem-solving.
Final leadership interaction round.
Offer!
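For illustration, a minimal sketch of the OAuth 2.0 / JWT pattern listed in the stack above, using FastAPI with PyJWT; SECRET_KEY, the token URL, and the stubbed conversation lookup are placeholder assumptions rather than details from the posting.

```python
# Minimal sketch: JWT-protected FastAPI endpoint (PyJWT, HS256).
# SECRET_KEY and the /token issuing flow are placeholders.
import jwt
from fastapi import Depends, FastAPI, HTTPException, status
from fastapi.security import OAuth2PasswordBearer

SECRET_KEY = "change-me"  # placeholder; load from a secrets manager in practice
oauth2_scheme = OAuth2PasswordBearer(tokenUrl="token")
app = FastAPI()


def current_user(token: str = Depends(oauth2_scheme)) -> str:
    try:
        claims = jwt.decode(token, SECRET_KEY, algorithms=["HS256"])
    except jwt.PyJWTError:
        raise HTTPException(status_code=status.HTTP_401_UNAUTHORIZED, detail="Invalid token")
    return claims.get("sub", "unknown")


@app.get("/conversations/{conversation_id}")
async def get_conversation(conversation_id: str, user: str = Depends(current_user)):
    # A real handler would look the record up in PostgreSQL/MongoDB; stubbed here.
    return {"conversation_id": conversation_id, "owner": user}
```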
How to Apply
📩 careers@darwix.ai
Please include:
Updated resume
GitHub profile (optional but preferred)
2–3 lines about why you're excited to join Darwix AI as a Lead Backend Engineer

Join Us at Darwix AI – Build the AI Future for Revenue Teams, Globally!
#LeadBackendDeveloper #PythonEngineer #MicroservicesArchitecture #BackendEngineering #FastAPI #DarwixAI #AIStartup #TechCareers

Posted 4 days ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.


Featured Companies