2.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Job description 🚀 Job Title: ML Engineer Company : Darwix AI Location : Gurgaon (On-site) Type : Full-Time Experience : 2-6 Years Level : Senior Level 🌐 About Darwix AI Darwix AI is one of India’s fastest-growing GenAI startups, revolutionizing the future of enterprise sales and customer engagement with real-time conversational intelligence. We are building a GenAI-powered agent-assist and pitch intelligence suite that captures, analyzes, and enhances every customer interaction—across voice, video, and chat—in real time. We serve leading enterprise clients across India, the UAE, and Southeast Asia and are backed by global VCs, top operators from Google, Salesforce, and McKinsey, and CXOs from the industry. This is your opportunity to join a high-caliber founding tech team solving frontier problems in real-time voice AI, multilingual transcription, retrieval-augmented generation (RAG), and fine-tuned LLMs at scale. 🧠 Role Overview As the ML Engineer , you will drive the development, deployment, and optimization of AI systems that power Darwix AI's real-time conversation intelligence platform. This includes voice-to-text transcription, speaker diarization, GenAI summarization, prompt engineering, knowledge retrieval, and real-time nudge delivery. You will lead a team of AI engineers and work closely with product managers, software architects, and data teams to ensure technical excellence, scalable architecture, and rapid iteration cycles. This is a high-ownership, hands-on leadership role where you will code, architect, and lead simultaneously. 🔧 Key Responsibilities 1. AI Architecture & Model Development Architect end-to-end AI pipelines for transcription, real-time inference, LLM integration, and vector-based retrieval. Build, fine-tune, and deploy STT models (Whisper, Wav2Vec2.0) and diarization systems for speaker separation. Implement GenAI pipelines using OpenAI, Gemini, LLaMA, Mistral, and other LLM APIs or open-source models. 2. Real-Time Voice AI System Development Design low-latency pipelines for capturing and processing audio in real-time across multi-lingual environments. Work on WebSocket-based bi-directional audio streaming, chunked inference, and result caching. Develop asynchronous, event-driven architectures for voice processing and decision-making. 3. RAG & Knowledge Graph Pipelines Create retrieval-augmented generation (RAG) systems that pull from structured and unstructured knowledge bases. Build vector DB architectures (e.g., FAISS, Pinecone, Weaviate) and connect to LangChain/LlamaIndex workflows. Own chunking, indexing, and embedding strategies (OpenAI, Cohere, Hugging Face embeddings). 4. Fine-Tuning & Prompt Engineering Fine-tune LLMs and foundational models using RLHF, SFT, PEFT (e.g., LoRA) as needed. Optimize prompts for summarization, categorization, tone analysis, objection handling, etc. Perform few-shot and zero-shot evaluations for quality benchmarking. 5. Pipeline Optimization & MLOps Ensure high availability and robustness of AI pipelines using CI/CD tools, Docker, Kubernetes, and GitHub Actions. Work with data engineering to streamline data ingestion, labeling, augmentation, and evaluation. Build internal tools to benchmark latency, accuracy, and relevance for production-grade AI features. 6. Team Leadership & Cross-Functional Collaboration Lead, mentor, and grow a high-performing AI engineering team. Collaborate with backend, frontend, and product teams to build scalable production systems. 
Participate in architectural and design decisions across AI, backend, and data workflows. 🛠️ Key Technologies & Tools Languages & Frameworks : Python, FastAPI, Flask, LangChain, PyTorch, TensorFlow, HuggingFace Transformers Voice & Audio : Whisper, Wav2Vec2.0, DeepSpeech, pyannote.audio, AssemblyAI, Kaldi, Mozilla TTS Vector DBs & RAG : FAISS, Pinecone, Weaviate, ChromaDB, LlamaIndex, LangGraph LLMs & GenAI APIs : OpenAI GPT-4/3.5, Gemini, Claude, Mistral, Meta LLaMA 2/3 DevOps & Deployment : Docker, GitHub Actions, CI/CD, Redis, Kafka, Kubernetes, AWS (EC2, Lambda, S3) Databases : MongoDB, Postgres, MySQL, Pinecone, TimescaleDB Monitoring & Logging : Prometheus, Grafana, Sentry, Elastic Stack (ELK) 🎯 Requirements & Qualifications 👨💻 Experience 2-6 years of experience in building and deploying AI/ML systems, with at least 2+ years in NLP or voice technologies. Proven track record of production deployment of ASR, STT, NLP, or GenAI models. Hands-on experience building systems involving vector databases, real-time pipelines, or LLM integrations. 📚 Educational Background Bachelor's or Master's in Computer Science, Artificial Intelligence, Machine Learning, or a related field. Tier 1 institute preferred (IITs, BITS, IIITs, NITs, or global top 100 universities). ⚙️ Technical Skills Strong coding experience in Python and familiarity with FastAPI/Django. Understanding of distributed architectures, memory management, and latency optimization. Familiarity with transformer-based model architectures, training techniques, and data pipeline design. 💡 Bonus Experience Worked on multilingual speech recognition and translation. Experience deploying AI models on edge devices or browsers. Built or contributed to open-source ML/NLP projects. Published papers or patents in voice, NLP, or deep learning domains. 🚀 What Success Looks Like in 6 Months Lead the deployment of a real-time STT + diarization system for at least 1 enterprise client. Deliver high-accuracy nudge generation pipeline using RAG and summarization models. Build an in-house knowledge indexing + vector DB framework integrated into the product. Mentor 2–3 AI engineers and own execution across multiple modules. Achieve <1 sec latency on real-time voice-to-nudge pipeline from capture to recommendation. 💼 What We Offer Compensation : Competitive fixed salary + equity + performance-based bonuses Impact : Ownership of key AI modules powering thousands of live enterprise conversations Learning : Access to high-compute GPUs, API credits, research tools, and conference sponsorships Culture : High-trust, outcome-first environment that celebrates execution and learning Mentorship : Work directly with founders, ex-Microsoft, IIT-IIM-BITS alums, and top AI engineers Scale : Opportunity to scale an AI product from 10 clients to 100+ globally within 12 months ⚠️ This Role is NOT for Everyone 🚫 If you're looking for a slow, abstract research role—this is NOT for you. 🚫 If you're used to months of ideation before shipping—you won't enjoy our speed. 🚫 If you're not comfortable being hands-on and diving into scrappy builds—you may struggle. ✅ But if you’re a builder , architect , and visionary —who loves solving hard technical problems and delivering real-time AI at scale, we want to talk to you. 
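The responsibilities above revolve around chunking, embedding, and vector retrieval. As a minimal, hedged sketch of that pattern (not Darwix AI's actual pipeline; the embedding model, chunk size, and sample documents are assumptions), the core chunk-embed-index-search loop looks like this:

```python
# Minimal RAG retrieval sketch: chunk -> embed -> index -> search.
# Illustrative only; the embedding model, chunk size, and sample documents are assumptions.
import faiss  # pip install faiss-cpu
from sentence_transformers import SentenceTransformer  # pip install sentence-transformers

def chunk(text: str, size: int = 400, overlap: int = 50) -> list[str]:
    """Split text into overlapping character windows."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model choice

docs = [
    "Objection handling: acknowledge the concern, clarify it, then respond with evidence.",
    "Pricing pitch: anchor on the value delivered, not on discount depth.",
]
chunks = [c for d in docs for c in chunk(d)]

embeddings = model.encode(chunks, normalize_embeddings=True).astype("float32")
index = faiss.IndexFlatIP(embeddings.shape[1])  # inner product equals cosine after normalization
index.add(embeddings)

query = model.encode(["How should a rep handle a price objection?"],
                     normalize_embeddings=True).astype("float32")
scores, ids = index.search(query, k=2)
for score, i in zip(scores[0], ids[0]):
    print(f"{score:.3f}  {chunks[i]}")
```

In production this retrieval step sits behind an API and the retrieved chunks are fed into an LLM prompt, but the loop above is the core of any RAG retriever.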
📩 How to Apply
Send your CV, GitHub/portfolio, and a brief note on “Why AI at Darwix?” to: 📧 careers@cur8.in / vishnu.sethi@cur8.in
Subject Line: Application – ML Engineer – [Your Name]
Include links to:
Any relevant open-source contributions
LLM/STT models you've fine-tuned or deployed
RAG pipelines you've worked on
🔍 Final Thought
This is not just a job. This is your opportunity to build the world’s most scalable AI sales intelligence platform —from India, for the world.
Posted 3 days ago
12.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Job Description: Director of Engineering Company: Darwix AI Location: Gurgaon (On-site) Type: Full-Time Experience: 8–12 Years About Darwix AI Darwix AI is a next-generation Gen-AI-powered sales enablement platform that empowers enterprise sales teams with intelligent nudges, real-time insights, and AI-driven conversation analytics. By combining AI, automation, and contextual intelligence, we are redefining how sales teams engage, close, and scale. Backed by leading VCs and industry leaders, Darwix AI is one of the fastest-growing AI startups in India, with an expanding presence across MENA, India, and the US. Role Overview We are seeking a highly experienced and technically proficient Director of Engineering to lead and scale our engineering team. In this role, you will be responsible for managing backend development, DevOps, and infrastructure initiatives. The ideal candidate will be a hands-on technical leader with a strong architectural foundation and proven experience scaling engineering teams and systems in high-growth environments. You will work closely with the Vice President of Engineering and Founders to drive engineering excellence, ensure timely delivery, and lead technical decision-making aligned with business goals. Key Responsibilities Engineering Leadership Lead engineering execution across backend services (Python, PHP), infrastructure, and DevOps. Define technical strategy and ensure alignment with product and organizational goals. Own the delivery roadmap and ensure timely and high-quality outputs. System Architecture and Scalability Oversee backend architecture to ensure reliability, scalability, and performance. Guide the implementation of microservices, RESTful APIs, and scalable cloud-based infrastructure. Design and review systems with considerations for high-volume data ingestion and low-latency processing. Team Management and Development Build, lead, and mentor a high-performing engineering team. Implement best practices for code quality, testing, deployment, and team collaboration. Foster a strong engineering culture focused on learning, execution, and accountability. Cross-Functional Collaboration Collaborate with product, AI/ML, sales, and design teams to align engineering deliverables with business priorities. Translate product requirements into structured engineering plans and milestones. Cloud Infrastructure and DevOps Work closely with the DevOps function to manage AWS cloud infrastructure, CI/CD pipelines, and security protocols. Ensure system uptime, data integrity, and disaster recovery preparedness. AI Infrastructure Support Support integration with LLMs and AI/ML models. Lead initiatives involving vector databases (e.g., FAISS, Pinecone, Weaviate) and retrieval-augmented generation pipelines. Qualifications Education Bachelor's or Master’s degree in Computer Science, Engineering, or a related technical discipline. Candidates from premier institutions (IITs, BITS, NITs) will be preferred. Experience 8–12 years of progressive engineering experience with at least 3 years in a leadership role. Proven experience in building scalable backend systems and managing high-performing engineering teams. Strong exposure to Python and PHP/NodeJS in production-grade systems. Experience in designing and managing infrastructure on AWS or equivalent cloud platforms. Familiarity with containerization (Docker, Kubernetes) and CI/CD systems. Desirable Skills Experience working with vector databases, embeddings, and AI/ML deployment in production. 
Deep understanding of microservices architecture, event-driven systems, and RESTful API design. Strong communication and stakeholder management skills. What We Offer Leadership role in a fast-scaling, venture-backed AI technology firm. Opportunity to work on large-scale AI applications in real-world enterprise environments. Competitive compensation including fixed salary, ESOPs, and performance-based bonuses. A high-performance culture that encourages ownership, innovation, and continuous learning. Direct collaboration with senior leadership on strategic initiatives. Application Note: This role demands both technical expertise and strategic foresight. We are looking for leaders who are comfortable building systems hands-on, mentoring teams, and ensuring consistent execution in a high-growth, high-ownership environment.
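One small pattern behind the reliability and fault-tolerance goals described above is retrying inter-service calls with exponential backoff. A minimal sketch, assuming a hypothetical internal HTTP endpoint and arbitrary timings (not Darwix AI's code):

```python
# Retry-with-exponential-backoff sketch for calls between services.
# Timings, exception scope, and the endpoint are illustrative assumptions.
import random
import time
from functools import wraps

import requests  # pip install requests

def retry(max_attempts: int = 4, base_delay: float = 0.2):
    """Retry a flaky call with exponential backoff plus a little jitter."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            for attempt in range(1, max_attempts + 1):
                try:
                    return fn(*args, **kwargs)
                except requests.RequestException:
                    if attempt == max_attempts:
                        raise
                    time.sleep(base_delay * 2 ** (attempt - 1) + random.uniform(0, 0.1))
        return wrapper
    return decorator

@retry()
def fetch_account(account_id: str) -> dict:
    # Hypothetical internal endpoint; replace with a real service URL.
    resp = requests.get(f"https://billing.internal.example/accounts/{account_id}", timeout=2)
    resp.raise_for_status()
    return resp.json()
```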
Posted 3 days ago
40.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Description Must Have Skills: Extensive knowledge of large language models, natural language processing techniques and prompt engineering. Experience in testing and validation processes to ensure the models' accuracy and efficiency in real-world scenarios. Experience in design, build, and deployment of innovative applications utilizing Gen AI technologies such as RAG (Retrieval-Augmented Generation) based chatbots or AI Agents. Proficiency in programming using Python or Java. Familiarity with Oracle Cloud Infrastructure or similar cloud platforms. Effective communication and presentation skills. Analyzes problems, identifies solutions, and makes decisions. Demonstrates a willingness to learn, adapt, and grow professionally. Good to Have Skills: Experience in LLM architectures, model evaluation, and fine-tuning techniques. Hands-on experience with emerging LLM frameworks and plugins, such as LangChain, LlamaIndex, VectorStores and Retrievers, TensorFlow, PyTorch, LLM Cache, LLMOps (MLFlow), LMQL, Guidance, etc. Proficiency in databases (e.g., Oracle, MySQL), developing and executing AI over any of the cloud data platforms, associated data stores, Graph Stores, Vector Stores and pipelines. Understanding of the security and compliance requirements for ML/GenAI implementations. Career Level - IC2/IC3 Responsibilities As a member of Oracle Cloud LIFT, you’ll help guide our customers from concept to successful cloud deployment. You’ll: Shape architecture and solution design with best practices and experience. Own the delivery of agreed workload implementations. Validate and test deployed solutions. Conduct security assurance reviews. You’ll work in a fast-paced, international environment, engaging with customers across industries and regions. You’ll collaborate with peers, sales, architects, and consulting teams to make cloud transformation real. https://www.oracle.com/in/cloud/cloud-lift/ Qualifications Career Level - IC2 About Us As a world leader in cloud solutions, Oracle uses tomorrow’s technology to tackle today’s challenges. We’ve partnered with industry-leaders in almost every sector—and continue to thrive after 40+ years of change by operating with integrity. We know that true innovation starts when everyone is empowered to contribute. That’s why we’re committed to growing an inclusive workforce that promotes opportunities for all. Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs. We’re committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States. Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans’ status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.
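To make the RAG chatbot skill above concrete, here is a generic, hedged sketch of prompt grounding: formatting retrieved passages into a citation-friendly prompt before calling an LLM. The retriever and the model client are deliberately left abstract because provider SDKs differ; nothing here is Oracle-specific.

```python
# Sketch: turn retrieved passages into a grounded, citation-friendly prompt.
# The retriever and the LLM client are stand-ins; provider SDKs differ.
from dataclasses import dataclass

@dataclass
class Passage:
    source: str
    text: str

def build_rag_prompt(question: str, passages: list[Passage]) -> str:
    """Format context + question so the model answers only from the given context."""
    context = "\n\n".join(f"[{i + 1}] ({p.source}) {p.text}" for i, p in enumerate(passages))
    return (
        "Answer the question using only the numbered context passages. "
        "Cite passage numbers like [1]. If the context is insufficient, say so.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    )

passages = [
    Passage("faq.md", "Block volumes can be resized online without detaching them."),
    Passage("docs/limits.md", "Default service limits can be raised via a limit increase request."),
]
prompt = build_rag_prompt("Can I resize a volume without downtime?", passages)
print(prompt)  # send `prompt` to the LLM endpoint of your choice
```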
Posted 3 days ago
40.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Description Must Have Skills: Extensive knowledge of large language models, natural language processing techniques and prompt engineering. Experience in testing and validation processes to ensure the models' accuracy and efficiency in real-world scenarios. Experience in design, build, and deployment of innovative applications utilizing Gen AI technologies such as RAG (Retrieval-Augmented Generation) based chatbots or AI Agents. Proficiency in programming using Python or Java. Familiarity with Oracle Cloud Infrastructure or similar cloud platforms. Effective communication and presentation skills. Analyzes problems, identifies solutions, and makes decisions. Demonstrates a willingness to learn, adapt, and grow professionally. Good to Have Skills: Experience in LLM architectures, model evaluation, and fine-tuning techniques. Hands-on experience with emerging LLM frameworks and plugins, such as LangChain, LlamaIndex, VectorStores and Retrievers, TensorFlow, PyTorch, LLM Cache, LLMOps (MLFlow), LMQL, Guidance, etc. Proficiency in databases (e.g., Oracle, MySQL), developing and executing AI over any of the cloud data platforms, associated data stores, Graph Stores, Vector Stores and pipelines. Understanding of the security and compliance requirements for ML/GenAI implementations. Career Level - IC2/IC3 Responsibilities As a member of Oracle Cloud LIFT, you’ll help guide our customers from concept to successful cloud deployment. You’ll: Shape architecture and solution design with best practices and experience. Own the delivery of agreed workload implementations. Validate and test deployed solutions. Conduct security assurance reviews. You’ll work in a fast-paced, international environment, engaging with customers across industries and regions. You’ll collaborate with peers, sales, architects, and consulting teams to make cloud transformation real. https://www.oracle.com/in/cloud/cloud-lift/ Qualifications Career Level - IC2 About Us As a world leader in cloud solutions, Oracle uses tomorrow’s technology to tackle today’s challenges. We’ve partnered with industry-leaders in almost every sector—and continue to thrive after 40+ years of change by operating with integrity. We know that true innovation starts when everyone is empowered to contribute. That’s why we’re committed to growing an inclusive workforce that promotes opportunities for all. Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs. We’re committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States. Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans’ status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.
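The testing and validation skills listed above often reduce to a small offline evaluation harness. The sketch below is a generic example, assuming a stubbed `ask_model` function and a tiny invented eval set; it is not an Oracle tool.

```python
# Tiny offline evaluation sketch: score model answers against reference answers.
# `ask_model` is a placeholder for whatever LLM client is in use.
import re

def normalize(text: str) -> str:
    return re.sub(r"[^a-z0-9 ]", "", text.lower()).strip()

def token_f1(prediction: str, reference: str) -> float:
    pred, ref = normalize(prediction).split(), normalize(reference).split()
    common = sum(min(pred.count(t), ref.count(t)) for t in set(pred))
    if not common:
        return 0.0
    precision, recall = common / len(pred), common / len(ref)
    return 2 * precision * recall / (precision + recall)

eval_set = [
    {"question": "What port does HTTPS use by default?", "reference": "443"},
    {"question": "Which protocol secures HTTP traffic?", "reference": "TLS"},
]

def ask_model(question: str) -> str:
    return "443" if "port" in question else "TLS"  # stub standing in for a real model call

scores = [token_f1(ask_model(row["question"]), row["reference"]) for row in eval_set]
print(f"mean token F1: {sum(scores) / len(scores):.2f}")
```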
Posted 3 days ago
2.0 years
0 Lacs
Gurugram, Haryana, India
On-site
🚀 Job Title: Lead AI Engineer Company : Darwix AI Location : Gurgaon (On-site) Type : Full-Time Experience : 2-6 Years Level : Senior Level 🌐 About Darwix AI Darwix AI is one of India’s fastest-growing GenAI startups, revolutionizing the future of enterprise sales and customer engagement with real-time conversational intelligence. We are building a GenAI-powered agent-assist and pitch intelligence suite that captures, analyzes, and enhances every customer interaction—across voice, video, and chat—in real time. We serve leading enterprise clients across India, the UAE, and Southeast Asia and are backed by global VCs, top operators from Google, Salesforce, and McKinsey, and CXOs from the industry. This is your opportunity to join a high-caliber founding tech team solving frontier problems in real-time voice AI, multilingual transcription, retrieval-augmented generation (RAG), and fine-tuned LLMs at scale. 🧠 Role Overview As the Lead AI Engineer , you will drive the development, deployment, and optimization of AI systems that power Darwix AI's real-time conversation intelligence platform. This includes voice-to-text transcription, speaker diarization, GenAI summarization, prompt engineering, knowledge retrieval, and real-time nudge delivery. You will lead a team of AI engineers and work closely with product managers, software architects, and data teams to ensure technical excellence, scalable architecture, and rapid iteration cycles. This is a high-ownership, hands-on leadership role where you will code, architect, and lead simultaneously. 🔧 Key Responsibilities 1. AI Architecture & Model Development Architect end-to-end AI pipelines for transcription, real-time inference, LLM integration, and vector-based retrieval. Build, fine-tune, and deploy STT models (Whisper, Wav2Vec2.0) and diarization systems for speaker separation. Implement GenAI pipelines using OpenAI, Gemini, LLaMA, Mistral, and other LLM APIs or open-source models. 2. Real-Time Voice AI System Development Design low-latency pipelines for capturing and processing audio in real-time across multi-lingual environments. Work on WebSocket-based bi-directional audio streaming, chunked inference, and result caching. Develop asynchronous, event-driven architectures for voice processing and decision-making. 3. RAG & Knowledge Graph Pipelines Create retrieval-augmented generation (RAG) systems that pull from structured and unstructured knowledge bases. Build vector DB architectures (e.g., FAISS, Pinecone, Weaviate) and connect to LangChain/LlamaIndex workflows. Own chunking, indexing, and embedding strategies (OpenAI, Cohere, Hugging Face embeddings). 4. Fine-Tuning & Prompt Engineering Fine-tune LLMs and foundational models using RLHF, SFT, PEFT (e.g., LoRA) as needed. Optimize prompts for summarization, categorization, tone analysis, objection handling, etc. Perform few-shot and zero-shot evaluations for quality benchmarking. 5. Pipeline Optimization & MLOps Ensure high availability and robustness of AI pipelines using CI/CD tools, Docker, Kubernetes, and GitHub Actions. Work with data engineering to streamline data ingestion, labeling, augmentation, and evaluation. Build internal tools to benchmark latency, accuracy, and relevance for production-grade AI features. 6. Team Leadership & Cross-Functional Collaboration Lead, mentor, and grow a high-performing AI engineering team. Collaborate with backend, frontend, and product teams to build scalable production systems. 
Participate in architectural and design decisions across AI, backend, and data workflows. 🛠️ Key Technologies & Tools Languages & Frameworks : Python, FastAPI, Flask, LangChain, PyTorch, TensorFlow, HuggingFace Transformers Voice & Audio : Whisper, Wav2Vec2.0, DeepSpeech, pyannote.audio, AssemblyAI, Kaldi, Mozilla TTS Vector DBs & RAG : FAISS, Pinecone, Weaviate, ChromaDB, LlamaIndex, LangGraph LLMs & GenAI APIs : OpenAI GPT-4/3.5, Gemini, Claude, Mistral, Meta LLaMA 2/3 DevOps & Deployment : Docker, GitHub Actions, CI/CD, Redis, Kafka, Kubernetes, AWS (EC2, Lambda, S3) Databases : MongoDB, Postgres, MySQL, Pinecone, TimescaleDB Monitoring & Logging : Prometheus, Grafana, Sentry, Elastic Stack (ELK) 🎯 Requirements & Qualifications 👨💻 Experience 2-6 years of experience in building and deploying AI/ML systems, with at least 2+ years in NLP or voice technologies. Proven track record of production deployment of ASR, STT, NLP, or GenAI models. Hands-on experience building systems involving vector databases, real-time pipelines, or LLM integrations. 📚 Educational Background Bachelor's or Master's in Computer Science, Artificial Intelligence, Machine Learning, or a related field. Tier 1 institute preferred (IITs, BITS, IIITs, NITs, or global top 100 universities). ⚙️ Technical Skills Strong coding experience in Python and familiarity with FastAPI/Django. Understanding of distributed architectures, memory management, and latency optimization. Familiarity with transformer-based model architectures, training techniques, and data pipeline design. 💡 Bonus Experience Worked on multilingual speech recognition and translation. Experience deploying AI models on edge devices or browsers. Built or contributed to open-source ML/NLP projects. Published papers or patents in voice, NLP, or deep learning domains. 🚀 What Success Looks Like in 6 Months Lead the deployment of a real-time STT + diarization system for at least 1 enterprise client. Deliver high-accuracy nudge generation pipeline using RAG and summarization models. Build an in-house knowledge indexing + vector DB framework integrated into the product. Mentor 2–3 AI engineers and own execution across multiple modules. Achieve <1 sec latency on real-time voice-to-nudge pipeline from capture to recommendation. 💼 What We Offer Compensation : Competitive fixed salary + equity + performance-based bonuses Impact : Ownership of key AI modules powering thousands of live enterprise conversations Learning : Access to high-compute GPUs, API credits, research tools, and conference sponsorships Culture : High-trust, outcome-first environment that celebrates execution and learning Mentorship : Work directly with founders, ex-Microsoft, IIT-IIM-BITS alums, and top AI engineers Scale : Opportunity to scale an AI product from 10 clients to 100+ globally within 12 months ⚠️ This Role is NOT for Everyone 🚫 If you're looking for a slow, abstract research role—this is NOT for you. 🚫 If you're used to months of ideation before shipping—you won't enjoy our speed. 🚫 If you're not comfortable being hands-on and diving into scrappy builds—you may struggle. ✅ But if you’re a builder , architect , and visionary —who loves solving hard technical problems and delivering real-time AI at scale, we want to talk to you. 
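One building block named above is WebSocket-based audio streaming with chunked inference. The sketch below shows the general shape using the `websockets` library; the buffer size and the `transcribe` stub are placeholder assumptions rather than Darwix AI's production code.

```python
# Sketch: receive binary audio chunks over a WebSocket and transcribe in batches.
# The chunk size is an assumption and `transcribe` is a stub for a real STT call.
import asyncio
import websockets  # pip install websockets

CHUNK_TARGET_BYTES = 64_000  # roughly 2 s of 16 kHz, 16-bit mono audio (assumption)

def transcribe(audio: bytes) -> str:
    return f"<{len(audio)} bytes transcribed>"  # stand-in for Whisper/Wav2Vec inference

async def handle_stream(websocket):  # single-argument handler (websockets >= 10.1)
    buffer = bytearray()
    async for message in websocket:             # each message is one audio chunk
        if not isinstance(message, (bytes, bytearray)):
            continue                            # ignore non-binary frames in this sketch
        buffer.extend(message)
        if len(buffer) >= CHUNK_TARGET_BYTES:
            text = transcribe(bytes(buffer))    # run STT on the accumulated window
            await websocket.send(text)          # push the partial transcript back
            buffer.clear()

async def main():
    async with websockets.serve(handle_stream, "0.0.0.0", 8765):
        await asyncio.Future()                  # run until cancelled

if __name__ == "__main__":
    asyncio.run(main())
```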
📩 How to Apply
Send your CV, GitHub/portfolio, and a brief note on “Why AI at Darwix?” to: 📧 careers@cur8.in
Subject Line: Application – Lead AI Engineer – [Your Name]
Include links to:
Any relevant open-source contributions
LLM/STT models you've fine-tuned or deployed
RAG pipelines you've worked on
🔍 Final Thought
This is not just a job. This is your opportunity to build the world’s most scalable AI sales intelligence platform —from India, for the world.
Posted 3 days ago
10.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Job Title: Lead Backend Developer (Python & Microservices) Company: Darwix AI Location: Gurgaon (On-site) Type: Full-Time Experience Required: 6–10 years About Darwix AI Darwix AI is at the forefront of building the future of revenue enablement through a GenAI-powered conversational intelligence and real-time agent assist platform. Our mission is to empower global sales teams to close better, faster, and smarter by harnessing the transformative power of Generative AI, real-time speech recognition, multilingual insights, and next-gen sales analytics. Backed by top venture capitalists and industry leaders, Darwix AI is scaling rapidly across India, MENA, and US markets. With a leadership team from IIT, IIM, and BITS, we are building enterprise-grade SaaS solutions that are poised to redefine how organizations engage with customers. If you are looking for a role where your work directly powers mission-critical AI applications used globally, this is your moment. Role Overview We are seeking a Lead Backend Developer (Python & Microservices) to drive the architecture, scalability, and performance of our GenAI platform's core backend services. You will own the responsibility of designing, building, and leading backend systems that are real-time, distributed, and capable of supporting AI-powered applications at scale. You will mentor engineers, set technical direction, collaborate across AI, Product, and Frontend teams, and ensure that the backend infrastructure is robust, secure, and future-proof. This is a high-ownership, high-impact role for individuals who are passionate about building world-class systems that are production-ready, scalable, and designed for rapid innovation. Key Responsibilities🔹 Backend Architecture and Development Architect and lead the development of highly scalable, modular, and event-driven backend systems using Python. Build and maintain RESTful APIs and microservices that power real-time, multilingual conversational intelligence platforms. Design systems with a strong focus on scalability, fault tolerance, high availability, and security. Implement API gateways, service registries, authentication/authorization layers, and caching mechanisms. 🔹 Microservices Strategy Champion microservices best practices: service decomposition, asynchronous communication, event-driven workflows. Manage service orchestration, containerization, and scaling using Docker and Kubernetes (preferred). Implement robust service monitoring, logging, and alerting frameworks for proactive system health management. 🔹 Real-time Data Processing Build real-time data ingestion and processing pipelines using tools like Kafka, Redis streams, WebSockets. Integrate real-time speech-to-text (STT) engines and AI/NLP pipelines into backend flows. Optimize performance to achieve low-latency processing suitable for real-time agent assist experiences. 🔹 Database and Storage Management Design and optimize relational (PostgreSQL/MySQL) and non-relational (MongoDB, Redis) database systems. Implement data sharding, replication, and backup strategies for resilience and scalability. Integrate vector databases (FAISS, Pinecone, Chroma) to support AI retrieval and embedding-based search. 🔹 DevOps and Infrastructure Collaborate with DevOps teams to deploy scalable and reliable services on AWS (EC2, S3, Lambda, EKS). Implement CI/CD pipelines, containerization strategies, and blue-green deployment models. Ensure security compliance across all backend services (API security, encryption, RBAC). 
🔹 Technical Leadership Mentor junior and mid-level backend engineers. Define and enforce coding standards, architectural patterns, and best practices. Conduct design reviews, code reviews, and ensure high engineering quality across the backend team. 🔹 Collaboration Work closely with AI scientists, Product Managers, Frontend Engineers, and Customer Success teams to deliver delightful product experiences. Translate business needs into technical requirements and backend system designs. Drive sprint planning, estimation, and delivery for backend engineering sprints. Core RequirementsTechnical Skills 6–10 years of hands-on backend engineering experience. Expert-level proficiency in Python . Strong experience building scalable REST APIs and microservices. Deep understanding of FastAPI (preferred) or Flask/Django frameworks. In-depth knowledge of relational (PostgreSQL, MySQL) and NoSQL (MongoDB, Redis) databases. Experience with event-driven architectures: Kafka, RabbitMQ, Redis streams. Proficiency in containerization and orchestration: Docker, Kubernetes. Familiarity with real-time communication protocols: WebSockets, gRPC. Strong understanding of cloud platforms (AWS preferred) and serverless architectures. Good experience with DevOps tools: GitHub Actions, Jenkins, Terraform (optional). Bonus Skills Exposure to integrating AI/ML models (especially LLMs, STT, Diarization models) in backend systems. Familiarity with vector search databases and RAG-based architectures. Knowledge of GraphQL API development (optional). Experience in multilingual platform scaling (support for Indic languages is a plus). Preferred Qualifications Bachelor’s or Master’s degree in Computer Science, Engineering, or a related technical field. Experience working in product startups, SaaS platforms, AI-based systems, or high-growth technology companies. Proven track record of owning backend architecture at scale (millions of users or real-time systems). Strong understanding of software design principles (SOLID, DRY, KISS) and scalable system architecture. What You’ll Get Ownership : Lead backend engineering at one of India's fastest-growing GenAI startups. Impact : Build systems that directly power the world's next-generation enterprise sales platforms. Learning : Work with an elite founding team and top engineers from IIT, IIM, BITS, and top tech companies. Growth : Fast-track your career into senior technology leadership roles. Compensation : Competitive salary + ESOPs + performance bonuses. Culture : High-trust, high-ownership, no-bureaucracy environment focused on speed and innovation. Vision : Be a part of a once-in-a-decade opportunity building from India for the world. About the Tech Stack You’ll Work On Languages : Python 3.x Frameworks : FastAPI (Primary), Flask/Django (Secondary) Data Stores : PostgreSQL, MongoDB, Redis, FAISS, Pinecone Messaging Systems : Kafka, Redis Streams Cloud Platforms : AWS (EC2, S3, Lambda, EKS) DevOps : Docker, Kubernetes, GitHub Actions Others : WebSockets, OAuth 2.0, JWT, Microservices Patterns Application Process Submit your updated resume and GitHub/portfolio links (if available). Shortlisted candidates will have a technical discussion and coding assessment. Technical interview rounds covering system design, backend architecture, and problem-solving. Final leadership interaction round. Offer! 
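As an illustration of the API and caching patterns in the stack above, here is a minimal read-through cache sketch with FastAPI and Redis; the host, TTL, and stubbed database lookup are assumptions, not the actual service.

```python
# Sketch: FastAPI endpoint with read-through Redis caching.
# Connection details, the TTL, and the stubbed database lookup are assumptions.
import json
from typing import Optional

import redis  # pip install redis
from fastapi import FastAPI, HTTPException

app = FastAPI()
cache = redis.Redis(host="localhost", port=6379, decode_responses=True)

def load_account_from_db(account_id: str) -> Optional[dict]:
    fake_db = {"42": {"id": "42", "plan": "enterprise"}}  # stand-in for a real query
    return fake_db.get(account_id)

@app.get("/accounts/{account_id}")
def get_account(account_id: str) -> dict:
    key = f"account:{account_id}"
    cached = cache.get(key)
    if cached:
        return json.loads(cached)                      # cache hit
    account = load_account_from_db(account_id)
    if account is None:
        raise HTTPException(status_code=404, detail="account not found")
    cache.setex(key, 300, json.dumps(account))         # cache for 5 minutes
    return account
```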
How to Apply
📩 careers@darwix.ai
Please include:
Updated resume
GitHub profile (optional but preferred)
2–3 lines about why you're excited to join Darwix AI as a Lead Backend Developer
Join Us at Darwix AI – Build the AI Future for Revenue Teams, Globally!
#LeadBackendDeveloper #PythonEngineer #MicroservicesArchitecture #BackendEngineering #FastAPI #DarwixAI #AIStartup #TechCareers
Posted 3 days ago
8.0 years
0 Lacs
Gurugram, Haryana, India
On-site
🧠 Job Title: Senior Machine Learning Engineer Company : Darwix AI Location : Gurgaon (On-site) Type : Full-Time Experience : 4–8 years Education : B.Tech / M.Tech / Ph.D. in Computer Science, Machine Learning, Artificial Intelligence, or related fields 🚀 About Darwix AI Darwix AI is India's fastest-growing GenAI SaaS startup, building real-time conversational intelligence and agent-assist platforms that supercharge omnichannel enterprise sales teams across India, MENA, and Southeast Asia. Our mission is to redefine how revenue teams operate by using Generative AI, LLMs, Voice AI , and deep analytics to deliver better conversations, faster deal cycles, and consistent growth. Our flagship platform, Transform+ , analyzes millions of hours of sales conversations, gives live nudges, builds AI-powered sales content, and enables revenue teams to become truly data-driven — in real time. We’re backed by marquee investors, industry veterans, and AI experts, and we’re expanding fast. As a Senior Machine Learning Engineer , you will play a pivotal role in designing and deploying intelligent ML systems that power every layer of this platform — from speech-to-text, diarization, vector search, and summarization to recommendation engines and personalized insights. 🎯 Role Overview This is a high-impact, high-ownership role for someone who lives and breathes data, models, and real-world machine learning. You will design, train, fine-tune, deploy, and optimize ML models across various domains — speech, NLP, tabular, and ranking. Your work will directly power critical product features: from personalized agent nudges and conversation scoring to lead scoring, smart recommendations, and retrieval-augmented generation (RAG) pipelines. You’ll be the bridge between data science, engineering, and product — converting ideas into models, and models into production-scale systems with tangible business value. 🧪 Key Responsibilities🔬 1. Model Design, Training, and Optimization Develop and fine-tune machine learning models using structured, unstructured, and semi-structured data sources. Work with models across domains: text classification, speech transcription, named entity recognition, topic modeling, summarization, time series, and recommendation systems. Explore and implement transformer architectures, BERT-style encoders, Siamese networks, and retrieval-based models. 📊 2. Data Engineering & Feature Extraction Build robust ETL pipelines to clean, label, and enrich data for supervised and unsupervised learning tasks. Work with multimodal inputs — audio, text, metadata — and build smart representations for downstream tasks. Automate data collection from APIs, CRMs, sales transcripts, and call logs. ⚙️ 3. Productionizing ML Pipelines Package and deploy models in scalable APIs (using FastAPI, Flask, or similar frameworks). Work closely with DevOps to containerize and orchestrate ML workflows using Docker, Kubernetes, or CI/CD pipelines. Ensure production readiness: logging, monitoring, rollback, and fail-safes. 📈 4. Experimentation & Evaluation Design rigorous experiments using A/B tests, offline metrics, and post-deployment feedback loops. Continuously optimize model performance (latency, accuracy, precision-recall trade-offs). Implement drift detection and re-training pipelines for models in production. 🔁 5. Collaboration with Product & Engineering Translate business problems into ML problems and align modeling goals with user outcomes. 
Partner with product managers, AI researchers, data annotators, and frontend/backend engineers to build and launch features. Contribute to the product roadmap with ML-driven ideas and prototypes. 🛠️ 6. Innovation & Technical Leadership Evaluate open-source and proprietary LLM APIs, AutoML frameworks, vector databases, and model inference techniques. Drive innovation in voice-to-insight systems (ASR + Diarization + NLP). Mentor junior engineers and contribute to best practices in ML development and deployment. 🧰 Tech Stack🔧 Languages & Frameworks Python (core), SQL, Bash PyTorch, TensorFlow, HuggingFace, scikit-learn, XGBoost, LightGBM 🧠 ML & AI Ecosystem Transformers, RNNs, CNNs, CRFs BERT, RoBERTa, GPT-style models OpenAI API, Cohere, LLaMA, Mistral, Anthropic Claude FAISS, Pinecone, Qdrant, LlamaIndex ☁️ Deployment & Infrastructure Docker, Kubernetes, GitHub Actions, Jenkins AWS (EC2, Lambda, S3, SageMaker), GCP, Azure Redis, PostgreSQL, MongoDB 📊 Monitoring & Experimentation MLflow, Weights & Biases, TensorBoard, Prometheus, Grafana 👨💼 Qualifications🎓 Education Bachelor’s or Master’s degree in CS, AI, Statistics, or related quantitative disciplines. Certifications in advanced ML, data science, or AI are a plus. 🧑💻 Experience 4–8 years of hands-on experience in applied machine learning. Demonstrated success in deploying models to production at scale. Deep familiarity with transformer-based architectures and model evaluation. ✅ You’ll Excel In This Role If You… Thrive on solving end-to-end ML problems — not just notebooks, but deployment, testing, and iteration. Obsess over clean, maintainable, reusable code and pipelines. Think from first principles and challenge model assumptions when they don’t work. Are deeply curious and have built multiple projects just because you wanted to know how something works. Are comfortable working with ambiguity, fast timelines, and real-time data challenges. Want to build AI products that get used by real people and drive revenue outcomes — not just vanity demos. 💼 What You’ll Get at Darwix AI Work with some of the brightest minds in AI , product, and design. Solve AI problems that push the boundaries of real-time, voice-first, multilingual enterprise use cases. Direct mentorship from senior architects and AI scientists. Competitive compensation (₹30L–₹45L CTC) + ESOPs + rapid growth trajectory. Opportunity to shape the future of a global-first AI startup built from India. Hands-on experience with the most advanced tech stack in applied ML and production AI. Front-row seat to a generational company that is redefining enterprise AI. 📩 How to Apply Ready to build with us? Send your resume, GitHub/portfolio, and a short write-up on: “What’s the most interesting ML system you’ve built — and what made it work?” Email: people@darwix.ai Subject: Senior ML Engineer – Application 🔐 Final Notes We value speed, honesty, and humility. We ship fast, fail fast, and learn even faster. This role is designed for high-agency, hands-on ML engineers who want to make a difference — not just write code. If you’re looking for a role where you own real impact , push technical boundaries, and work with a team that’s as obsessed with AI as you are — then Darwix AI is the place for you. Darwix AI – GenAI for Revenue Teams. Built from India, for the World.
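The responsibilities above include drift detection for models in production. A common generic approach is the population stability index (PSI) over a model's score distribution; the sketch below uses synthetic data, and the cutoffs quoted in the comments are rules of thumb rather than Darwix AI policy.

```python
# Sketch: population stability index (PSI) between training-time and live score distributions.
# The bucket count and the cutoffs mentioned in the comments are common rules of thumb.
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, buckets: int = 10) -> float:
    """Higher PSI means the live distribution has drifted further from the reference."""
    edges = np.quantile(expected, np.linspace(0, 1, buckets + 1))
    actual = np.clip(actual, edges[0], edges[-1])   # fold out-of-range live values into edge bins
    exp_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    act_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    exp_pct = np.clip(exp_pct, 1e-6, None)          # avoid log(0)
    act_pct = np.clip(act_pct, 1e-6, None)
    return float(np.sum((act_pct - exp_pct) * np.log(act_pct / exp_pct)))

rng = np.random.default_rng(0)
train_scores = rng.beta(2, 5, size=5_000)           # reference scores captured at training time
live_scores = rng.beta(2.6, 4.2, size=5_000)        # slightly shifted live traffic

print(f"PSI = {psi(train_scores, live_scores):.3f}")  # ~0.1 "watch", ~0.25 "investigate/retrain"
```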
Posted 3 days ago
10.0 years
0 Lacs
Gurugram, Haryana, India
On-site
🚀 Job Title: Engineering Lead Company: Darwix AI Location: Gurgaon (On-site) Type: Full-Time Experience: 5–10 Years Compensation: Competitive + Performance-based incentives + Meaningful ESOPs 🧠 About Darwix AI Darwix AI is one of India’s fastest-growing AI startups, building the future of enterprise revenue intelligence. We offer a GenAI-powered conversational intelligence and real-time agent assist suite that transforms how large sales teams interact, close deals, and scale operations. We’re already live with enterprise clients across India, the UAE, and Southeast Asia , and our platform enables multilingual speech-to-text, AI-driven nudges, and contextual conversation coaching—backed by our proprietary LLMs and cutting-edge voice infrastructure. With backing from top-tier VCs and over 30 angel investors, we’re now hiring an Engineering Lead who can architect, own, and scale the core engineering stack as we prepare for 10x growth. 🌟 Role Overview As the Engineering Lead at Darwix AI , you’ll take ownership of our platform architecture, product delivery, and engineering quality across the board. You’ll work closely with the founders, product managers, and the AI team to convert fast-moving product ideas into scalable features. You will: Lead backend and full-stack engineers across microservices, APIs, and real-time pipelines Architect scalable systems for AI/LLM deployments Drive code quality, maintainability, and engineering velocity This is a hands-on, player-coach role —perfect for someone who loves building but is also excited about mentoring and growing a technical team. 🎯 Key Responsibilities🛠️ Technical Leadership Own technical architecture across backend, frontend, and DevOps stacks Translate product roadmaps into high-performance, production-ready systems Drive high-quality code reviews, testing practices, and performance optimization Make critical system-level decisions around scalability, security, and reliability 🚀 Feature Delivery Work with the product and AI teams to build new features around speech recognition, diarization, real-time coaching, and analytics dashboards Build and maintain backend services for data ingestion, processing, and retrieval from Vector DBs, MySQL, and MongoDB Create clean, reusable APIs (REST & WebSocket) that power our web-based agent dashboards 🧱 System Architecture Refactor monoliths into microservice-based architecture Optimize real-time data pipelines with Redis, Kafka, and async queues Implement serverless modules using AWS Lambda, Docker containers, and CI/CD pipelines 🧑🏫 Mentorship & Team Building Lead a growing team of engineers—guide on architecture, code design, and performance tuning Foster a culture of ownership, documentation, and continuous learning Mentor junior developers, review PRs, and set up internal coding best practices 🔄 Collaboration Act as the key technical liaison between Product, Design, AI/ML, and DevOps teams Work directly with founders on roadmap planning, delivery tracking, and go-live readiness Contribute actively to investor tech discussions, client onboarding, and stakeholder calls ⚙️ Our Tech Stack Languages: Python (FastAPI, Django), PHP (legacy support), JavaScript, TypeScript Frontend: HTML, CSS, Bootstrap, Mustache templates; (React.js/Next.js optional) AI/ML Integration: LangChain, Whisper, RAG pipelines, Transformers, Deepgram, OpenAI APIs Databases: MySQL, PostgreSQL, MongoDB, Redis, Pinecone/FAISS (Vector DBs) Cloud & Infra: AWS EC2, S3, Lambda, CloudWatch, Docker, GitHub Actions, Nginx DevOps: Git, Docker, 
CI/CD pipelines, Jenkins/GitHub Actions, load testing Tools: Jira, Notion, Slack, Postman, Swagger 🧑💼 Who You Are 5–10 years of professional experience in backend/full-stack development Proven experience leading engineering projects or mentoring junior devs Comfortable working in high-growth B2B SaaS startups or product-first orgs Deep expertise in one or more backend frameworks (Django, FastAPI, Laravel, Flask) Experience working with AI products or integrating APIs from OpenAI, Deepgram, HuggingFace is a huge plus Strong understanding of system design, DB normalization, caching strategies, and latency optimization Bonus: exposure to working with voice pipelines (STT/ASR), NLP models, or real-time analytics 📌 Qualities We’re Looking For Builder-first mindset – you love launching features fast and scaling them well Execution speed – you move with urgency but don’t break things Hands-on leadership – you guide people by writing code, not just processes Problem-solver – when things break, you own the fix and the root cause Startup hunger – you thrive on chaos, ambiguity, and shipping weekly 🎁 What We Offer High Ownership : Directly shape the product and its architecture from the ground up Startup Velocity : Ship fast, learn fast, and push boundaries Founding Engineer Exposure : Work alongside IIT-IIM-BITS founders with full transparency Compensation : Competitive salary + meaningful equity + performance-based incentives Career Growth : Move into an EM/CTO-level role as the org scales Tech Leadership : Own features end-to-end—from spec to deployment 🧠 Final Note This is not just another engineering role. This is your chance to: Own the entire backend for a GenAI product serving global enterprise clients Lead technical decisions that define our future infrastructure Join the leadership team at a startup that’s shipping faster than anyone else in the category If you're ready to build a product with 10x potential, join a high-output team, and be the reason why the tech doesn’t break at scale , this role is for you. 📩 How to Apply Send your resume to people@darwix.ai with the subject line: “Application – Engineering Lead – [Your Name]” Attach: Your latest CV or LinkedIn profile GitHub/portfolio link (if available) A short note (3–5 lines) on why you're excited about Darwix AI and this role
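To ground the async queues and real-time pipelines mentioned above, here is a standard-library-only sketch of a bounded producer/consumer pipeline; the worker count, queue size, and simulated processing step are arbitrary illustrative choices.

```python
# Sketch: asyncio producer/consumer pipeline with bounded back-pressure.
# Worker count, queue size, and the simulated processing step are illustrative choices.
import asyncio
import random

async def producer(queue: asyncio.Queue, n_events: int) -> None:
    for i in range(n_events):
        await queue.put({"call_id": i, "chunk": f"audio-{i}"})  # blocks when the queue is full
    await queue.put(None)                                       # sentinel: no more work

async def worker(name: str, queue: asyncio.Queue) -> None:
    while True:
        event = await queue.get()
        if event is None:
            await queue.put(None)               # let the other workers see the sentinel too
            break
        await asyncio.sleep(random.uniform(0.01, 0.05))  # stand-in for real processing
        print(f"{name} processed call {event['call_id']}")

async def main() -> None:
    queue: asyncio.Queue = asyncio.Queue(maxsize=100)
    await asyncio.gather(
        producer(queue, n_events=10),
        worker("w1", queue),
        worker("w2", queue),
    )

asyncio.run(main())
```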
Posted 3 days ago
1.0 years
0 Lacs
Greater Nashik Area
On-site
Dreaming big is in our DNA. It’s who we are as a company. It’s our culture. It’s our heritage. And more than ever, it’s our future. A future where we’re always looking forward. Always serving up new ways to meet life’s moments. A future where we keep dreaming bigger. We look for people with passion, talent, and curiosity, and provide them with the teammates, resources and opportunities to unleash their full potential. The power we create together – when we combine your strengths with ours – is unstoppable. Are you ready to join a team that dreams as big as you do? AB InBev GCC was incorporated in 2014 as a strategic partner for Anheuser-Busch InBev. The center leverages the power of data and analytics to drive growth for critical business functions such as operations, finance, people, and technology. The teams are transforming Operations through Tech and Analytics. Do You Dream Big? We Need You. Job Description Job Title: Junior Data Scientist Location: Bangalore Reporting to: Senior Manager – Analytics Purpose of the role The Global GenAI Team at Anheuser-Busch InBev (AB InBev) is tasked with constructing competitive solutions utilizing GenAI techniques. These solutions aim to extract contextual insights and meaningful information from our enterprise data assets. The derived data-driven insights play a pivotal role in empowering our business users to make well-informed decisions regarding their respective products. In the role of a Machine Learning Engineer (MLE), you will operate at the intersection of: LLM-based frameworks, tools, and technologies Cloud-native technologies and solutions Microservices-based software architecture and design patterns As an additional responsibility, you will be involved in the complete development cycle of new product features, encompassing tasks such as the development and deployment of new models integrated into production systems. Furthermore, you will have the opportunity to critically assess and influence the product engineering, design, architecture, and technology stack across multiple products, extending beyond your immediate focus. 
Key tasks & accountabilities
Large Language Models (LLM):
Experience with LangChain, LangGraph
Proficiency in building agentic patterns like ReAct, ReWoo, LLMCompiler
Multi-modal Retrieval-Augmented Generation (RAG):
Expertise in multi-modal AI systems (text, images, audio, video)
Designing and optimizing chunking strategies and clustering for large data processing
Streaming & Real-time Processing:
Experience in audio/video streaming and real-time data pipelines
Low-latency inference and deployment architectures
NL2SQL:
Natural language-driven SQL generation for databases
Experience with natural language interfaces to databases and query optimization
API Development:
Building scalable APIs with FastAPI for AI model serving
Containerization & Orchestration:
Proficient with Docker for containerized AI services
Experience with orchestration tools for deploying and managing services
Data Processing & Pipelines:
Experience with chunking strategies for efficient document processing
Building data pipelines to handle large-scale data for AI model training and inference
AI Frameworks & Tools:
Experience with AI/ML frameworks like TensorFlow, PyTorch
Proficiency in LangChain, LangGraph, and other LLM-related technologies
Prompt Engineering:
Expertise in advanced prompting techniques like Chain of Thought (CoT) prompting, LLM Judge, and self-reflection prompting
Experience with prompt compression and optimization using tools like LLMLingua, AdaFlow, TextGrad, and DSPy
Strong understanding of context window management and optimizing prompts for performance and efficiency

Qualifications, Experience, Skills
Level of educational attainment required (one or more of the following): Bachelor's or master's degree in Computer Science, Engineering, or a related field.
Previous work experience required: Proven experience of 1+ years in developing and deploying applications utilizing Azure OpenAI and Redis as a vector database.
Technical skills required:
Solid understanding of language model technologies, including LangChain, the OpenAI Python SDK, LlamaIndex, Ollama, etc.
Proficiency in implementing and optimizing machine learning models for natural language processing.
Experience with observability tools such as MLflow, LangSmith, Langfuse, Weights & Biases, etc.
Strong programming skills in languages such as Python and proficiency in relevant frameworks.
Familiarity with containerization and orchestration tools (e.g., Docker, Kubernetes).
And above all of this, an undying love for beer! We dream big to create a future with more cheer.
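As a concrete, hedged example of the NL2SQL work listed above: grounding the prompt in an explicit table schema so the model can only reference real columns. The schema, the question, and the downstream LLM call are invented placeholders.

```python
# Sketch: build a schema-grounded NL2SQL prompt; the LLM call itself is left abstract.
# Table names, columns, and the example question are invented placeholders.
SCHEMA = {
    "sales": ["order_id", "brand", "volume_hl", "order_date", "country"],
    "brands": ["brand", "category", "launch_year"],
}

def build_nl2sql_prompt(question: str, schema: dict[str, list[str]]) -> str:
    tables = "\n".join(f"- {table}({', '.join(cols)})" for table, cols in schema.items())
    return (
        "You translate questions into ANSI SQL.\n"
        "Use only these tables and columns:\n"
        f"{tables}\n"
        "Return a single SELECT statement and nothing else.\n\n"
        f"Question: {question}\nSQL:"
    )

prompt = build_nl2sql_prompt(
    "Total volume by brand in India for 2024, highest first", SCHEMA
)
print(prompt)  # pass `prompt` to the chosen LLM, then validate the SQL before executing it
```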
Posted 3 days ago
5.0 - 7.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Explore your next opportunity with an organization that ranks among the world's 500 largest companies. Envision innovative opportunities, experience our rewarding culture, and work with talented teams that push you to grow every day. We know what it takes to lead UPS into the future: passionate people with a unique combination of skills. If you have the qualities, motivation, and drive, or the leadership to direct teams, there are roles suited to your aspirations and your skills, today and tomorrow.

Job Description
Job Title: Intermediate Data Developer – Azure ADF and Databricks
Experience Range: 5-7 Years
Location: Chennai, Hybrid
Employment Type: Full-Time

About UPS
UPS is a global leader in logistics, offering a broad range of solutions that include transportation, distribution, supply chain management, and e-commerce. Founded in 1907, UPS operates in over 220 countries and territories, delivering packages and providing specialized services worldwide. Our mission is to enable commerce by connecting people, places, and businesses, with a strong focus on sustainability and innovation.

About UPS Supply Chain Symphony™
The UPS Supply Chain Symphony™ platform is a cloud-based solution that seamlessly integrates key supply chain components, including shipping, warehousing, and inventory management, into a unified platform. This solution empowers businesses by offering enhanced visibility, advanced analytics, and customizable dashboards to streamline global supply chain operations and decision-making.

About The Role
We are seeking an experienced Senior Data Developer to join our data engineering team responsible for building and maintaining complex data solutions using Azure Data Factory (ADF), Azure Databricks, and Cosmos DB. The role involves designing and developing scalable data pipelines, implementing data transformations, and ensuring high data quality and performance. The Senior Data Developer will work closely with data architects, testers, and analysts to deliver robust data solutions that support strategic business initiatives. The ideal candidate should possess deep expertise in big data technologies, data integration, and cloud-native data engineering solutions on Microsoft Azure. This role also involves coaching junior developers, conducting code reviews, and driving strategic improvements in data architecture and design patterns.

Key Responsibilities
Data Solution Design and Development: Design and develop scalable and high-performance data pipelines using Azure Data Factory (ADF). Implement data transformations and processing using Azure Databricks. Develop and maintain NoSQL data models and queries in Cosmos DB. Optimize data pipelines for performance, scalability, and cost efficiency.
Data Integration and Architecture: Integrate structured and unstructured data from diverse data sources. Collaborate with data architects to design end-to-end data flows and system integrations. Implement data security, governance, and compliance standards.
Performance Tuning and Optimization: Monitor and tune data pipelines and processing jobs for performance and cost efficiency. Optimize data storage and retrieval strategies for Azure SQL and Cosmos DB.
Collaboration and Mentoring: Collaborate with cross-functional teams including data testers, architects, and business analysts. Conduct code reviews and provide constructive feedback to improve code quality. Mentor junior developers, fostering best practices in data engineering and cloud development.

Primary Skills
Data Engineering: Azure Data Factory (ADF), Azure Databricks.
Cloud Platform: Microsoft Azure (Data Lake Storage, Cosmos DB).
Data Modeling: NoSQL data modeling, data warehousing concepts.
Performance Optimization: Data pipeline performance tuning and cost optimization.
Programming Languages: Python, SQL, PySpark.

Secondary Skills
DevOps and CI/CD: Azure DevOps, CI/CD pipeline design and automation.
Security and Compliance: Implementing data security and governance standards.
Agile Methodologies: Experience in Agile/Scrum environments.
Leadership and Mentoring: Strong communication and coaching skills for team collaboration.

Soft Skills
Strong problem-solving abilities and attention to detail. Excellent communication skills, both verbal and written. Effective time management and organizational capabilities. Ability to work independently and within a collaborative team environment. Strong interpersonal skills to engage with cross-functional teams.

Educational Qualifications
Bachelor's degree in Computer Science, Engineering, Information Technology, or a related field. Relevant certifications in Azure and Data Engineering, such as: Microsoft Certified: Azure Data Engineer Associate; Microsoft Certified: Azure Solutions Architect Expert; Databricks Certified Data Engineer Associate or Professional.

About The Team
As a Senior Data Developer, you will be working with a dynamic, cross-functional team that includes developers, product managers, and quality engineers. You will be a key player in the quality assurance process, helping shape testing strategies and ensuring the delivery of high-quality web applications.

Contract Type: Permanent
At UPS, equal opportunity, fair treatment, and an inclusive work environment are key values to which we are committed.
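To illustrate the Databricks transformation work described in this role, the sketch below shows a small PySpark rollup job; the paths, columns, and business rule are invented for the example and are not UPS code.

```python
# Sketch of a PySpark rollup of the kind run as an Azure Databricks job.
# Paths, column names, and the business rule are invented for illustration.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("shipment-daily-rollup").getOrCreate()

shipments = spark.read.parquet("/mnt/raw/shipments/")          # hypothetical mount point

daily = (
    shipments
    .filter(F.col("status") == "DELIVERED")
    .withColumn("ship_date", F.to_date("delivered_at"))
    .groupBy("ship_date", "origin_country")
    .agg(
        F.count("*").alias("delivered_count"),
        F.avg("transit_hours").alias("avg_transit_hours"),
    )
)

daily.write.mode("overwrite").partitionBy("ship_date").parquet("/mnt/curated/daily_rollup/")
```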
Posted 3 days ago
5.0 years
5 - 9 Lacs
Calicut
On-site
We are excited to share a fantastic opportunity for the AI Lead / Sr. AI-ML Engineer position at Gritstone Technologies. We believe your skills and experience could be a perfect match for this role, and we would love for you to explore this opportunity with us.

Responsibilities:
Design and implement scalable, high-performance AI/ML architectures in Python tailored for real-time and batch processing use cases.
Lead the development of robust, end-to-end AI pipelines, including advanced data preprocessing, feature engineering, model development, and deployment.
Define and drive the integration of AI solutions across cloud-native platforms (AWS, Azure, GCP) with optimized cost-performance trade-offs.
Architect and deploy multimodal AI systems, leveraging advanced NLP (e.g., LLMs, OpenAI-based customizations, scanned invoice data extraction), computer vision (e.g., inpainting, super-resolution scaling, video-based avatar generation), and generative AI technologies (e.g., video and audio generation).
Integrate domain-specific AI solutions, such as reinforcement learning and self-supervised learning models.
Implement distributed training and inference pipelines using state-of-the-art frameworks.
Drive model optimization through quantization, pruning, sparsity techniques, and mixed-precision training to maximize performance across GPU hardware.
Develop scalable solutions using large vision-language models (VLMs) and large language models (LLMs).
Define and implement MLOps practices for version control, CI/CD pipelines, and automated model deployment using tools like Kubernetes, Docker, Kubeflow, and FastAPI.
Enable seamless integration of databases (SQL Server, MongoDB, NoSQL) with AI workflows.
Drive cutting-edge research in AI/ML, including advancements in RLHF, retrieval-augmented generation (RAG), and multimodal knowledge graphs.
Experiment with emerging generative technologies, such as diffusion models for video generation and neural audio synthesis.
Collaborate with cross-functional stakeholders to deliver AI-driven business solutions aligned with organizational goals.

Experience: 5+ years
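One of the optimization techniques named above is quantization. The sketch below applies post-training dynamic INT8 quantization to a toy PyTorch model; the model and the simple accuracy check are illustrative only, and real gains depend on the architecture and target hardware.

```python
# Sketch: post-training dynamic quantization of Linear layers in PyTorch.
# The toy model is a stand-in; real gains depend on the architecture and target hardware.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(512, 256),
    nn.ReLU(),
    nn.Linear(256, 10),
).eval()

quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8   # int8 weights for Linear layers
)

x = torch.randn(1, 512)
with torch.no_grad():
    baseline, low_bit = model(x), quantized(x)
print("max abs difference:", (baseline - low_bit).abs().max().item())
```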
Posted 3 days ago
12.0 - 16.0 years
5 - 8 Lacs
Hyderābād
Remote
The AIN QA Technical Resource Team Senior Manager will play a critical role in advancing Quality Assurance initiatives across the Quality Operations Network, with a particular focus on leading and directing a team of quality professionals responsible for the support of Management Review, Inspections and Compliance, and Technical Writing & Data Analytics. The senior manager will use strategic planning and prioritization to support the collective requirements of the Quality organization alongside the individual needs and timelines of the sites. The individual will be required to work from our office located in Hyderabad, India (Amgen India-AIN). The candidate will also lead the remote support from AIN to Amgen sites across multiple time zones globally. This candidate will primarily work during regular working hours (9 AM – 6 PM local time) to enable the business in delivering Amgen’s mission to serve patients and may lead a shift-based team that provides coverage in support of the Amgen network across multiple time zones. The candidate may need to work outside of his/her routine workday to support business needs and will be responsible for determining the same for their staff. As Senior Manager in the Quality Assurance organization, you’re in a leadership position with responsibilities to supervise and mentor staff. As a leader, you will focus your efforts on the following functions in support of global Quality Assurance operations. Focus Areas: This role provides operational support, technical leadership, and cross-functional collaboration to ensure compliance, continuous improvement, and data-driven decision making in support of the Quality Management System (QMS). Oversight of the AIN-based Quality Assurance Technical Resource team. Collaboration with the global quality leaders and business process owner(s) to resolve issues encountered by the team. Management of request prioritization in alignment with QA network needs. Support staff training, career development, and performance management of the team across all three shifts. Responsible for ensuring compliance with safety guidelines, cGMPs, and other applicable regulatory requirements. Champion process improvements to increase efficiency and productivity. Assign workload appropriately and strategically based on required interactions with sites in the Amgen network across multiple time zones. The following are some examples of tasks for the position: Support of management review (MR) at each Amgen site through collaboration with Amgen leadership and coordination of the AIN technical support team to provide MR logistical support, metrics/KPIs, meeting agenda/content, site-level and cross-site trend analysis, and meeting facilitation. Leading the team responsible for providing readiness and response for internal and external inspections, including generation of pre-inspection documents such as deviation lists, change controls, and supporting evidence. Actively contributing during inspections by managing the team to provide timely responses to information requests, facilitating electronic document retrieval, and preparing responses in collaboration with subject matter experts. Leading a technical writing and data analytics workstream that includes responsibility for periodic quality trend report authorship, Site Master File authoring, product and process monitoring deviation summary reports, and leading quality risk management processes.
Preferred Qualifications Strong project management skills and experience supervising professionals in a Quality organization working with cross functional and global stakeholders across multiple time zones Working knowledge of cGMP regulations Excellent written and verbal communication skills, ability to work in a team environment and build relationships with partners Track record of building and maintaining a high performing team Experience with various Quality Systems and applications Strong leadership and negotiation skills with a demonstrated ability to influence others Demonstrated innovative thinking and ability to transform work organizations Demonstrated ability to navigate through ambiguity and provide structured problem solving Demonstrated ability to deliver right the first time on schedule in accordance with established Service Level Agreements Demonstrated skills in staff motivation, coaching/mentoring and professional development Basic Qualifications and Experience: Master’s degree with 12-16 years of Pharma and Biotech commercial or clinical manufacturing Quality experience.
Posted 3 days ago
8.0 - 13.0 years
4 - 9 Lacs
Hyderābād
Remote
QA Technical Specialist The AIN QA Technical Specialist plays a critical role in advancing Quality Assurance initiatives across the Quality Operations Network, with a particular focus on Management Review, Inspections and Compliance, and Technical Writing & Data Analytics. This role provides operational support, technical leadership, and cross-functional collaboration to ensure compliance, continuous improvement, and data-driven decision making in support of the Quality Management System (QMS). The position will be responsible for tasks including the key responsibilities documented below and other technical quality-related job functions. This candidate will primarily work during regular working hours (9 AM – 6 PM local time) to enable the business in delivering Amgen’s mission to serve patients and may lead a shift-based team that provides coverage in support of the Amgen network across multiple time zones. The candidate may need to work outside of his/her routine workday to support business needs and will be responsible for determining the same for their staff. The individual will be required to work from our office located in Hyderabad India (Amgen India-AIN). The candidate will also lead the remote support from AIN to Amgen sites globally. Key Responsibilities - Management Review Coordinate and manage all logistics related to Site Management Review, including compiling metrics, maintaining and updating Smartsheet trackers, and preparing content. Perform site-level and cross-site trend analysis (as applicable) using key quality metrics; identify trends and collaborate with site stakeholders to implement corrective and preventive actions (CAPA). Lead preparation of Management Review meetings, ensuring comprehensive data presentation creation, documentation of meeting minutes, and follow-up on action items. Inspections and Compliance Support readiness and response for internal and external inspections, including generation of pre-inspection documents such as deviation lists, change controls, and supporting evidence. Actively contribute during inspections by managing information requests, facilitating document electronic retrieval, and preparing responses in collaboration with subject matter experts. Lead Site Master File updates by coordinating content input from cross-functional stakeholders, drafting revisions, and managing review and approval workflows. Technical Writing and Data Analytics Lead authoring and workflow coordination for periodic quality trend reports and related documentation. Generate deviation summary reports to support product and process monitoring efforts, ensuring accuracy and consistency with cGMP standards. Drive quality risk assessments, providing technical leadership in risk identification, analysis, and mitigation planning in alignment with standards. Preferred Qualifications Demonstrated experience in a GMP-compliant environment with working knowledge of inspection protocols, site audits, and quality risk management principles. Proficiency in technical writing and data visualization tools; experience with Smartsheet, Tableau, or equivalent platforms preferred. Strong analytical skills with the ability to interpret data trends and drive improvements based on quality insights. Familiarity with electronic quality systems (e.g., Veeva, TrackWise, SAP-QM, LIMS) and documentation practices. Excellent verbal and written communication skills, including experience presenting to senior leaders. 
Proven ability to lead and collaborate within cross-functional teams in a dynamic, fast-paced setting. Core Competencies Leadership in Quality Governance (e.g., Management Review) Inspection Readiness and Compliance Assurance Quality Data Visualization, Interpretation and Analytics Technical Document Drafting and Workflow Ownership Cross-Functional Stakeholder Engagement Continuous Improvement Mindset Basic Qualifications and Experience: Master’s degree with 8-13 years of Pharma and Biotech commercial or clinical manufacturing Quality experience.
Posted 3 days ago
35.0 years
3 - 7 Lacs
Hyderābād
On-site
Description Company Overview: When it comes to IT solution providers, there are a lot of choices. But when it comes to providers with innovative and differentiating end-to-end service offerings, there’s really only one: Zones – First Choice for IT.™ Zones is a Global Solution Provider of end-to-end IT solutions with an unmatched supply chain. Positioned to be the IT partner you need, Zones, a Minority Business Enterprise (MBE) in business for over 35 years, specializes in Digital Workplace, Cloud & Data Center, Networking, Security, and Managed/Professional/Staffing services. Operating in more than 120 countries, leveraging a robust portfolio, and utilizing the highest certification levels from key partners, including Microsoft, Apple, Cisco, Lenovo, Adobe, and more, Zones has mastered the science of building digital infrastructures that change the way business does business, ensuring that whatever they need, they can Consider IT Done. Follow Zones, LLC on Twitter @Zones, and on LinkedIn and Facebook. Position Overview: The primary focus of this position is to design, develop, and maintain robust data pipelines using Azure Data Factory and to implement and manage ETL processes that ensure efficient data flow and transformation. What you’ll do as a BI Developer Lead: Design, develop, and maintain robust data pipelines using Azure Data Factory. Implement and manage ETL processes to ensure efficient data flow and transformation. Develop and maintain data models and data warehouses using Azure SQL Database and Azure Synapse Analytics. Create and manage Power BI reports and dashboards to provide actionable insights to stakeholders. Ensure data quality, integrity, and security across all data systems. Collaborate with cross-functional teams to understand data requirements and deliver solutions. Optimize data storage and retrieval processes for performance and cost efficiency. Monitor and troubleshoot data pipelines and workflows to ensure smooth operations. Create and maintain tabular models for efficient data analysis and reporting. Stay updated with the latest Azure services and best practices to continuously improve data infrastructure. What will you bring to the team: Bachelor’s degree in Computer Science, Information Technology, or a related field. Certification in Azure Data Engineer or related Azure certifications will be an added advantage. Experience with machine learning and AI services on Azure will be an added advantage. Proven experience in designing and maintaining data pipelines using Azure Data Factory. Strong proficiency in SQL and experience with Azure SQL Database. Hands-on experience with Azure Synapse Analytics and Azure Data Lake Storage. Proficiency in creating and managing Power BI reports and dashboards. Knowledge of Azure DevOps for CI/CD pipeline implementation. Strong problem-solving skills and attention to detail. Excellent communication and collaboration skills. Knowledge of data governance and compliance standards. Zones offers a comprehensive Benefits package. While we’re committed to providing top-tier solutions, we’re just as committed to supporting our own teams. We offer a competitive compensation package where our team members are rewarded based on their performance and recognized for the value they bring to our business. Our team members enjoy a variety of comprehensive benefits, including Medical Insurance Coverage, Group Term Life and Personal Accident Cover to handle the uncertainties of life, and a flexible leave policy to balance their work and personal life.
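To make the data-retrieval side of the role above concrete, here is a minimal, hypothetical sketch of querying an Azure SQL Database from Python with pyodbc; the server, database, table, and credential placeholders are all assumptions, not details from the posting.

import pyodbc

# Connection details are placeholders; credentials would normally come from a secret store
conn_str = (
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=myserver.database.windows.net;"
    "DATABASE=sales_dw;"
    "UID=report_reader;PWD=<password>;"
    "Encrypt=yes;"
)

conn = pyodbc.connect(conn_str)
cursor = conn.cursor()
cursor.execute(
    "SELECT region, SUM(amount) AS total "
    "FROM fact_sales GROUP BY region ORDER BY total DESC"
)
for region, total in cursor.fetchall():
    print(region, total)
conn.close()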
At Zones, work is more than a job – it's an exciting career immersed in an inventive, collaborative culture. If you’re interested in working on the cutting edge of IT innovation, sales, engineering, operations, administration, and more, Zones is the place for you! All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, veteran status, or disability.
Posted 3 days ago
2.0 years
8 - 10 Lacs
Hyderābād
On-site
About the job Our Team: Sanofi Global Hub is an internal Sanofi resource organization based in India and is set up to centralize processes and activities to support the Specialty Care, Vaccines, General Medicines, CHC, CMO, and R&D, Data & Digital functions. Sanofi Global Hub strives to be a strategic and functional partner for tactical deliveries to Medical, HEVA, and Commercial organizations in Sanofi, globally. Main responsibilities: Write and/or edit, under guidance, high-quality safety documents, the medical section of the Periodic Benefit-Risk Evaluation Report, medical sections of the Addendum to Clinical Overview, Disease and Product ID Cards, product alerts, and trial transparency documents. Delivery of high-quality medical documents on time and in compliance with internal and external standards and guidelines. Essential job duties and responsibilities: 1) Participate, with support, in the planning of analysis and data presentation to be used, initially in conjunction with the mentoring medical writer. 2) Develop and maintain TA expertise. 3) Collaborate effectively with Scientific Communication global or local teams, Medical Regulatory Writing global or local teams, Pharmacovigilance teams, Regulatory teams, and Corporate Affairs teams based on the documents assigned. People: 1) Maintain effective relationships with the end stakeholders (medical scientific community) within the allocated global business unit and product, with an end objective to develop medical regulatory content as per requirement. 2) Interact effectively with stakeholders in medical and pharmacovigilance departments. 3) Constantly assist other medical regulatory writers in developing knowledge and sharing expertise. Performance: Provide deliverables (PBRER, ACO, Product and Disease ID Cards, managing Product Alerts, posting of trial information such as study protocols and amendments, study results, redacted documents, and lay summaries on websites such as CTG (ct.gov), EUCTR, EUDRACT) as per agreed timelines and quality. Process: 1) Author, review, and act as an expert in the field of medical regulatory writing and maintain the regulatory requirements for countries supported. 2) Assist the assigned medical team in conducting a comprehensive medical regulatory writing needs analysis. 3) Implement relevant elements of the medical regulatory plan and associated activities for the year identified for the region. 4) Work with selected vendors within the region to deliver the required deliverables as per the defined process. 5) Design an overall plan of action based on end-customer feedback and improve course content and delivery. 6) Prepare/review the stand-by statement and question and answer (SBS QA) document as part of managing Product Alerts. 7) Track postings, file, or archive material in relevant systems, and ensure audit and inspection readiness. 8) Remain abreast of the evolution of Sanofi Policy and Quality Documents. Stakeholders: 1) Work closely with Clinical/Medical teams in regions/areas to identify medical writing needs and assist in developing assigned deliverables. 2) Proactively liaise with the Clinical/Medical/Pharmacovigilance/Biostats/Legal/Regulatory/Corporate Affairs departments to prepare relevant and customized deliverables. About you Experience: >2 years of experience in regulatory writing for the pharmaceuticals/healthcare industry. Soft skills: Stakeholder management; vendor management; communication skills; and the ability to work independently and within a team environment.
Technical skills: As applicable, including but not limited to time and risk management skills, excellent technical (medical) editing and writing skills, data retrieval, interpretation of scientific data, medical literature screening, knowledge of ICH and GCP/GVP, the ability to summarize scientific information and edit text for specific audiences, and being well-versed with computer applications. Education: Advanced degree in life sciences/pharmacy/a similar discipline (PhD, Master’s or Bachelor’s in science, D Pharma, PharmD) or a medical degree (MBBS, BDS, BAMS, BHMS, MD). Languages: Excellent English language knowledge (reading, writing, and speaking).
Posted 3 days ago
4.0 years
25 - 30 Lacs
Coimbatore, Tamil Nadu, India
Remote
Experience: 4+ years Salary: INR 2500000-3000000 / year (based on experience) Expected Notice Period: 30 Days Shift: (GMT+05:30) Asia/Kolkata (IST) Opportunity Type: Remote Placement Type: Full-Time Permanent position (Payroll and Compliance to be managed by: Serenity) (*Note: This is a requirement for one of Uplers' clients - Serenity) What do you need for this opportunity? Must have skills required: Fintech, Next Js, React Js, web3, Nest.js, Online Marketplace Serenity is looking for a talented Web3 Front End Developer to design intuitive and visually appealing user interfaces for our blockchain-based applications. You will play a key role in ensuring our platforms deliver a seamless user experience while integrating with cutting-edge blockchain technologies for secure data storage and management. Responsibilities: Develop responsive and interactive user interfaces using HTML, CSS, and JavaScript frameworks. Implement UI designs with a focus on usability, accessibility, and performance. Integrate front-end applications with back-end APIs and blockchain services via Web3 libraries. Optimize applications for speed and scalability across devices and browsers. Collaborate with designers to translate wireframes and mockups into functional code. Ensure blockchain interactions (e.g., wallet connections, data retrieval) are user-friendly. Conduct code reviews and maintain clean, maintainable codebases. Required Skills: Bachelor’s degree in Computer Science, Design, or a related field (or equivalent experience). Proven experience as a Front End Developer or in a similar role. Expertise in HTML, CSS, and JavaScript/TypeScript, with experience in ReactJS, Next.js, and NestJS. Experience with Web3 libraries (e.g., Web3.js, ethers.js) for blockchain interaction. Strong understanding of UI/UX principles and responsive design. Ability to work collaboratively in a fast-paced environment. Excellent communication and problem-solving skills. Preferred Skills: Experience building front-ends for blockchain DApps or Web3 applications. Knowledge of CosmJS or other tools for Secret Network integration. Background in optimizing front-end performance for decentralized platforms. Passion for privacy-focused technologies and user-centric design. Interview Process: Technical Round 1, Assessment, Technical Round 2. How to apply for this opportunity? Step 1: Click on Apply and register or log in on our portal. Step 2: Complete the screening form and upload your updated resume. Step 3: Increase your chances of getting shortlisted and meet the client for the interview! About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
Posted 3 days ago
3.0 years
4 - 10 Lacs
Chennai
Remote
Your work days are brighter here. At Workday, it all began with a conversation over breakfast. When our founders met at a sunny California diner, they came up with an idea to revolutionize the enterprise software market. And when we began to rise, one thing that really set us apart was our culture, a culture driven by our value of putting our people first. And ever since, the happiness, development, and contribution of every Workmate is central to who we are. Our Workmates believe a healthy employee-centric, collaborative culture is the essential mix of ingredients for success in business. That’s why we look after our people, communities and the planet while still being profitable. Feel encouraged to shine, however that manifests: you don’t need to hide who you are. You can feel the energy and the passion; it's what makes us unique. Inspired to make a brighter work day for all and transform with us to the next stage of our growth journey? Bring your brightest version of you and have a brighter work day here. At Workday, we value our candidates’ privacy and data security. Workday will never ask candidates to apply to jobs through websites that are not Workday Careers. Please be aware of sites that may ask you to input your data in connection with a job posting that appears to be from Workday but is not. In addition, Workday will never ask candidates to pay a recruiting fee, or pay for consulting or coaching services, in order to apply for a job at Workday. About the Team If you thrive on tackling significant technical challenges, delivering scalable solutions for mission-critical platforms, and collaborating closely with world-class engineers, you will love being a part of our Technology Product Management team! You'll help to build the foundational services that power Workday's enterprise cloud, impacting millions of users globally. About the Role We’re looking for a Technical Product Manager who is deeply curious about complex distributed systems and has a track record of driving innovation within established platforms. Above all, we are seeking a Product Manager who excels at driving technical strategy, making astute trade-offs that balance innovation with system stability, and translating complex technical requirements into actionable, engineering-ready roadmaps. Experience with AWS and data storage and retrieval technologies like Apache Parquet and Apache Iceberg is a plus! If you are a natural collaborator and a great storyteller, capable of working seamlessly with senior engineering leaders and architects around the world, and love diving deep into the intricate details of distributed system design and implementation, we strongly encourage you to apply! About You Basic Qualifications: 3+ years of experience in technical product management.
A college degree in Computer Science or an equivalent technical degree, or at least 5 years of proven experience at a software company in product management or a similar role. Other Qualifications: Always brings data-informed arguments to the forefront with SQL and Python-based data analysis. Can get software developers to enthusiastically build on top of your product. Flexible and adaptable to change. Can design scalable, reliable, business-critical systems for large customers. Experience with distributed processing and scheduling; indexing and search technologies; devops-related initiatives to improve developer experience, automation, and operational stability; or system health and monitoring. Our Approach to Flexible Work: With Flex Work, we’re combining the best of both worlds: in-person time and remote. Our approach enables our teams to deepen connections, maintain a strong community, and do their best work. We know that flexibility can take shape in many ways, so rather than a number of required days in-office each week, we simply spend at least half (50%) of our time each quarter in the office or in the field with our customers, prospects, and partners (depending on role). This means you'll have the freedom to create a flexible schedule that caters to your business, team, and personal needs, while being intentional to make the most of time spent together. Those in our remote "home office" roles also have the opportunity to come together in our offices for important moments that matter. Are you being referred to one of our roles? If so, ask your connection at Workday about our Employee Referral process!
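As a small, hypothetical example of the SQL and Python-based data analysis mentioned in the qualifications above, the sketch below reads a Parquet file with pandas (pyarrow under the hood); the file name and columns are assumptions made for illustration.

import pandas as pd

# Columns assumed for illustration: service, p99_ms, day
events = pd.read_parquet("query_latency.parquet")

# Summarize tail latency per service to ground a prioritization argument in data
summary = (
    events.groupby("service")["p99_ms"]
          .agg(["mean", "max"])
          .sort_values("max", ascending=False)
)
print(summary.head())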
Posted 3 days ago
8.0 years
4 - 5 Lacs
Chennai
On-site
Gen AI Engineer. Work Mode: Hybrid. Work Location: Chennai / Hyderabad. Work Timing: 2 PM to 11 PM. Primary: Gen AI (Python, AWS Bedrock, Claude, SageMaker, Machine Learning experience). 8+ years of full-stack development experience. 5+ years of AI/Gen AI development. Strong proficiency in JavaScript/TypeScript, Python, or similar languages. Experience with modern frontend frameworks (React, Vue.js, Angular). Backend development experience with REST APIs and microservices. Knowledge of AWS services, specifically AWS Bedrock and SageMaker. Experience with generative AI models, LLM integration, and machine learning. Understanding of prompt engineering and model optimization. Hands-on experience with foundation models (Claude, GPT, LLaMA, etc.). Experience with retrieval-augmented generation (RAG). Knowledge of vector databases and semantic search. AWS cloud platform expertise (Lambda, API Gateway, S3, RDS, etc.). Knowledge of financial regulatory requirements and risk frameworks. Experience integrating AI solutions into financial workflows or trading systems. Published work or patents in financial AI or applied machine learning. About Virtusa: Teamwork, quality of life, professional and personal development: values that Virtusa is proud to embody. When you join us, you join a team of 27,000 people globally that cares about your growth — one that seeks to provide you with exciting projects, opportunities and work with state-of-the-art technologies throughout your career with us. Great minds, great potential: it all comes together at Virtusa. We value collaboration and the team environment of our company, and seek to provide great minds with a dynamic place to nurture new ideas and foster excellence. Virtusa was founded on principles of equal opportunity for all, and so does not discriminate on the basis of race, religion, color, sex, gender identity, sexual orientation, age, non-disqualifying physical or mental disability, national origin, veteran status or any other basis covered by appropriate law. All employment is decided on the basis of qualifications, merit, and business need.
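Purely as an illustrative sketch (not part of the posting), the snippet below invokes a Claude model through AWS Bedrock with boto3. The model ID and the Anthropic Messages request and response shapes are stated to the best of my knowledge and should be checked against the current Bedrock documentation before use.

import json
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

# Request body in the Anthropic Messages format used on Bedrock (assumed)
body = {
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 300,
    "messages": [
        {"role": "user", "content": "Summarize the key risks in this trade confirmation: ..."}
    ],
}

response = bedrock.invoke_model(
    modelId="anthropic.claude-3-sonnet-20240229-v1:0",  # assumed model ID
    body=json.dumps(body),
)

result = json.loads(response["body"].read())
print(result["content"][0]["text"])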
Posted 3 days ago
6.0 years
3 - 6 Lacs
Chennai
On-site
6+ years of IT experience and 4+ years of experience in Neo4j. Design and implement efficient graph models using Neo4j to represent complex relationships. Write optimized Cypher queries for data retrieval, manipulation, and aggregation. Develop and maintain ETL pipelines to integrate data from various sources into the graph database. Integrate Neo4j databases with existing systems using APIs and other middleware technologies. About Virtusa: Teamwork, quality of life, professional and personal development: values that Virtusa is proud to embody. When you join us, you join a team of 27,000 people globally that cares about your growth — one that seeks to provide you with exciting projects, opportunities and work with state-of-the-art technologies throughout your career with us. Great minds, great potential: it all comes together at Virtusa. We value collaboration and the team environment of our company, and seek to provide great minds with a dynamic place to nurture new ideas and foster excellence. Virtusa was founded on principles of equal opportunity for all, and so does not discriminate on the basis of race, religion, color, sex, gender identity, sexual orientation, age, non-disqualifying physical or mental disability, national origin, veteran status or any other basis covered by appropriate law. All employment is decided on the basis of qualifications, merit, and business need.
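For a concrete feel of the Cypher work described above (an illustration rather than anything from the posting), here is a minimal sketch using the official neo4j Python driver; the connection details and the Person/Company graph model are assumptions.

from neo4j import GraphDatabase

# Connection details are placeholders
driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

def employees_of(company_name: str):
    # Parameterized Cypher: traverse WORKS_AT relationships for one company
    query = (
        "MATCH (p:Person)-[:WORKS_AT]->(c:Company {name: $name}) "
        "RETURN p.name AS employee ORDER BY employee"
    )
    with driver.session() as session:
        result = session.run(query, name=company_name)
        return [record["employee"] for record in result]

print(employees_of("Acme Corp"))
driver.close()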
Posted 3 days ago
5.0 years
0 Lacs
Pune, Maharashtra, India
On-site
What You’ll Do As an AI Engineer at Wednesday, you’ll design and build production-ready AI systems using state-of-the-art language models, vector databases, and modern AI frameworks. You’ll own the full lifecycle of AI features — from prototyping and prompt engineering to deployment, monitoring, and optimization. You’ll work closely with product and engineering teams to ensure our AI solutions deliver real business value at scale. Your Responsibilities System Architecture & Development Architect and implement AI applications leveraging transformer-based LLMs, embeddings, and vector similarity techniques. Build modular, maintainable codebases using Python and AI frameworks. Retrieval-Augmented Generation & Semantic Search Design and deploy RAG systems with vector databases such as Pinecone, Weaviate, or Chroma to power intelligent document search and knowledge retrieval. LLM Integration & Optimization Integrate with LLM platforms (OpenAI, Anthropic) or self-hosted models (Llama, Mistral), including prompt engineering, fine-tuning, and model evaluation. Experience with AI orchestration tools (LangFlow, Flowise), multimodal models, or AI safety and evaluation frameworks. AI Infrastructure & Observability Develop scalable AI pipelines with proper monitoring, evaluation metrics, and observability to ensure reliability in production environments. End-to-End Integration & Rapid Prototyping Connect AI backend to user-facing applications; prototype new AI features quickly using frontend frameworks (React/Next.js). Cross-Functional Collaboration Partner with product managers, designers, and fellow engineers to translate complex business requirements into robust AI solutions. Requirements Have 3–5 years of experience building production AI/ML systems at a consulting or product-engineering firm. Possess deep understanding of transformer architectures, vector embeddings, and semantic search. Are hands-on with vector databases (Pinecone, Weaviate, Chroma) and RAG pipelines. Have integrated and optimized LLMs via APIs or local deployment. Are proficient in Python AI stacks (LangChain, LlamaIndex, Hugging Face). Have built backend services (FastAPI, Node.js, or Go) to power AI features. Understand AI UX patterns (chat interfaces, streaming responses, loading states, error handling). Can deploy and orchestrate AI systems on AWS, GCP, or Azure with containerization and orchestration tools. Bonus: Advanced React/Next.js skills for prototyping AI-driven UIs. Benefits Creative Freedom: A culture that empowers you to innovate and take bold product decisions for client projects. Comprehensive Healthcare: Extensive health coverage for you and your family. Tailored Growth Plans: Personalized professional development programs to help you achieve your career aspirations
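To illustrate the retrieval half of the RAG systems described above (a sketch under assumed names, not a prescribed implementation), the snippet below indexes a few documents in Chroma and queries them; a production pipeline would add chunking, metadata, and an LLM call over the retrieved context.

import chromadb

client = chromadb.Client()  # in-memory instance for experimentation
collection = client.create_collection(name="knowledge_base")

# Index a few documents; Chroma embeds them with its default embedding function
collection.add(
    ids=["doc1", "doc2", "doc3"],
    documents=[
        "Our refund policy allows returns within 30 days of purchase.",
        "Premium support is available 24/7 for enterprise customers.",
        "Invoices are generated on the first business day of each month.",
    ],
)

# Retrieve the most relevant chunks for a user question
results = collection.query(query_texts=["When are invoices issued?"], n_results=2)
for doc in results["documents"][0]:
    print(doc)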
Posted 3 days ago
0 years
3 - 5 Lacs
Ahmedabad
On-site
To handle the proposed changes as per the change control procedure. To assess the risk/impact associated with a proposed change and to verify the implementation of the action plan as per the approved change control form. To determine the investigation plan and carry out investigations using appropriate root cause analysis tools, assessing the risk associated with them, to perform the additional studies, and to derive the appropriate CAPA. To handle the recommended CAPA, to verify the implementation of recommended actions as per the CAPA system, and to evaluate effectiveness checks of implemented CAPA. To perform trend analysis of deviations, change controls, complaints, etc., to identify any repetitive event for further evaluation and CAPA. To conduct or be part of a team conducting risk assessments of various activities, equipment, systems, etc., and be responsible for assigning the QRM no. To review all labelling-related artwork components for products. To prepare artwork information details for new/revised labelling components and submit them to the QA doc cell for issuance of artwork as per procedure. To review and verify the destruction of old printed packaging materials (vendor and Amneal site) in case of revision of artworks of labelling components. Tracking of actions, review of extensions, verification of documents, and closure of actions. Responsible for document handling, issuance, distribution, and retrieval of documents. Scanning of documents for regulatory submission. To maintain master documents like the Site Master File, Validation Master Plan, Quality Manual, SOPs, protocols, reports, batch records, specifications, methods of analysis, drawings, artworks, planners, etc. Responsible for issuance, archival, and retrieval of documents like SOPs, validation/qualification protocols/reports, batch records, specifications, drawings, artworks, planners, etc. To provide BMR/BPR numbering to exhibit, intended, and media fill BMRs. To provide batch numbers to exhibit, commercial, feasibility, and media fill batches. To receive and distribute product development documents like Master Formula Records, Master Packaging Records, and protocols (sampling, study, and stability protocols). Issuance of uncontrolled/reference copies of master documents to users as and when requested. To provide requested documents to the regulatory affairs department for regulatory submissions (AR or other submissions), whenever required. Artwork management, which includes making artwork information details effective, and the distribution, retrieval, and archival of artwork and all labelling components. To retire documents as per the change control assessment/proposal. Activities other than those defined in the job responsibilities are to be done as per the requirement of the HOD, following the HOD's instruction and guidance. Skills: Monitors production processes in real time to ensure compliance with specifications and GMP; batch record review, process monitoring, sampling, quality checks, real-time deviation management. M.Sc. / B. Pharm / M. Pharm
Posted 3 days ago
1.0 years
1 - 3 Lacs
Ahmedabad
On-site
Job Title: Accounts Executive Location: Ahmedabad Salary: Upto Rs.25,000 Per Month Working Mode: Full time WFO, Day Shift Reports To: Accounts Manager/Finance Manager Job Summary: We are seeking a detail-oriented and proactive Accounts Executive with 1+ year of experience to join our Accounts team. The successful candidate will handle daily accounting tasks, ensure accurate financial records, and support the team with various accounting functions. Key Responsibilities: - Daily Accounting: Record daily financial transactions including sales, purchases, receipts, and payments. - Invoice Management: Generate and process invoices for customers and suppliers. - Bank Reconciliation: Perform regular bank reconciliations to ensure accuracy of bank statements. - Accounts Payable and Receivable: Manage accounts payable and receivable, ensuring timely payments and collections. - General Ledger: Maintain and update the general ledger with accurate and timely entries. - Financial Reporting: Assist in preparing monthly, quarterly, and annual financial reports. - Expense Tracking: Monitor and record expenses, ensuring proper documentation and compliance with company policies. - Support Audits: Assist in internal and external audits by providing necessary documentation and information. - Tax Compliance: Support the preparation and filing of tax returns and ensure compliance with tax regulations. - Documentation: Maintain organized financial records and documentation for easy retrieval and audit purposes. Qualifications and Skills: - Education: Bachelor’s degree in Accounting, Finance, or a related field. - Experience: 1+ year of relevant experience in accounting or finance. - Technical Skills: Proficiency in accounting software (e.g., QuickBooks, Tally) and Microsoft Office Suite, Excel. - Organization: Ability to manage multiple tasks and prioritize effectively. How to Apply: Interested candidates should submit their resume on: hr@kavininfra.com Contact HR- 9099046687 Job Types: Full-time, Permanent Pay: ₹15,000.00 - ₹25,000.00 per month Benefits: Cell phone reimbursement Flexible schedule Health insurance Leave encashment Provident Fund Schedule: Day shift Supplemental Pay: Yearly bonus Education: Bachelor's (Preferred) Work Location: In person Expected Start Date: 15/08/2025
Posted 3 days ago
0 years
1 - 3 Lacs
India
On-site
Job Summary: We are looking for a skilled and detail-oriented Computer Operator to manage and maintain computer data entry, ensure smooth data processing, and provide support to RM team for operational efficiency. The ideal candidate should be proficient with computer systems, office software, and basic troubleshooting. Key Responsibilities: Operate and monitor computer systems and peripheral equipment such as printers, scanners, and backup systems. Perform data entry, processing, and verification tasks with high accuracy. Maintain and update records, databases, and reports as required. Manage routine backups, file storage, and retrieval procedures. Support internal teams by providing computer-based assistance and resolving minor hardware/software issues. Ensure confidentiality and security of data and information. Collaborate with the IT team for maintenance and system upgrades. Maintain logs of system activities, errors, and downtime. Follow all IT and organizational protocols and compliance norms. Required Skills and Qualifications: High School Diploma or equivalent; ITI/Diploma in Computer Applications preferred. Proven experience as a computer operator, data entry operator, or similar role. Proficiency in MS Office (Excel, Word, Outlook), email, and internet operations. Familiarity with computer hardware, software, and basic troubleshooting. Good typing speed and accuracy. Strong attention to detail and organizational skills. Ability to work independently and in a team environment. Preferred Qualifications: Certification in computer operations or basic IT courses. Experience in ERP software or industry-specific data management tools. Working Conditions: Office-based work environment. Job Type: Full-time Pay: ₹15,500.00 - ₹25,000.00 per month Language: English (Preferred) Work Location: In person Expected Start Date: 01/08/2025
Posted 3 days ago
2.0 years
0 Lacs
Ghaziabad
On-site
Responsibilities: Assist in managing executives' calendars, scheduling appointments, and organizing meetings. Prepare and edit correspondence, reports, and presentations as needed. Maintain filing systems and documentation for easy retrieval and organization. Coordinate travel arrangements and itineraries for executives. Handle incoming calls and emails, responding to inquiries and directing messages appropriately. Support project management efforts and follow up on action items. Required Qualifications: 2+ years of experience. Candidates must be from Kerala. Proficient in Microsoft Office Suite (Word, Excel, PowerPoint, Outlook). Strong written and verbal communication skills. Excellent organizational skills and attention to detail. Ability to prioritize tasks and manage time effectively. Job Type: Full-time Schedule: Day shift Work Location: In person
Posted 3 days ago
6.0 - 10.0 years
0 Lacs
Noida
On-site
Lead Assistant Manager EXL/LAM/1433397 Digital Emerging, Noida Posted On 25 Jul 2025 End Date 08 Sep 2025 Required Experience 6 - 10 Years Basic Section Number Of Positions 1 Band B2 Band Name Lead Assistant Manager Cost Code G090622 Campus/Non Campus NON CAMPUS Employment Type Permanent Requisition Type Backfill Max CTC 1000000.0000 - 1500000.0000 Complexity Level Not Applicable Work Type Hybrid – Working Partly From Home And Partly From Office Organisational Group EXL Digital Sub Group Emerging Business Unit Organization Digital Emerging LOB Digital Delivery Practice SBU Digital Finance Suite Country India City Noida Center Noida - Centre 59 Skills Skill AI ML Minimum Qualification B.TECH/B.E MCA BCA Certification No data available Job Description Job Summary: We are seeking a highly experienced AI/ML Engineer with 7–10 years of hands-on experience in building intelligent systems using Generative AI, Agentic AI, RAG (Retrieval-Augmented Generation), NLP, and Deep Learning. The ideal candidate will play a critical role in designing, developing, and deploying cutting-edge AI applications using Langchain, FastAPI, and other modern AI frameworks, with a strong programming foundation in Python and SQL. Key Responsibilities: Design and develop scalable Agentic AI and Generative AI-based applications and pipelines. Implement Retrieval-Augmented Generation (RAG) architectures to enhance LLM performance with dynamic knowledge integration. Fine-tune and deploy NLP and deep learning models to solve real-world business problems. Build autonomous agents that can execute goal-oriented tasks independently. Develop robust APIs using FastAPI and integrate with Langchain workflows. Work in a Linux environment for model development and production deployment. Collaborate with cross-functional teams to drive ML product delivery end-to-end. Write optimized code in Python, manage datasets and queries using SQL. Keep pace with rapid advancements in the AI/ML space and propose innovations. Required Skills & Qualifications: 7–10 years of experience in AI/ML system design and deployment. Strong expertise in Agentic AI, Generative AI, RAG, NLP, and Deep Learning. Solid programming skills in Python and SQL. Proficient in Langchain, FastAPI, and working in Linux environments. Experience in building and scaling ML pipelines, and deploying models into production. Knowledge of vector databases, prompt engineering, and LLM orchestration is a plus. Strong analytical and problem-solving skills, with the ability to work in Agile teams. Workflow Workflow Type Digital Solution Center
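As a minimal sketch of the FastAPI-plus-RAG wiring this description calls for (illustrative only; the answer_question helper is a hypothetical stand-in for a LangChain or vector-store pipeline):

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="rag-service")

class Query(BaseModel):
    question: str

def answer_question(question: str) -> str:
    # Placeholder: a real service would retrieve context from a vector store
    # and call an LLM (e.g., via LangChain) with the retrieved passages.
    return f"(stub) You asked: {question}"

@app.post("/ask")
def ask(query: Query) -> dict:
    return {"answer": answer_question(query.question)}

# Run locally with: uvicorn app:app --reload  (assuming this file is saved as app.py)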
Posted 3 days ago