6.0 - 8.0 years
20 - 30 Lacs
Thāne
On-site
Key Responsibilities:
- Develop and fine-tune LLMs (e.g., GPT-4, Claude, LLaMA, Mistral, Gemini) using instruction tuning, prompt engineering, chain-of-thought prompting, and fine-tuning techniques.
- Build RAG pipelines: implement Retrieval-Augmented Generation solutions leveraging embeddings, chunking strategies, and vector databases such as FAISS, Pinecone, Weaviate, and Qdrant.
- Implement and orchestrate agents: use frameworks such as MCP, the OpenAI Agent SDK, LangChain, LlamaIndex, Haystack, and DSPy to build dynamic multi-agent systems and serverless GenAI applications.
- Deploy models at scale: manage model deployment using HuggingFace, Azure Web Apps, vLLM, and Ollama, including handling local models with GGUF, LoRA/QLoRA, PEFT, and quantization methods.
- Integrate APIs: integrate seamlessly with APIs from OpenAI, Anthropic, Cohere, Azure, and other GenAI providers.
- Ensure security and compliance: implement guardrails, perform PII redaction, ensure secure deployments, and monitor model performance using observability tools.
- Optimize and monitor: lead LLMOps practices focused on performance monitoring, cost optimization, and model evaluation.
- Work with AWS services: hands-on usage of AWS Bedrock, SageMaker, S3, Lambda, API Gateway, IAM, CloudWatch, and serverless computing to deploy and manage scalable AI solutions.
- Contribute to use cases: develop AI-driven solutions such as AI copilots, enterprise search engines, summarizers, and intelligent function-calling systems.
- Cross-functional collaboration: work closely with product, data, and DevOps teams to deliver scalable and secure AI products.

Required Skills and Experience:
- Deep knowledge of LLMs and foundation models (GPT-4, Claude, Mistral, LLaMA, Gemini).
- Strong expertise in prompt engineering, chain-of-thought reasoning, and fine-tuning methods.
- Proven experience building RAG pipelines and working with modern vector stores (FAISS, Pinecone, Weaviate, Qdrant).
- Hands-on proficiency in the LangChain, LlamaIndex, Haystack, and DSPy frameworks.
- Model deployment skills using HuggingFace, vLLM, and Ollama, and handling LoRA/QLoRA, PEFT, and GGUF models.
- Practical experience with AWS serverless services: Lambda, S3, API Gateway, IAM, CloudWatch.
- Strong coding ability in Python or similar programming languages.
- Experience with MLOps/LLMOps for monitoring, evaluation, and cost management.
- Familiarity with security standards: guardrails, PII protection, secure API interactions.
- Use-case delivery experience: proven record of delivering AI copilots, summarization engines, or enterprise GenAI applications.

Experience:
- 6-8 years of experience in AI/ML roles, focusing on LLM agent development, data science workflows, and system deployment.
- Demonstrated experience in designing domain-specific AI systems and integrating structured/unstructured data into AI models.
- Proficiency in designing scalable solutions using LangChain and vector databases.

Job Type: Full-time
Pay: ₹2,000,000.00 - ₹3,000,000.00 per year
Benefits: Health insurance
Schedule: Monday to Friday
Work Location: In person
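For applicants new to the retrieval step that RAG pipelines like the one above rely on, the core idea fits in a few lines. This is an illustrative sketch only: the hand-written toy vectors stand in for a real embedding model, and the linear scan stands in for a vector store such as FAISS or Qdrant.

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def retrieve(query_vec, chunks, k=2):
    """Rank pre-embedded text chunks by similarity to the query vector."""
    scored = [(cosine(query_vec, vec), text) for text, vec in chunks]
    scored.sort(key=lambda s: s[0], reverse=True)
    return [text for _, text in scored[:k]]

# Toy corpus: (chunk_text, embedding) pairs. A production pipeline would
# embed chunks with a model and index them in a vector database instead.
chunks = [
    ("Refunds are processed within 5 days.", [0.9, 0.1, 0.0]),
    ("Our office is closed on Sundays.",     [0.1, 0.8, 0.1]),
    ("Contact billing for invoice issues.",  [0.7, 0.2, 0.1]),
]

top = retrieve([1.0, 0.0, 0.0], chunks, k=2)
print(top)
```

The retrieved chunks would then be injected into the LLM prompt as grounding context, which is the "augmented generation" half of RAG.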
Posted 3 weeks ago
0 years
0 Lacs
Gurugram, Haryana, India
On-site
Tech Stack: Cloud technologies, Python, ML libraries, prompt tuning and few-shot learning techniques, RAG, ReactJS, NodeJS, etc.

What We Are Looking For
As the Technology Analyst, you’ll leverage cutting-edge cloud-based solutions such as AWS and Azure. We are seeking an individual who not only possesses the requisite technical expertise but also thrives in the dynamic landscape of a fast-paced global firm.

What You’ll Do
▪ Collaborate with cross-functional teams and domain experts to design and build AI-powered solutions (e.g., GenAI chatbots, summarizers, recommendation engines)
▪ Work with LLMs, prompt engineering, and Retrieval-Augmented Generation (RAG) frameworks
▪ Analyze data and extract meaningful insights using Python, SQL, and ML libraries
▪ Prototype, test, and fine-tune AI models for tasks like classification, entity extraction, and summarization
▪ Support end-to-end implementation from ideation to deployment while ensuring scalability and performance

Must Have
▪ Bachelor's degree in engineering, preferably in CS, IT, or electronics, with a record of academic excellence
▪ Strong foundation in Python and hands-on experience with AI/ML concepts
▪ Familiarity with tools like Hugging Face, LangChain, OpenAI APIs, or similar is a plus
▪ Interest in applying AI to practical use cases (bonus if you’ve worked on GenAI projects or built a chatbot)
▪ Problem-solving mindset, strong communication skills, and eagerness to learn
▪ Ability to thrive in a fast-paced, collaborative environment

Note: This is a paid internship.

Skills: NodeJS, ReactJS, ML libraries, prompt tuning, SQL, few-shot learning techniques, cloud technologies, RAG, Python
Posted 3 weeks ago
7.0 - 10.0 years
0 Lacs
Mumbai Metropolitan Region
On-site
Job Description:

About Us*
At Bank of America, we are guided by a common purpose to help make financial lives better through the power of every connection. Responsible Growth is how we run our company and how we deliver for our clients, teammates, communities and shareholders every day. One of the keys to driving Responsible Growth is being a great place to work for our teammates around the world. We’re devoted to being a diverse and inclusive workplace for everyone. We hire individuals with a broad range of backgrounds and experiences and invest heavily in our teammates and their families by offering competitive benefits to support their physical, emotional, and financial well-being. Bank of America believes both in the importance of working together and offering flexibility to our employees. We use a multi-faceted approach for flexibility, depending on the various roles in our organization. Working at Bank of America will give you a great career with opportunities to learn, grow and make an impact, along with the power to make a difference. Join us!

Global Business Services*
Global Business Services delivers Technology and Operations capabilities to Lines of Business and Staff Support Functions of Bank of America through a centrally managed, globally integrated delivery model and globally resilient operations. Global Business Services is recognized for flawless execution, sound risk management, operational resiliency, operational excellence and innovation. In India, we are present in five locations and operate as BA Continuum India Private Limited (BACI), a non-banking subsidiary of Bank of America Corporation and the operating company for India operations of Global Business Services.

Process Overview*
Global Finance was set up in 2007 as a part of the CFO Global Delivery strategy to provide offshore delivery to Line of Business and Enterprise Finance functions. The capabilities hosted include Legal Entity Controllership, General Accounting & Reconciliations, Regulatory Reporting, Operational Risk and Control Oversight, Finance Systems Support, etc. Our group supports the Finance function for the Consumer Banking, Wealth and Investment Management business teams across Financial Planning and Analysis, period-end close, management reporting and data analysis.

Job Description*
The candidate will be responsible for developing and validating dashboards and business reports using emerging-technology tools like MicroStrategy, Tableau, Alteryx, etc. The candidate will be responsible for delivering complex and time-critical data mining and analytical projects for Consumer & Small Business Banking lending products such as credit cards, and in addition will be responsible for analysis of data for decision-making by senior leadership. The candidate will be responsible for data management, data extraction and upload, data validation, scheduling and process automation, report preparation, etc. The individual will play a key role in the team responsible for financial data reporting, ad-hoc reporting and data requirements, and data analytics and business analysis, and will manage multiple projects in parallel, ensuring adequate understanding of the requirements and delivering data-driven insights and solutions to complex business problems. These projects are time-critical and require the candidate to comprehend and evaluate the strategic business drivers in order to bring in efficiencies through automation of existing reporting packages or code. The work is a mix of standard and ad-hoc deliverables based on dynamic business requirements. Technical competency is essential to build processes which ensure data quality and completeness across all projects/requests related to the business. The core responsibility of this individual is process management to achieve sustainable, accurate and well-controlled results.
The candidate should have a clear understanding of the end-to-end process (including its purpose) and a discipline of managing and continuously improving those processes.

Responsibilities*
- Preparation and maintenance of various KPI reporting (consumer lending such as credit cards), including performing data- or business-driven deep-dive analysis.
- Credit card rewards reporting, data mining, and analytics.
- Understand business requirements and translate them into deliverables.
- Support the business on periodic and ad-hoc projects related to consumer lending products.
- Develop and maintain code for data extraction, manipulation, and summarization in tools such as SQL, SAS, and emerging technologies like MicroStrategy, Tableau, and Alteryx.
- Design solutions, generate actionable insights, optimize existing processes, build tool-based automations, and ensure overall program governance.
- Managing and improving the work: develop a full understanding of the work processes; focus continuously on process improvement through simplification, innovation, and use of emerging technology tools; understand data sourcing and transformation.
- Managing risk: manage and reduce risk; proactively identify risks, issues, and concerns; manage controls to help drive responsible growth (e.g., compliance, procedures, data management); establish a risk culture that encourages early escalation and self-identification of issues.
- Effective communication: deliver transparent, concise, and consistent messaging while influencing change in the teams.
- Extremely good with numbers, with the ability to present business/finance metrics, detailed analysis, and key observations to senior business leaders.

Requirements*

Education*
Master's/Bachelor's degree in Information Technology/Computer Science/MCA, or MBA (Finance).

Experience Range*
7-10 years of relevant work experience in data analytics & reporting, business analysis, and financial reporting in the banking or credit card industry. Exposure to consumer banking businesses would be an added advantage. Experience with credit card reporting & analytics would be preferable.

Foundational skills*
- Strong abilities in data extraction, data manipulation, and business analysis, plus strong financial acumen.
- Strong computer skills, including MS Excel, Teradata SQL, SAS, and emerging technologies like MicroStrategy, Alteryx, and Tableau.
- Prior banking and financial services industry experience, preferably retail banking and credit cards.
- Strong business problem-solving skills and the ability to deliver analytics projects independently, from initial structuring to final presentation.
- Strong communication skills (both verbal and written), interpersonal skills, and relationship-management skills to navigate the complexities of aligning stakeholders, building consensus, and resolving conflicts.
- Proficiency in Base SAS, macros, and SAS Enterprise Guide; querying data from multiple sources.
- Experience in data extraction, transformation & loading using SQL/SAS.
- Proven ability to manage multiple and often competing priorities in a global environment.
- Manages operational risk by building strong processes and quality-control routines.
- Data quality and governance: ability to clean, validate, and ensure data accuracy and integrity.
- Troubleshooting: expertise in debugging and optimizing SAS and SQL code.

Desired skills
- Ability to effectively manage multiple priorities under pressure, deliver, and adapt to changes.
- Able to work in a fast-paced, deadline-oriented environment.
- Multiple-stakeholder management.
- Attention to detail: strong focus on data accuracy and documentation.

Work Timings*
11:30 pm to 8:30 pm (will require stretching 7-8 days a month to meet critical deadlines)

Job Location*
Mumbai
Posted 3 weeks ago
0 years
0 Lacs
India
Remote
About Us
We're building the world’s first AI Super-Assistant purpose-built for enterprises and professionals. Our platform is designed to supercharge productivity, automate workflows, and redefine the way teams work with AI.

Our two core products:
- ChatLLM – designed for professionals and small teams, offering conversational AI tailored for everyday productivity.
- Enterprise Platform – a robust, secure, and highly customizable platform for organizations seeking to integrate AI into every facet of their operations.

We’re on a mission to redefine enterprise AI – and we’re looking for engineers ready to build the connective tissue between AI and the systems that power modern business.

Role: Connector Integration Engineer – File Systems & Productivity Platforms
As a Connector Integration Engineer focused on file systems, you’ll be responsible for building robust and secure integrations with enterprise content and collaboration platforms. You’ll enable our AI to retrieve, index, and interact with organizational documents, enabling powerful search, automation, and summarization features.
What You’ll Do
- Build and manage connectors for productivity platforms including SharePoint, OneDrive, and Google Drive
- Work with Microsoft Graph APIs and related SDKs for document access
- Implement secure file access via OAuth2 and delegated permissions
- Enable metadata indexing and real-time syncing of document repositories
- Collaborate with product and AI teams to build document-aware AI experiences
- Troubleshoot access-control issues, token lifecycles, and permission scopes
- Write reliable, maintainable backend code for secure data sync

What We’re Looking For
- Experience building or working with SharePoint, OneDrive, or Google Drive APIs
- Strong understanding of document permissions, OAuth2, and delegated access
- Proficiency in backend languages like Python or TypeScript
- Ability to design integrations for structured and unstructured content
- Familiarity with API rate limits, refresh tokens, and file metadata models
- Solid communication skills and a bias for shipping clean, well-tested code

Nice to Have
- Experience with the Microsoft Graph SDK, webhook handling, or file-system events
- Knowledge of indexing, search, or document summarization workflows
- Background in collaboration tools, SaaS products, or enterprise IT
- Exposure to modern security and compliance practices (SOC 2, ISO 27001)
- Candidates from top-tier tech environments or universities are encouraged to apply

What We Offer
- Remote-first work environment
- Opportunity to shape the future of AI in the enterprise
- Work with a world-class team of AI researchers and product builders
- Flat team structure with real impact on product and direction
- $60,000 USD annual salary

Ready to help our AI assistant work smarter with enterprise files? Join us – and power the world’s first AI Super-Assistant.
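A large part of connector work is handling paginated listings correctly. The sketch below shows the `@odata.nextLink` pagination pattern that Microsoft Graph list endpoints use; the fake in-memory pages and the `fetch_page` callable are stand-ins for an authenticated HTTP GET against a real endpoint such as a drive's children listing.

```python
def fetch_all_items(fetch_page, first_url):
    """Follow @odata.nextLink pagination used by Microsoft Graph list endpoints.

    `fetch_page` is any callable mapping a URL to a decoded JSON page; in
    production it would be an authenticated HTTP GET with a bearer token.
    """
    items, url = [], first_url
    while url:
        page = fetch_page(url)
        items.extend(page.get("value", []))
        # Graph returns a nextLink on each page until the listing is exhausted.
        url = page.get("@odata.nextLink")
    return items

# Fake two-page response standing in for a document-listing endpoint.
pages = {
    "page1": {"value": [{"name": "a.docx"}, {"name": "b.pdf"}],
              "@odata.nextLink": "page2"},
    "page2": {"value": [{"name": "c.xlsx"}]},
}

names = [f["name"] for f in fetch_all_items(pages.__getitem__, "page1")]
print(names)  # ['a.docx', 'b.pdf', 'c.xlsx']
```

Injecting the fetcher also makes the pagination logic trivially testable without network access, which matters for the "clean, well-tested code" bar described above.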
Posted 3 weeks ago
0 years
0 Lacs
India
On-site
- Design, develop, and deploy NLP systems using advanced LLM architectures (e.g., GPT, BERT, LLaMA, Mistral) tailored for real-world applications such as chatbots, document summarization, Q&A systems, and more.
- Implement and optimize RAG pipelines, combining LLMs with vector search engines (e.g., FAISS, Weaviate, Pinecone) to create context-aware, knowledge-grounded responses.
- Integrate external knowledge sources, including databases, APIs, and document repositories, to enrich language models with real-time or domain-specific information.
- Fine-tune and evaluate pre-trained LLMs, leveraging techniques like prompt engineering, LoRA, PEFT, and transfer learning to customize model behavior.
- Collaborate with data engineers and MLOps teams to ensure scalable deployment and monitoring of AI services in cloud environments (e.g., AWS, GCP, Azure).
- Build robust APIs and backend services to serve NLP/RAG models efficiently and securely.
- Conduct rigorous performance evaluation and model validation, including accuracy, latency, bias/fairness, and explainability (XAI).
- Stay current with advancements in AI research, particularly in generative AI, retrieval systems, prompt tuning, and hybrid modeling strategies.
- Participate in code reviews, documentation, and cross-functional team planning to ensure clean and maintainable code.
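Before documents can be embedded and indexed for a RAG pipeline, they are usually split into overlapping chunks. The helper below is a minimal, assumption-laden sketch of that step (fixed character windows; real pipelines often chunk by tokens or sentences instead):

```python
def chunk_text(text, size=40, overlap=10):
    """Split text into fixed-size character windows with overlap,
    a common pre-indexing step for RAG pipelines."""
    if overlap >= size:
        raise ValueError("overlap must be smaller than chunk size")
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

doc = "Retrieval-augmented generation grounds model answers in retrieved passages."
chunks = chunk_text(doc, size=40, overlap=10)
# Consecutive chunks share a 10-character overlap, so content that straddles
# a boundary still appears intact in at least one chunk.
```

The overlap trades a little index size for recall: without it, a sentence cut at a chunk boundary might never match a query that mentions it.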
Posted 3 weeks ago
2.0 years
0 Lacs
Guindy, Tamil Nadu, India
On-site
Company Description
Bytezera is a data services provider specialising in AI and data solutions that help businesses maximise their data potential. With expertise in data-driven solution design, machine learning, AI, data engineering, and analytics, we empower organizations to make informed decisions and drive innovation. Our focus is on using data to achieve competitive advantage and transformation.

About the Role
We are seeking a highly skilled, hands-on AI Engineer to drive the development of cutting-edge AI applications using the latest in computer vision, STT, Large Language Models (LLMs), agentic frameworks, and Generative AI technologies. This role covers the full AI development lifecycle, from data preparation and model training to deployment and optimization, with a strong focus on NLP and open-source foundation models. You will be directly involved in building and deploying goal-driven, autonomous AI agents and scalable AI systems for real-world use cases.

Key Responsibilities

Computer Vision Development
- Design and implement advanced computer vision models for object detection, image segmentation, tracking, facial recognition, OCR, and video analysis.
- Fine-tune and deploy vision models using frameworks like PyTorch, TensorFlow, OpenCV, Detectron2, YOLO, MMDetection, etc.
- Optimize inference pipelines for real-time vision processing across edge devices, GPUs, or cloud-based systems.

Speech-to-Text (STT) System Development
- Build and fine-tune ASR (Automatic Speech Recognition) models using toolkits such as Whisper, NVIDIA NeMo, DeepSpeech, Kaldi, or wav2vec 2.0.
- Develop multilingual and domain-specific STT pipelines optimized for real-time transcription and high accuracy.
- Integrate STT into downstream NLP pipelines or agentic systems for transcription, summarization, or intent recognition.

LLM and Agentic AI Design & Development
- Build and deploy advanced LLM-based AI agents using frameworks such as LangGraph, CrewAI, AutoGen, and OpenAgents.
- Fine-tune and optimize open-source LLMs (e.g., GPT-4, LLaMA 3, Mistral, T5) for domain-specific applications.
- Design and implement retrieval-augmented generation (RAG) pipelines with vector databases like FAISS, Weaviate, or Pinecone.
- Develop NLP pipelines using Hugging Face Transformers, spaCy, and LangChain for various text understanding and generation tasks.
- Leverage Python with PyTorch and TensorFlow for training, fine-tuning, and evaluating models.
- Prepare and manage high-quality datasets for model training and evaluation.

Experience & Qualifications
- 2+ years of hands-on experience in AI engineering, machine learning, or data science roles.
- Proven track record in building and deploying computer vision and STT AI applications.
- Experience with agentic workflows or autonomous AI agents is highly desirable.

Technical Skills
- Languages & Libraries: Python, PyTorch, TensorFlow, Hugging Face Transformers, LangChain, spaCy
- LLMs & Generative AI: GPT, LLaMA 3, Mistral, T5, Claude, and other open-source or commercial models
- Agentic Tooling: LangGraph, CrewAI, AutoGen, OpenAgents
- Vector Databases: Pinecone or ChromaDB
- DevOps & Deployment: Docker, Kubernetes, AWS (SageMaker, Lambda, Bedrock, S3)
- Core ML Skills: data preprocessing, feature engineering, model evaluation, and optimization

Qualifications
Education: Bachelor’s or Master’s degree in Computer Science, Data Science, AI/ML, or a related field.
Posted 3 weeks ago
10.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
It's fun to work in a company where people truly BELIEVE in what they are doing! We're committed to bringing passion and customer focus to the business.

Job Description

About Fractal
What makes Fractal a GREAT fit for you? When you join Fractal, you’ll be part of a fast-growing team that helps our clients leverage AI together with the power of behavioural sciences to make better decisions. We’re a strategic analytics partner to most-admired Fortune 500 companies globally; we help them power every human decision in the enterprise by bringing analytics, AI, and behavioural science to the decision. Our people enjoy a collaborative work environment, exceptional training, and career development, as well as unlimited growth opportunities. We have a Glassdoor rating of 4/5 and achieve a customer NPS of 9/10. If you like working with a curious, supportive, high-performing team, Fractal is the place for you.

Responsibilities
- Design and implement advanced solutions utilizing Large Language Models (LLMs).
- Demonstrate self-driven initiative by taking ownership and creating end-to-end solutions.
- Conduct research and stay informed about the latest developments in generative AI and LLMs.
- Develop and maintain code libraries, tools, and frameworks to support generative AI development.
- Participate in code reviews and contribute to maintaining high code quality standards.
- Engage in the entire software development lifecycle, from design and testing to deployment and maintenance.
- Collaborate closely with cross-functional teams to align messaging, contribute to roadmaps, and integrate software into different repositories for core system compatibility.
- Possess strong analytical and problem-solving skills.
- Demonstrate excellent communication skills and the ability to work effectively in a team environment.
Primary Skills
- Natural Language Processing (NLP): hands-on experience in use-case classification, topic modeling, Q&A and chatbots, search, Document AI, summarization, and content generation.
- Computer Vision and Audio: hands-on experience in image classification, object detection, segmentation, image generation, and audio and video analysis.
- Generative AI: proficiency with SaaS LLMs, including LangChain, LlamaIndex, vector databases, and prompt engineering (CoT, ToT, ReAct, agents). Experience with Azure OpenAI, Google Vertex AI, and AWS Bedrock for text/audio/image/video modalities. Familiarity with open-source LLMs, including tools like TensorFlow/PyTorch and Hugging Face. Techniques such as quantization, LLM fine-tuning using PEFT, RLHF, data annotation workflows, and GPU utilization.
- Cloud: hands-on experience with cloud platforms such as Azure, AWS, and GCP. Cloud certification is preferred.
- Application Development: proficiency in Python, Docker, FastAPI/Django/Flask, and Git.

Tech Skills (10+ Years’ Experience)
- Machine Learning (ML) & Deep Learning: solid understanding of supervised and unsupervised learning; proficiency with deep learning architectures like Transformers, LSTMs, RNNs, etc.
- Generative AI: hands-on experience with models such as OpenAI GPT-4, Anthropic Claude, LLaMA, etc. Knowledge of fine-tuning and optimizing large language models (LLMs) for specific tasks.
- Natural Language Processing (NLP): expertise in NLP techniques, including text preprocessing, tokenization, embeddings, and sentiment analysis. Familiarity with NLP tasks such as text classification, summarization, translation, and question answering.
- Retrieval-Augmented Generation (RAG): in-depth understanding of RAG pipelines, including knowledge retrieval techniques like dense/sparse retrieval. Experience integrating generative models with external knowledge bases or databases to augment responses.
- Data Engineering: ability to build, manage, and optimize data pipelines for feeding large-scale data into AI models.
- Search and Retrieval Systems: experience building or integrating search and retrieval systems, leveraging knowledge of Elasticsearch, AI Search, ChromaDB, PGVector, etc.
- Prompt Engineering: expertise in crafting, fine-tuning, and optimizing prompts to improve model output quality and ensure desired results. Understanding of how to guide LLMs toward specific outcomes using different prompt formats, strategies, and constraints. Knowledge of techniques like few-shot, zero-shot, and one-shot prompting, as well as using system and user prompts for enhanced model performance.
- Programming & Libraries: proficiency in Python and libraries such as PyTorch, Hugging Face, etc. Knowledge of version control (Git), cloud platforms (AWS, GCP, Azure), and MLOps tools.
- Database Management: experience working with SQL and NoSQL databases, as well as vector databases.
- APIs & Integration: ability to work with RESTful APIs and integrate generative models into applications.
- Evaluation & Benchmarking: strong understanding of metrics and evaluation techniques for generative models.

If you like wild growth and working with happy, enthusiastic over-achievers, you'll enjoy your career with us! Not the right fit? Let us know you're interested in a future opportunity by clicking Introduce Yourself in the top-right corner of the page or create an account to set up email alerts as new job postings become available that meet your interest!
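To make the few-shot prompting technique mentioned above concrete: a few-shot prompt is just a system instruction followed by worked input/output examples and the new input. The sketch below is illustrative only; the exact layout (labels, separators, message roles) varies by model and is an assumption here.

```python
def build_few_shot_prompt(system, examples, query):
    """Assemble a few-shot prompt: a system instruction, worked
    input/output examples, then the new input for the model to complete."""
    lines = [system, ""]
    for inp, out in examples:
        lines += [f"Input: {inp}", f"Output: {out}", ""]
    lines += [f"Input: {query}", "Output:"]
    return "\n".join(lines)

prompt = build_few_shot_prompt(
    "Classify the sentiment of each review as positive or negative.",
    [("Great battery life!", "positive"),
     ("Screen cracked in a week.", "negative")],
    "Fast shipping and works perfectly.",
)
print(prompt)
```

A zero-shot variant is the same call with an empty examples list; the model then relies on the instruction alone, which is usually cheaper but less reliable on ambiguous inputs.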
Posted 3 weeks ago
2.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Job description

🚀 Job Title: AI Engineer
Company: Darwix AI
Location: Gurgaon (On-site)
Type: Full-Time
Experience: 2-6 Years
Level: Senior Level

🌐 About Darwix AI
Darwix AI is one of India’s fastest-growing GenAI startups, revolutionizing the future of enterprise sales and customer engagement with real-time conversational intelligence. We are building a GenAI-powered agent-assist and pitch-intelligence suite that captures, analyzes, and enhances every customer interaction (across voice, video, and chat) in real time. We serve leading enterprise clients across India, the UAE, and Southeast Asia and are backed by global VCs, top operators from Google, Salesforce, and McKinsey, and CXOs from the industry. This is your opportunity to join a high-caliber founding tech team solving frontier problems in real-time voice AI, multilingual transcription, retrieval-augmented generation (RAG), and fine-tuned LLMs at scale.

🧠 Role Overview
As the AI Engineer, you will drive the development, deployment, and optimization of AI systems that power Darwix AI's real-time conversation intelligence platform. This includes voice-to-text transcription, speaker diarization, GenAI summarization, prompt engineering, knowledge retrieval, and real-time nudge delivery. You will lead a team of AI engineers and work closely with product managers, software architects, and data teams to ensure technical excellence, scalable architecture, and rapid iteration cycles. This is a high-ownership, hands-on leadership role where you will code, architect, and lead simultaneously.

🔧 Key Responsibilities

1. AI Architecture & Model Development
- Architect end-to-end AI pipelines for transcription, real-time inference, LLM integration, and vector-based retrieval.
- Build, fine-tune, and deploy STT models (Whisper, Wav2Vec 2.0) and diarization systems for speaker separation.
- Implement GenAI pipelines using OpenAI, Gemini, LLaMA, Mistral, and other LLM APIs or open-source models.

2. Real-Time Voice AI System Development
- Design low-latency pipelines for capturing and processing audio in real time across multilingual environments.
- Work on WebSocket-based bi-directional audio streaming, chunked inference, and result caching.
- Develop asynchronous, event-driven architectures for voice processing and decision-making.

3. RAG & Knowledge Graph Pipelines
- Create retrieval-augmented generation (RAG) systems that pull from structured and unstructured knowledge bases.
- Build vector DB architectures (e.g., FAISS, Pinecone, Weaviate) and connect them to LangChain/LlamaIndex workflows.
- Own chunking, indexing, and embedding strategies (OpenAI, Cohere, Hugging Face embeddings).

4. Fine-Tuning & Prompt Engineering
- Fine-tune LLMs and foundation models using RLHF, SFT, and PEFT (e.g., LoRA) as needed.
- Optimize prompts for summarization, categorization, tone analysis, objection handling, etc.
- Perform few-shot and zero-shot evaluations for quality benchmarking.

5. Pipeline Optimization & MLOps
- Ensure high availability and robustness of AI pipelines using CI/CD tools, Docker, Kubernetes, and GitHub Actions.
- Work with data engineering to streamline data ingestion, labeling, augmentation, and evaluation.
- Build internal tools to benchmark latency, accuracy, and relevance for production-grade AI features.

6. Team Leadership & Cross-Functional Collaboration
- Lead, mentor, and grow a high-performing AI engineering team.
- Collaborate with backend, frontend, and product teams to build scalable production systems.
- Participate in architectural and design decisions across AI, backend, and data workflows.
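The "result caching" mentioned under the real-time responsibilities can be illustrated with a tiny LRU cache keyed by input chunk. This is a sketch only: the `transcribe` lambda stands in for a real STT or LLM call, and a production system would more likely use Redis with a TTL than an in-process dict.

```python
from collections import OrderedDict

class ResultCache:
    """Tiny LRU cache for expensive inference results, keyed by input chunk."""
    def __init__(self, capacity=2):
        self.capacity = capacity
        self._store = OrderedDict()
        self.hits = self.misses = 0

    def get_or_compute(self, key, compute):
        if key in self._store:
            self.hits += 1
            self._store.move_to_end(key)     # mark as recently used
            return self._store[key]
        self.misses += 1
        value = compute(key)                 # e.g., run the STT/LLM model
        self._store[key] = value
        if len(self._store) > self.capacity:
            self._store.popitem(last=False)  # evict least recently used
        return value

cache = ResultCache(capacity=2)
transcribe = lambda chunk: chunk.upper()     # stand-in for a real model call
cache.get_or_compute("audio-1", transcribe)
cache.get_or_compute("audio-1", transcribe)  # second call served from cache
print(cache.hits, cache.misses)  # 1 1
```

Caching repeated chunks (silence, hold music, duplicated audio frames) is one of the simpler levers for hitting sub-second latency targets, since a cache hit skips the model call entirely.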
🛠️ Key Technologies & Tools
- Languages & Frameworks: Python, FastAPI, Flask, LangChain, PyTorch, TensorFlow, HuggingFace Transformers
- Voice & Audio: Whisper, Wav2Vec 2.0, DeepSpeech, pyannote.audio, AssemblyAI, Kaldi, Mozilla TTS
- Vector DBs & RAG: FAISS, Pinecone, Weaviate, ChromaDB, LlamaIndex, LangGraph
- LLMs & GenAI APIs: OpenAI GPT-4/3.5, Gemini, Claude, Mistral, Meta LLaMA 2/3
- DevOps & Deployment: Docker, GitHub Actions, CI/CD, Redis, Kafka, Kubernetes, AWS (EC2, Lambda, S3)
- Databases: MongoDB, Postgres, MySQL, Pinecone, TimescaleDB
- Monitoring & Logging: Prometheus, Grafana, Sentry, Elastic Stack (ELK)

🎯 Requirements & Qualifications

👨💻 Experience
- 2-6 years of experience building and deploying AI/ML systems, with at least 2+ years in NLP or voice technologies.
- Proven track record of production deployment of ASR, STT, NLP, or GenAI models.
- Hands-on experience building systems involving vector databases, real-time pipelines, or LLM integrations.

📚 Educational Background
- Bachelor's or Master's in Computer Science, Artificial Intelligence, Machine Learning, or a related field.
- Tier-1 institute preferred (IITs, BITS, IIITs, NITs, or global top-100 universities).

⚙️ Technical Skills
- Strong coding experience in Python and familiarity with FastAPI/Django.
- Understanding of distributed architectures, memory management, and latency optimization.
- Familiarity with transformer-based model architectures, training techniques, and data-pipeline design.

💡 Bonus Experience
- Worked on multilingual speech recognition and translation.
- Experience deploying AI models on edge devices or browsers.
- Built or contributed to open-source ML/NLP projects.
- Published papers or patents in voice, NLP, or deep learning domains.

🚀 What Success Looks Like in 6 Months
- Lead the deployment of a real-time STT + diarization system for at least 1 enterprise client.
- Deliver a high-accuracy nudge-generation pipeline using RAG and summarization models.
- Build an in-house knowledge indexing + vector DB framework integrated into the product.
- Mentor 2-3 AI engineers and own execution across multiple modules.
- Achieve <1 sec latency on the real-time voice-to-nudge pipeline, from capture to recommendation.

💼 What We Offer
- Compensation: competitive fixed salary + equity + performance-based bonuses
- Impact: ownership of key AI modules powering thousands of live enterprise conversations
- Learning: access to high-compute GPUs, API credits, research tools, and conference sponsorships
- Culture: high-trust, outcome-first environment that celebrates execution and learning
- Mentorship: work directly with founders, ex-Microsoft, IIT-IIM-BITS alums, and top AI engineers
- Scale: opportunity to scale an AI product from 10 clients to 100+ globally within 12 months

⚠️ This Role is NOT for Everyone
🚫 If you're looking for a slow, abstract research role, this is NOT for you.
🚫 If you're used to months of ideation before shipping, you won't enjoy our speed.
🚫 If you're not comfortable being hands-on and diving into scrappy builds, you may struggle.
✅ But if you're a builder, architect, and visionary who loves solving hard technical problems and delivering real-time AI at scale, we want to talk to you.

📩 How to Apply
Send your CV, GitHub/portfolio, and a brief note on "Why AI at Darwix?" to:
📧 careers@cur8.in
Subject Line: Application – AI Engineer – [Your Name]

Include links to:
- Any relevant open-source contributions
- LLM/STT models you've fine-tuned or deployed
- RAG pipelines you've worked on

🔍 Final Thought
This is not just a job. This is your opportunity to build the world's most scalable AI sales-intelligence platform, from India, for the world.
Posted 3 weeks ago
8.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Lead - Data Scientist
Join our high-performing team, honored with the prestigious "Outstanding Data Engineering Team" Award at DES 2025 for setting new benchmarks in data excellence.

About the Role:
We are seeking a highly skilled, GCP-certified AI/ML Engineer with 8+ years of experience and expertise in Generative AI models, Natural Language Processing (NLP), and cloud-native development. This role involves designing and deploying scalable ML solutions, building robust APIs, and integrating AI capabilities into enterprise applications. The ideal candidate will also have a solid background in software engineering and DevOps practices.

Key Responsibilities:
Design, develop, and implement GenAI and NLP models using Python and relevant libraries (Transformers, LangChain, etc.)
Deploy ML models and pipelines on GCP (Vertex AI, BigQuery, Cloud Functions) and Azure ML/Azure services
Develop and manage RESTful APIs for model integration
Apply MLOps and CI/CD best practices using tools like GitHub Actions, Azure DevOps, or Jenkins
Ensure solutions follow software engineering principles, including modularity, reusability, and scalability
Work in Agile teams, contributing to sprint planning, demos, and retrospectives
Collaborate with cross-functional teams to define use cases and deliver PoCs and production-ready solutions
Optimize performance and cost efficiency of deployed models and cloud services
Maintain strong documentation and follow secure coding and data privacy standards

Required Skills:
Strong programming skills in Python with experience in ML & NLP frameworks (e.g., TensorFlow, PyTorch, spaCy, Hugging Face)
Experience with Generative AI models (OpenAI, PaLM, LLaMA, etc.)
Solid understanding of Natural Language Processing (NLP) concepts such as embeddings, summarization, and NER
Proficiency in GCP services: Vertex AI, Cloud Run, Cloud Storage, BigQuery (GCP certification is mandatory)
Familiarity with Azure ML / Azure Functions / Azure API Management
Experience in building and managing REST APIs
Hands-on with CI/CD tools and containerization (Docker; Kubernetes is a plus)
Strong grasp of software engineering concepts and Agile methodology

Preferred Qualifications:
Bachelor's or Master's in Computer Science, Data Science, or a related field
Experience working with cross-platform AI integration
Exposure to LLMOps / prompt engineering
Certification as an Azure AI Engineer or Azure Data Scientist is a plus

Location: Chennai, Coimbatore, Pune, Bangalore
Experience: 8–12 years

Regards,
TA Team
Posted 3 weeks ago
2.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Job description

🚀 Job Title: ML Engineer
Company: Darwix AI
Location: Gurgaon (On-site)
Type: Full-Time
Experience: 2-6 Years
Level: Senior Level

🌐 About Darwix AI
Darwix AI is one of India's fastest-growing GenAI startups, revolutionizing the future of enterprise sales and customer engagement with real-time conversational intelligence. We are building a GenAI-powered agent-assist and pitch intelligence suite that captures, analyzes, and enhances every customer interaction—across voice, video, and chat—in real time.
We serve leading enterprise clients across India, the UAE, and Southeast Asia and are backed by global VCs, top operators from Google, Salesforce, and McKinsey, and CXOs from the industry. This is your opportunity to join a high-caliber founding tech team solving frontier problems in real-time voice AI, multilingual transcription, retrieval-augmented generation (RAG), and fine-tuned LLMs at scale.

🧠 Role Overview
As the ML Engineer, you will drive the development, deployment, and optimization of AI systems that power Darwix AI's real-time conversation intelligence platform. This includes voice-to-text transcription, speaker diarization, GenAI summarization, prompt engineering, knowledge retrieval, and real-time nudge delivery.
You will lead a team of AI engineers and work closely with product managers, software architects, and data teams to ensure technical excellence, scalable architecture, and rapid iteration cycles. This is a high-ownership, hands-on leadership role where you will code, architect, and lead simultaneously.

🔧 Key Responsibilities
1. AI Architecture & Model Development
Architect end-to-end AI pipelines for transcription, real-time inference, LLM integration, and vector-based retrieval.
Build, fine-tune, and deploy STT models (Whisper, Wav2Vec2.0) and diarization systems for speaker separation.
Implement GenAI pipelines using OpenAI, Gemini, LLaMA, Mistral, and other LLM APIs or open-source models.
2. Real-Time Voice AI System Development
Design low-latency pipelines for capturing and processing audio in real time across multilingual environments.
Work on WebSocket-based bi-directional audio streaming, chunked inference, and result caching.
Develop asynchronous, event-driven architectures for voice processing and decision-making.
3. RAG & Knowledge Graph Pipelines
Create retrieval-augmented generation (RAG) systems that pull from structured and unstructured knowledge bases.
Build vector DB architectures (e.g., FAISS, Pinecone, Weaviate) and connect them to LangChain/LlamaIndex workflows.
Own chunking, indexing, and embedding strategies (OpenAI, Cohere, Hugging Face embeddings).
4. Fine-Tuning & Prompt Engineering
Fine-tune LLMs and foundational models using RLHF, SFT, and PEFT (e.g., LoRA) as needed.
Optimize prompts for summarization, categorization, tone analysis, objection handling, etc.
Perform few-shot and zero-shot evaluations for quality benchmarking.
5. Pipeline Optimization & MLOps
Ensure high availability and robustness of AI pipelines using CI/CD tools, Docker, Kubernetes, and GitHub Actions.
Work with data engineering to streamline data ingestion, labeling, augmentation, and evaluation.
Build internal tools to benchmark latency, accuracy, and relevance for production-grade AI features.
6. Team Leadership & Cross-Functional Collaboration
Lead, mentor, and grow a high-performing AI engineering team.
Collaborate with backend, frontend, and product teams to build scalable production systems.
Participate in architectural and design decisions across AI, backend, and data workflows.
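The "asynchronous, event-driven" chunked-inference pipeline described in the responsibilities is commonly shaped as a bounded producer/consumer queue: audio chunks stream in while inference runs per chunk instead of waiting for the full call. A stdlib-only sketch that stubs out both the WebSocket capture and the STT model call (the stubs are assumptions, not Darwix's actual stack):

```python
import asyncio

async def capture(queue, frames):
    """Producer: push fixed-size audio chunks onto the queue (stub for a WebSocket stream)."""
    for frame in frames:
        await queue.put(frame)
    await queue.put(None)  # end-of-stream sentinel

async def transcribe(queue, results):
    """Consumer: run 'inference' per chunk as it arrives, keeping latency low."""
    while True:
        frame = await queue.get()
        if frame is None:
            break
        results.append(f"text<{frame}>")  # stand-in for an STT model call (e.g., Whisper)

async def pipeline(frames):
    # A bounded queue applies backpressure if inference falls behind capture.
    queue = asyncio.Queue(maxsize=4)
    results = []
    await asyncio.gather(capture(queue, frames), transcribe(queue, results))
    return results

transcripts = asyncio.run(pipeline(["chunk0", "chunk1", "chunk2"]))
```

The same producer/consumer shape extends naturally to extra stages (diarization, nudge generation) by chaining queues between coroutines.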
🛠️ Key Technologies & Tools
Languages & Frameworks: Python, FastAPI, Flask, LangChain, PyTorch, TensorFlow, HuggingFace Transformers
Voice & Audio: Whisper, Wav2Vec2.0, DeepSpeech, pyannote.audio, AssemblyAI, Kaldi, Mozilla TTS
Vector DBs & RAG: FAISS, Pinecone, Weaviate, ChromaDB, LlamaIndex, LangGraph
LLMs & GenAI APIs: OpenAI GPT-4/3.5, Gemini, Claude, Mistral, Meta LLaMA 2/3
DevOps & Deployment: Docker, GitHub Actions, CI/CD, Redis, Kafka, Kubernetes, AWS (EC2, Lambda, S3)
Databases: MongoDB, Postgres, MySQL, Pinecone, TimescaleDB
Monitoring & Logging: Prometheus, Grafana, Sentry, Elastic Stack (ELK)

🎯 Requirements & Qualifications
👨‍💻 Experience
2-6 years of experience in building and deploying AI/ML systems, with at least 2+ years in NLP or voice technologies.
Proven track record of production deployment of ASR, STT, NLP, or GenAI models.
Hands-on experience building systems involving vector databases, real-time pipelines, or LLM integrations.
📚 Educational Background
Bachelor's or Master's in Computer Science, Artificial Intelligence, Machine Learning, or a related field.
Tier 1 institute preferred (IITs, BITS, IIITs, NITs, or global top 100 universities).
⚙️ Technical Skills
Strong coding experience in Python and familiarity with FastAPI/Django.
Understanding of distributed architectures, memory management, and latency optimization.
Familiarity with transformer-based model architectures, training techniques, and data pipeline design.
💡 Bonus Experience
Worked on multilingual speech recognition and translation.
Experience deploying AI models on edge devices or browsers.
Built or contributed to open-source ML/NLP projects.
Published papers or patents in voice, NLP, or deep learning domains.

🚀 What Success Looks Like in 6 Months
Lead the deployment of a real-time STT + diarization system for at least 1 enterprise client.
Deliver a high-accuracy nudge generation pipeline using RAG and summarization models.
Build an in-house knowledge indexing + vector DB framework integrated into the product.
Mentor 2–3 AI engineers and own execution across multiple modules.
Achieve <1 sec latency on the real-time voice-to-nudge pipeline, from capture to recommendation.

💼 What We Offer
Compensation: Competitive fixed salary + equity + performance-based bonuses
Impact: Ownership of key AI modules powering thousands of live enterprise conversations
Learning: Access to high-compute GPUs, API credits, research tools, and conference sponsorships
Culture: High-trust, outcome-first environment that celebrates execution and learning
Mentorship: Work directly with founders, ex-Microsoft, IIT-IIM-BITS alums, and top AI engineers
Scale: Opportunity to scale an AI product from 10 clients to 100+ globally within 12 months

⚠️ This Role is NOT for Everyone
🚫 If you're looking for a slow, abstract research role—this is NOT for you.
🚫 If you're used to months of ideation before shipping—you won't enjoy our speed.
🚫 If you're not comfortable being hands-on and diving into scrappy builds—you may struggle.
✅ But if you're a builder, architect, and visionary who loves solving hard technical problems and delivering real-time AI at scale, we want to talk to you.

📩 How to Apply
Send your CV, GitHub/portfolio, and a brief note on "Why AI at Darwix?" to:
📧 careers@cur8.in / vishnu.sethi@cur8.in
Subject Line: Application – ML Engineer – [Your Name]
Include links to:
Any relevant open-source contributions
LLM/STT models you've fine-tuned or deployed
RAG pipelines you've worked on

🔍 Final Thought
This is not just a job. This is your opportunity to build the world's most scalable AI sales intelligence platform—from India, for the world.
Posted 3 weeks ago
25.0 years
0 Lacs
India
Remote
Welo Data works with technology companies to provide datasets that are high-quality, ethically sourced, relevant, diverse, and scalable to supercharge their AI models. As a Welocalize brand, Welo Data leverages over 25 years of experience in partnering with the world's most innovative companies and brings together a curated global community of over 500,000 AI training and domain experts to offer services that span:

ANNOTATION & LABELLING: Transcription, summarization, image and video classification and labeling.
ENHANCING LLMs: Prompt engineering, SFT, RLHF, red teaming and adversarial model training, model output ranking.
DATA COLLECTION & GENERATION: From institutional languages to remote field audio collection.
RELEVANCE & INTENT: Culturally nuanced and aware ranking, relevance, and evaluation to train models for search, ads, and LLM output.

Want to join our Welo Data team? We bring practical, applied AI expertise to projects. We have both strong academic experience and a deep working knowledge of state-of-the-art AI tools, frameworks, and best practices. Help us elevate our clients' data at Welo Data.

Shape the Future of AI — On Your Terms
At Welo Data, we're reimagining how people and machines understand each other. As part of the Welocalize family, we partner with leading global companies to power inclusive, human-centered AI — built on high-quality language data. We're building a global network of talented linguists, language enthusiasts, and culturally curious contributors ready to shape the next wave of technology through the power of language. This is your space to grow, learn, and connect on your schedule.

Join Our Talent Community
Whether you're a professional linguist or just passionate about how language and technology intersect, Welo Data welcomes you. By joining our talent pool, you'll be first in line for future task-based projects in areas like annotation, evaluation, and prompt creation. When a suitable opportunity opens up, we'll invite you to a short qualification process, which may include training, assessments, or onboarding steps depending on the project.

Who We're Looking For:
- Native or near-native fluency in Marathi
- Based in India
- Proficient in English (written and spoken)
- Comfortable using digital tools and working remotely
- Naturally detail-oriented, curious, and eager to learn
- Open to working on a wide variety of language-focused tasks

Why Choose Welo Data?
- Limitless You – Work on your terms. Whether you're just starting out or deepening your expertise, Welo Data gives you the flexibility to grow your skills, explore new projects, and balance life on your own schedule.
- Limitless AI – Be part of the technology revolution. Your contributions will help train and improve AI systems that touch millions of lives, making them more inclusive, intelligent, and human-centered.
- Be Part of Us – Join a vibrant, global community of language lovers, technologists, and creatives working together to shape a more connected world.
- Opportunity – Be the first to access projects that match your skills and availability.

If you're passionate about language, technology, and shaping the future of AI, we want to hear from you. Apply now by answering a few quick questions to join our community.

📬 Got questions? Reach out to us at JobPosting@welocalize.com
Posted 3 weeks ago
0 years
0 Lacs
India
On-site
About the Role
We're looking for an experienced AI Developer with hands-on expertise in Large Language Models (LLMs), Azure AI services, and end-to-end ML pipeline deployment. If you're passionate about building scalable AI solutions, integrating document intelligence, and deploying models in production using Azure, this role is for you.

💡 Key Responsibilities
Design and develop AI applications leveraging LLMs (e.g., GPT, BERT) for tasks like summarization, classification, and document understanding
Implement solutions using Azure Document Intelligence to extract structured data from forms, invoices, and contracts
Train, evaluate, and tune ML models using Scikit-learn, XGBoost, or PyTorch
Build ML pipelines and workflows using Azure ML and MLflow, and integrate with CI/CD tools
Deploy models to production using Azure ML endpoints, containers, or Azure Functions for real-time AI workflows
Write clean, efficient, and scalable code in Python and manage code versioning using Git
Work with structured and unstructured data from SQL/NoSQL databases and Data Lakes
Ensure performance monitoring and logging for deployed models

✅ Skills & Experience Required
Proven experience with LLMs and prompt engineering (e.g., GPT, BERT)
Hands-on with Azure Document Intelligence for OCR and data extraction
Solid background in ML model development, evaluation, and hyperparameter tuning
Proficient in Azure ML Studio, model registry, and automated ML workflows
Familiar with MLOps tools such as Azure ML pipelines, MLflow, and CI/CD practices
Experience with Azure Functions for building serverless, event-driven AI apps
Strong coding skills in Python; familiarity with libraries like NumPy, Pandas, Scikit-learn, Matplotlib
Working knowledge of SQL/NoSQL databases and Data Lakes
Proficiency with Azure DevOps, Git version control, and testing frameworks
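The "train, evaluate, and tune" loop named above is, at its core, a search over a hyperparameter grid scored on held-out data. A deliberately tiny, library-free sketch of that pattern (the threshold "model" is a hypothetical stand-in; real Azure ML or scikit-learn work would tune an actual estimator with cross-validation, e.g. via GridSearchCV):

```python
from itertools import product

def accuracy(preds, labels):
    """Fraction of predictions matching the labels."""
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

def threshold_classifier(xs, threshold):
    """Trivial stand-in model: predict 1 when the feature exceeds the threshold."""
    return [1 if x > threshold else 0 for x in xs]

def grid_search(xs, labels, thresholds):
    """Exhaustive search over a hyperparameter grid, keeping the best validation score."""
    best = None
    for (t,) in product(thresholds):  # product generalizes to multi-parameter grids
        score = accuracy(threshold_classifier(xs, t), labels)
        if best is None or score > best[1]:
            best = (t, score)
    return best

xs = [0.1, 0.4, 0.6, 0.9]
labels = [0, 0, 1, 1]
best_t, best_score = grid_search(xs, labels, [0.2, 0.5, 0.8])
```

Adding a second hyperparameter only means passing another axis to `product`; the scoring and selection logic stays the same.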
Posted 3 weeks ago
7.0 years
0 Lacs
Bengaluru, Karnataka, India
Remote
Thank you for your interest in working for our Company. Recruiting the right talent is crucial to our goals. On April 1, 2024, 3M Healthcare underwent a corporate spin-off leading to the creation of a new company named Solventum. We are still in the process of updating our Careers Page and applicant documents, which currently have 3M branding. Please bear with us. In the interim, our Privacy Policy here: https://www.solventum.com/en-us/home/legal/website-privacy-statement/applicant-privacy/ continues to apply to any personal information you submit, and the 3M-branded positions listed on our Careers Page are for Solventum positions. As it was with 3M, at Solventum all qualified applicants will receive consideration for employment without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or status as a protected veteran.

Job Description
PMO Operations Manager (Solventum)

3M Health Care is now Solventum. At Solventum, we enable better, smarter, safer healthcare to improve lives. As a new company with a long legacy of creating breakthrough solutions for our customers' toughest challenges, we pioneer game-changing innovations at the intersection of health, material and data science that change patients' lives for the better while enabling healthcare professionals to perform at their best. Because people, and their wellbeing, are at the heart of every scientific advancement we pursue. We partner closely with the brightest minds in healthcare to ensure that every solution we create melds the latest technology with compassion and empathy. Because at Solventum, we never stop solving for you.

The Impact You'll Make in this Role
As a PMO Operations Manager, you will have the opportunity to tap into your curiosity and collaborate with some of the most innovative and diverse people around the world. Here, you will make an impact by:
Managing and overseeing key administrative functions including spend request approvals, sourcing/procurement engagement and tracking, invoice tracking, and contract renewal tracking
Supporting the FP&A lead by liaising with vendors and contract owners to ensure alignment of the monthly accrual and amortization process
Managing and overseeing adherence to the PMO governance model, including consolidation of weekly portfolio reporting, consolidation of cross-functional RAID log items, maintenance of the project portfolio dashboard, and facilitation/tracking of project charter and project change request development and approvals
Supporting coordination of annual budget and portfolio demand planning via the collection, consolidation, and summarization of investment requests across the cybersecurity function
Leading the coordination and development of material for ad-hoc data analysis and/or reporting requests in support of strategic decision making
Proactively identifying and implementing process improvements to optimize efficiency across the function

Your Skills and Expertise
To set you up for success in this role from day one, Solventum requires (at a minimum) the following qualifications:
Bachelor's degree or higher in Information Systems, Business Administration, Engineering, or a related field AND 7 years of proven experience in operations management
OR
High School Diploma/GED AND 10 years of proven experience in operations management
In addition to the above requirements, the following are also required:
5 years of experience using collaboration capabilities in Microsoft SharePoint and Microsoft Teams to support operational processes
5 years of experience with Microsoft Excel to perform advanced-level data analysis
5 years of experience building exec-facing reports via Microsoft PowerPoint
3 years of experience with financial administration, budgeting, and forecasting

Additional qualifications that could help you succeed even further in this role include:
Master's degree in Information Systems, Business Administration, Engineering, or a related field from an accredited institution
Strong financial acumen – ability to translate complex business problems into financial terms
Creative problem solving – ability to work with diverse functional teams to evaluate and address key issues
Strong verbal and written communication skills

Work location: Hybrid eligible (job duties allow for some remote work but require travel to Bangalore at least 3 days per week)
Travel: Not required

Solventum is committed to maintaining the highest standards of integrity and professionalism in our recruitment process. Applicants must remain alert to fraudulent job postings and recruitment schemes that falsely claim to represent Solventum and seek to exploit job seekers. Please note that all email communications from Solventum regarding job opportunities with the company will be from an email with a domain of @solventum.com. Be wary of unsolicited emails or messages regarding Solventum job opportunities from emails with other email domains.

Please note: your application may not be considered if you do not provide your education and work history, either by: 1) uploading a resume, or 2) entering the information into the application fields directly.

Solventum Global Terms of Use and Privacy Statement
Carefully read these Terms of Use before using this website. Your access to and use of this website and application for a job at Solventum are conditioned on your acceptance and compliance with these terms. Please access the linked document by clicking here, select the country where you are applying for employment, and review. Before submitting your application you will be asked to confirm your agreement with the terms.
Posted 3 weeks ago
0 years
1 - 2 Lacs
Bhopal
Remote
✅ Core Technical Skills (Must-Have)
• Proficiency in Python with experience in Flask or FastAPI
• Strong understanding of REST API design
• Experience designing modular, scalable backends
• Knowledge of async programming, background jobs, or task queues (e.g., Celery)
• Familiarity with PostgreSQL or similar relational databases
• Comfortable working with environment variables, API tokens, and deployment-ready code
You need to have a good core understanding. However, you'll also get guidance from a senior BlackRock developer.

- Bonus Skills (Nice-to-Have / Will Learn on Job)
• Experience with (or willingness to learn) Amazon SP-API
• Experience with LLMs or AI integrations (e.g., review summarization, data enrichment)
• Familiarity with React.js (for integration with the frontend team)
• Knowledge of AWS services like EC2, RDS, S3, Lambda

- What You'll Work On
• Keyword-based product search backend (like Helium10's BlackBox)
• Sales estimation engine (reverse-engineered from Amazon rankings)
• Data refresh & task queues
• API integration with the frontend dashboard
• Optional AI modules (e.g., review analysis using LLMs)

- Mindset
• Ability to research and learn SP-API endpoints (we'll guide you)
• Write clean, well-documented code for long-term maintainability
• Think modular, scale-ready (this won't stay an MVP for long)

Job Types: Full-time, Fresher
Pay: ₹180,000.00 - ₹240,000.00 per year
Benefits: Work from home
Location Type: In-person
Application Question(s):
Have you worked with Amazon APIs before?
Are you a fresher?
You'll work under the guidance of a senior developer. Are you willing to learn?
The job may require you to work from home. Do you have appropriate space to work from home without an office desk?
Work Location: In person
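The background-jobs/task-queue requirement above comes down to the producer-consumer pattern: the request path enqueues work and returns immediately, while workers drain the queue off-thread. A minimal stdlib sketch of the pattern Celery industrializes (broker, retries, and result backend omitted; the doubling "job" is a hypothetical stand-in for real work such as a data refresh):

```python
import queue
import threading

def worker(tasks, results):
    """Background worker: pull jobs until a None sentinel arrives."""
    while True:
        job = tasks.get()
        if job is None:
            break
        results.append(job * 2)  # stand-in for slow work (e.g., refreshing product data)
        tasks.task_done()

tasks = queue.Queue()
results = []
t = threading.Thread(target=worker, args=(tasks, results))
t.start()

for job in (1, 2, 3):
    tasks.put(job)   # the API handler would return here; work happens off-thread
tasks.put(None)      # signal shutdown
t.join()
```

With Celery the queue lives in a broker (Redis/RabbitMQ) and workers run in separate processes, but the enqueue-and-return contract is the same.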
Posted 3 weeks ago
2.0 years
0 Lacs
Hyderabad, Telangana
Remote
Software Engineer II - AI/ML
Hyderabad, Telangana, India
No longer accepting applications
Job number: 1812127
Work site: Up to 50% work from home
Travel: 0-25%
Role type: Individual Contributor
Profession: Software Engineering
Discipline: Software Engineering
Employment type: Full-Time

Overview
The Business & Industry Copilots group is a rapidly growing organization that is responsible for the Microsoft Dynamics 365 suite of products, Power Apps, Power Automate, Dataverse, AI Builder, Microsoft Industry Solutions, and more. Microsoft is considered one of the leaders in Software as a Service in the world of business applications, and this organization is at the heart of how business applications are designed and delivered.
This is an exciting time to join our group, BIC Customer Experience, and work on something highly strategic to Microsoft. The goal of Customer Zero Engineering is to build the next generation of our applications running on Dynamics 365, AI, Copilot, and several other Microsoft cloud services to deliver high-value, complete, and Copilot-enabled application scenarios across all devices and form factors. We innovate quickly and collaborate closely with our partners and customers in an agile, high-energy environment. Leveraging the scalability and value from Azure & Power Platform, we ensure our solutions are robust and efficient.
If the opportunity to collaborate with a diverse engineering team on enabling end-to-end business scenarios using cutting-edge technologies, and to solve challenging problems for large-scale 24x7 business SaaS applications, excites you, please come and talk to us!
Microsoft's mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.

Qualifications
Basic Qualifications:
Knowledge of machine learning algorithms and concepts (e.g., supervised learning, unsupervised learning, deep learning) as applied to generative AI.
2+ years of professional experience in a technical role developing, training, evaluating, and deploying ML solutions at scale for real-world problems.
3+ years of experience as a software engineer, developing and shipping software in Python, C#, Java, or a modern language equivalent.
Familiarity with ML frameworks and libraries like TensorFlow, PyTorch, Scikit-learn, Keras, etc.
Experience in handling large datasets and working with data processing frameworks (Apache Spark, Hadoop, etc.).
Hands-on experience with cloud platforms like Azure, AWS, or GCP for deploying and scaling machine learning models.
Excellent cross-group and interpersonal skills, with the ability to articulate solutions.
Bachelor's/Master's degree with relevant coursework toward Computer Science, Data Science, Statistics, Machine Learning, or Data Mining, and equivalent work experience.
Preferred Qualifications:
Experience in designing and implementing MLOps strategies for model deployment, monitoring, and governance.
Familiarity with deep learning architectures (Transformers, CNNs, RNNs) and knowledge of Natural Language Processing (NLP).
Knowledge of containers (Docker) and orchestration tools like Kubernetes.
Strong analytical mind and a confident decision maker.
Excellent computer science fundamentals in algorithmic design, data structures, and analyzing complexity.
Ability to balance competing demands and adapt to changing priorities.
Experience mentoring junior engineers and data scientists, providing technical guidance and code reviews.
By applying to this role, you will be considered for additional roles with similar qualifications. #BICJobs

Responsibilities
Design, develop, and implement robust and scalable software applications that utilize generative AI techniques.
Integrate LLMs and NLP into software solutions for tasks such as content recommendations, summarization, translation, and retrieval.
Optimize generative AI models and their performance for specific use cases.
Help establish and drive the adoption of good coding standards and patterns.
Help identify opportunities to improve and optimize existing systems using generative AI.
Stay up to date with the latest trends and technologies in generative AI.
Other: Embody our Culture and Values

Benefits/perks listed below may vary depending on the nature of your employment with Microsoft and the country where you work.
Industry leading healthcare
Educational resources
Discounts on products and services
Savings and investments
Maternity and paternity leave
Generous time away
Giving programs
Opportunities to network and connect

Microsoft is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to age, ancestry, citizenship, color, family or medical care leave, gender identity or expression, genetic information, immigration status, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran or military status, race, ethnicity, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable local laws, regulations and ordinances. If you need assistance and/or a reasonable accommodation due to a disability during the application process, read more about requesting accommodations.
Posted 3 weeks ago
5.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Who We Are
Zinnia is the leading technology platform for accelerating life and annuities growth. With innovative enterprise solutions and data insights, Zinnia simplifies the experience of buying, selling, and administering insurance products. All of which enables more people to protect their financial futures. Our success is driven by a commitment to three core values: be bold, team up, deliver value. And that we do. Zinnia has over $180 billion in assets under administration, serves 100+ carrier clients, 2500 distributors and partners, and over 2 million policyholders.

Who You Are
We are seeking a highly motivated Senior Data Scientist with strong technical expertise, business acumen, and strategic problem-solving abilities. In this role, you will independently own and build forecasting models. You will work closely with stakeholders across Product, Data Engineering & Analytics, and Business Strategy to identify opportunities to resolve business problems. This is a high-impact individual contributor role.

What You'll Do
Work in a dynamic and innovative company to develop cutting-edge solutions.
Perform advanced data analytics to influence decision makers.
Leverage data from diverse sources to provide insights and build new data-driven tools.
Partner with engineering teams to continuously improve our data quality.
Develop machine learning based applications, using supervised and unsupervised models in Python, that optimize and personalize customer experiences or reduce manual effort on our internal teams through automated decision making.
Develop and support models to enable things such as prescriptive insights and automated decisioning.
Develop experiments to understand model impact, monitor live model analytics, and manage training and retraining pipelines.
Work with stakeholders to intake complicated business problems and translate them into solvable data science projects.
Partner with data engineering to take your model from development to deployed infrastructure.
Brainstorm future use cases and contribute to the learning culture of the data science team.
Partner with and mentor other data scientists and data analysts.
Partner with multiple marketing teams, manage multiple projects, and help conceptualize applications that directly drive company growth and strategy. You will connect machine learning applications to business needs and help facilitate process changes based on algorithmic solution implementation.

What You'll Need
Data science experience building and validating machine learning and forecasting models in Python for a minimum of 5 years.
You love performing advanced analytics to find insights and patterns.
Experience in supervised and unsupervised modeling techniques such as random forest, gradient boosting, support vector machines, k-means and hierarchical clustering, causal models, and mixture models, plus experience in advanced modeling techniques such as reinforcement learning, neural networks, and natural language modeling.
Experience delivering natural language projects utilizing techniques such as text summarization, topic modeling, entity extraction, semantic encoding, and valence analysis.
Experience working in an agile business setting.
Experience with relational cloud databases like BigQuery and Snowflake, and comfort working with unstructured data such as free-form text.
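Owning forecasting models, as described above, usually starts with a naive baseline that any richer model must beat. A stdlib-only sketch of a rolling moving-average forecast with MAE evaluation (illustrative only; production work would use proper backtesting and the gradient-boosting or neural-network techniques the posting names):

```python
def moving_average_forecast(series, window=3, horizon=2):
    """Naive baseline: forecast each future step as the mean of the trailing window."""
    history = list(series)
    forecasts = []
    for _ in range(horizon):
        pred = sum(history[-window:]) / window
        forecasts.append(pred)
        history.append(pred)  # roll the forecast forward for multi-step horizons
    return forecasts

def mae(preds, actuals):
    """Mean absolute error, a standard forecast-evaluation metric."""
    return sum(abs(p - a) for p, a in zip(preds, actuals)) / len(preds)

fc = moving_average_forecast([10, 12, 14], window=3, horizon=2)
err = mae(fc, [13, 13])  # score against (hypothetical) held-out actuals
```

A candidate model that cannot beat this baseline's MAE on held-out data is not adding value, which makes the baseline a cheap gate for the experimentation pipeline described above.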
Posted 3 weeks ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Selected Intern's Day-to-day Responsibilities Include: Analyze AI conversations, summarization, and operations: Review live, recorded, and transcribed user chats (text and voice) handled by our AI tool. Review AI automated actions, processes, and protocols across the product. Identify patterns, trends, successes, and areas needing improvement in AI responses and user interactions. Analyze conversation flow and user sentiment. Ensure Quality & Compliance: Flag conversations where the AI's response is inaccurate, off-brand, unhelpful, or non-compliant with our guidelines. Help maintain high standards for every user interaction. Deliver Actionable Insights: Provide clear, structured feedback to the tech team to drive AI model improvements. Share valuable user insights with the marketing team to inform strategy and messaging. Contribute to reports summarizing findings and recommendations. Manage High-Priority Interactions: Recognize and proactively take over conversations involving urgent, sensitive, or complex user queries requiring human expertise. Contribute to AI Training: Participate in defining and feeding "ideal responses" to the AI model. Help expand the AI's knowledge base with domain-specific information. Drive Product Excellence: Use your analysis and observations to directly contribute to the following: Product Development: Identify feature opportunities and usability gaps. User Experience (UX) Enhancement: Pinpoint friction points and suggest improvements. User Acquisition Growth: Uncover insights to help attract and convert new users. About Company: Nbyula, a German technology brand, is a horizontal marketplace encompassing people, content, technology, and services for international studies and work, built to enable and empower 'Skillizens without Borders'. Nbyula aims to build a global digital technology ecosystem for international studies and work.
Nbyula takes a contrarian approach to bring trust and transparency to the field of international studies and work. This is done based on massive data-gathering infrastructure and international alumni-contributed content.
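As an illustration of the quality-and-compliance review described in this listing: before human review, an automated first pass often applies simple rule-based flagging. The rule lists and field names below are invented for illustration; a real system would use trained classifiers and policy-specific rules:

```python
# Hedged sketch: keyword-based flagging of AI responses that may need human
# review. The term lists are made-up examples, not Nbyula's actual rules.

URGENT_TERMS = {"refund", "legal", "visa rejected", "emergency"}
OFF_BRAND_TERMS = {"guarantee admission", "100% success"}

def flag_response(user_message, ai_response):
    """Return a list of review flags for one conversational turn."""
    flags = []
    text = (user_message + " " + ai_response).lower()
    if any(term in text for term in URGENT_TERMS):
        flags.append("escalate_to_human")          # urgent/sensitive query
    if any(term in ai_response.lower() for term in OFF_BRAND_TERMS):
        flags.append("off_brand")                  # non-compliant promise
    if not ai_response.strip():
        flags.append("empty_response")
    return flags

print(flag_response("My visa rejected, what now?", "We guarantee admission."))
```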
Posted 3 weeks ago
4.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
About us
As a Fortune 50 company with more than 400,000 team members worldwide, Target is an iconic brand and one of America's leading retailers. Joining Target means promoting a culture of mutual care and respect and striving to make the most meaningful and positive impact. Becoming a Target team member means joining a community that values different voices and lifts each other up. Here, we believe your unique perspective is important, and you'll build relationships by being authentic and respectful.

Overview about TII
At Target, we have a timeless purpose and a proven strategy. And that hasn't happened by accident. Some of the best minds from different backgrounds come together at Target to redefine retail in an inclusive learning environment that values people and delivers world-class outcomes. That winning formula is especially apparent in Bengaluru, where Target in India operates as a fully integrated part of Target's global team and has more than 4,000 team members supporting the company's global strategy and operations.

Team overview
Roundel is Target's entry into the media business, an advertising sell-side business built on the principles of first-party (people-based) data, brand-safe content environments, and proof that our marketing programs drive business results for our clients. We operate with an ethos of trust and transparency, and we believe media works best when it works in everyone's best interest. At the very root of that, Roundel is here to drive business growth for our clients and redefine "value" in the industry by solving core industry challenges rather than copying current industry methods of operation. We are here to drive a key growth initiative for Target and lead the industry toward operating better within the media marketplace. Roundel's Product Team is responsible for building the technology that supports the growing business.
They are accountable for the delivery of business outcomes enabled through technology and analytic products that are easy to use, easily maintained, and highly reliable.

Role overview
As a Product Manager for Work Management (Workfront Product), you'll lead the strategy and roadmap for how teams across Roundel plan, prioritize, and execute work. In this highly visible and impactful role, you'll be responsible for driving operational efficiency, transparency, and cross-functional collaboration by evolving Work Management capabilities to meet changing business needs. You'll serve as the "voice of the product," advocating for users, aligning with stakeholders, and ensuring your team receives the leadership and support needed to deliver impactful solutions. You'll partner closely with teams across the enterprise to identify pain points, streamline workflows, and deliver scalable, intuitive experiences that enable better planning and execution. With a strong focus on process improvement and operational excellence, you'll obsess over the details that drive measurable outcomes. As part of Work Management's (Workfront platform) leadership, you'll also explore and implement opportunities to integrate AI and automation, working with engineering and data science partners to pilot and scale solutions such as intelligent work intake, auto-prioritization, and generative task support. These innovations will aim to enhance productivity, reduce manual effort, and ensure outputs are intuitive, measurable, and grounded in responsible AI principles. You'll be accountable for maintaining a product roadmap that balances customer needs, technical capabilities, and design challenges. You'll own prioritization across themes, epics, and stories, ensuring work aligns to the broader product strategy and delivers maximum value. You'll also create and manage product OKRs, define key success metrics, and track related P&L impacts.
This role requires strong relationship-building across pyramids to drive alignment, champion the product vision, and continuously evolve the platform to meet the needs of Roundel's growing organization. Job duties may change at any time due to business needs.

Core responsibilities
Define and drive the product strategy and roadmap for Workfront, aligned with organizational priorities. Serve as the primary product owner and subject matter expert for the Workfront platform. Partner with cross-functional teams (e.g., GTM, Operations, Technology, and PMO) to gather and prioritize user requirements. Lead backlog grooming, sprint planning, and ongoing Agile ceremonies with platform support and tech partners. Identify and implement enhancements to improve work intake, capacity planning, and resource allocation processes. Ensure data integrity, user adoption, and platform performance through strong governance practices. Monitor key metrics and feedback loops to measure platform usage and effectiveness. Champion change management and user education for new feature rollouts and best practices.

AI & Automation Focus: Explore and integrate AI tools to streamline work intake, auto-prioritize tasks, and reduce manual effort. Partner with engineering and data science teams to pilot and scale generative AI capabilities for task creation, brief summarization, and workflow automation. Stay current with GenAI trends and tools, identifying opportunities to integrate them into Workfront for improved productivity and user experience. Define metrics to measure the impact of AI-driven features on team efficiency, work quality, and cost savings. Champion responsible AI practices to ensure transparency, accuracy, and trust in automated recommendations.

Technical & Strategic Acumen
Understand Workfront's integration capabilities and identify opportunities to connect with other tools (e.g., DAMs, reporting platforms, Jira).
Evaluate third-party add-ons or automation opportunities to streamline workflow execution. Stay current on Workfront/Work Management releases and industry best practices to inform long-term platform evolution.

About you
4-year college degree (or equivalent experience). 10+ years of technical product experience. Experience working in an agile environment (e.g., user stories, iterative development, scrum teams, sprints, personas). Proven experience as a product manager or platform owner (Workfront or other work management experience strongly preferred). Strong understanding of workflow automation, enterprise project management, or marketing operations. Exceptional analytical, organizational, and communication skills. Ability to thrive in a fast-paced, collaborative environment with shifting priorities.

Useful links
Life at Target: https://india.target.com/ Benefits: https://india.target.com/life-at-target/workplace/benefits Culture: https://india.target.com/life-at-target/belonging
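For illustration only: the "auto-prioritization" of work intake mentioned in this posting is often bootstrapped as a weighted score over request attributes before any ML is involved. The fields and weights below are hypothetical, not Workfront's actual model:

```python
# Hypothetical sketch of rule-based work-intake prioritization.
# Attribute names and weights are invented for illustration.

WEIGHTS = {"revenue_impact": 0.5, "urgency": 0.3, "effort_inverse": 0.2}

def priority_score(item):
    """Weighted score over attributes already normalized to the 0-1 range."""
    return sum(WEIGHTS[k] * item[k] for k in WEIGHTS)

backlog = [
    {"name": "campaign-launch", "revenue_impact": 0.9, "urgency": 0.8, "effort_inverse": 0.4},
    {"name": "template-cleanup", "revenue_impact": 0.2, "urgency": 0.3, "effort_inverse": 0.9},
]
ranked = sorted(backlog, key=priority_score, reverse=True)
print([item["name"] for item in ranked])
```

A generative-AI layer would typically sit upstream of this, extracting the attribute values from free-text intake requests.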
Posted 3 weeks ago
9.0 - 12.0 years
0 Lacs
Hyderabad, Telangana, India
Remote
Join Amgen's Mission to Serve Patients
If you feel like you're part of something bigger, it's because you are. At Amgen, our shared mission—to serve patients—drives all that we do. It is key to our becoming one of the world's leading biotechnology companies. We are global collaborators who achieve together—researching, manufacturing, and delivering ever-better products that reach over 10 million patients worldwide. It's time for a career you can be proud of.

What You Will Do
Let's do this. Let's change the world. In this vital role, we are seeking a strategic and hands-on Specialist Software Engineer / AI Engineer – Search to lead the design, development, and deployment of AI-powered search and knowledge discovery solutions across our pharmaceutical enterprise. In this role, you'll manage a team of engineers and work closely with data scientists, oncologists, and domain experts to build intelligent systems that help users across R&D, medical, and commercial functions find relevant, actionable information quickly and accurately. Architect and lead the development of scalable, intelligent search systems leveraging NLP, embeddings, LLMs, and vector search. Own the end-to-end lifecycle of search solutions, from ingestion and indexing to ranking, relevancy tuning, and UI integration. Build systems that surface scientific literature, clinical trial data, regulatory content, and real-world evidence using semantic and contextual search. Integrate AI models that improve search precision, query understanding, and result summarization (e.g., generative answers via LLMs). Partner with platform teams to deploy search solutions on scalable infrastructure (e.g., Kubernetes, cloud-native services, Databricks, Snowflake). Experience with generative AI in search engines. Experience integrating generative AI capabilities and vision models to enrich content quality and user engagement.
Building and owning the next generation of content knowledge platforms and other algorithms/systems that create high-quality and unique experiences. Designing and implementing advanced AI models for entity matching and data deduplication. Experience with generative AI tasks such as content summarization, deduplication, and metadata quality. Researching and developing advanced AI algorithms, including vision models for visual content analysis. Implementing KPI measurement frameworks to evaluate the quality and performance of delivered models, including those utilizing generative AI. Developing and maintaining deep learning models for data quality checks, visual similarity scoring, and content tagging. Continually researching current and emerging technologies and proposing changes where needed. Implement GenAI solutions, utilize ML infrastructure, and contribute to data preparation, optimization, and performance enhancements. Manage and mentor a cross-functional engineering team focused on AI, ML, and search infrastructure. Foster a collaborative, high-performance engineering culture with a focus on innovation and delivery. Work with domain experts, data stewards, oncologists, and product managers to align search capabilities with business and scientific needs.

Basic Qualifications:
Degree in computer science & engineering preferred, with 9-12 years of software development experience. Proficient in AI/ML modeling, NLP, Python, Elasticsearch/Solr/OpenSearch, GraphQL, NoSQL, cloud CI/CD build pipelines, and APIs. Proven experience building search systems with technologies like Elasticsearch, Solr, OpenSearch, or vector DBs (e.g., Pinecone, FAISS).
Hands-on experience with various AI models, GCP search engines, and GCP cloud services. Strong understanding of NLP, embeddings, transformers, and LLM-based search applications. Proficient in AI/ML programming with Python, GraphQL, Java crawlers, JavaScript, SQL/NoSQL, Databricks/RDS, data engineering, S3 buckets, and DynamoDB. Strong problem-solving and analytical skills; ability to learn quickly; excellent communication and interpersonal skills. Experience deploying ML services and search infrastructure in cloud environments (AWS, Azure, or GCP).

Preferred Qualifications:
Experience in AI/ML, Java, REST APIs, and Python. Proficient in Databricks and Java. Experienced with FastAPI. Experience with design patterns, data structures, data modeling, and data algorithms. Knowledge of ontologies and taxonomies such as MeSH, SNOMED CT, UMLS, or MedDRA. Familiarity with MLOps, CI/CD for ML, and monitoring of AI models in production. Experienced with the AWS/Azure platforms, building and deploying code. Experience with PostgreSQL/MongoDB databases, vector databases for large language models, Databricks or RDS, and S3 buckets. Experience with Google Cloud Search and Google Cloud Storage. Experience with popular large language models. Experience with the LangChain or LlamaIndex frameworks for language models. Experience with prompt engineering and model fine-tuning. Knowledge of NLP techniques for text analysis and sentiment analysis. Experience with generative AI or retrieval-augmented generation (RAG) frameworks in a pharma/biotech setting. Experience in Agile software development methodologies. Experience in end-to-end testing as part of test-driven development.

Good To Have Skills
Willingness to work on full-stack applications. Experience working with biomedical or scientific data (e.g., PubMed, clinical trial registries, internal regulatory databases).

Soft Skills:
Excellent analytical and troubleshooting skills. Strong verbal and written communication skills.
Ability to work effectively with global, remote teams. High degree of initiative and self-motivation. Ability to manage multiple priorities successfully. Team-oriented, with a focus on achieving team goals. Strong presentation and public speaking skills.

Thrive
What You Can Expect From Us
As we work to develop treatments that take care of others, we also work to care for our teammates' professional and personal growth and well-being. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards. Apply now for a career that defies imagination. In our quest to serve patients above all else, Amgen is the first to imagine, and the last to doubt. Join us. careers.amgen.com Amgen is an Equal Opportunity employer and will consider you without regard to your race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status. We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
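A minimal sketch of the embed-and-retrieve step behind the semantic search and RAG work this posting describes. Real systems use learned transformer embeddings and a vector database (FAISS, Pinecone, etc.); here a toy term-count vector stands in for the embedding model so the example stays self-contained:

```python
# Illustration only: cosine-similarity retrieval over toy "embeddings".
# A production system would replace embed() with a transformer model and
# the in-memory index with a vector DB.

import math
from collections import Counter

def embed(text):
    """Toy 'embedding': a sparse term-count vector (stand-in for a model)."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a if t in b)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

documents = [
    "phase iii clinical trial results for oncology drug",
    "regulatory submission checklist for biologics",
    "real world evidence study on patient outcomes",
]
index = [(doc, embed(doc)) for doc in documents]  # the "vector index"

def search(query, k=1):
    """Return the top-k documents by cosine similarity to the query."""
    scored = sorted(index, key=lambda d: cosine(embed(query), d[1]), reverse=True)
    return [doc for doc, _ in scored[:k]]

print(search("oncology clinical trial"))
```

In a RAG pipeline, the retrieved passages would then be placed into the LLM prompt as grounding context.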
Posted 3 weeks ago
6.0 - 8.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Join Amgen's Mission to Serve Patients
If you feel like you're part of something bigger, it's because you are. At Amgen, our shared mission—to serve patients—drives all that we do. It is key to our becoming one of the world's leading biotechnology companies. We are global collaborators who achieve together—researching, manufacturing, and delivering ever-better products that reach over 10 million patients worldwide. It's time for a career you can be proud of.

What You Will Do
Let's do this. Let's change the world. In this vital role, we are seeking a highly skilled and hands-on Senior Software Engineer – Search to drive the development of intelligent, scalable search systems across our pharmaceutical organization. You'll work at the intersection of software engineering, AI, and life sciences to enable seamless access to structured and unstructured content—spanning research papers, clinical trial data, regulatory documents, and internal scientific knowledge. This is a high-impact role where your code directly accelerates innovation and decision-making in drug development and healthcare delivery. Design, implement, and optimize search services using technologies such as Elasticsearch, OpenSearch, Solr, or vector search frameworks. Collaborate with data scientists and analysts to deliver data models and insights. Develop custom ranking algorithms, relevancy tuning, and semantic search capabilities tailored to scientific and medical content. Support the development of intelligent search features like query understanding, question answering, summarization, and entity recognition. Build and maintain robust, cloud-native APIs and backend services to support high-availability search infrastructure (e.g., AWS, GCP, Azure). Implement CI/CD pipelines, observability, and monitoring for production-grade search systems. Work closely with Product Owners and Tech Architects.
Enable indexing of both structured (e.g., clinical trial metadata) and unstructured (e.g., PDFs, research papers) content. Design and develop modern data management tools to curate our most important data sets, models, and processes, while identifying areas for process automation and further efficiencies. Expertise in programming languages such as Python, Java, or TypeScript, and frameworks such as React. Strong experience with data storage and processing technologies (e.g., Hadoop, Spark, Kafka, Airflow, SQL/NoSQL databases). Demonstrated strong initiative and ability to work with minimal supervision or direction. Strong experience with cloud infrastructure (AWS, Azure, or GCP) and infrastructure as code such as Terraform. In-depth knowledge of relational and columnar SQL databases, including database design. Expertise in data warehousing concepts (e.g., star schema, entitlement implementations, SQL vs. NoSQL modeling, milestoning, indexing, partitioning). Experience in REST and/or GraphQL. Experience creating Spark jobs for data transformation and aggregation. Experience with distributed, multi-tiered systems, algorithms, and relational databases.
Possesses strong rapid prototyping skills and can quickly translate concepts into working code. Develop and execute unit tests, integration tests, and other testing strategies to ensure the quality of the software. Analyze and understand the functional and technical requirements of applications. Identify and resolve software bugs and performance issues. Work closely with multi-functional teams, including product management, design, and QA, to deliver high-quality software on time. Maintain detailed documentation of software designs, code, and development processes.

Basic Qualifications:
Degree in computer science & engineering preferred, with 6-8 years of software development experience. Proficient in Databricks, data engineering, Python, search algorithms using NLP/AI models, GCP cloud services, and GraphQL. Hands-on experience with search technologies (Elasticsearch, Solr, OpenSearch, or Lucene). Hands-on experience with full-stack software development. Proficient in Java, Python, FastAPI, Databricks/RDS, data engineering, S3 buckets, ETL, Hadoop, Spark, Airflow, and AWS Lambda. Experience with data streaming frameworks (Apache Kafka, Flink). Experience with cloud platforms (AWS, Azure, Google Cloud) and related services (e.g., S3, Redshift, BigQuery, Databricks). Hands-on experience with various cloud services and an understanding of their pros and cons under well-architected cloud design principles. Working knowledge of open-source tools such as AWS Lambda. Strong problem-solving and analytical skills; ability to learn quickly; excellent communication and interpersonal skills.

Preferred Qualifications:
Experience in Python, Java, React, FastAPI, TypeScript, JavaScript, and CSS/HTML is desirable. Experienced with API integration, serverless, and microservices architecture.
Experience in Databricks, PySpark, Spark, SQL, ETL, and Kafka. Solid understanding of data governance, data security, and data quality best practices. Experience with unit testing, building, and debugging code. Experienced with the AWS/Azure platforms, building and deploying code. Experience with vector databases for large language models, Databricks or RDS. Experience with DevOps CI/CD build and deployment pipelines. Experience in Agile software development methodologies. Experience in end-to-end testing. Experience with additional modern database terminologies.

Good To Have Skills
Willingness to work on AI applications. Experience in MLOps, React, JavaScript, Java, and GCP search engines. Experience with popular large language models. Experience with the LangChain or LlamaIndex frameworks for language models. Experience with prompt engineering and model fine-tuning. Knowledge of NLP techniques for text analysis and sentiment analysis.

Soft Skills:
Excellent analytical and troubleshooting skills. Strong verbal and written communication skills. Ability to work effectively with global teams. High degree of initiative and self-motivation. Team-oriented, with a focus on achieving team goals. Strong presentation and public speaking skills.

Thrive
What You Can Expect From Us
As we work to develop treatments that take care of others, we also work to care for our teammates' professional and personal growth and well-being. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards. Apply now for a career that defies imagination. In our quest to serve patients above all else, Amgen is the first to imagine, and the last to doubt. Join us. careers.amgen.com Amgen is an Equal Opportunity employer and will consider you without regard to your race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status.
We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
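The "custom ranking algorithms, relevancy tuning" work this posting describes usually starts from BM25, the default relevance function in Lucene-based engines such as Elasticsearch, Solr, and OpenSearch. A from-scratch sketch on a toy corpus (k1 and b set to common defaults; not tied to any particular engine's implementation):

```python
# Illustration only: BM25 scoring over a three-document toy corpus.

import math
from collections import Counter

K1, B = 1.5, 0.75  # common default-range parameters

docs = [
    "clinical trial protocol for oncology study",
    "oncology oncology drug trial results",
    "internal regulatory document archive",
]
tokenized = [d.split() for d in docs]
N = len(docs)
avgdl = sum(len(d) for d in tokenized) / N  # average document length

def idf(term):
    """Inverse document frequency with the usual +0.5 smoothing."""
    df = sum(1 for d in tokenized if term in d)
    return math.log(1 + (N - df + 0.5) / (df + 0.5))

def bm25(query, doc_tokens):
    """Sum per-term contributions: tf saturation (K1) and length norm (B)."""
    tf = Counter(doc_tokens)
    score = 0.0
    for term in query.split():
        if term not in tf:
            continue
        numer = tf[term] * (K1 + 1)
        denom = tf[term] + K1 * (1 - B + B * len(doc_tokens) / avgdl)
        score += idf(term) * numer / denom
    return score

scores = [bm25("oncology trial", d) for d in tokenized]
best = docs[max(range(N), key=lambda i: scores[i])]
print(best)
```

The repeated "oncology" term pushes the second document above the first, which is exactly the term-frequency behavior relevancy tuning then adjusts with boosts, field weights, and analyzers.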
Posted 3 weeks ago
3.0 years
0 Lacs
India
On-site
droppGroup is hiring an exceptionally capable AI Engineer with deep, practical expertise in generative AI and natural language processing (NLP). This role is not for generalists; we are looking for specialists who can operate across the full AI lifecycle, from research implementation to real-time production deployment. You must bring a disciplined engineering mindset, state-of-the-art awareness, and hands-on experience with training, fine-tuning, and deploying large language models at scale.

Responsibilities
Design and fine-tune large transformer models for complex generative tasks including chat, summarization, and semantic understanding. Build LLM-based systems using advanced prompt engineering, retrieval-augmented generation (RAG), and optimized inference. Own the development and optimization of model training pipelines using PyTorch, Hugging Face, DeepSpeed, and related frameworks. Deploy and maintain LLMs in high-performance production environments, optimizing for latency, cost, and stability. Conduct rigorous evaluation of models using benchmarks, human feedback, and automated testing. Manage end-to-end AI workflows with robust versioning, reproducibility, and observability using MLOps tooling. Implement scalable vector databases and semantic search infrastructure for embedding-based retrieval systems. Translate the latest AI research into efficient, production-grade code—no research-only profiles. Size, allocate, and optimize compute resources (CPU/GPU/TPU) for large-scale training and inference. Establish strict testing, rollback, and fail-safe mechanisms for model deployment in live systems.

Qualifications
Minimum 3 years of focused experience building deep learning systems, with a proven track record in NLP and/or generative AI. Hands-on experience training and deploying transformer-based models (BERT, T5, GPT, LLaMA, etc.)—from scratch or via fine-tuning.
Expert-level Python, with production-grade PyTorch skills (tensor manipulation, gradient debugging, memory profiling, etc.). Demonstrated ability to work with multi-billion-parameter models and deploy them under real-world latency and throughput constraints. Deep understanding of language modeling, embeddings, attention mechanisms, tokenization strategies, and architectural tradeoffs. Ability to read, implement, and extend research papers into robust, testable, and efficient systems. Experience with AI infrastructure at scale, including distributed training, mixed precision, checkpointing, and A100-level GPU scheduling. Familiarity with vector databases (FAISS, Weaviate, Milvus) and hybrid search systems. Proficiency in MLOps tools (MLflow, Airflow, WandB, or similar) for model lifecycle management. Strong commitment to software engineering principles: code quality, version control, CI/CD, and modular design. Capacity to own AI systems end-to-end—from dataset preprocessing to online deployment and continuous improvement. droppGroup is an equal opportunity employer. We offer a competitive compensation structure and a fast-paced growth roadmap to all our teams.
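As a pointer to the "attention mechanisms" this role expects deep understanding of, scaled dot-product attention can be written out in a few lines. Pure Python on tiny matrices so the arithmetic is visible; production code would of course use PyTorch tensors:

```python
# Illustration only: scaled dot-product attention,
# softmax(Q K^T / sqrt(d_k)) V, on hand-sized matrices.

import math

def softmax(row):
    m = max(row)                      # subtract max for numerical stability
    exps = [math.exp(x - m) for x in row]
    s = sum(exps)
    return [e / s for e in exps]

def matmul(a, b):
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
            for row in a]

def attention(Q, K, V):
    d_k = len(K[0])
    K_T = [list(col) for col in zip(*K)]
    scores = matmul(Q, K_T)                                   # Q @ K^T
    scaled = [[s / math.sqrt(d_k) for s in row] for row in scores]
    weights = [softmax(row) for row in scaled]                # rows sum to 1
    return matmul(weights, V), weights

Q = [[1.0, 0.0]]
K = [[1.0, 0.0], [0.0, 1.0]]
V = [[10.0, 0.0], [0.0, 10.0]]
out, w = attention(Q, K, V)
print([round(x, 3) for x in w[0]])
```

The query aligned with the first key receives the larger weight, so the output leans toward the first value row; that weighting, learned end-to-end, is the core of the transformer architecture.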
Posted 4 weeks ago
10.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Role description
About the Role: We are looking for a dynamic and tech-savvy Pre-Sales Consultant with exceptional deck creation and storytelling skills. You'll work closely with Sales, Product, and Delivery teams to understand client needs, shape GenAI-powered solutions, and present them in compelling business terms to enterprise clients.

Key Responsibilities: Design and present GenAI use cases like document summarization, knowledge assistants, intelligent search, and content generation. Build high-impact decks and demos to communicate value propositions clearly to both technical and non-technical audiences. Collaborate with Data Scientists, Engineers, and Product Managers to scope solutions and create proposal responses (RFPs/RFIs). Stay updated on the latest LLMs, APIs, and tools like OpenAI, LangChain, Anthropic, Hugging Face, etc. Conduct proof-of-concept (POC) discussions and map customer problems to GenAI capabilities. Support go-to-market initiatives, GenAI workshops, and client enablement programs.

Required Qualifications: 5–10 years of experience in Pre-Sales / Solution Consulting / AI Product roles. Strong understanding of LLMs, RAG architecture, vector databases, and prompt engineering. Demonstrated experience with AI/ML use cases, ideally in the NLP/GenAI space. Proven ability to create visually compelling decks and pitch narratives using PowerPoint, Canva, or equivalent tools. Excellent client-facing communication and presentation skills. Experience working with cross-functional teams and influencing C-level stakeholders.
Posted 4 weeks ago
0 years
0 Lacs
Chennai
On-site
Role – AIML Data Scientist
Job Location: Hyderabad
Mode of Interview: Virtual
Job Description:
1. Be a hands-on problem solver with a consultative approach who can apply Machine Learning and Deep Learning algorithms to solve business challenges:
a. Use knowledge of a wide variety of AI/ML techniques and algorithms to find which combinations of techniques best solve the problem
b. Improve model accuracy to deliver greater business impact
c. Estimate the business impact of deploying a model
2. Work with domain/customer teams to understand the business context and data dictionaries, and apply the relevant Deep Learning solution to the given business challenge
3. Work with tools and scripts for pre-processing data and feature engineering for model development – Python / R / SQL / cloud data pipelines
4. Design, develop, and deploy Deep Learning models using TensorFlow / PyTorch
5. Experience using Deep Learning models with text, speech, image, and video data:
a. Design and develop NLP models for text classification, custom entity recognition, relationship extraction, text summarization, topic modeling, reasoning over knowledge graphs, and semantic search, using NLP tools like spaCy and open-source TensorFlow, PyTorch, etc.
b. Design and develop image recognition and video analysis models using Deep Learning algorithms and open-source tools like OpenCV
c. Knowledge of state-of-the-art Deep Learning algorithms
6. Optimize and tune Deep Learning models for the best possible accuracy
7. Use visualization tools/modules to explore and analyze outcomes and for model validation, e.g., using Power BI / Tableau
8. Work with application teams to deploy models on the cloud as a service or on-prem:
a. Deploy models in a test/control framework for tracking
b. Build CI/CD pipelines for ML model deployment
9. Integrate AI/ML models with other applications using REST APIs and other connector technologies
10. Constantly upskill and stay updated with the latest techniques and best practices. Write white papers and create demonstrable assets to summarize the AI/ML work and its impact.

Technology/Subject Matter Expertise:
Sufficient expertise in machine learning and mathematical and statistical sciences. Use of versioning and collaboration tools like Git/GitHub. Good understanding of the landscape of AI solutions – cloud, GPU-based compute, data security and privacy, API gateways, microservices-based architecture, big data ingestion, storage and processing, CUDA programming. Develop prototype-level ideas into a solution that can scale to industrial-grade strength. Ability to quantify and estimate the impact of ML models.

Soft Skills Profile:
Curiosity to think in fresh and unique ways with the intent of breaking new ground. Must have the ability to share, explain, and "sell" their thoughts, processes, ideas, and opinions, even outside their own span of control. Ability to think ahead and anticipate the needs for solving the problem. Ability to communicate key messages effectively and articulate strong opinions in large forums.

Desirable Experience:
Keen contributor to open-source communities and communities like Kaggle. Ability to process huge amounts of data using PySpark/Hadoop. Development and application of Reinforcement Learning. Knowledge of optimization/genetic algorithms. Operationalizing Deep Learning models for a customer and understanding the nuances of scaling such models in real scenarios. Understanding of stream data processing, RPA, edge computing, AR/VR, etc. Appreciation of digital ethics and data privacy. Experience working with AI and cognitive services platforms like Azure ML, IBM Watson, AWS SageMaker, or Google Cloud is a big plus. Experience with platforms like DataRobot, CognitiveScale, H2O.ai, etc. is a big plus.
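One concrete instance of the text classification work listed in this posting: a from-scratch multinomial Naive Bayes classifier with Laplace smoothing. Training data is invented for illustration; a real project would use a framework like scikit-learn or a fine-tuned transformer:

```python
# Illustration only: tiny multinomial Naive Bayes text classifier.

import math
from collections import Counter, defaultdict

train = [
    ("claim approved payout processed", "claims"),
    ("payout claim settlement processed", "claims"),
    ("password reset login failed", "support"),
    ("login error account locked", "support"),
]

# Gather word counts per class and the overall vocabulary.
class_docs = defaultdict(list)
for text, label in train:
    class_docs[label].extend(text.split())

vocab = {w for words in class_docs.values() for w in words}
priors = {c: math.log(sum(1 for _, l in train if l == c) / len(train))
          for c in class_docs}
counts = {c: Counter(words) for c, words in class_docs.items()}

def predict(text):
    """Pick the class maximizing log P(class) + sum log P(word | class)."""
    best, best_lp = None, float("-inf")
    for c in class_docs:
        total = sum(counts[c].values())
        lp = priors[c]
        for w in text.split():
            # Laplace (+1) smoothing so unseen words don't zero out the score
            lp += math.log((counts[c][w] + 1) / (total + len(vocab)))
        if lp > best_lp:
            best, best_lp = c, lp
    return best

print(predict("claim payout delayed"))
print(predict("cannot login to account"))
```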
Posted 4 weeks ago
5.0 years
0 Lacs
Jaipur, Rajasthan, India
On-site
Sag Infotech Pvt. Ltd is looking for an Artificial Intelligence Engineer. SAG Infotech Private Limited is a Jaipur-based IT development and service company that specializes in accounting software products and services. Founded in 1999, our organization is committed to providing high-quality accounting software and service for professionals such as Chartered Accountants (CA), Company Secretaries (CS), HR Managers, and more. Our software, including the Genius and Gen GST products, is used by thousands of professional firms and individuals around the country. Apart from accounting software, we provide web and mobile development services to various other industries. Recently, SAG Infotech has also launched SAG RTA, Rajasthan's first Registrar and Share Transfer Agent and Category 1 RTA services provider in Jaipur. The SAG Infotech product portfolio also includes the SDMT LCAP/LCNC platform for website and app development, a cutting-edge platform utilizing Java, Angular, and practical IDEs, frameworks, and development tools. It offers a low-code/no-code approach, empowering users to visually build applications with minimal coding and enhancing efficiency for both technical and non-technical users.

Tasks
We are looking for an experienced Data Scientist / AI Developer with a strong foundation in classical machine learning, deep learning, natural language processing (NLP), and generative AI. You will be responsible for designing and implementing AI models, including fine-tuning large language models (LLMs), and developing innovative solutions to solve complex problems in a variety of domains. Key Responsibilities: Develop and implement machine learning models and deep learning algorithms for various use cases. Work on NLP projects involving text classification, language modelling, entity recognition, and sentiment analysis. Leverage generative AI techniques to create innovative solutions and models for content generation, summarization, and translation tasks.
Fine-tune large language models (LLMs) to optimize performance for specific tasks or applications.
Collaborate with cross-functional teams to design AI-driven solutions that address business problems.
Analyse large-scale datasets; perform data pre-processing, feature engineering, and model evaluation.
Stay updated with the latest advancements in AI, ML, NLP, and LLMs to continuously improve models and methodologies.
Present findings and insights to stakeholders in a clear and actionable manner.
Build and maintain end-to-end machine learning pipelines for scalable deployment.

Required Skills:
Strong expertise in supervised and unsupervised machine learning techniques.
Proficiency in deep learning frameworks such as TensorFlow, PyTorch, or Keras.
Solid experience in Natural Language Processing (NLP), including tokenization, embeddings, and sequence modelling.
Hands-on experience with generative AI models and their practical applications.
Proven ability to fine-tune large language models (LLMs) for specific tasks.
Strong programming skills in Python and familiarity with libraries like Scikit-learn, NumPy, and pandas.
Experience in handling large datasets and working with databases (SQL, NoSQL).
Familiarity with cloud platforms (AWS, Azure, or GCP) and containerization tools (Docker, Kubernetes).
Deep expertise in computer vision, including techniques for object detection, image segmentation, image classification, and feature extraction.
Strong problem-solving skills, analytical thinking, and attention to detail.

Preferred Skills:
Proven experience in fine-tuning LLMs (such as the LLaMA series and Mistral) for specific tasks and optimizing their performance.
Expertise in computer vision techniques, including object detection, image segmentation, and classification.
Proficiency with YOLO algorithms and other state-of-the-art computer vision models.
Hands-on experience in building and deploying models in real-time applications or production environments.
Qualifications:
5+ years of relevant experience in AI, ML, NLP, or related fields.
Bachelor's or Master's degree in Computer Science, Statistics, or a related discipline.

Location: Jaipur (WFO). Candidates preferably from Jaipur.
Posted 4 weeks ago
12.0 years
0 Lacs
Mumbai Metropolitan Region
On-site
Job Description:

About Us*

At Bank of America, we are guided by a common purpose to help make financial lives better through the power of every connection. Responsible Growth is how we run our company and how we deliver for our clients, teammates, communities and shareholders every day. One of the keys to driving Responsible Growth is being a great place to work for our teammates around the world. We're devoted to being a diverse and inclusive workplace for everyone. We hire individuals with a broad range of backgrounds and experiences and invest heavily in our teammates and their families by offering competitive benefits to support their physical, emotional, and financial well-being. Bank of America believes both in the importance of working together and offering flexibility to our employees. We use a multi-faceted approach for flexibility, depending on the various roles in our organization. Working at Bank of America will give you a great career with opportunities to learn, grow and make an impact, along with the power to make a difference. Join us!

Global Business Services*

Global Business Services (GBS) delivers Technology and Operations capabilities to Lines of Business and Staff Support Functions of Bank of America through a centrally managed, globally integrated delivery model and globally resilient operations. Global Business Services is recognized for flawless execution, sound risk management, operational resiliency, operational excellence and innovation. In India, we are present in five locations and operate as BA Continuum India Private Limited (BACI), a non-banking subsidiary of Bank of America Corporation and the operating company for India operations of Global Business Services.

Process Overview*

Global Finance was set up in 2007 as a part of the CFO Global Delivery strategy to provide offshore delivery to Line of Business and Enterprise Finance functions.
The capabilities hosted include Legal Entity Controllership, General Accounting & Reconciliations, Regulatory Reporting, Operational Risk and Control Oversight, Finance Systems Support, etc. Our group supports the Finance function for the Consumer Banking, Wealth and Investment Management business teams across Financial Planning and Analysis, period-end close, management reporting, and data analysis.

Job Description*

The candidate will be responsible for Financial Planning and Analysis (FP&A) for the Consumer Banking, Wealth and Investment Management business, and in addition will be responsible for analysis of data for decision making by senior leadership. The candidate will be responsible for data management, data extraction, data validation, report preparation, etc. The candidate will play a leadership role in the team responsible for FP&A and data analysis, and will manage multiple projects in parallel by ensuring adequate understanding of the requirements to deliver data analysis solutions. These projects are complex and time-critical, requiring the individual to comprehend and evaluate the strategic business drivers to bring in efficiencies through automation of existing reporting packages or code. The work is a mix of standard and ad-hoc deliverables based on dynamic business requirements. Technical competency is essential to build processes that ensure data quality and completeness across all projects and requests related to the business. The core responsibilities of this individual are process management to achieve sustainable, accurate, and well-controlled results, and influencing others to perform and deliver as per the business requirements. The candidate should have a clear understanding of the end-to-end process (including its purpose) and a discipline of managing and continuously improving those processes.
Responsibilities*

Play a leadership role in Program Management, including building and improving solutions for Financial Planning and Analysis (FP&A) and Data Analysis functions that support decision making by senior leadership
Prepare various FP&A reports (revenue, headcount, KPIs, etc.), including performing variance analysis
Own the forecasts and planning/budgeting process; review and analyze variances and ensure alignment with the forecast
Develop and maintain code for data extraction, manipulation, and summarization using tools such as SQL and emerging technologies like Alteryx
Prepare various business reviews and ad-hoc financial analysis to support senior management decisions
Design solutions, generate actionable insights, optimize existing processes, build tool-based automations, and ensure overall program governance
Understand business requirements and translate them into deliverables; support the business on periodic and ad-hoc projects
Managing and improving the work: develop a full understanding of the work processes, maintain continuous focus on process improvement through simplification, innovation, and use of emerging technology tools, and understand data sourcing and transformation; drive Operational Excellence across the team
Managing risk: manage and reduce risk, proactively identify risks, issues, and concerns, and manage controls to help drive responsible growth (e.g., Compliance, Operational – SAFER, Self-Identified Audit Issues, procedures, data management, etc.); establish a risk culture that encourages early escalation and self-identification of issues
Effective communication: deliver transparent, concise, and consistent messaging while influencing and leading; drive change across teams.
Collaborate with leaders across the Global Finance India teams to drive consistency and efficiency in communication, accountability, and change execution
Be a key contributor to business initiatives that require thought leadership and subject matter expertise; mentor and coach the team to become key contributors on complex data analysis requirements
Be extremely good with numbers, with the ability to present various business/finance metrics, detailed analysis, and key observations to senior business leaders and influence change.

Requirements*

Education*

CA/CPA/MBA (Finance) with a bachelor's degree in Information Technology/Computer Science/MCA; 12+ years of relevant work experience

Experience Range*

12+ years of relevant work experience in Financial Planning & Analysis and Data Analysis in the banking industry. Exposure to Consumer Banking or Wealth/Investment Management businesses would be an added advantage

Foundational skills*

Strong abilities in FP&A and Data Analytics and strong financial acumen: the ability to systematically apply a combination of inductive and deductive reasoning to examine information, interpret results, and arrive at well-founded logical conclusions
Strong computer skills, including MS Excel and PowerPoint; familiarity with reporting tools like SQL, Essbase, etc., and emerging technologies like Alteryx and Tableau
Querying data from multiple sources; experience in data extraction and transformation to generate meaningful insights
Data Quality and Governance: ability to clean and validate data and ensure data accuracy and integrity.
Prior banking and financial services industry experience, preferably Retail Banking or Wealth/Investment Management
Strong communication skills (both verbal and written), leadership skills, and relationship management skills to navigate the complexities of aligning stakeholders, building consensus, and resolving conflicts in a large, distributed organization; proven ability to influence peers/stakeholders
Proven ability to manage multiple, often competing priorities in a global environment
Ability to drive strategic initiatives with a track record of successful change management
Manages operational risk by building strong processes and quality control routines

Desired Skills

Ability to effectively manage multiple priorities under pressure and deliver, as well as adapt to changes
Able to work in a fast-paced, deadline-oriented environment

Work Timings*

12:30 pm to 9:30 pm (will be required to stretch 7-8 days in a month to meet critical deadlines)

Job Location*

Mumbai
Posted 4 weeks ago