2.0 years
0 Lacs
Gurugram, Haryana, India
On-site
🚀 Job Title: AI Engineer
Company: Darwix AI
Location: Gurgaon (On-site)
Type: Full-Time
Experience: 2-6 Years
Level: Senior Level

🌐 About Darwix AI
Darwix AI is one of India’s fastest-growing GenAI startups, revolutionizing the future of enterprise sales and customer engagement with real-time conversational intelligence. We are building a GenAI-powered agent-assist and pitch intelligence suite that captures, analyzes, and enhances every customer interaction—across voice, video, and chat—in real time. We serve leading enterprise clients across India, the UAE, and Southeast Asia and are backed by global VCs, top operators from Google, Salesforce, and McKinsey, and CXOs from the industry. This is your opportunity to join a high-caliber founding tech team solving frontier problems in real-time voice AI, multilingual transcription, retrieval-augmented generation (RAG), and fine-tuned LLMs at scale.

🧠 Role Overview
As the AI Engineer, you will drive the development, deployment, and optimization of AI systems that power Darwix AI's real-time conversation intelligence platform. This includes voice-to-text transcription, speaker diarization, GenAI summarization, prompt engineering, knowledge retrieval, and real-time nudge delivery. You will lead a team of AI engineers and work closely with product managers, software architects, and data teams to ensure technical excellence, scalable architecture, and rapid iteration cycles. This is a high-ownership, hands-on leadership role where you will code, architect, and lead simultaneously.

🔧 Key Responsibilities
1. AI Architecture & Model Development
Architect end-to-end AI pipelines for transcription, real-time inference, LLM integration, and vector-based retrieval.
Build, fine-tune, and deploy STT models (Whisper, Wav2Vec2.0) and diarization systems for speaker separation.
Implement GenAI pipelines using OpenAI, Gemini, LLaMA, Mistral, and other LLM APIs or open-source models.
2. Real-Time Voice AI System Development
Design low-latency pipelines for capturing and processing audio in real-time across multi-lingual environments.
Work on WebSocket-based bi-directional audio streaming, chunked inference, and result caching.
Develop asynchronous, event-driven architectures for voice processing and decision-making.
3. RAG & Knowledge Graph Pipelines
Create retrieval-augmented generation (RAG) systems that pull from structured and unstructured knowledge bases.
Build vector DB architectures (e.g., FAISS, Pinecone, Weaviate) and connect to LangChain/LlamaIndex workflows.
Own chunking, indexing, and embedding strategies (OpenAI, Cohere, Hugging Face embeddings).
4. Fine-Tuning & Prompt Engineering
Fine-tune LLMs and foundational models using RLHF, SFT, PEFT (e.g., LoRA) as needed.
Optimize prompts for summarization, categorization, tone analysis, objection handling, etc.
Perform few-shot and zero-shot evaluations for quality benchmarking.
5. Pipeline Optimization & MLOps
Ensure high availability and robustness of AI pipelines using CI/CD tools, Docker, Kubernetes, and GitHub Actions.
Work with data engineering to streamline data ingestion, labeling, augmentation, and evaluation.
Build internal tools to benchmark latency, accuracy, and relevance for production-grade AI features.
6. Team Leadership & Cross-Functional Collaboration
Lead, mentor, and grow a high-performing AI engineering team.
Collaborate with backend, frontend, and product teams to build scalable production systems.
Participate in architectural and design decisions across AI, backend, and data workflows.

🛠️ Key Technologies & Tools
Languages & Frameworks: Python, FastAPI, Flask, LangChain, PyTorch, TensorFlow, HuggingFace Transformers
Voice & Audio: Whisper, Wav2Vec2.0, DeepSpeech, pyannote.audio, AssemblyAI, Kaldi, Mozilla TTS
Vector DBs & RAG: FAISS, Pinecone, Weaviate, ChromaDB, LlamaIndex, LangGraph
LLMs & GenAI APIs: OpenAI GPT-4/3.5, Gemini, Claude, Mistral, Meta LLaMA 2/3
DevOps & Deployment: Docker, GitHub Actions, CI/CD, Redis, Kafka, Kubernetes, AWS (EC2, Lambda, S3)
Databases: MongoDB, Postgres, MySQL, Pinecone, TimescaleDB
Monitoring & Logging: Prometheus, Grafana, Sentry, Elastic Stack (ELK)

🎯 Requirements & Qualifications
👨‍💻 Experience
2-6 years of experience in building and deploying AI/ML systems, with at least 2+ years in NLP or voice technologies.
Proven track record of production deployment of ASR, STT, NLP, or GenAI models.
Hands-on experience building systems involving vector databases, real-time pipelines, or LLM integrations.
📚 Educational Background
Bachelor's or Master's in Computer Science, Artificial Intelligence, Machine Learning, or a related field.
Tier 1 institute preferred (IITs, BITS, IIITs, NITs, or global top 100 universities).
⚙️ Technical Skills
Strong coding experience in Python and familiarity with FastAPI/Django.
Understanding of distributed architectures, memory management, and latency optimization.
Familiarity with transformer-based model architectures, training techniques, and data pipeline design.
💡 Bonus Experience
Worked on multilingual speech recognition and translation.
Experience deploying AI models on edge devices or browsers.
Built or contributed to open-source ML/NLP projects.
Published papers or patents in voice, NLP, or deep learning domains.

🚀 What Success Looks Like in 6 Months
Lead the deployment of a real-time STT + diarization system for at least 1 enterprise client.
Deliver a high-accuracy nudge generation pipeline using RAG and summarization models.
Build an in-house knowledge indexing + vector DB framework integrated into the product.
Mentor 2–3 AI engineers and own execution across multiple modules.
Achieve <1 sec latency on the real-time voice-to-nudge pipeline from capture to recommendation.

💼 What We Offer
Compensation: Competitive fixed salary + equity + performance-based bonuses
Impact: Ownership of key AI modules powering thousands of live enterprise conversations
Learning: Access to high-compute GPUs, API credits, research tools, and conference sponsorships
Culture: High-trust, outcome-first environment that celebrates execution and learning
Mentorship: Work directly with founders, ex-Microsoft, IIT-IIM-BITS alums, and top AI engineers
Scale: Opportunity to scale an AI product from 10 clients to 100+ globally within 12 months

⚠️ This Role is NOT for Everyone
🚫 If you're looking for a slow, abstract research role—this is NOT for you.
🚫 If you're used to months of ideation before shipping—you won't enjoy our speed.
🚫 If you're not comfortable being hands-on and diving into scrappy builds—you may struggle.
✅ But if you're a builder, architect, and visionary who loves solving hard technical problems and delivering real-time AI at scale, we want to talk to you.
📩 How to Apply
Send your CV, GitHub/portfolio, and a brief note on "Why AI at Darwix?" to:
📧 careers@cur8.in
Subject Line: Application – AI Engineer – [Your Name]
Include links to:
Any relevant open-source contributions
LLM/STT models you've fine-tuned or deployed
RAG pipelines you've worked on

🔍 Final Thought
This is not just a job. This is your opportunity to build the world's most scalable AI sales intelligence platform—from India, for the world.
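The voice-to-nudge pipeline described in this posting combines speech-to-text, vector retrieval, and LLM generation. A minimal offline sketch of that flow, assuming `openai-whisper`, `sentence-transformers`, and `faiss` are installed and using a hypothetical `call.wav` file and made-up knowledge-base snippets, might look like this (the real system would stream audio over WebSockets and serve results asynchronously):

```python
import faiss
import numpy as np
import whisper
from sentence_transformers import SentenceTransformer

# 1. Transcribe an audio chunk (the production system would stream chunks in real time).
stt_model = whisper.load_model("base")
transcript = stt_model.transcribe("call.wav")["text"]  # hypothetical audio file

# 2. Embed knowledge-base snippets and index them in FAISS for retrieval.
kb_snippets = [
    "Objection handling: acknowledge the concern, then quantify ROI.",
    "Pricing: the enterprise tier includes SSO and a dedicated success manager.",
]  # placeholder knowledge base
embedder = SentenceTransformer("all-MiniLM-L6-v2")
kb_vectors = embedder.encode(kb_snippets, normalize_embeddings=True)
index = faiss.IndexFlatIP(kb_vectors.shape[1])  # inner product == cosine on normalized vectors
index.add(np.asarray(kb_vectors, dtype="float32"))

# 3. Retrieve the most relevant snippet for the live transcript and build a nudge prompt.
query_vec = embedder.encode([transcript], normalize_embeddings=True)
_, idx = index.search(np.asarray(query_vec, dtype="float32"), k=1)
context = kb_snippets[idx[0][0]]
prompt = (
    "You are a real-time sales assistant. Using the context below, suggest one short nudge "
    f"for the agent.\nContext: {context}\nTranscript: {transcript}\nNudge:"
)
print(prompt)  # in production this prompt would be sent to an LLM API
```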
Posted 2 weeks ago
5.0 years
0 Lacs
Pune, Maharashtra
On-site
Job details
Employment Type: Full-Time
Location: Pune, Maharashtra, India
Job Category: Innovation & Technology
Job Number: WD30240361

Job Description
Job Title: ML Platform Engineer – AI & Data Platforms

ML Platform Engineering & MLOps (Azure-Focused)
Build and manage end-to-end ML/LLM pipelines on Azure ML using Azure DevOps for CI/CD, testing, and release automation.
Operationalize LLMs and generative AI solutions (e.g., GPT, LLaMA, Claude) with a focus on automation, security, and scalability.
Develop and manage infrastructure as code using Terraform, including provisioning compute clusters (e.g., Azure Kubernetes Service, Azure Machine Learning compute), storage, and networking.
Implement robust model lifecycle management (versioning, monitoring, drift detection) with Azure-native MLOps components.

Infrastructure & Cloud Architecture
Design highly available and performant serving environments for LLM inference using Azure Kubernetes Service (AKS) and Azure Functions or App Services.
Build and manage RAG pipelines using vector databases (e.g., Azure Cognitive Search, Redis, FAISS) and orchestrate with tools like LangChain or Semantic Kernel.
Ensure security, logging, role-based access control (RBAC), and audit trails are implemented consistently across environments.

Automation & CI/CD Pipelines
Build reusable Azure DevOps pipelines for deploying ML assets (data pre-processing, model training, evaluation, and inference services).
Use Terraform to automate provisioning of Azure resources, ensuring consistent and compliant environments for data science and engineering teams.
Integrate automated testing, linting, monitoring, and rollback mechanisms into the ML deployment pipeline.

Collaboration & Enablement
Work closely with Data Scientists, Cloud Engineers, and Product Teams to deliver production-ready AI features.
Contribute to solution architecture for real-time and batch AI use cases, including conversational AI, enterprise search, and summarization tools powered by LLMs.
Provide technical guidance on cost optimization, scalability patterns, and high-availability ML deployments.

Qualifications & Skills
Required Experience
Bachelor's or Master's in Computer Science, Engineering, or a related field.
5+ years of experience in ML engineering, MLOps, or platform engineering roles.
Strong experience deploying machine learning models on Azure using Azure ML and Azure DevOps.
Proven experience managing infrastructure as code with Terraform in production environments.

Technical Proficiency
Proficiency in Python (PyTorch, Transformers, LangChain) and Terraform, with scripting experience in Bash or PowerShell.
Experience with Docker and Kubernetes, especially within Azure (AKS).
Familiarity with CI/CD principles, model registry, and ML artifact management using Azure ML and Azure DevOps Pipelines.
Working knowledge of vector databases, caching strategies, and scalable inference architectures.

Soft Skills & Mindset
Systems thinker who can design, implement, and improve robust, automated ML systems.
Excellent communication and documentation skills—capable of bridging platform and data science teams.
Strong problem-solving mindset with a focus on delivery, reliability, and business impact.

Preferred Qualifications
Experience with LLMOps, prompt orchestration frameworks (LangChain, Semantic Kernel), and open-weight model deployment.
Exposure to smart buildings, IoT, or edge-AI deployments.
Understanding of governance, privacy, and compliance concerns in enterprise GenAI use cases.
Certification in Azure (e.g., Azure Solutions Architect, Azure AI Engineer, Terraform Associate) is a plus.
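The Azure ML pipeline responsibilities above can be illustrated with a minimal sketch using the Azure ML Python SDK v2 (`azure-ai-ml`). The workspace identifiers, compute name, curated environment name, and training script below are placeholders and assumptions; a real setup would typically be provisioned via Terraform and triggered from an Azure DevOps pipeline stage rather than run interactively:

```python
from azure.ai.ml import Input, MLClient, command
from azure.identity import DefaultAzureCredential

# Connect to an Azure ML workspace (all identifiers are placeholders).
ml_client = MLClient(
    credential=DefaultAzureCredential(),
    subscription_id="<subscription-id>",
    resource_group_name="<resource-group>",
    workspace_name="<workspace>",
)

# Define a training job as a command; train.py, the data path, and the environment are assumed.
job = command(
    code="./src",
    command="python train.py --data ${{inputs.training_data}}",
    inputs={
        "training_data": Input(
            type="uri_folder",
            path="azureml://datastores/workspaceblobstore/paths/training/",  # placeholder path
        )
    },
    environment="AzureML-sklearn-1.0-ubuntu20.04-py38-cpu@latest",  # assumed curated environment name
    compute="cpu-cluster",  # an AML compute cluster, e.g. one provisioned via Terraform
    display_name="llm-feature-training",
)

# Submit the job; in practice this call would run inside a CI/CD pipeline with tests and gates around it.
returned_job = ml_client.jobs.create_or_update(job)
print(returned_job.studio_url)
```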
Posted 2 weeks ago
0.0 - 2.0 years
0 Lacs
Pune, Maharashtra
On-site
Job details
Employment Type: Full-Time
Location: Pune, Maharashtra, India
Job Category: Innovation & Technology
Job Number: WD30240360

Job Description
Job Title: Senior Data Scientist – Data & Analytics

How you will do it
Advanced Analytics, LLMs & Modeling
Design and implement advanced machine learning models including deep learning, time-series forecasting, recommendation engines, and LLM-based solutions (e.g., GPT, LLaMA, Claude).
Develop use cases around enterprise search, document summarization, conversational AI, and automated knowledge retrieval using large language models.
Fine-tune or prompt-engineer foundation models (e.g., OpenAI, Azure OpenAI, Hugging Face) for domain-specific applications.
Evaluate and optimize LLM performance, latency, cost-effectiveness, and hallucination mitigation strategies for production use.

Data Strategy & Engineering Collaboration
Work closely with data and ML engineering teams to integrate LLM-powered applications into scalable, secure, and reliable pipelines.
Contribute to the development of retrieval-augmented generation (RAG) architectures using vector databases (e.g., FAISS, Azure Cognitive Search).
Support the deployment of models using MLOps principles, ensuring robust monitoring and lifecycle management.

Business Impact & AI Strategy
Partner with cross-functional stakeholders to identify opportunities for applying LLMs and generative AI to solve complex business challenges.
Lead workshops or proofs-of-concept to demonstrate the value of LLM use cases across business units.
Translate complex model outputs, including those from LLMs, into clear insights and decision support tools for non-technical audiences.

Thought Leadership & Mentorship
Act as an internal thought leader on AI and LLM innovation, keeping JCI at the forefront of industry advancements.
Mentor and upskill data science team members in advanced AI techniques, including transformer models and generative AI frameworks.
Contribute to strategic roadmaps for generative AI and model governance within the enterprise.

Qualifications & Experience
Education in Data Science, Artificial Intelligence, Computer Science, or a related quantitative discipline.
5+ years of hands-on experience in data science, including at least 1–2 years working with LLMs or generative AI technologies.
Demonstrated success in deploying machine learning and NLP solutions at scale.
Proven experience with cloud AI platforms—especially Azure OpenAI, Azure ML, Hugging Face, or AWS Bedrock.

Technical Expertise
Proficiency in Python and SQL, including libraries like Transformers (Hugging Face), LangChain, PyTorch, and TensorFlow.
Experience with prompt engineering, fine-tuning, and LLM orchestration tools.
Familiarity with data storage, retrieval systems, and vector databases.
Strong understanding of model evaluation techniques for generative AI, including factuality, relevance, and toxicity metrics.

Leadership & Soft Skills
Strategic thinker with a strong ability to align AI initiatives to business goals.
Excellent communication and storytelling skills, especially in articulating the value of LLMs and advanced analytics.
Strong collaborator with a track record of influencing stakeholders across product, engineering, and executive teams.

Preferred Qualifications
Experience with IoT, edge analytics, or smart building systems.
Familiarity with LLMOps, LangChain, Semantic Kernel, or similar orchestration frameworks.
Knowledge of data privacy and governance considerations specific to LLM usage in enterprise environments.
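The posting's emphasis on evaluating generative output for factuality and relevance can be illustrated with a small embedding-based grounding check. This is a simplified sketch assuming `sentence-transformers` is installed, with invented source passages and answer text; production evaluation would combine several metrics (relevance, toxicity, human review) rather than rely on one similarity score:

```python
from sentence_transformers import SentenceTransformer, util

embedder = SentenceTransformer("all-MiniLM-L6-v2")

def grounding_score(answer: str, source_passages: list[str]) -> float:
    """Rough proxy for factual grounding: max cosine similarity between
    the generated answer and the retrieved source passages."""
    answer_vec = embedder.encode(answer, convert_to_tensor=True)
    source_vecs = embedder.encode(source_passages, convert_to_tensor=True)
    return float(util.cos_sim(answer_vec, source_vecs).max())

# Hypothetical example: an LLM answer checked against the passages it was given.
sources = [
    "The chiller plant was serviced in March 2024 and is under warranty until 2026.",
    "Energy usage dropped 12% after the BMS firmware upgrade.",
]
answer = "The chiller plant is under warranty until 2026."
print(f"grounding score: {grounding_score(answer, sources):.2f}")  # low scores flag potential hallucinations
```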
Posted 2 weeks ago
2.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Immediate Hiring For AI Chatbot Developer - Gurgaon (WFO)

About Product Chatbot:
AI chatbots have become instrumental in revolutionizing various industries, including insuretech, taxation, and fleet management. In the insuretech sector, AI chatbots are employed to enhance customer interactions, providing instant assistance with policy inquiries, claims processing, and policy renewals. These chatbots leverage natural language processing to understand and respond to customer queries, improving the overall customer experience and reducing response times.
In taxation, AI chatbots streamline complex processes by assisting users with tax-related questions, helping them navigate tax regulations, and providing real-time updates on changes in tax laws. These chatbots can guide users through the filing process, ensuring accuracy and compliance while simplifying the overall tax experience.
Fleet management benefits from AI chatbots by automating communication and decision-making processes. Chatbots in this context can provide real-time information on vehicle locations, maintenance schedules, and fuel consumption. They enable efficient coordination of fleet activities, optimizing routes and addressing maintenance issues promptly. This not only improves operational efficiency but also contributes to cost savings and enhanced safety.

Python Developer (AI Chatbot)
Education: B.Tech / M.Tech
Experience: 2-4 years
Location: Gurgaon
Notice Period: Immediate, or a maximum of 10-15 days only

Job Description
Developing chatbots and voice assistants on various platforms for diverse business use-cases
Work on a chatbot framework/architecture using an open-source tool or library
Implement Natural Language Processing (NLP) for chatbots
Integration of chatbots with Management Dashboards and CRMs
Resolve complex technical design issues by analyzing the logs, debugging code, and identifying technical issues/challenges/bugs in the process
Ability to understand business requirements and translate them into technical requirements
Open-minded, flexible, and willing to adapt to changing situations
Ability to work independently as well as on a team and learn from colleagues
Ability to optimize applications for maximum speed and scalability

Skills Required
Minimum 2+ years of experience in chatbot development using any open-source framework (e.g., Rasa, Botpress)
Experience with both text-to-speech and speech-to-text
Should have a good understanding of various chatbots
Experience with integration of bots for platforms like Facebook Messenger, Slack, Twitter, WhatsApp, etc.
Experience in applying different NLP techniques to problems such as text classification, text summarization, question & answering, information retrieval, knowledge extraction, and conversational bot design
Should be familiar with these terms: tokenization, n-grams, stemmers, lemmatization, part-of-speech tagging, entity resolution, ontology, lexicology, phonetics, intents, entities, and context
Knowledge of SQL and NoSQL databases such as MySQL, MongoDB, Cassandra, Redis, PostgreSQL
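Intent classification is the core NLP task behind the chatbot work described above. A minimal baseline sketch using scikit-learn with a few made-up insurance- and fleet-domain utterances (a production bot would use a framework like Rasa with far more training data and proper entity extraction) could look like this:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny, made-up training set: user utterances labelled with intents.
utterances = [
    "I want to renew my policy",
    "how do I renew my car insurance",
    "what is the status of my claim",
    "has my claim been approved yet",
    "where is my delivery truck right now",
    "show me the location of vehicle 42",
]
intents = [
    "policy_renewal", "policy_renewal",
    "claim_status", "claim_status",
    "vehicle_location", "vehicle_location",
]

# TF-IDF features + logistic regression is a simple, fast baseline intent classifier.
classifier = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression(max_iter=1000))
classifier.fit(utterances, intents)

print(classifier.predict(["can you renew my insurance"]))  # expected: policy_renewal
print(classifier.predict(["track my fleet vehicle"]))      # expected: vehicle_location
```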
Posted 2 weeks ago
3.0 years
0 Lacs
India
Remote
Job Title: AI/ML + Backend Developer (SEO Automation & Technical Implementation)
Location: Remote (APAC preferred) | Full-Time or Contract

🔹 About the Role
We're looking for a technical powerhouse with 3-7 years of experience who blends backend engineering, AI/ML experience, and hands-on SEO implementation skills. This hybrid role will support our mission to scale intelligent SEO operations and automate key parts of our publishing workflows. You'll build custom tools and systems that integrate machine learning, backend development, and SEO performance logic — from programmatic content generation and internal linking engines to technical audits, schema injection, and Google Search Console automations.

🔹 What You'll Be Doing
🔧 Backend & Automation Development
Build internal tools and APIs using Python or Node.js
Automate content workflows (meta/gen content, redirects, schema, etc.)
Integrate third-party APIs (GSC, Ahrefs, OpenAI, Gemini, Google Sheets)
🧠 AI/ML Workflows
Apply NLP models for entity recognition, summarization, topic clustering
Deploy and manage ML inference pipelines
Work with LLMs to scale content enhancements (FAQs, headlines, refresh logic)
⚙️ SEO Automation & Technical Implementation
Run and implement technical SEO audits (crawl issues, sitemaps, indexing, Core Web Vitals)
Automate internal linking, canonical tags, redirects, structured data
Use tools like Screaming Frog CLI, GSC API, and Cloudflare for scalable SEO execution
📈 Performance Monitoring
Set up dashboards to monitor SEO KPIs and anomaly detection
Build alerting systems for performance drops, crawl issues, or deindexed content

🔹 Key Skills Required
Languages & Tools:
Python (FastAPI, Pandas, Scrapy, etc.) and/or Node.js
Databases (PostgreSQL, MongoDB, Redis)
Docker, GitHub Actions, Cloud (GCP/AWS preferred)
GSC API, Screaming Frog CLI, Google Sheets API
OpenAI/Gemini API, LangChain or similar frameworks
SEO Knowledge:
Strong understanding of on-page and technical SEO
Experience with programmatic content, schema markup, and CWV improvements
Familiar with common issues like crawl depth, duplication, orphan pages, and indexability

🔹 Nice to Have
Experience with content/media/publishing websites
Familiarity with CI/CD and working in async product teams
Exposure to headless CMS or WordPress API integrations
Past experience automating large-scale content or SEO systems

🔹 What You'll Get
The chance to work on large-scale content automation and modern SEO problems
High autonomy, technical ownership, and visibility in decision-making
Flexible remote work and performance-based incentives
Direct collaboration with SEO strategy and editorial stakeholders
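As an illustration of the technical-audit automation described above, here is a small sketch that fetches pages listed in a sitemap and flags non-200 status codes, missing canonical tags, or accidental noindex directives. It assumes `requests` and `beautifulsoup4` are installed and uses a placeholder sitemap URL; a real audit would also cover redirects, structured data, and Core Web Vitals:

```python
import requests
from bs4 import BeautifulSoup
from xml.etree import ElementTree

SITEMAP_URL = "https://example.com/sitemap.xml"  # placeholder site
NAMESPACE = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

def sitemap_urls(sitemap_url: str) -> list[str]:
    """Return the <loc> entries from a standard XML sitemap."""
    xml = requests.get(sitemap_url, timeout=10).text
    root = ElementTree.fromstring(xml)
    return [loc.text for loc in root.findall(".//sm:loc", NAMESPACE)]

def audit_page(url: str) -> dict:
    """Check status code, canonical tag, and meta robots for one URL."""
    resp = requests.get(url, timeout=10)
    soup = BeautifulSoup(resp.text, "html.parser")
    canonical = soup.find("link", rel="canonical")
    robots = soup.find("meta", attrs={"name": "robots"})
    return {
        "url": url,
        "status": resp.status_code,
        "canonical": canonical["href"] if canonical else None,
        "noindex": bool(robots and "noindex" in robots.get("content", "").lower()),
    }

if __name__ == "__main__":
    for page in sitemap_urls(SITEMAP_URL)[:20]:  # audit the first 20 URLs as a demo
        report = audit_page(page)
        if report["status"] != 200 or report["canonical"] is None or report["noindex"]:
            print("FLAG:", report)
```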
Posted 2 weeks ago
0 years
0 Lacs
India
On-site
About Us:
Soul AI is a pioneering company founded by IIT Bombay and IIM Ahmedabad alumni, with a strong founding team from IITs, NITs, and BITS. We specialize in delivering high-quality human-curated data and AI-first scaled operations services. Based in San Francisco and Hyderabad, we are a fast-moving team on a mission to build AI for Good, driving innovation and societal impact.

Role Overview:
We're looking for a Generative AI Engineer to join our client's team and build intelligent systems powered by large language models and other generative AI architectures. This role involves developing and deploying LLM-based features, integrating vector search, fine-tuning models, and collaborating with product and engineering teams to ship robust, scalable GenAI applications. You'll work across the GenAI stack — from prompt design to inference optimization — and shape how generative models are used in real-world products.

Responsibilities:
Fine-tune and deploy LLMs (e.g., GPT, LLaMA, Mistral) using frameworks like Hugging Face Transformers or LangChain
Build and optimize Retrieval-Augmented Generation (RAG) pipelines using vector databases (e.g., Pinecone, FAISS)
Engineer prompts for structured, reliable outputs across use cases (chatbots, summarization, coding copilots, etc.)
Implement scalable inference pipelines and optimize latency, throughput, and cost using techniques like quantization or model distillation
Collaborate with product, design, and frontend teams to integrate GenAI into user-facing features
Monitor, evaluate, and continuously improve model performance, safety, and accuracy in production
Ensure compliance with privacy, safety, and responsible AI practices (e.g., content filtering, output sanitization)

Required Skills:
Strong programming skills in Python, with familiarity in modern ML tooling
Practical experience with LLM frameworks (e.g., Hugging Face Transformers, LangChain, LlamaIndex)
Experience building or deploying RAG pipelines, including handling embeddings and vector search
Understanding of transformer models, prompt engineering, and tokenization strategies
Hands-on with APIs (OpenAI, Anthropic, Cohere, etc.) and model serving (FastAPI, Flask, etc.)
Experience deploying ML models using Docker, Kubernetes, and/or cloud services (AWS/GCP/Azure)
Comfortable with model evaluation, monitoring, and troubleshooting inference pipelines

Nice to Have:
Experience with multimodal models (e.g., diffusion models, TTS, image/video generation)
Knowledge of RLHF, safety alignment, or model fine-tuning best practices
Familiarity with open-source LLMs (e.g., Mistral, LLaMA, Falcon, Mixtral) and optimization (LoRA, quantization)
Experience with LangChain agents, tool usage, and memory management
Contributions to open-source GenAI projects or published demos/blogs on generative AI
Exposure to frontend technologies (React/Next.js) for prototyping GenAI tools

Educational Qualifications:
Bachelor's or Master's degree in Computer Science, Artificial Intelligence, Machine Learning, Data Science, or a related technical field
Candidates with relevant project experience or open-source contributions may be considered regardless of formal degree
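Model serving with FastAPI, one of the skills listed above, can be sketched as a minimal inference endpoint. The summarization pipeline and model name below are placeholders chosen only for illustration; a real service would add batching, authentication, and timeouts:

```python
from fastapi import FastAPI
from pydantic import BaseModel
from transformers import pipeline

app = FastAPI(title="GenAI inference sketch")

# A small summarization model as a stand-in for whatever LLM the product actually serves.
summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")

class SummarizeRequest(BaseModel):
    text: str
    max_length: int = 128

class SummarizeResponse(BaseModel):
    summary: str

@app.post("/summarize", response_model=SummarizeResponse)
def summarize(req: SummarizeRequest) -> SummarizeResponse:
    # Single request per call for clarity; production code would batch and guard long inputs.
    result = summarizer(req.text, max_length=req.max_length, min_length=16, do_sample=False)
    return SummarizeResponse(summary=result[0]["summary_text"])

# Run locally with:  uvicorn app:app --reload   (assuming this file is saved as app.py)
```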
Posted 2 weeks ago
9.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
You Lead the Way. We’ve Got Your Back. With the right backing, people and businesses have the power to progress in incredible ways. When you join Team Amex, you become part of a global and diverse community of colleagues with an unwavering commitment to back our customers, communities and each other. Here, you’ll learn and grow as we help you create a career journey that’s unique and meaningful to you with benefits, programs, and flexibility that support you personally and professionally. At American Express, you’ll be recognized for your contributions, leadership, and impact—every colleague has the opportunity to share in the company’s success. Together, we’ll win as a team, striving to uphold our company values and powerful backing promise to provide the world’s best customer experience every day. And we’ll do it with the utmost integrity, and in an environment where everyone is seen, heard and feels like they belong. Join Team Amex and let's lead the way together. As part of our diverse tech team, you can architect, code and ship software that makes us an essential part of our customers’ digital lives. Here, you can work alongside talented engineers in an open, supportive, inclusive environment where your voice is valued, and you make your own decisions on what tech to use to solve challenging problems. Amex offers a range of opportunities to work with the latest technologies and encourages you to back the broader engineering community through open source. And because we understand the importance of keeping your skill fresh and relevant, we give you dedicated time to invest in your professional development. Find your place in technology on #TeamAmex. How will you make an impact in this role? The Infrastructure Data & Analytics team unifies FinOps, Data Science and Business Intelligence to enable Technology cost transparency, infrastructure performance optimization and commercial efficiency for the enterprise through consistent, high-quality data and predictive analytics. This team within Global Infrastructure aims to establish and reinforce a culture of effective metrics, data-driven business processes, architecture simplification, and cost awareness. Metric-driven cost optimization, workload-specific forecasting and robust contract management are among the tools and practices required to drive accountability for delivering business solutions that derive maximum value. The result will provide a solid foundation for decision-making around cost, quality and speed. We are seeking a strong, data-driven Senior Technical Program Manager who knows that delivering on that promise takes foresight, planning and agility. The Sr. Technical Program Manager will be a key member of the team, and will leverage their technical knowledge and project management skills to drive delivery of our data architecture target state implementation, data model migration, and data automation workstreams that underpin our Infrastructure Data Visualization Portal and other capabilities. They will translate business decisions into data analytics and visualization requirements, prioritize the team’s sprint backlog, and support engagement with data providers to ensure data is accessed and ingested consistently and correctly. This individual will be responsible for ensuring excellent and timely execution following agile practices and implementing appropriate agile ceremonies to manage risks and dependencies. 
This individual will require a unique blend of strong data analytics and leadership skills to manage and prioritize the data requirements across our suite of data and analytics tools and dashboards. They will bring passion for data-driven decisions, user experience, and execution to the role.

Key responsibilities include:
Steer execution of data architecture and data model migrations to meet the needs of FinOps, Data Science and Business Intelligence teams, as well as other key partners
Lead technical program conversations on architectural approach, system design, and data management and compliance
Actively manage the backlog for data migration, automation, and ingestion workstreams
Develop and maintain the data source and feature request ticketing process in Jira
Partner across ID&A teams to ensure data requirements are met and timeline risks are managed and mitigated
Establish appropriate agile processes to track and manage dependencies across disciplines in staying on track to meet short-term and long-term implementation roadmaps
Collaborate with product teams to refine, prioritize, and deliver data and feature requirements through technical acumen, a customer-first perspective, and an enterprise mindset
Support development of appropriate reporting processes to measure OKRs and performance metrics for delivery of our data lake architecture
Create an environment of continuous improvement by steering and delivering reflective conversation and regular retrospectives, project standups, workshops, communications, and shared processes to ensure transparency of the development process and project performance
Facilitate stakeholder engagement, decision-making, and building trust across data providers and critical stakeholders
Work with IT Asset Management, Enterprise Architecture, and Business & Vendor Management teams to define enterprise-scalable solutions that meet the needs of multiple stakeholders
Partner with data engineering teams to develop, test and deliver the defined capabilities and rapidly iterate new solutions
Facilitate and prepare content for leadership updates on delivery status and key decisions needed to support project delivery and de-risk implementation obstacles
Partner in PI planning meetings and other Agile ceremonies for the team: pressure testing plans for feasibility and capacity
Monitor and ensure compliance with SDLC standards
Ensure and instill documentation best practices to ensure designs meet requirements and processes are repeatable
Leverage the evolving technical landscape as needed, including AI, Big Data, Machine Learning and other technologies to deliver meaningful business insights
Establish ongoing metrics and units of measurement to clearly define success and failure points and to guide feature/capability prioritization based on business priorities
Draft impactful and comprehensive communications, presentations, and talking points for key business reviews, executive presentations, and discussions; escalate and facilitate resolution of risks, issues, and changes tied to product development
Act as point of contact for internal inquiries and key partnerships across Technology and business teams

Minimum Requirements:
9+ years of experience delivering data lake or backend data platform capabilities and features built using modern technology and data architecture techniques
Proven track record of managing large, complex features or products with multiple partners
Technical understanding of event-driven architectures, API-first design, cloud-native technologies, and front-end integration patterns in order to discuss technical challenges around system design and solutioning
Ability to create clarity and execute plans in ambiguity, and to inspire change without direct authority
Self-starter who is able to provide thought leadership and prioritization with limited guidance and in a complex environment
Experience in data analytics, data architecture, or data visualization
Outstanding influence and collaboration skills; ability to drive consensus and tangible outcomes, demonstrated by breaking down silos and fostering cross-team communication
Experience facilitating Agile, Scrum or other rapid application development teams to deliver technology solutions on time, on budget, and to spec
Capable of leading technology and culture change with excellent strategic and technical thought leadership, and strong program management skills
High attention to organization and detail in a deadline-driven work environment
Proven ability to solve problems and resolve issues with appropriate communications and escalation criteria
Outstanding oral and written communication skills with strong personal presence; active listening skills, summarization skills, and lateral thinking to uncover and react to emerging opportunities
Deep understanding of the full lifecycle of product development, from concept to delivery, including Test Driven Development (TDD)
Understanding of complex software delivery including build, test, deployment, and operations; conversant in AI, Data Science, and Business Intelligence concepts and technology stacks
Experience working with technology business management, technology infrastructure or enterprise architecture teams a plus
Experience with design and coding across one or more platforms and languages a plus
Bachelor's degree in computer science, data engineering, data analytics, or other technical discipline, or equivalent work experience preferred

We back you with benefits that support your holistic well-being so you can be and deliver your best. This means caring for you and your loved ones' physical, financial, and mental health, as well as providing the flexibility you need to thrive personally and professionally:
Competitive base salaries
Bonus incentives
Support for financial well-being and retirement
Comprehensive medical, dental, vision, life insurance, and disability benefits (depending on location)
Flexible working model with hybrid, onsite or virtual arrangements depending on role and business need
Generous paid parental leave policies (depending on your location)
Free access to global on-site wellness centers staffed with nurses and doctors (depending on location)
Free and confidential counseling support through our Healthy Minds program
Career development and training opportunities

American Express is an equal opportunity employer and makes employment decisions without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, veteran status, disability status, age, or any other status protected by law. Offer of employment with American Express is conditioned upon the successful completion of a background verification check, subject to applicable laws and regulations.
Posted 2 weeks ago
4.0 years
0 Lacs
India
Remote
Hi Everyone,

Role: Data Scientist - Gen AI, AI/ML
Experience: 4+ years
Shift: General IST, 8-hour shift
Position Type: Remote & Contractual

JD:
About the Role:
We are seeking a passionate and results-driven Data Scientist with deep expertise in Artificial Intelligence, Machine Learning, and Generative AI (GenAI). In this role, you will work at the intersection of data science and cutting-edge AI technologies to build intelligent systems, drive automation, and unlock business value from data.

Key Responsibilities:
Design, develop, and deploy AI/ML models and GenAI-based applications to solve real-world business problems.
Conduct data wrangling, preprocessing, feature engineering, and advanced analytics.
Build NLP, computer vision, recommendation, or predictive models using traditional and deep learning methods.
Collaborate with cross-functional teams including product managers, data engineers, and software developers.
Stay current with advancements in AI/ML and apply best practices to improve existing systems.
Design and evaluate LLM-based pipelines, prompt engineering strategies, and fine-tuning approaches.
Present insights and results to stakeholders in a clear and actionable manner.
Work on cloud-based ML pipelines (AWS/GCP/Azure) and MLOps frameworks for deployment and monitoring.

Key Skills & Qualifications:
Bachelor's or Master's degree in Computer Science, Data Science, Engineering, or a related field.
4+ years of experience in data science and machine learning.
Strong hands-on skills in Python, SQL, and ML libraries like scikit-learn, TensorFlow, PyTorch, and Hugging Face Transformers.
Proficiency in GenAI use cases (text generation, summarization, image generation, etc.) and LLMs (OpenAI GPT, BERT, etc.).
Experience with prompt engineering, RAG pipelines, LangChain, or vector databases is highly desirable.
Strong understanding of data structures, algorithms, and model evaluation techniques.
Exposure to cloud platforms (AWS/GCP/Azure) and containerization tools like Docker and Kubernetes is a plus.
Excellent communication and problem-solving skills.
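The posting's mix of feature engineering, predictive modelling, and model evaluation can be illustrated with a compact scikit-learn sketch on a synthetic dataset; the feature names and data below are made up purely for illustration:

```python
import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

# Synthetic churn-style dataset with one categorical and two numeric features.
rng = np.random.default_rng(42)
df = pd.DataFrame({
    "plan": rng.choice(["basic", "pro", "enterprise"], size=500),
    "monthly_usage": rng.gamma(2.0, 50.0, size=500),
    "support_tickets": rng.poisson(1.5, size=500),
})
df["churned"] = ((df["support_tickets"] > 2) & (df["monthly_usage"] < 60)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    df.drop(columns="churned"), df["churned"], test_size=0.2, random_state=0
)

# Preprocessing (feature engineering) + model combined in a single pipeline.
preprocess = ColumnTransformer([
    ("cat", OneHotEncoder(handle_unknown="ignore"), ["plan"]),
    ("num", StandardScaler(), ["monthly_usage", "support_tickets"]),
])
model = Pipeline([("prep", preprocess), ("clf", GradientBoostingClassifier(random_state=0))])
model.fit(X_train, y_train)

# Evaluation on the held-out split.
print(classification_report(y_test, model.predict(X_test)))
```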
Posted 2 weeks ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Primary Responsibilities
Design and develop AI-driven web applications using Streamlit and LangChain.
Implement multi-agent workflows with LangGraph.
Integrate Claude 3 (via AWS Bedrock) into intelligent systems for document and image processing.
Work with FAISS for vector search and similarity matching.
Develop document integration solutions for PDF, DOCX, XLSX, PPTX, and image-based formats.
Implement OCR and summarization features using EasyOCR, PyMuPDF, and AI models.
Create features such as spell-check, chatbot accuracy tracking, and automatic re-training pipelines.
Build secure apps with SSO authentication, transcript downloads, and reference link generation.
Integrate external platforms like Confluence, SharePoint, ServiceNow, Veeva Vault, Outlook, G.Net/G.Share, and JIRA.
Collaborate on architecture, performance optimization, and deployment.

Required Skills
Strong expertise in Streamlit, LangChain, LangGraph, and Claude 3 (AWS Bedrock).
Hands-on experience with boto3, FAISS, EasyOCR, and PyMuPDF.
Advanced skills in document parsing and image/video-to-text summarization.
Proficient in modular architecture design and real-time AI response systems.
Experience in enterprise integration with tools like ServiceNow, Confluence, Outlook, and JIRA.
Familiar with chatbot monitoring and retraining strategies.

Secondary Skills
Working knowledge of PostgreSQL, JSON, and file I/O with Python libraries like os, io, time, datetime, and typing.
Experience with dataclasses and numpy for efficient data handling and numerical processing.
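A minimal sketch of the document-summarization flow above — extracting text with PyMuPDF and summarizing it with Claude 3 on Bedrock via boto3 — might look like the following. The file name and model ID are placeholders, the request body follows Anthropic's Bedrock messages format as commonly documented (treat it as an assumption), and AWS credentials/region are assumed to be configured in the environment:

```python
import json

import boto3
import fitz  # PyMuPDF

MODEL_ID = "anthropic.claude-3-sonnet-20240229-v1:0"  # assumed Bedrock model ID

def extract_pdf_text(path: str, max_chars: int = 8000) -> str:
    """Pull plain text out of a PDF, truncated to keep the prompt small."""
    doc = fitz.open(path)
    text = "\n".join(page.get_text() for page in doc)
    return text[:max_chars]

def summarize_with_claude(text: str) -> str:
    """Send the extracted text to Claude 3 on Bedrock and return its summary."""
    client = boto3.client("bedrock-runtime")
    body = {
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 512,
        "messages": [
            {"role": "user", "content": [{"type": "text", "text": f"Summarize this document:\n\n{text}"}]}
        ],
    }
    response = client.invoke_model(modelId=MODEL_ID, body=json.dumps(body))
    result = json.loads(response["body"].read())
    return result["content"][0]["text"]

if __name__ == "__main__":
    document_text = extract_pdf_text("contract.pdf")  # hypothetical input file
    print(summarize_with_claude(document_text))
```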
Posted 2 weeks ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
As an AI Engineer specializing in Machine Learning and Natural Language Processing, you will lead the development and deployment of state-of-the-art AI solutions and products for a diverse client base.

What You'll Do
Design, develop, and operationalize advanced NLP models such as summarization, question-answering, intent recognition, dialog/conversational AI, semantic search, named entity recognition, knowledge discovery, document understanding, and text classification.
Work with a wide range of datasets from various sources including websites, wikis, enterprise applications, document stores, file systems, conversation platforms, social media, and databases.
Employ leading-edge algorithms and models from TensorFlow, PyTorch, and Hugging Face, and engage with next-gen LLM frameworks like LangChain and Guardrails.
Utilize modern MLOps practices to evaluate, manage, and deploy models efficiently and effectively in production environments.
Develop and refine tools and processes to improve model performance and reproducibility across multiple customer engagements.
Build and maintain robust, scalable solutions using cloud infrastructure such as AWS and Databricks to deploy LLM-powered systems.
Create evaluation datasets, conduct rigorous model testing to ensure models meet high standards of accuracy and usability, and present findings and models effectively using platforms like Jupyter Notebooks.

Requirements
Bachelor's degree in Computer Science, Information Technology, or a related field. A Master's degree or relevant certification would be a plus.
Strong experience with web frameworks like ReactJS, NextJS or Vue.js
Strong programming skills in languages such as Python and Bash.
Excellent analytical and problem-solving skills, and attention to detail.
Exceptional communication skills, with the ability to explain complex technical concepts to non-technical stakeholders.

Benefits — What We Offer
An opportunity to be part of an agile, highly proficient and experienced AI/ML team
An opportunity to work on challenging data science and machine learning problems with customers and see your work deployed in action
A fast-paced software development environment that uses the latest open-source tools across the development stack
We provide a competitive salary and benefits package, a vibrant work environment, and numerous opportunities for professional growth. You'll have the opportunity to work with a team of industry experts on exciting projects that transform businesses and create significant value. Join us to revolutionize the way companies leverage technology for digital transformation.
OnebyZero is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees.
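Several of the NLP tasks listed above (named entity recognition, summarization, zero-shot text classification) have ready-made baselines in Hugging Face pipelines. A small sketch, with the example sentence invented for illustration and default or small public models standing in for production choices, could be:

```python
from transformers import pipeline

text = (
    "Acme Retail signed a three-year agreement with OnebyZero in Singapore "
    "to deploy a conversational AI assistant across its stores."
)

# Named entity recognition; aggregation_strategy merges word pieces into full entities.
ner = pipeline("ner", aggregation_strategy="simple")
print(ner(text))

# Abstractive summarization baseline with a small distilled model.
summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")
print(summarizer(text, max_length=40, min_length=10, do_sample=False))

# Zero-shot classification routes documents without task-specific training data.
classifier = pipeline("zero-shot-classification")
print(classifier(text, candidate_labels=["contract", "support ticket", "press release"]))
```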
Posted 2 weeks ago
4.0 years
0 Lacs
India
Remote
🔍 AI Engineer — phablo.ai (Remote)
Location: Remote
Type: Full-time
Company: Phablo.ai
Team: Founding Engineering Team
Experience: 1–4 years (exceptional freshers welcome)

🌟 About Us
At Phablo.ai, we're transforming how compliance works for life sciences. No more clunky tools or manual processes — we're building an AI-powered platform that helps teams stay compliant faster, smarter, and with confidence. Co-founded by domain and tech experts from Germany and India, and headquartered in Singapore, our mission is global: to reimagine compliance workflows for the world's most regulated industries.

🚀 Perks & Culture
🧠 Founding Opportunity – Join as part of our early core engineering team and shape the product and tech stack from the ground up.
🌍 Remote-First, Global Team – Collaborate with co-founders and teammates across continents. Our key markets are the EU and US.
💰 Compensation – Competitive startup salary with strong growth upside. As the company grows, so will your salary and other perks. Equity opportunities are available for long-term, high-impact contributors at later stages.
📚 Limitless Learning – Work on cutting-edge LLM, RAG, and document AI systems with real-world impact.
⚙️ Culture of Innovation – We move fast, experiment often, and value initiative, autonomy, and team spirit.

🎯 Responsibilities
Architect and implement components of our AI compliance engine using techniques like RAG, hybrid search, summarization, and entity extraction.
Fine-tune and optimize foundation models (e.g., LLaMA, Mistral, GPT) for tasks such as regulation parsing, document comparison, and compliance Q&A.
Build document ingestion, chunking, and embedding pipelines using LangChain, Haystack, or LlamaIndex.
Integrate with vector databases (e.g., FAISS, Qdrant, Weaviate, Pinecone) to enable scalable semantic retrieval.
Develop and deploy FastAPI/Flask-based APIs for real-time or batch AI inference services.
Apply prompt engineering, memory techniques, and few-shot learning to improve response quality, accuracy, and explainability.
Work closely with domain experts and product teams to build AI systems aligned with real regulatory workflows.
Continuously monitor and evaluate AI model performance across metrics like latency, accuracy, hallucination rate, and compliance risk.

🛠️ Required Skills
Strong Python skills and experience with ML frameworks like PyTorch, TensorFlow, or JAX.
Deep understanding of LLMs and NLP workflows, including: NER, summarization, semantic similarity, RAG, and hybrid search; embedding generation and vector search optimization.
Familiarity with tools like LangChain, Haystack, and LlamaIndex; vector DBs such as FAISS, Weaviate, Qdrant, and Pinecone; and FastAPI, Flask, or similar frameworks for serving models.
Hands-on experience with MLOps tooling like MLflow, Weights & Biases, Hugging Face Hub, or cloud platforms (AWS, GCP, Vertex AI).
Awareness of AI safety, data privacy, and security best practices in regulated industries.
Excellent problem-solving, communication, and collaboration skills.
Bonus: Experience in life sciences, healthcare, legal tech, or regulatory AI is highly desirable.

🎓 Qualifications
1–4 years of experience engineering AI systems in production settings.
OR, if you're a fresher: a strong academic background in AI/ML/NLP/Data Science from a reputed institution, with demonstrated ability through research publications, open-source contributions, or top performance in AI competitions.
💬 Let's Build the Future of Compliance
If you're excited about building AI systems that solve real-world problems in healthcare, pharma, and life sciences — let's talk! Apply now or reach out directly to ts@phablo.ai
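The document ingestion and chunking pipeline mentioned in the responsibilities can be sketched with a plain-Python chunker that produces overlapping segments ready for embedding; the chunk sizes and the sample regulation text are arbitrary choices made for illustration:

```python
from dataclasses import dataclass

@dataclass
class Chunk:
    doc_id: str
    index: int
    text: str

def chunk_document(doc_id: str, text: str, chunk_size: int = 800, overlap: int = 150) -> list[Chunk]:
    """Split a document into overlapping character windows.

    Overlap keeps clauses that straddle a boundary retrievable from both chunks,
    which matters for regulatory text where one sentence can change the meaning of a section.
    """
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks, start, index = [], 0, 0
    while start < len(text):
        end = min(start + chunk_size, len(text))
        chunks.append(Chunk(doc_id=doc_id, index=index, text=text[start:end]))
        if end == len(text):
            break
        start = end - overlap
        index += 1
    return chunks

# Hypothetical usage: each chunk's text would next be embedded and upserted into a vector DB.
sample = "Section 11.10: Controls for open systems... " * 100  # stand-in for a parsed regulation
for chunk in chunk_document("21-CFR-Part-11", sample)[:3]:
    print(chunk.doc_id, chunk.index, len(chunk.text))
```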
Posted 2 weeks ago
10.0 years
0 Lacs
Bengaluru, Karnataka
On-site
Job Description
Posted Wednesday, June 11, 2025, 7:30 PM | Expires Thursday, July 31, 2025, 7:29 PM

Who We Are
Magnit is the future of work. Serving hundreds of the world's most recognizable brands for the past 30+ years, Magnit offers the industry's first holistic platform for the modern workforce. Magnit's integrated workforce management (IWM) platform, supported by data, software, intelligence, and a best-in-class services team, is key to our clients' success. It can adapt quickly to regional or industry economic shifts, and provides the speed, scale, flexibility, transparency, and expertise required to meet an organization's contingent workforce management, talent strategy and broader organization goals. At Magnit, you'll work with passionate colleagues who collaborate and deliver meaningful results that positively transform the largest companies around the globe.

Senior AI & ML Platform Engineer
Location: Bengaluru
Experience Level: 3–10 Years

About the Role
We are building next-generation AI-powered platforms that leverage Large Language Models (LLMs), advanced Machine Learning (ML), NLP, and agentic workflows to automate critical business processes across multiple domains. We are looking for experienced AI/ML Engineers who are passionate about building real-world AI products—not just experimenting. You will help design, prototype, and deploy intelligent systems integrating LLMs, agents, APIs, and robust ML models for classification, prediction, and optimization.

Responsibilities
Build and integrate agentic AI workflows using LangChain or similar frameworks.
Develop APIs and backend services to support LLM and ML-driven platforms.
Implement, train, and optimize lightweight and scalable ML models for classification, prediction, and anomaly detection.
Work with LLMs (OpenAI, Claude, Gemini) to fine-tune prompts, create retrieval-augmented generation (RAG) systems, and chain multi-step tasks.
Design and optimize vector search implementations for LLM-driven applications.
Integrate AI models with external systems (VMS platforms, ERP systems) using APIs.
Implement document processing pipelines combining OCR with LLM/ML validation.
Apply ML techniques for supply-demand forecasting, fraud detection, and optimization.
Participate in design discussions, brainstorming sessions, and agile sprints.
Collaborate with senior AI architects and SMEs to refine platform capabilities.

Required Skills
Strong experience in Python for ML and AI development.
Expertise in ML frameworks such as TensorFlow, PyTorch, and scikit-learn.
Knowledge of NLP tasks (text extraction, classification, summarization).
Experience building and optimizing RAG workflows with vector databases.
Solid understanding of API development (FastAPI, Flask, or similar frameworks).
Familiarity with prompt engineering and agent orchestration concepts.
Ability to build scalable ML models for classification, forecasting, and anomaly detection.
Experience integrating ML solutions with cloud services (AWS, Azure, GCP).
Knowledge of OCR tools and techniques for document processing.
Exposure to multi-agent architectures and reinforcement learning is a plus.
Understanding of feature engineering, model evaluation, and optimization techniques.

Mindset We Are Looking For
Builder's mindset: You love turning ideas into working solutions fast.
Curiosity: Passionate about keeping up with the latest in GenAI and ML.
Problem-solver: Comfortable working in ambiguous environments where solutions need discovery, not just execution.
Collaboration: Open to learning from senior architects and iterating based on feedback.

What Magnit Will Offer You
At Magnit, you'll be joining an innovative, high-growth environment and can quickly make an impact to help transform the largest companies in the world. You will work with passionate colleagues who collaborate and deliver. Magnit offers all employees the opportunity for growth and development, and we want individuals to fulfill their potential and blaze their own trails!
Magnit will offer you a competitive PTO and benefits package, including medical, dental, and vision coverage, retirement planning, as well as discounts and perks for tickets, travel, merchandise and more! Magnit encourages employees to participate in giving back, and we will match employee contributions to favorite charities and support corporate volunteering hours to make a difference in your community!

If this role isn't for you
Stay in touch; we will let you know when we have new positions on the team. To see a complete list of our open career opportunities, please visit https://magnitglobal.com/us/en/company/careers.html

To do our best work we need different viewpoints. Therefore, we celebrate diversity and embrace inclusion. As an equal opportunity employer, we are dedicated to building a team that represents a variety of backgrounds, perspectives, and skills. We strive to ensure that we maintain a positive and enriching work environment for all. By applying to this role, you consent to Magnit safely storing and managing your personal data. Please read this link to learn more: https://magnitglobal.com/us/en/privacy-notice.html

Job Details
Job Family: Staff Jobs
Pay Type: Salary
Location: Bengaluru, Karnataka, India
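Anomaly detection for use cases like the fraud and supply-demand monitoring mentioned above has a simple baseline in scikit-learn's IsolationForest; the invoice-amount data below is synthetic and purely illustrative:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Synthetic "invoice amount, days-to-payment" records with a few injected outliers.
rng = np.random.default_rng(7)
normal = np.column_stack([rng.normal(5_000, 800, size=300), rng.normal(30, 5, size=300)])
outliers = np.array([[45_000, 2], [90, 180], [60_000, 1]])
records = np.vstack([normal, outliers])

# IsolationForest flags points that are easy to isolate as anomalies (-1) vs. inliers (1).
detector = IsolationForest(contamination=0.01, random_state=0)
labels = detector.fit_predict(records)

for row, label in zip(records, labels):
    if label == -1:
        print(f"anomalous record: amount={row[0]:,.0f}, days_to_payment={row[1]:.0f}")
```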
Posted 2 weeks ago
0.0 - 1.0 years
0 Lacs
Salem, Tamil Nadu
On-site
Job Title: Senior Associate – Medical Record Analyst (US Healthcare)
Work Mode: On-site
Location: Salem, Tamil Nadu
Flexi Time: Within 8:00 AM – 7:00 PM (9-hour workday)
Work Environment & Schedule: Full-time | Day Shift: Monday to Friday (No weekends or on-call duties)
The position offers a modern, ergonomic workspace equipped with adjustable seating and proper lighting to support comfort and productivity throughout the day. Screen setups are optimized to ensure a healthy and comfortable working experience.

Position Overview
The Medical Record Analyst will analyze, organize, and summarize complex medical records for Med-Legal and life insurance projects. This role requires strong attention to detail, proficiency in medical data tools, and strict adherence to HIPAA and GDPR compliance standards.

Key Responsibilities
Analyze complex medical documentation to extract key diagnoses, treatments, and events for legal and insurance use.
Ensure data accuracy and conduct quality control by identifying discrepancies and performing audits.
Apply critical thinking to interpret complex case histories and summarize timelines and outcomes.
Prepare concise, data-driven reports and communicate complex medical information clearly to stakeholders.
Manage multiple projects with accuracy under tight deadlines while maintaining proactive communication.

Skills Required
Proficiency in data management systems and medical software tools
Strong analytical and critical thinking abilities
Excellent communication and reporting skills
Ability to work under tight deadlines without compromising quality
Quick adaptability to new tools, processes, and workflows
Experience in Med-Legal or life insurance projects
In-depth knowledge of HIPAA/GDPR compliance and data privacy regulations

Education and Experience
A degree in healthcare, medical informatics, or a related field
Minimum of 2 years experience in medical data analysis, preferably in Med-Legal or insurance-based projects

Benefits
Paid sick leave, casual leave, and compensatory leave
Statutory benefits (PF)
Paid parental leave based on company norms (maternity & paternity)
Educational support for employees' children
Holidays aligned with Indian and US calendars
Weekends off (Saturday and Sunday)
Health insurance coverage
Employee reward program
Night shift allowance (applicable for roles involving night shifts; not applicable to this day-shift position)
Certification assistance and career growth opportunities

Industry Type: Med-Legal
Department: Healthcare & Life Sciences
Employment Type: Full-time, Permanent
Role Category: Healthcare & Life Sciences – Other
Compensation: ₹4,50,000 – ₹6,00,000 per annum (based on experience and qualifications)

Team Structure
At LezDo, you'll be part of a supportive and collaborative team environment that values communication, teamwork, and shared accountability in delivering high-quality results.

Why Join Us
At LezDo, we're committed to supporting our employees' professional growth and well-being. Join a team that values precision, collaboration, and compassion.

Job Type: Full-time
Benefits: Health insurance, Provident Fund
Schedule: Monday to Friday, Night shift
Supplemental Pay: Overtime pay
Ability to commute/relocate: Salem, Tamil Nadu: Reliably commute or planning to relocate before starting work (Preferred)
Experience: Medical summarization: 1 year (Required)
Work Location: In person
Posted 2 weeks ago
2.0 - 4.0 years
0 Lacs
Gurugram, Haryana, India
On-site
We are looking for a highly skilled Generative AI Developer.

Responsibilities
We are looking for a highly skilled Generative AI Developer with expertise in Large Language Models (LLMs) to join our AI/ML innovation team. The ideal candidate will be responsible for building, fine-tuning, deploying, and optimizing generative AI models to solve complex real-world problems. You will collaborate with data scientists, machine learning engineers, product managers, and software developers to drive forward next-generation AI-powered solutions.

Key Responsibilities:
Design and develop AI-powered applications using large language models (LLMs) such as GPT, LLaMA, Mistral, Claude, or similar.
Fine-tune pre-trained LLMs for specific tasks (e.g., text summarization, Q&A systems, chatbots, semantic search).
Build and integrate LLM-based APIs into products and systems.
Optimize inference performance, latency, and throughput of LLMs for deployment at scale.
Conduct prompt engineering and design strategies for prompt optimization and output consistency.
Develop evaluation frameworks to benchmark model quality, response accuracy, safety, and bias.
Manage training data pipelines and ensure data privacy, compliance, and quality standards.
Experiment with open-source LLM frameworks and contribute to internal libraries and tools.
Collaborate with MLOps teams to automate deployment, CI/CD pipelines, and monitoring of LLM solutions.
Stay up to date with state-of-the-art advancements in generative AI, NLP, and foundation models.

Skills Required:
LLMs & Transformers: Deep understanding of transformer-based architectures (e.g., GPT, BERT, T5, LLaMA, Falcon).
Model Training/Fine-Tuning: Hands-on experience with training/fine-tuning large models using libraries such as Hugging Face Transformers, DeepSpeed, LoRA, PEFT.
Prompt Engineering: Expertise in designing, testing, and refining prompts for specific tasks and outcomes.
Python: Strong proficiency in Python with experience in ML and NLP libraries.
Frameworks: Experience with PyTorch, TensorFlow, Hugging Face, LangChain, or similar frameworks.
MLOps: Familiarity with tools like MLflow, Kubeflow, Airflow, or SageMaker for model lifecycle management.
Data Handling: Experience with data pipelines, preprocessing, and working with structured and unstructured data.

Desirable Skills:
Deployment: Knowledge of deploying LLMs on cloud platforms like AWS, GCP, Azure, or edge devices.
Vector Databases: Experience with FAISS, Pinecone, Weaviate, or ChromaDB for semantic search applications.
LLM APIs: Experience integrating with APIs like OpenAI, Cohere, Anthropic, Mistral, etc.
Containerization: Docker, Kubernetes, and cloud-native services for scalable model deployment.
Security & Ethics: Understanding of LLM security, hallucination handling, and responsible AI practices.

Qualifications:
Bachelor's or Master's degree in Computer Science, Artificial Intelligence, Machine Learning, or a related field.
2-4 years of experience in ML/NLP roles, with at least 1-2 years specifically focused on generative AI and LLMs.
Prior experience working in a research or product-driven AI team is a plus.
Strong communication skills to explain technical concepts and findings.

Soft Skills:
Analytical thinker with a passion for solving complex problems.
Team player who thrives in cross-functional settings.
Self-driven, curious, and always eager to learn the latest advancements in AI.
Ability to work independently and deliver high-quality solutions under tight deadlines.
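Task-specific fine-tuning of a pre-trained transformer, one of the core responsibilities above, can be sketched with the Hugging Face Trainer API. This minimal example fine-tunes a small BERT variant for sentiment classification on a public dataset, which stands in for whatever domain task the role actually targets; model, dataset, and hyperparameters are illustrative choices:

```python
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

MODEL_NAME = "distilbert-base-uncased"  # small model so the sketch runs on modest hardware

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)

# Public sentiment dataset as a stand-in for real task-specific data.
dataset = load_dataset("imdb")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=256)

tokenized = dataset.map(tokenize, batched=True)
train_split = tokenized["train"].shuffle(seed=42).select(range(2000))  # subsample to keep it quick
eval_split = tokenized["test"].shuffle(seed=42).select(range(500))

args = TrainingArguments(
    output_dir="./finetuned-sentiment",
    per_device_train_batch_size=16,
    num_train_epochs=1,
    logging_steps=50,
)

trainer = Trainer(model=model, args=args, train_dataset=train_split, eval_dataset=eval_split)
trainer.train()
print(trainer.evaluate())  # reports evaluation loss on the held-out split
```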
Posted 2 weeks ago
4.0 years
0 Lacs
Mumbai, Maharashtra, India
Remote
AI & Machine Learning Engineer (Computer Vision) – India
AI & Machine Learning Engineer (Computer Vision), US-based or remote in EU/India
Join North America's largest live selling and video commerce platform, and help drive the next level of growth as our newest AI & Machine Learning Engineer.
Live Selling Starts at CommentSold. CommentSold offers retailers a complete live sales solution. From onboarding and strategy, superior go-live technology, and live selling best practices to backend solutions for inventory, invoicing, and fulfillment, our team is there to guide a top-notch customer experience every step of the way.
Our AI & Data team stays close to shops and creators selling through our platforms, building the artificial intelligence tools to make running their business easier. We make decisions fast, and priorities change as we adapt to the needs of our industry, so we welcome folks who relish the challenges of pace. We believe in quick iteration and in-the-moment feedback, so we can work collectively to build the best team and product.
In this role, you will collaborate with a team of software engineers and cloud platform engineers, as well as data experts, on our consumer-facing live-selling platforms and apps. This role works closely with our international and US-based product and engineering teams and reports to the leader of the AI & Data department.
In this role, you will:
Develop and maintain code that leverages computer vision, natural language processing, and machine learning algorithms and technologies to solve hot business problems
Utilize machine learning concepts and algorithms, as well as other deep learning architectures, to analyze video, images, and large text corpora
Review and train machine learning models and evaluate their performance
Develop product similarity and product recommender routines, and adjust them to work efficiently as in-app features
Understand and apply natural language processing in sentiment analysis, named entity recognition, and text summarization
Orchestrate computer vision tasks such as image classification, object detection, image segmentation, face and pose detection, and movement and facial expression analysis
Deploy machine learning models to production environments and show proficiency with version control systems
Apply image and video generation to support the business processes of our partners and their end customers
Keep up to date with the latest advances in AI and machine learning
Gain a deep understanding of our product and become involved in driving product implementation
Join a rapidly growing AI & Data team with the opportunity to take on both product and technical problems
If you're right for this role, you:
Have 4+ years of data science or machine learning experience
Have strong programming skills in Python (tested during the interview process)
Have sound judgment for solving issues pragmatically without adding technical debt
Demonstrate skills in designing, building, and evaluating predictive models and AI algorithms, with a deep understanding of the current landscape of possible AI solutions
Have a passion for understanding business stakeholders' problems and solving them in innovative and efficient ways
Are skilled with cloud services (AWS, GCP, Azure) for deploying cloud solutions and APIs
Bring proven expertise in computer vision and NLP to the common table (assessed during the interview process)
Are comfortable in a fast-paced, pragmatic work environment
Have a strong understanding of core computer science principles and follow standard CI/CD practices
Have experience with e-commerce and/or live-selling platforms (a strong plus)
Work well in a remote, collaborative team environment, and can communicate your thought process in problem solving, verbally and over Slack
Have a mindset toward high-quality output and attention to detail, and are comfortable providing and receiving feedback in code reviews
Possess strong analytical and problem-solving skills, and are curious about new approaches and recent research
Have a strong desire to learn and stay updated with the latest in AI and machine learning, and are persistent and resilient in the face of challenging technical problems
Have experience in machine learning ops (a plus)
Are a strong written and verbal communicator, with native or advanced English language skills
Must be able to flex at least 4 working hours to overlap with North American time zones, with a requirement to work until 3pm EST (US) for interaction with teams in other time zones
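As a rough illustration of the face-detection piece of the computer-vision work described above, here is a minimal OpenCV sketch using the classic Haar-cascade detector. The image path is a placeholder, and a production system would more likely use a modern deep detector and read frames from a live video stream.

```python
# Minimal face-detection sketch with OpenCV's bundled Haar cascade.
# "frame.jpg" is a placeholder path; real pipelines would read frames
# from a video stream and likely use a deep-learning detector.
import cv2

cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
detector = cv2.CascadeClassifier(cascade_path)

image = cv2.imread("frame.jpg")  # hypothetical input frame
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Returns (x, y, w, h) bounding boxes for detected faces.
faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imwrite("frame_annotated.jpg", image)
print(f"Detected {len(faces)} face(s)")
```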
Posted 2 weeks ago
0 years
0 Lacs
India
Remote
Step into the world of AI innovation with the Experts Community of Soul AI (by Deccan AI). We are looking for India's top 1% NLP Engineers for a unique job opportunity to work with industry leaders.
Who can be a part of the community?
We are looking for top-tier Natural Language Processing Engineers with experience in text analytics, LLMs, and speech processing. If you have experience in this field, this is your chance to collaborate with industry leaders.
What's in it for you?
Pay above market standards.
The role is contract based, with project timelines from 2-12 months, or freelancing.
Be a part of an elite community of professionals who can solve complex AI challenges.
Work location could be: remote (highly likely), onsite at a client location, or Deccan AI's office in Hyderabad or Bangalore.
Responsibilities:
Develop and optimize NLP models (NER, summarization, sentiment analysis) using transformer architectures (BERT, GPT, T5, LLaMA).
Build scalable NLP pipelines for real-time and batch processing of large text data, optimize models for performance, and deploy on cloud platforms (AWS, GCP, Azure).
Implement CI/CD pipelines for automated training, deployment, and monitoring, and integrate NLP models with search engines, recommendation systems, and RAG techniques.
Ensure ethical AI practices and mentor junior engineers.
Required Skills:
Expert Python skills with NLP libraries (Hugging Face, SpaCy, NLTK).
Experience with transformer-based models (BERT, GPT, T5) and deploying at scale (Flask, Kubernetes, cloud services).
Strong knowledge of model optimization, data pipelines (Spark, Dask), and vector databases.
Familiarity with MLOps, CI/CD (MLflow, DVC), cloud platforms, and data privacy regulations.
Nice to Have:
Experience with multimodal AI, conversational AI (Rasa, OpenAI API), graph-based NLP, knowledge graphs, and A/B testing for model improvement.
Contributions to open-source NLP projects or a strong publication record.
What are the next steps?
1. Register on our Soul AI website.
2. Our team will review your profile.
3. Clear all the screening rounds: complete the assessments once you are shortlisted. As soon as you qualify in all the screening rounds (assessments, interviews), you will be added to our Expert Community.
4. Profile matching and project allocation: be patient while we align your skills and preferences with available projects.
Skip the Noise. Focus on Opportunities Built for You!
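The summarization and NER work described in this kind of role is commonly prototyped with the Hugging Face `pipeline` API before any custom fine-tuning. The sketch below is a minimal example; the model names are public checkpoints chosen for illustration, not anything specified in the posting, and a real project would pin versions and fine-tune on domain data.

```python
# Quick NLP prototyping with Hugging Face pipelines (illustrative sketch).
# Model names are public checkpoints picked for the example.
from transformers import pipeline

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")
ner = pipeline("ner", model="dslim/bert-base-NER", aggregation_strategy="simple")

text = (
    "Acme Corp announced on Monday that it will open a new research lab in "
    "Bengaluru, hiring around 200 engineers over the next two years to work "
    "on speech and language technologies."
)

summary = summarizer(text, max_length=40, min_length=10, do_sample=False)
entities = ner(text)

print(summary[0]["summary_text"])
for ent in entities:
    print(ent["entity_group"], ent["word"], round(float(ent["score"]), 3))
```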
Posted 2 weeks ago
5.0 years
6 - 15 Lacs
Thrissur
On-site
Senior AI Engineer
Location: Infopark, Thrissur, Kerala
Employment Type: Full-Time
Experience Required: Minimum 5 years providing AI solutions, including expertise in ML/DL projects
About Us
JK Lucent is a growing IT services provider headquartered in Melbourne, Australia, with an operations centre at Infopark, Thrissur, Kerala. We specialize in software development, software testing, game development, RPA, data analytics, and AI solutions. At JK Lucent, we are driven by innovation and committed to delivering cutting-edge technology services that solve real-world problems and drive digital transformation.
About the Role
We are looking for a highly skilled and experienced Senior AI Engineer to lead the development and deployment of advanced AI systems. This role requires deep expertise in Machine Learning (ML), Deep Learning (DL), and Large Language Models (LLMs). The successful candidate will work on complex AI initiatives, contribute to production-ready systems, and mentor junior engineers. A strong command of professional English and the ability to communicate technical concepts clearly are essential.
Roles and Responsibilities
Design and develop scalable AI and ML models for real-world applications.
Build, fine-tune, and implement Large Language Models (LLMs) for use cases such as chatbots, summarization, and document intelligence.
Work with deep learning frameworks such as TensorFlow, PyTorch, and Hugging Face Transformers.
Collaborate with cross-functional teams to translate business problems into AI solutions, with necessary visualizations using tools like Tableau or Power BI.
Deploy models to production environments and implement monitoring and model retraining pipelines.
Stay up to date with the latest research and trends in AI, especially in LLMs and generative models.
Guide and mentor junior AI engineers, reviewing code and providing technical leadership.
Contribute to technical documentation, architecture design, and solution strategies.
Ensure models are developed and used ethically and comply with data privacy standards.
Requirements
Minimum 5 years of experience in AI/ML development with hands-on expertise in model design, development, and deployment.
Strong experience working with LLMs and generative AI tools such as Hugging Face Hub, LangChain, Haystack, LLaMA, GPT, BERT, and T5.
Proficiency in Python and ML/DL libraries such as TensorFlow, PyTorch, XGBoost, scikit-learn, and Hugging Face Transformers.
Solid understanding of mathematics, statistics, and applied data science principles.
Experience deploying models using Docker, FastAPI, MLflow, or similar tools.
Familiarity with cloud platforms (AWS, Azure, or GCP) and their AI/ML services.
Demonstrated experience working on end-to-end AI solutions in production environments.
Excellent English communication skills (verbal and written) and the ability to present technical topics.
Strong leadership skills and experience mentoring junior developers or leading small technical teams.
Bachelor's or Master's in Computer Science, AI, Data Science, or a related discipline.
Job Type: Full-time
Pay: ₹600,000.00 - ₹1,500,000.00 per year
Schedule: Day shift, Monday to Friday
Application Question(s):
1. Are you able to commute daily to Infopark, Koratty, Thrissur? (Yes/No)
2. How many years of total IT experience do you have? (Numeric)
3. How many years of experience do you have in AI/ML development? (Numeric)
4. How many years of experience do you have working with Large Language Models (LLMs)? (Numeric)
5. Are you proficient in Python? (Yes/No)
6. Have you used frameworks like TensorFlow, PyTorch, or Hugging Face? (Yes/No)
7. Have you deployed AI/ML models to production environments? (Yes/No)
8. Have you worked with cloud platforms like AWS, Azure, or GCP? (Yes/No)
9. Do you have professional-level proficiency in English? (Yes/No)
10. What is your current notice period in days? (Numeric)
11. What is your expected salary in LPA? (Numeric)
12. What is your current or last drawn salary in LPA? (Numeric)
Work Location: In person
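The requirement above to deploy models with Docker and FastAPI typically looks something like the sketch below: a small inference service wrapping a trained model behind a JSON endpoint. The model-loading call is a placeholder for any trained artifact, so treat this as an assumed pattern rather than a specific employer's stack.

```python
# Minimal FastAPI inference service (illustrative sketch).
# `joblib.load("model.pkl")` is a placeholder for any trained model artifact.
# Run with: uvicorn serve:app --host 0.0.0.0 --port 8000  (file named serve.py)
import joblib
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="demo-inference-service")
model = joblib.load("model.pkl")  # hypothetical pre-trained scikit-learn model


class PredictRequest(BaseModel):
    features: list[float]


class PredictResponse(BaseModel):
    prediction: float


@app.post("/predict", response_model=PredictResponse)
def predict(req: PredictRequest) -> PredictResponse:
    # scikit-learn models expect a 2D array: one row per sample.
    y = model.predict([req.features])[0]
    return PredictResponse(prediction=float(y))


@app.get("/health")
def health() -> dict:
    return {"status": "ok"}
```

The same service is usually packaged in a Docker image and fronted by the monitoring and retraining pipelines the posting mentions.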
Posted 2 weeks ago
10.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Fractal is one of the most prominent players in the artificial intelligence space. Fractal's mission is to power every human decision in the enterprise, bringing AI, engineering, and design to help the world's most admired Fortune 500® companies. Fractal's products include Qure.ai to assist radiologists in making better diagnostic decisions, Crux Intelligence to assist CEOs and senior executives in making better tactical and strategic decisions, Theremin.ai to improve investment decisions, Eugenie.ai to find anomalies in high-velocity data, Samya.ai to drive next-generation Enterprise Revenue Growth Management, and Senseforth.ai to automate customer interactions at scale to grow top line and bottom line. Analytics Vidhya is the largest analytics and data science community, offering industry-focused training programs.
Fractal has more than 3600 employees across 16 global locations, including the United States, UK, Ukraine, India, Singapore, and Australia. Fractal has consistently been rated one of India's best companies to work for by The Great Place to Work® Institute, featured as a leader in the Customer Analytics Service Providers Wave™ 2021, Computer Vision Consultancies Wave™ 2020, and Specialized Insights Service Providers Wave™ 2020 by Forrester Research, named a leader in the Analytics & AI Services Specialists Peak Matrix 2021 by Everest Group, and recognized as an "Honorable Vendor" in the 2022 Magic Quadrant™ for Data & Analytics by Gartner. For more information, visit fractal.ai
Job Description: Senior Data Scientist – Generative AI
We're looking for a passionate Data Scientist – Generative AI who thrives at the intersection of AI research and real-world applications. This role is ideal for someone who is eager to build, experiment, and scale LLM-powered solutions in enterprise environments. The role blends hands-on problem solving, research, engineering, and collaboration across a multidisciplinary team, driving innovation across industries and domains.
Responsibilities:
Design and implement advanced solutions utilizing Large Language Models (LLMs).
Demonstrate self-driven initiative by taking ownership and creating end-to-end solutions.
Conduct research and stay informed about the latest developments in generative AI and LLMs.
Develop and maintain code libraries, tools, and frameworks to support generative AI development.
Participate in code reviews and contribute to maintaining high code quality standards.
Engage in the entire software development lifecycle, from design and testing to deployment and maintenance.
Collaborate closely with cross-functional teams to align messaging, contribute to roadmaps, and integrate software into different repositories for core system compatibility.
Possess strong analytical and problem-solving skills.
Demonstrate excellent communication skills and the ability to work effectively in a team environment.
Primary Skills:
Natural Language Processing (NLP): hands-on experience in use-case classification, topic modeling, Q&A and chatbots, search, document AI, summarization, and content generation. AND/OR
Computer Vision and Audio: hands-on experience in image classification, object detection, segmentation, image generation, and audio and video analysis.
Generative AI: proficiency with SaaS LLMs, including LangChain, LlamaIndex, vector databases, and prompt engineering (CoT, ToT, ReAct, agents); experience with Azure OpenAI, Google Vertex AI, and AWS Bedrock for text/audio/image/video modalities; familiarity with open-source LLMs, including tools like TensorFlow/PyTorch and Hugging Face; techniques such as quantization, LLM fine-tuning using PEFT, RLHF, data annotation workflows, and GPU utilization.
Cloud: hands-on experience with cloud platforms such as Azure, AWS, and GCP. Cloud certification is preferred.
Application Development: proficiency in Python, Docker, FastAPI/Django/Flask, and Git.
Must-Have Skills:
5-10 years of experience in data science/NLP, with at least 2 years in GenAI/LLMs
Proficiency in Python, SQL, and ML frameworks (e.g., PyTorch, TensorFlow, Hugging Face)
Hands-on experience with GenAI tools like LangChain, LlamaIndex, RAG pipelines, prompt engineering, and vector databases (e.g., FAISS, ChromaDB)
Strong understanding of NLP techniques including embeddings, topic modeling, text classification, semantic search, summarization, Q&A, chatbots, etc.
Experience with cloud platforms (GCP, AWS, or Azure) and CI/CD pipelines
Experience integrating LLMs via Azure OpenAI, Google Vertex AI, or AWS Bedrock
Ability to work independently and drive projects end-to-end from development to production
Strong problem-solving, data storytelling, and communication skills
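For the LLM-integration side of a role like this, the simplest building block is a chat-completion call with a carefully structured prompt. The sketch below uses the OpenAI Python SDK (v1-style client) purely as an example; the model name and the environment variable are assumptions, and an equivalent call could go through Azure OpenAI, Vertex AI, or Bedrock instead.

```python
# Minimal chat-completion call with a structured prompt (illustrative sketch).
# Assumes the openai>=1.0 SDK and an OPENAI_API_KEY in the environment;
# "gpt-4o-mini" is just an example model name.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are a careful analyst. Summarize the customer feedback in three "
    "bullet points and classify the overall sentiment."
)

feedback = "The dashboard is useful, but exports keep timing out on large reports."

response = client.chat.completions.create(
    model="gpt-4o-mini",   # assumed model name for the example
    temperature=0.2,       # low temperature for more consistent output
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": feedback},
    ],
)

print(response.choices[0].message.content)
```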
Posted 2 weeks ago
5.0 - 7.0 years
4 - 8 Lacs
Gurgaon
On-site
Title: Manager - 1
Date: Jun 9, 2025
Location: Gurgaon - R&D
Company: Sun Pharmaceutical Industries Ltd
Job Description
Job Title: Operations Project Management (Data Management and Analytics)
Purpose: Primarily responsible for consolidated portfolio management of global projects within R&D; data and repository management; analytics that help management and leadership make data-driven decisions; budget management; support in defining and tracking shared goals and targets; capacity mapping; review management; analytics on activities performed within R&D based on historical data; tracking overall project progress and RAG reporting; working with dashboards such as Tableau and Critical Chain project reports; and supporting the Project Management team and R&D functional heads with data-driven inputs that lead to corrective actions wherever required.
Roles and Responsibilities
The role requires awareness of project management methodologies and end-to-end project lifecycle knowledge.
Risk/issue management, change management, re-prioritization, optimization, and automation of activities will be key requirements for the role.
Experience working with, managing, and analysing large data sets.
RAG reporting and timely escalation to avoid impact on deliverables.
Prioritization, on-time delivery, and excellence in data handling, analysis, and summarization of outcomes.
Automation, dashboard and CCPM tool report management, budget and work-plan management, global portfolio and project tracking, review management, and MIS readiness will be some of the key activities the person will support.
Meeting management, stakeholder management, understanding requirements, and delivering outcome-oriented analytics that enable data-driven decision making.
Key Skills
The role requires excellent communication and coordination skills; the candidate must be a team player able to work with all stakeholders and departments within and outside R&D.
Good analytical, logical, and lateral thinking; advanced Excel with key formulas; knowledge of MS Office, MS Project, and PowerPoint; awareness of macros, SQL queries, and dashboards is an added advantage.
Innovative thinker, flexible approach, and a go-getter with leadership skills, as the role requires interacting with and getting work done through peers, juniors, seniors, and the leadership team.
Qualification
Bachelor's or Master's in any field. PMP/PRINCE2 certified in project management.
Sun Pharmaceutical India Ltd – Job Description
Position: Executive/Sr Executive
Grade: G12B/G11
Experience: 5-7 years
Job Location: Baroda/Gurgaon
Education: Bachelor's in any stream/MBA/PMP or PRINCE2 certified
Posted 2 weeks ago
0 years
0 Lacs
Ahmedabad
On-site
Job Description
Design and build smart automation workflows using n8n, Zapier, and Make.com.
Integrate APIs and connect third-party apps to streamline business processes.
Use LLMs (e.g., OpenAI, Cohere) for tasks like summarization, data extraction, and decision logic.
Build RAG pipelines with vector databases like Pinecone, ChromaDB, or Weaviate.
Develop and test autonomous agents using LangChain, AutoGen, or similar frameworks.
Write clean, modular code in Python or JavaScript to support custom workflow logic.
Prototype ideas quickly and ship real features used in production environments.
Document your workflows and collaborate with developers, consultants, and product teams.
Key Skills
Final-year students: only final-year students in Computer Science, AI/ML, Data Science, Information Systems, or related fields who are willing to work full time after the internship.
Curiosity & Initiative: you love experimenting with new tools and technologies and aren't afraid to break things to learn.
Basic to Intermediate Coding Skills: comfortable writing Python or JavaScript/TypeScript; able to read API docs and write modular code.
Familiarity with (or willingness to learn) Workflow Platforms: exposure to n8n, Zapier, Make.com, or similar; if you haven't used n8n yet, we'll help you onboard.
API Knowledge: understanding of RESTful APIs, JSON, and authentication mechanisms.
Interest in AI/LLMs: you know the basics of LLMs or are eager to dive in: prompt engineering, embeddings, RAG concepts.
Problem-Solving Mindset: you can break down complex tasks into smaller steps, map flows, and foresee edge cases.
Communication & Documentation: you can explain your workflows, document steps, and write clean README/instructions.
Team Player: open to feedback, able to collaborate in agile/scrum-like setups, and willing to help peers troubleshoot.
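Much of the workflow-automation work listed above reduces to calling one service's API, reshaping the JSON, and handing the result to the next step (an LLM, a spreadsheet, a webhook). The sketch below shows that glue pattern in Python with `requests`; the URLs and field names are made up for illustration and do not refer to any real system.

```python
# Illustrative workflow-glue sketch: fetch records from one API, keep the
# fields we care about, and forward them to a downstream webhook.
# Both URLs and the JSON shape are hypothetical placeholders.
import requests

SOURCE_URL = "https://api.example.com/v1/tickets"           # hypothetical
WEBHOOK_URL = "https://hooks.example.com/workflows/triage"  # hypothetical


def fetch_open_tickets() -> list[dict]:
    resp = requests.get(SOURCE_URL, params={"status": "open"}, timeout=10)
    resp.raise_for_status()
    return resp.json()["items"]


def forward_for_triage(tickets: list[dict]) -> None:
    payload = [
        {"id": t["id"], "subject": t["subject"], "body": t["body"]}
        for t in tickets
    ]
    resp = requests.post(WEBHOOK_URL, json={"tickets": payload}, timeout=10)
    resp.raise_for_status()


if __name__ == "__main__":
    open_tickets = fetch_open_tickets()
    if open_tickets:
        forward_for_triage(open_tickets)
    print(f"Forwarded {len(open_tickets)} ticket(s) for triage")
```

In tools like n8n or Zapier the same steps would be nodes in a visual workflow, with custom code reserved for the pieces the built-in nodes cannot express.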
Posted 2 weeks ago
2.0 - 3.0 years
0 Lacs
Chennai, Tamil Nadu, India
Remote
Since 1976, Phosphea, a subsidiary of Groupe Roullier (France), has been producing and selling inorganic feed phosphates and innovative specialty macro-mineral products for the animal nutrition industry. We have been a pioneer in research and innovation, bringing added value to our customers and products truly adapted to their needs. Phosphea has a culturally diverse workforce of 550 employees on 5 continents and a presence in over 100 countries. Our technical expertise and proximity to our customers are our key strengths. Our ambition is to answer the current challenges of the industry, namely economic and zootechnical performance, while protecting animal welfare and the environment.
We are looking for an Operations Executive for our Chennai-based South Asian regional office to support our growth through excellence in logistics services. This includes warehouse/CFS management, shipment follow-up, quality control documentation, SAP inventory management, and inbound and outbound transport.
Operations Executive – M/F, based in Chennai (India)
Your main tasks:
Arrange and manage daily inbound and outbound operations, from plant to stock and stock to clients in India (full order cycle).
Be fully involved in daily groundwork and operational activities: roughly 80% of the time at the warehouse and CFS and 20% at the office.
Maintain metrics and analyze data to assess performance and implement improvements.
Supervise, coach, and train the warehouse/CFS workforce as per our needs.
Resolve any arising problems or complaints within your area of management.
Keep track of quality, quantity, stock levels, delivery times, transport costs, and overall efficiency.
Your profile:
Strong communication skills in English, with solid summarization and articulation skills.
A bachelor's degree in Logistics or Supply Chain is preferred but not mandatory for candidates with relevant experience.
Knowledge of local customs and trade compliance.
Proven working experience as a logistics executive for an MNC is preferred.
SAP proficient (preferably SAP Business One).
Excellent analytical and problem-solving skills and a strong orientation toward rigor.
Ability to work independently with remote supervision.
2-3 years of experience in the relevant field.
Willingness to take up operational activities outside normal working hours: CFS timings cannot be fixed to 9-to-6, as operations happen 24/7.
Posted 2 weeks ago
10.0 years
0 Lacs
Gurugram, Haryana, India
On-site
About Us: Athena is India's largest institution in the "premium undergraduate study abroad" space. Founded 10 years ago by two Princeton graduates, Poshak Agrawal and Rahul Subramaniam, Athena is headquartered in Gurgaon, with offices in Mumbai and Bangalore, and caters to students from 26 countries.
Athena's vision is to help students become the best version of themselves. Athena's transformative, holistic life coaching program embraces both depth and breadth, sciences and the humanities. Athena encourages students to deepen their theoretical knowledge and apply it to address practical issues confronting society, both locally and globally.
Through our flagship program, our students have gotten into various universities, including Harvard University, Princeton University, Yale University, Stanford University, University of Cambridge, MIT, Brown, Cornell University, University of Pennsylvania, and University of Chicago, among others.
Learn more about Athena: https://www.athenaeducation.co.in/article.aspx
Role Overview
We are looking for an AI/ML Engineer who can mentor high-potential scholars in creating impactful technology projects. This role requires a blend of strong engineering expertise, the ability to distill complex topics into digestible concepts, and a deep passion for student-driven innovation. You'll help scholars explore the frontiers of AI, from machine learning models to generative AI systems, while coaching them in best practices and applied engineering.
Key Responsibilities:
Guide scholars through the full AI/ML development cycle, from problem definition, data exploration, and model selection to evaluation and deployment.
Teach and assist in building: supervised and unsupervised machine learning models; deep learning networks (CNNs, RNNs, Transformers); NLP tasks such as classification, summarization, and Q&A systems.
Provide mentorship in prompt engineering: craft optimized prompts for generative models like GPT-4 and Claude; teach the principles of few-shot, zero-shot, and chain-of-thought prompting; experiment with fine-tuning and embeddings in LLM applications.
Support scholars with real-world datasets (e.g., Kaggle, open data repositories) and help integrate APIs, automation tools, or MLOps workflows.
Conduct internal training and code reviews, ensuring technical rigor in projects.
Stay updated with the latest research, frameworks, and tools in the AI ecosystem.
Technical Requirements:
Proficiency in Python and ML libraries: scikit-learn, XGBoost, Pandas, NumPy.
Experience with deep learning frameworks: TensorFlow, PyTorch, Keras.
Strong command of machine learning theory, including the bias-variance tradeoff, regularization, model tuning, cross-validation, hyperparameter optimization, and ensemble techniques.
Solid understanding of data processing pipelines, data wrangling, and visualization (Matplotlib, Seaborn, Plotly).
Advanced AI & NLP
Experience with transformer architectures (e.g., BERT, GPT, T5, LLaMA).
Hands-on with LLM APIs: OpenAI (ChatGPT), Anthropic, Cohere, Hugging Face.
Understanding of embedding-based retrieval, vector databases (e.g., Pinecone, FAISS), and Retrieval-Augmented Generation (RAG).
Familiarity with AutoML tools, MLflow, Weights & Biases, and cloud AI platforms (AWS SageMaker, Google Vertex AI).
Prompt Engineering & GenAI
Proficiency in crafting effective prompts using instruction tuning, role-playing and system prompts, and prompt chaining tools like LangChain or LlamaIndex.
Understanding of AI safety, bias mitigation, and interpretability.
Required Qualifications:
Bachelor's degree from a Tier-1 engineering college in Computer Science, Engineering, or a related field.
2-5 years of relevant experience in ML/AI roles.
Portfolio of projects or publications in AI/ML (GitHub, blogs, competitions, etc.).
Passion for education, mentoring, and working with high school scholars.
Excellent communication skills, with the ability to convey complex concepts to a diverse audience.
Preferred Qualifications:
Prior experience in student mentorship, teaching, or edtech.
Exposure to Arduino, Raspberry Pi, or IoT for integrated AI/ML projects.
Strong storytelling and documentation abilities to help scholars write compelling project reports and research summaries.
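The few-shot versus zero-shot prompting principles mentioned above are easy to demonstrate with plain string templates before any API is involved. The sketch below builds both prompt variants and leaves the actual LLM call as a placeholder; `call_llm` is hypothetical, not a real library function, and the example reviews are invented.

```python
# Zero-shot vs. few-shot prompt construction (illustrative teaching sketch).
# `call_llm` is a hypothetical placeholder for whichever LLM API is in use.

ZERO_SHOT = """Classify the sentiment of the review as Positive, Negative, or Neutral.

Review: {review}
Sentiment:"""

FEW_SHOT = """Classify the sentiment of the review as Positive, Negative, or Neutral.

Review: The battery lasts two full days, I'm impressed.
Sentiment: Positive

Review: It stopped working after a week and support never replied.
Sentiment: Negative

Review: {review}
Sentiment:"""


def build_prompts(review: str) -> dict[str, str]:
    """Return both prompt variants for side-by-side comparison."""
    return {
        "zero_shot": ZERO_SHOT.format(review=review),
        "few_shot": FEW_SHOT.format(review=review),
    }


if __name__ == "__main__":
    prompts = build_prompts("Delivery was fast but the packaging was damaged.")
    for name, prompt in prompts.items():
        print(f"--- {name} ---\n{prompt}\n")
        # response = call_llm(prompt)  # hypothetical LLM call goes here
```

Comparing the two outputs on borderline inputs is a simple way to show why a handful of in-context examples often stabilizes model behavior.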
Posted 2 weeks ago
7.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Company Description
Intozi is a leading provider of a comprehensive Video Analytics Platform, specializing in AI solutions for real-time computer vision and deep learning applications across various industries such as Smart Cities, Manufacturing, Traffic, Retail, and Warehousing. Our expertise includes computer vision, deep learning, machine learning, and IoT technologies, making us industry pioneers.
Pre-Sales Executive – Video Analytics Solutions
Location: Gurugram, India
Experience: 3–7 Years
Department: Sales & Solution Engineering
Industry: AI-based Video Analytics / Surveillance / Smart Cities
🧾 Job Summary
We are looking for a proactive and technically adept Pre-Sales Executive to support Intozi's growth across government, enterprise, and smart city verticals. The ideal candidate will bridge the gap between client requirements and Intozi's AI-powered video analytics offerings, with a strong focus on tender response, RFP/RFQ management, technical documentation, and regulatory compliance.
🎯 Key Responsibilities
Tender & RFP Management
Analyze public and private sector RFPs, EoIs, and tender documents
Coordinate proposal creation, BOQ mapping, pre-bid queries, and submissions
Ensure adherence to buyer-specific compliance formats and technical checklists
Solution Design & Technical Mapping
Understand client needs and map them to Intozi's product suite (e.g., ANPR, traffic enforcement, smart surveillance, billing compliance, etc.)
Create technical architectures, solution diagrams, and integration plans with third-party systems
Client Engagement & Demo Support
Present product capabilities to clients via demos, POCs, and on-site presentations
Work closely with the sales team to develop value propositions aligned to vertical use cases
Regulatory & Standards Compliance
Ensure all proposed solutions comply with standards such as STQC, CE, NDA policy, or state-specific guidelines
Coordinate with internal tech/legal teams for certifications, warranty, SLA, and data policies
Market Intelligence
Track industry tenders, competitor bids, empanelment opportunities, and upcoming smart city initiatives
Maintain a repository of reusable content, past submissions, and techno-commercial templates
🧠 Requirements
Bachelor's degree in Engineering / IT / Electronics or a related field
3+ years in pre-sales, bid management, or solution consulting
Experience with RFPs, government procurement processes, and tender portals (e.g., GeM, eProc)
Strong understanding of CCTV, AI-based video analytics, VMS, edge computing, or compliance-based surveillance
Excellent documentation and presentation skills (Word, Excel, PPT, Visio)
Ability to coordinate with internal tech and sales teams under tight timelines
✅ Good to Have
Familiarity with ANPR, face recognition, video summarization, or traffic enforcement use cases
Working knowledge of STQC, MoRTH, NHAI, and smart city mission tenders
Experience in preparing compliance matrices and pre-bid clarifications
Posted 2 weeks ago
4.0 years
0 Lacs
Chennai, Tamil Nadu, India
Remote
Job Title: AI Lead - Video & Image Analytics, GenAI & NLP (AWS)
Location: Chennai
Company: Datamoo AI
About Us: Datamoo AI is an innovative AI-driven company focused on developing cutting-edge solutions in workforce and contract management, leveraging AI for automation, analytics, and optimization. We are building intelligent systems that enhance business efficiency through advanced AI models in video analytics, image processing, Generative AI, and NLP, deployed on AWS.
Job Overview: We are seeking a highly skilled and experienced AI Lead to drive the development of our AI capabilities. This role requires expertise in video analytics, image analytics, Generative AI, and Natural Language Processing (NLP), along with hands-on experience in deploying AI solutions on AWS. The AI Lead will be responsible for leading a team of AI engineers, researchers, and data scientists, overseeing AI strategy, and ensuring the successful execution of AI-powered solutions.
Key Responsibilities:
Lead and mentor a team of AI engineers and data scientists to develop innovative AI-driven solutions.
Design and implement AI models for video analytics, image processing, and NLP applications.
Drive the development of Generative AI applications tailored to our product needs.
Optimize and deploy AI/ML models on AWS using cloud-native services like SageMaker, Lambda, and EC2.
Collaborate with cross-functional teams to integrate AI solutions into Datamoo AI's workforce and contract management applications.
Ensure AI solutions are scalable, efficient, and aligned with business objectives.
Stay updated with the latest advancements in AI and ML and drive adoption of new technologies where applicable.
Define AI research roadmaps and contribute to intellectual property development.
Required Skills & Qualifications:
4+ years of experience in AI, ML, or Data Science with a focus on video/image analytics, NLP, and GenAI.
Strong hands-on experience with deep learning frameworks such as TensorFlow, PyTorch, or OpenCV.
Expertise in Generative AI, including transformer models (GPT, BERT, DALL·E, etc.).
Proficiency in computer vision techniques, including object detection, recognition, and tracking.
Strong experience in NLP models, including text summarization, sentiment analysis, and chatbot development.
Proven track record of deploying AI solutions on AWS (SageMaker, EC2, Lambda, S3, etc.).
Strong leadership skills with experience in managing AI/ML teams.
Proficiency in Python, SQL, and cloud computing architectures.
Excellent problem-solving skills and ability to drive AI strategy and execution.
Preferred Qualifications:
Experience with MLOps, model monitoring, and AI governance.
Knowledge of blockchain and AI-powered contract management systems.
Understanding of edge AI deployment for real-time analytics.
Published research papers or contributions to open-source AI projects.
What We Offer:
Opportunity to lead AI innovation in a fast-growing AI startup.
Collaborative work environment with cutting-edge AI technologies.
Competitive salary and stock options.
Flexible work environment (remote/hybrid options available).
Access to AI research conferences and continuous learning programs.
If you are an AI expert passionate about pushing the boundaries of AI and leading a dynamic team, we'd love to hear from you!
How to Apply: Send your resume and a cover letter to hr@datamoo.ai.
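Once a model is hosted on a SageMaker endpoint, as the posting above describes, inference from application code is a single `invoke_endpoint` call via boto3. The sketch below shows that pattern; the endpoint name, region, and payload shape are placeholders, not details taken from the posting.

```python
# Calling a deployed SageMaker endpoint from application code (illustrative).
# Endpoint name, region, and payload schema are hypothetical placeholders.
import json

import boto3

runtime = boto3.client("sagemaker-runtime", region_name="ap-south-1")

payload = {"text": "The package arrived late and the box was damaged."}

response = runtime.invoke_endpoint(
    EndpointName="demo-nlp-endpoint",   # hypothetical endpoint
    ContentType="application/json",
    Body=json.dumps(payload),
)

result = json.loads(response["Body"].read())
print(result)
```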
Posted 2 weeks ago
8.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Description
About Us: We are a fast-growing AI domain registrar and website builder focused on automation, scalability, and user experience. Our development team is based in Pune and includes front-end, back-end, and QA engineers. We're looking for a hands-on Software Architect to lead the technical direction of our product and champion the use of AI across the development lifecycle.
Job Overview: As our Software Architect, you will work closely with the Product Owner to determine how we build what we build. You will guide the architectural direction, evaluate and select technologies, mentor developers, and ensure our codebase is scalable, maintainable, and reliable. A core part of your role will be driving AI adoption within the engineering team to improve development speed, product quality, and customer experience.
Responsibilities:
Define and evolve the technical architecture for front-end and back-end systems.
Guide key platform choices: frameworks, infrastructure, data architecture, and deployment.
Lead and enforce code quality practices, including CI/CD, peer reviews, and definitions of done.
Conduct and oversee code reviews to maintain best practices.
Mentor new developers and onboard them efficiently.
Set and maintain coding standards, branching strategies, and deployment pipelines.
Drive the use of AI tools in development workflows (e.g., Copilot, test automation, PR summarization).
Collaborate with the Product Owner on translating requirements into scalable technical solutions.
Lead response efforts for platform outages and postmortem improvements.
Continuously reduce technical debt and ensure long-term maintainability.
Technologies You'll Work With:
Back End: Python, FastAPI
Front End: React, Next.js
Data: MySQL, NoSQL
Infrastructure: Kubernetes (K8s), containerization, CI/CD pipelines
What We're Looking For:
8+ years of experience in software development, with at least 3 years in a technical leadership or architect role.
Strong experience with our stack: FastAPI, React, Next.js, MySQL, Kubernetes.
Demonstrated ability to lead technical projects, mentor developers, and review code.
Experience introducing AI into engineering or product workflows is a major plus.
Excellent communication and documentation skills.
Highly collaborative, organized, and proactive.
Posted 2 weeks ago