8.0 - 13.0 years
20 - 35 Lacs
Navi Mumbai
Hybrid
We're Hiring | AI Data Analyst (8+ Years, Tech Lead Level) Location: Mahape, Navi Mumbai (Hybrid Work from Office) Joining: Immediate Joiners Preferred Role Overview We are looking for an experienced AI Data Analyst with 8+ years of professional experience, including leadership in tech projects. The ideal candidate should have strong expertise in Python, Machine Learning, AI APIs, and Large Language Models (LLMs). You'll work on cutting-edge AI solutions, including vector-based search and data-driven business insights. Experience Must Include 2+ years of hands-on experience as a Data Analyst 1+ year of practical experience with AI systems (LLMs, AI APIs, or vector-based search) 2+ years of experience working with Machine Learning models/solutions 5+ years of hands-on Python programming Exposure to vector databases (e.g., pgvector, ChromaDB) is a plus Key Responsibilities Perform data exploration, profiling, and cleaning of large datasets Design, build, and evaluate ML and AI models to solve real-world problems Leverage LLM APIs, foundation models, and vector databases to deliver AI-driven insights Build end-to-end ML pipelines from preprocessing to deployment Create dashboards and visualizations for reporting and analysis Analyze model outputs and provide actionable insights to stakeholders Collaborate with engineering, product, and business teams to scale AI solutions Required Technical Skills Data Analysis Experience with real-world datasets, EDA, and data wrangling Visualization using Pandas, Seaborn, Plotly Machine Learning & AI Practical experience in classification, regression, clustering techniques Experience with Generative AI, LLMs, OpenAI/Hugging Face APIs, vector search Knowledge of RAG architecture and modern AI design patterns Skilled in model evaluation and hyperparameter tuning Programming & Tools Proficiency in Python (5+ years), with strong use of scikit-learn, NumPy, Pandas Expertise in SQL/PostgreSQL Hands-on experience with vector databases like pgvector, ChromaDB Familiarity with LLMs, embeddings, and foundation models Exposure to cloud platforms (AWS/SageMaker), Git, REST APIs, and Linux Nice to Have Experience with Scrapy, SpaCy, or OpenCV Knowledge of MLOps, model deployment, CI/CD pipelines Familiarity with PyTorch or TensorFlow Soft Skills Strong analytical thinking and problem-solving abilities Excellent verbal and written communication Self-motivated, proactive, and a great team player To Apply: Send your resume to [Insert Email] or apply via kajal.uklekar@arrkgroup.com Let's shape the future of AI together! Thanks & Regards, Kajal Uklekar Arrk Group Senior Manager-Talent Acquisition www.arrkgroup.com
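The vector-database exposure this listing asks for (pgvector, ChromaDB) comes down to storing document embeddings and querying them by similarity. A minimal, hedged sketch with the ChromaDB Python client; the collection name and documents are illustrative, not taken from the posting:

```python
import chromadb

client = chromadb.Client()  # in-memory instance; chromadb.PersistentClient(path=...) persists to disk
collection = client.create_collection(name="listing_docs")

# Documents and ids are made up for illustration
collection.add(
    ids=["d1", "d2", "d3"],
    documents=[
        "Quarterly sales dipped in the western region.",
        "Customer churn is concentrated among trial users.",
        "Support tickets spike after each mobile release.",
    ],
)

# Chroma embeds the query with its default embedding function and returns the nearest documents
results = collection.query(query_texts=["why are users leaving?"], n_results=2)
print(results["documents"][0])
```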
Posted 3 weeks ago
1.0 years
6 - 8 Lacs
India
On-site
Job Title: Full Stack Developer – AI Integration & Prompt Engineering Employment type: Full Time About the Role: We’re looking for a talented Full Stack Developer with strong JavaScript skills and hands-on experience integrating AI models like ChatGPT into web applications. If you’re passionate about AI, prompt engineering, and building intelligent, responsive web apps, this is the perfect opportunity. Key Responsibilities Build and maintain web applications using JavaScript frameworks (React.js, Node.js, Vue.js). Integrate AI/ML models and APIs (e.g., OpenAI) into scalable web applications. Design, test, and refine effective prompts to drive accurate and meaningful AI responses. Create smooth and responsive AI-driven user experiences, including chat interfaces. Optimize application performance, ensuring high responsiveness and scalability. Experiment with different prompting strategies to improve AI performance and user interaction. Collaborate closely with AI engineers, UI/UX designers, and backend developers to deliver cohesive features. Stay updated on the latest trends in AI/ML, NLP, and JavaScript frameworks. What We’re Looking For Strong experience with JavaScript, React, Redux, and TypeScript Proven ability to integrate AI APIs (e.g., OpenAI, GPT models) Hands-on experience in prompt engineering and AI model tuning Good understanding of Vector Search and AI application workflows Excellent English communication skills, both verbal and written Familiarity with frontend and backend web development best practices Why Join Us? Be at the forefront of AI innovation in real-world applications Work on exciting projects that merge web technology and artificial intelligence Collaborate with a passionate, forward-thinking team Job Type: Full-time Pay: ₹600,000.00 - ₹800,000.00 per year Ability to commute/relocate: Peelamedu, Coimbatore, Tamil Nadu: Reliably commute or planning to relocate before starting work (Preferred) Experience: AI: 1 year (Required) Work Location: In person
Posted 3 weeks ago
0 years
0 Lacs
India
Remote
Job Listing Detail Summary Gainwell is seeking LLM Ops Engineers and ML Ops Engineers to join our growing AI/ML team. This role is responsible for developing, deploying, and maintaining scalable infrastructure and pipelines for Machine Learning (ML) models and Large Language Models (LLMs). You will play a critical role in ensuring smooth model lifecycle management, performance monitoring, version control, and compliance while collaborating closely with Data Scientists and DevOps teams. Your role in our mission Core LLM Ops Responsibilities: Develop and manage scalable deployment strategies specifically tailored for LLMs (GPT, Llama, Claude, etc.). Optimize LLM inference performance, including model parallelization, quantization, pruning, and fine-tuning pipelines. Integrate prompt management, version control, and retrieval-augmented generation (RAG) pipelines. Manage vector databases, embedding stores, and document stores used in conjunction with LLMs. Monitor hallucination rates, token usage, and overall cost optimization for LLM APIs or on-prem deployments. Continuously monitor model performance and ensure alerting systems are in place. Ensure compliance with ethical AI practices, privacy regulations, and responsible AI guidelines in LLM workflows. Core ML Ops Responsibilities: Design, build, and maintain robust CI/CD pipelines for ML model training, validation, deployment, and monitoring. Implement version control, model registry, and reproducibility strategies for ML models. Automate data ingestion, feature engineering, and model retraining workflows. Monitor model performance and drift, and ensure proper alerting systems are in place. Implement security, compliance, and governance protocols for model deployment. Collaborate with Data Scientists to streamline model development and experimentation. What we're looking for Bachelor's/Master’s degree in Computer Science, Engineering, or related fields. Strong experience with ML Ops tools (Kubeflow, MLflow, TFX, SageMaker, etc.). Experience with LLM-specific tools and frameworks (LangChain, LangGraph, LlamaIndex, Hugging Face, OpenAI APIs, vector DBs like Pinecone, FAISS, Weaviate, ChromaDB, etc.). Solid experience in deploying models in cloud (AWS, Azure, GCP) and on-prem environments. Proficient in containerization (Docker, Kubernetes) and CI/CD practices. Familiarity with monitoring tools like Prometheus, Grafana, and ML observability platforms. Strong coding skills in Python, Bash, and familiarity with infrastructure-as-code tools (Terraform, Helm, etc.). Knowledge of healthcare AI applications and regulatory compliance (HIPAA, CMS) is a plus. Strong skills in Giskard, DeepEval, etc. What you should expect in this role Fully Remote Opportunity – Work from anywhere in India Minimal Travel Required – Occasional travel opportunities (0-10%). Opportunity to Work on Cutting-Edge AI Solutions in a mission-driven healthcare technology environment.
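Much of the ML Ops half of this role centers on experiment tracking, metrics, and a model registry. A minimal sketch of that workflow with MLflow and scikit-learn; the experiment and model names are placeholders, and registering a model assumes a registry-capable tracking backend:

```python
import mlflow
import mlflow.sklearn
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

mlflow.set_experiment("demo-iris")  # experiment name is a placeholder

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

params = {"n_estimators": 100, "max_depth": 4}
with mlflow.start_run():
    model = RandomForestClassifier(**params).fit(X_train, y_train)
    acc = accuracy_score(y_test, model.predict(X_test))

    mlflow.log_params(params)
    mlflow.log_metric("accuracy", acc)
    # Registration assumes the tracking backend supports a model registry
    mlflow.sklearn.log_model(model, artifact_path="model", registered_model_name="demo-iris-rf")
```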
Posted 3 weeks ago
8.0 years
0 Lacs
Vijayawada, Andhra Pradesh, India
Remote
Position: Backend Engineer (Python) Experience: 5–8 Years Contract Duration: 3 Months Location: Remote Employment Type: Contract Start Date: Immediate/ASAP Job Summary: We are seeking a highly skilled Backend Engineer (Python) for a 3-month remote contract. You will be responsible for developing robust and scalable APIs, ensuring test-driven development (TDD) practices, and building backend systems that can scale efficiently. Prior experience with Large Language Model (LLM) integration is a strong plus. Key Responsibilities: Design, develop, and maintain RESTful APIs and backend services using Python Follow and advocate Test-Driven Development (TDD) practices to ensure code quality and reliability Build scalable and resilient backend systems that can handle high traffic and complex data flows Integrate external services and systems through APIs Collaborate closely with frontend engineers, product managers, and DevOps Write clean, maintainable, and well-documented code Optimize performance and troubleshoot production issues Conduct code reviews and mentor junior engineers when needed Required Skills: 5–8 years of professional backend development experience Strong proficiency in Python and frameworks such as FastAPI, Django, or Flask Experience with API design, RESTful principles, and OAuth/JWT Solid understanding of Test-Driven Development (TDD) using tools like pytest or unittest Experience building and maintaining scalable, distributed systems Proficiency in working with SQL and NoSQL databases (e.g., PostgreSQL, MongoDB) Familiarity with Docker, Git, and CI/CD pipelines Good understanding of software architecture and system design Preferred / Bonus Skills: Experience integrating or working with LLMs (Large Language Models) such as OpenAI, HuggingFace, etc. Experience with message queues (e.g., RabbitMQ, Kafka) Familiarity with cloud services (AWS/GCP/Azure) Knowledge of asynchronous programming (e.g., asyncio, aiohttp) What We Offer: Competitive contract compensation Flexible remote work setup Opportunity to work on innovative backend systems Exposure to cutting-edge AI/LLM use cases
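The posting pairs REST API development with test-driven development. A small sketch of that combination using FastAPI and pytest's TestClient; the endpoint and tests are illustrative, not part of any actual codebase:

```python
# app.py plus a pytest-style test in one file for brevity; run with `pytest`
from fastapi import FastAPI
from fastapi.testclient import TestClient
from pydantic import BaseModel

app = FastAPI()

class Order(BaseModel):
    sku: str
    quantity: int

@app.post("/orders", status_code=201)
def create_order(order: Order) -> dict:
    # A real service would persist the order; here we just echo it back
    return {"sku": order.sku, "quantity": order.quantity, "status": "accepted"}

client = TestClient(app)

def test_create_order_returns_201_and_echoes_payload():
    resp = client.post("/orders", json={"sku": "ABC-1", "quantity": 2})
    assert resp.status_code == 201
    assert resp.json()["status"] == "accepted"

def test_rejects_invalid_payload():
    resp = client.post("/orders", json={"sku": "ABC-1"})  # missing quantity
    assert resp.status_code == 422  # FastAPI's validation error status
```

In a TDD workflow the two tests would be written first and fail, and the endpoint added afterwards to make them pass.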
Posted 3 weeks ago
0 years
0 Lacs
Andhra Pradesh, India
On-site
Responsible for building, deploying, and maintaining AI/ML models and pipelines, with a focus on technical integrations and MLOps. Education: Bachelor's in Computer Science, AI, or related field Skills: Python, Azure ML, Databricks, TensorFlow, Kubernetes, Docker, Prompt Engineering Integrations/GenAI: Generative LLMs (e.g., Azure OpenAI APIs) Collaboration Tools: Azure DevOps, Confluence
Posted 3 weeks ago
12.0 years
0 Lacs
Gandhinagar, Gujarat, India
On-site
About ViewTrade: ViewTrade is the force that powers fintech and cross-border investing for financial services firms throughout the world. We provide the technology, support and brokerage services that business innovators need to quickly launch or enhance a retail, fintech, wealth, or cross-border investing experience. Now in our third decade, our approach has helped 300+ firms – from technology startups to large banks, brokers, super apps, advisory, and wealth management – create the differentiating investment experiences their customers demand. With clients in over 29 countries and a team that brings decades of experience and understanding of financial services technology and services, we help our business clients deliver the investment access and financial solutions they require. Our Values: Expertise, Integrity, Solution Driven, Teamwork, Long-Term Success, Always Winning, Always Learning. Job Summary: Seeking an experienced Cloud Solutions Architect to design and oversee robust, scalable, secure, and cost-effective multi-cloud and hybrid (on-prem) infrastructure solutions. This role requires deep expertise in AI, particularly Generative AI workloads, and involves translating business needs into technical designs, providing technical leadership, and ensuring best practices across diverse environments. Key Responsibilities: Design and architect complex solutions across multi-cloud and hybrid (on-prem) environments (preferably AWS). Translate business/technical requirements into clear architectural designs and documentation. Develop cloud and hybrid adoption strategies and roadmaps. Architect infrastructure and data pipelines for AI/ML and Generative AI workloads (compute, storage, MLOps). Design on-premise to cloud/hybrid migration plans. Evaluate and recommend cloud services and technologies across multiple providers. Define and enforce architectural standards (security, cost, reliability). Provide technical guidance and collaborate with engineering, ops, and security teams. Architect third-party integrations and collaborate on cybersecurity initiatives. Required Qualifications: Around 12+ years of IT experience, with 5+ years in a Cloud/Solutions Architect role. Proven experience architecting/implementing solutions on at least two major public cloud platforms (e.g., AWS, Azure, GCP). AWS preferred. Strong hybrid (on-prem) and migration experience. Demonstrated experience architecting infrastructure for AI/ML/GenAI workloads (compute, data, deployment patterns). Deep understanding of cloud networking, security, and cost optimization. Proficiency in IaC (Terraform/CloudFormation/ARM), containers (Docker/Kubernetes), serverless. Familiarity with FinOps concepts. Excellent communication and problem-solving skills. Preferred Qualifications: Experience with cloud cost management and optimization strategies (FinOps) in a multi-cloud environment. Experience with specific GenAI & MLOps platforms and tools (OpenAI, Google AI, Hugging Face, GitHub Copilot, AWS SageMaker, AWS Bedrock, MLflow, Kubeflow, Feast, ZenML). Good understanding of on-premise data center architecture, infrastructure (compute, storage, networking), virtualization, and experience designing and implementing hybrid cloud solutions and on-premise to cloud migration strategies. Experience in the Fintech industry. What does ViewTrade bring to the table: Opportunity to do what your current firm may be hesitant to do. An informal and self-managed work culture. Freedom to experiment with new ideas and technologies.
A highly motivating work environment where you learn exponentially and grow with the organization. An opportunity to create an impact at scale. Location: GIFT CITY, Gandhinagar Experience: 12+ years We are an equal opportunity employer and all qualified applicants will receive consideration for employment without regard to race, color, religion, sex, disability status, or any other characteristic protected by the law.
Posted 3 weeks ago
5.0 years
0 Lacs
Bengaluru East, Karnataka, India
On-site
Organization: At CommBank, we never lose sight of the role we play in other people’s financial wellbeing. Our focus is to help people and businesses move forward to progress. To make the right financial decisions and achieve their dreams, targets, and aspirations. Regardless of where you work within our organisation, your initiative, talent, ideas, and energy all contribute to the impact that we can make with our work. Together we can achieve great things. Job Title: Data Scientist Location: Bangalore Business & Team: Analytics BB Impact & contribution: As a Data Scientist, you will have the opportunity to apply your quantitative and computational skills within an applied and production-oriented R&D function within the group, focusing on cutting-edge deep learning and generative AI techniques. Your role will have a significant impact on our innovation capabilities and business processes by leveraging these advanced technologies. Roles & Responsibilities: Model Development: Design and implement deep learning models, focusing on Generative AI applications like text generation, image synthesis, personalized recommendations, or autonomous decision-making. Fine-tune and adapt pre-trained models (e.g., GPT, DALL-E, Stable Diffusion) for specific tasks. Develop foundational components of multi-agent systems where agents use AI to collaborate or solve problems. Multi-Agent Integration: Build and test individual AI agents and integrate them into a multi-agent framework using libraries such as Ray, OpenAI API, or custom architectures. Design communication protocols between agents and their environment. End-to-End Deployment: Contribute to the deployment of at least one Generative AI model or a multi-agent application in production, ensuring scalability and performance. Collaboration and Research: Work closely with cross-functional teams to integrate Generative AI models into multi-agent solutions. Stay updated with advancements in Generative AI and multi-agent systems and experiment with cutting-edge technologies. Documentation: Maintain detailed documentation of experiments, models, and processes for reproducibility and team collaboration. Essential Skills: 5+ years of experience Proficiency with PyTorch, TensorFlow, or similar frameworks. Experience with LLM fine-tuning, prompt engineering, and model optimization. Familiarity with multi-agent frameworks like Ray, LangChain, or custom architectures. Working knowledge of distributed systems and cloud platforms (AWS, GCP, Azure). Education Qualifications: Bachelor’s degree in Engineering (Computer Science/Information Technology). If you're already part of the Commonwealth Bank Group (including Bankwest, x15ventures), you'll need to apply through Sidekick to submit a valid application. We’re keen to support you with the next step in your career. We're aware of some accessibility issues on this site, particularly for screen reader users. We want to make finding your dream job as easy as possible, so if you require additional support please contact HR Direct on 1800 989 696. Advertising End Date: 18/06/2025
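Fine-tuning and adapting pretrained generative models, as this role describes, usually starts from the Hugging Face pipeline and model abstractions. A tiny sketch of loading a pretrained model for text generation; the model choice (distilgpt2) and prompt are illustrative stand-ins, not the employer's actual stack:

```python
from transformers import pipeline

# distilgpt2 is a small open checkpoint used purely for illustration
generator = pipeline("text-generation", model="distilgpt2")

prompt = "A helpful product recommendation should"
outputs = generator(prompt, max_new_tokens=40, do_sample=True, num_return_sequences=1)
print(outputs[0]["generated_text"])
```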
Posted 3 weeks ago
3.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Who You'll Work With You are someone who thrives in a high-performance environment, bringing a growth mindset and entrepreneurial spirit to tackle meaningful challenges that have a real impact. In return for your drive, determination, and curiosity, we’ll provide the resources, mentorship, and opportunities to help you quickly broaden your expertise, grow into a well-rounded professional, and contribute to work that truly makes a difference. When you join us, you will have Continuous learning Our learning and apprenticeship culture, backed by structured programs, is all about helping you grow while creating an environment where feedback is clear, actionable, and focused on your development. The real magic happens when you take the input from others to heart and embrace the fast-paced learning experience, owning your journey. A voice that matters From day one, we value your ideas and contributions. You’ll make a tangible impact by offering innovative ideas and practical solutions. We not only encourage diverse perspectives, but they are critical in driving us toward the best possible outcomes. Global community With colleagues across 65+ countries and over 100 different nationalities, our firm’s diversity fuels creativity and helps us come up with the best solutions. Plus, you’ll have the opportunity to learn from exceptional colleagues with diverse backgrounds and experiences. Exceptional benefits In addition to a competitive salary (based on your location, experience, and skills), we offer a comprehensive benefits package, including medical, dental, mental health, and vision coverage for you, your spouse/partner, and children. Your Impact You will engage with our world-class consulting, engineering and program teams to lead financial reporting based on ERP. In this role, you will operate cross-functionally in firm economics initiatives and translate client business needs to a reporting solution using BI tools and advanced analytics (AI/ML) concepts. You will be involved in innovative generative AI (gen AI) development leveraging SAP Business Technology Platform (BTP) AI services along with basic HANA Cloud skills. You will be responsible for designing, developing, and deploying gen AI-powered solutions that enhance enterprise processes and decision-making. This role will involve working with SAP AI Core/AI Launchpad, integrating Large Language Models (LLMs), and embedding gen AI capabilities into SAP applications and data flows using tools like HANA Cloud Vector Engine. You will focus on business drivers and system functionality to influence senior firm and client stakeholders while providing architecture and delivery of the full reporting and business system vision. You will work side-by-side with strategic consultants, leveraging, and learning best practices in client service, issue synthesis and problem resolution. The scope of your work will include advanced analytics transactional core, analytics hubs, and reporting and visualization layers. Additionally, you will be a core member of the McKinsey tech team with responsibilities that range from shaping and implementing strategic products and solutions to ensuring that McKinsey’s craft stays on the leading edge of technology and industry best practices. You will be based in Gurgaon as part of McKinsey Global Capabilities & Services Pvt Limited (MGCS). As part of this group, you’ll join a global team working on everything from IT modernization and strategy to agile, cloud, cybersecurity, and digital transformation. 
You’ll typically work with the Finance function and will be fully integrated with the rest of our global firm. You’ll also work with colleagues from across McKinsey & Company to help our partner & finance communities understand their studies’ financials and provide insights. Our office culture is casual and social, with an emphasis on education and innovation. We have the freedom to try new ideas, experiment and are expected to be constantly learning and growing. There is also a strong emphasis on mentoring others in the group, enabling them to grow and learn. Your Qualifications and Skills University degree in computer science, engineering or equivalent area. 3+ years of experience in design and implementation of conversational agents, co-pilot-style UIs, and AI-based assistants for enterprise use cases. At least one end-to-end implementation experience in developing and deploying gen AI solutions using SAP BTP, SAP AI Core, and SAP AI Launchpad. Utilize SAP HANA Cloud Vector Engine to store, retrieve, and query embeddings for retrieval-augmented generation (RAG) use cases, along with knowledge of knowledge graphs. Fine-tune and/or integrate Large Language Models (e.g., OpenAI, Falcon, LLaMA) to solve business problems. Good knowledge of SQL. SAP Financial reporting knowledge is a plus. Proven track record of applying expertise to deliver business impact and influencing executives to set strategic directions. Strong analytical and problem-solving skills paired with the ability to develop creative and efficient solutions. Ability to collaborate effectively at all levels of an organization and to work both independently and in various team settings. Ability to work under pressure and manage client expectations effectively.
Posted 3 weeks ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Responsibilities: • Design and develop GenAI-based solutions using LLMs (e.g., Bedrock, OpenAI, Claude) for text, image, table, diagram, and multi-modal applications. • Implement a multi-agent system that integrates structured and unstructured data sources, including knowledge graphs, embeddings, and vector databases. • Build and deploy agentic AI workflows capable of autonomous task completion, using frameworks like LangChain, LangGraph, or CrewAI. • Perform fine-tuning, retraining, or adaptation of open-source or proprietary LLMs for specific domain tasks. • Collaborate with data scientists and domain experts to curate and preprocess training datasets. • Integrate models with scalable backend APIs or pipelines (REST, FastAPI, gRPC) for real-world applications. • Stay updated with state-of-the-art research and actively contribute to enhancing model performance and interpretability. • Optimize inference, model serving, and memory management for deployment at scale. Qualifications: • Bachelor’s or Master’s degree in Computer Science, Engineering, AI/ML, Data Science, or related field. • 5+ years of hands-on experience in Deep Learning, NLP, and LLMs. • Proven experience with at least one end-to-end project involving multi-modal RAG and Agentic AI. • Proficient in Python and ML/DL libraries such as PyTorch, TensorFlow, Transformers (HuggingFace), LangChain, LangGraph, Bedrock, or similar. • Experience in fine-tuning or adapting LLMs (using LoRA, QLoRA, PEFT, or full fine-tuning). • Experience in building a multi-agent system. • Strong understanding of knowledge graphs, embeddings, vector databases (e.g., FAISS, Chroma, Weaviate), and prompt engineering. • Strong understanding and experience of a cloud platform like AWS. • Familiarity with containerization (Docker, Kubernetes) Preferred Skills • Experience in the Biopharma industry. • Design and implement user-friendly interfaces for AI applications. • Utilize modern web frameworks (e.g., React, Vue.js) to create engaging user experiences. • Develop scalable and efficient backend systems to support the deployment of AI models. • Integrate with cloud platforms (AWS) for infrastructure management. • Hands-on experience in vision-language models (e.g., CLIP, BLIP, LLaVA). • Publications, Kaggle competitions, or GitHub projects in GenAI
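Several requirements here (embeddings, vector databases such as FAISS, multi-modal RAG) share one core retrieval step: embed documents, index them, and search by similarity. A hedged sketch of that step with sentence-transformers and FAISS; the model choice and documents are illustrative placeholders:

```python
import faiss
from sentence_transformers import SentenceTransformer

# Hypothetical document snippets standing in for a real corpus
docs = [
    "Phase 2 trial enrollment criteria and site list.",
    "Manufacturing deviation report for batch 42.",
    "Summary of adverse events by study site.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")  # model choice is illustrative
emb = model.encode(docs, normalize_embeddings=True).astype("float32")

index = faiss.IndexFlatIP(emb.shape[1])  # inner product equals cosine similarity on normalized vectors
index.add(emb)

query = model.encode(["Which sites reported adverse events?"], normalize_embeddings=True).astype("float32")
scores, ids = index.search(query, k=2)
for score, i in zip(scores[0], ids[0]):
    print(f"{score:.3f}  {docs[i]}")
```

In a full RAG pipeline the retrieved snippets would then be passed to an LLM as context for answer generation.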
Posted 3 weeks ago
0.0 - 2.0 years
0 Lacs
Mohali, Punjab
On-site
Job Summary: We are looking for a highly motivated and technically versatile professional to lead, build, and train teams on No-Code/Low-Code AI automation solutions. The ideal candidate will spearhead automation projects, implement scalable AI-driven workflows, and deliver comprehensive training to empower teams and clients. Key Responsibilities: Automation Development & Implementation Design and build end-to-end automation solutions using no-code/low-code platforms (e.g., Power Automate, Zapier, Make, Airtable, Glide, AppSheet). Integrate AI tools (e.g., OpenAI API, ChatGPT, Microsoft Copilot, Google Vertex AI) with automation workflows. Automate business processes including CRM, HR, finance, marketing, and customer support. Ensure solutions are scalable, secure, and compliant with industry standards. Technical Leadership & Project Management Lead automation projects from concept to deployment. Collaborate with cross-functional teams to understand business needs and convert them into automation solutions. Define best practices, governance models, and documentation standards for no-code/AI projects. Evaluate and implement new tools in the no-code/AI ecosystem. Training & Enablement Develop training programs, learning materials, and documentation for internal teams or external clients. Conduct live sessions, hands-on workshops, and onboarding sessions on no-code platforms and AI integrations. Mentor junior developers and business users on no-code thinking and automation strategies. Stay up-to-date with platform updates and AI trends, sharing insights with the team. Required Skills and Qualifications: Bachelor's degree in Computer Science, Information Systems, or a related field (or equivalent experience). 2-3 years of experience with automation tools such as Power Automate, Zapier, Integromat, Airtable, Notion, etc. Proficiency in integrating APIs and AI tools like ChatGPT, GPT-4, LangChain, Hugging Face models, etc. Strong problem-solving and process mapping skills. Experience in delivering technical training or workshops. Excellent communication and stakeholder management skills. Working Location: Mohali (Punjab) Job Type: Full-time Pay: Up to ₹50,000.00 per month Benefits: Health insurance Schedule: Day shift Monday to Friday Morning shift Experience: AI Automation: 2 years (Preferred) Work Location: In person
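Behind most of the AI integrations this role describes (Zapier, Make, or Power Automate steps that call ChatGPT) sits a single chat-completion API call. A minimal Python sketch of that call with the OpenAI SDK; the model name and the ticket-summarization use case are assumptions for illustration:

```python
from openai import OpenAI  # openai>=1.0 Python SDK; reads OPENAI_API_KEY from the environment

client = OpenAI()

def summarize_ticket(ticket_text: str) -> str:
    """Summarize a support ticket before a workflow routes it (use case is illustrative)."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # model name is an assumption; substitute whatever your account offers
        messages=[
            {"role": "system", "content": "Summarize the ticket in one sentence and label its urgency."},
            {"role": "user", "content": ticket_text},
        ],
    )
    return response.choices[0].message.content

print(summarize_ticket("Payment page times out for all EU customers since this morning."))
```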
Posted 3 weeks ago
3.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
🔍 We're Hiring: AI Engineer / GenAI Engineer (3+ years) 📍 Location: Hyderabad (Full-time, Hybrid) 🏢 Join Our Mission to Build the Future with GenAI Are you passionate about Generative AI, LLMs, and pushing the boundaries of Agentic AI? Do you thrive at the intersection of innovation and impact? We’re looking for a skilled AI Engineer to join our dynamic team developing multi-modal, intelligent, and autonomous AI systems that transform the way industries operate. 🚀 What You'll Do: Design and develop cutting-edge GenAI solutions using LLMs (OpenAI, Claude, Bedrock) across text, image, and tabular data. Build multi-agent AI workflows using LangChain, LangGraph, CrewAI, integrated with knowledge graphs and vector databases. Fine-tune or adapt open-source/proprietary LLMs (LoRA, QLoRA, PEFT) for domain-specific tasks. Integrate scalable APIs and pipelines using FastAPI, REST, or gRPC. Collaborate with data scientists to curate datasets and optimize real-world performance. Drive innovation by staying on top of the latest research and development in GenAI, VLMs, and RAG. ✅ What We're Looking For: 5+ years of hands-on experience in Deep Learning, NLP, and LLMs. Experience in building multi-modal RAG systems and Agentic AI solutions. Proficiency in Python, PyTorch, HuggingFace, LangChain, Bedrock, etc. Strong knowledge of embeddings, vector databases (FAISS, Chroma, Weaviate), and prompt engineering. Experience with cloud platforms (AWS) and containerization tools (Docker/Kubernetes). ⭐ Bonus Skills: Experience in the Biopharma domain Hands-on with Vision-Language Models (CLIP, BLIP, LLaVA) Strong frontend (React/Vue.js) and backend (FastAPI, Flask) experience GitHub projects, publications, or Kaggle contributions in GenAI
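For the fine-tuning requirement (LoRA/QLoRA/PEFT), the core pattern is wrapping a pretrained causal LM with a low-rank adapter config so only a small fraction of weights are trained. A hedged sketch with Hugging Face transformers and peft; the base model id and target module names are assumptions that depend on the architecture actually used:

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base_id = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"  # base model id is an assumption
model = AutoModelForCausalLM.from_pretrained(base_id)

lora_cfg = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # module names depend on the base architecture
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()  # typically a fraction of one percent of the base weights
```

The wrapped model can then be passed to a standard transformers training loop, with only the adapter weights saved afterwards.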
Posted 3 weeks ago
0 years
0 Lacs
Gurugram, Haryana, India
On-site
Job Title: Full Stack GenAI Lead Location: Gurgaon Job Summary: We are seeking a Full Stack Lead Developer with strong expertise in Python, Angular, and Java, along with hands-on experience or exposure to Agentic AI frameworks or Generative AI (GenAI) technologies. The ideal candidate will be responsible for designing and developing end-to-end solutions, leading development teams, and integrating GenAI capabilities into scalable applications. Knowledge of Agile/Scrum methodologies, SDLC, and release/change management is expected. Key Responsibilities: Work closely with cross-functional teams to gather and refine requirements, and align technical solutions with business goals. Sound knowledge of Angular and Python, AWS Bedrock, Lambda, and Step Functions. Implement and manage CI/CD pipelines optimized for GenAI model deployment. Integrate GenAI models (LLMs, etc.) into existing and new software systems. Ensure data pipelines for GenAI model training and inference are efficient and reliable. Troubleshoot and resolve issues across the full stack and GenAI integrations. Stay current with GenAI advancements and full-stack technologies. Proficiency in multiple programming languages (e.g., Python, JavaScript, Java). Strong experience with front-end frameworks (e.g., React, Angular, Vue.js). Solid back-end development skills (e.g., Node.js, Python frameworks like Flask/Django, Spring). Experience with databases (SQL, NoSQL). Familiarity with cloud platforms (AWS). Understanding of SDLC principles and agile methodologies. Hands-on experience integrating AI/ML models into applications. Understanding of Generative AI concepts (LLMs, transformers, etc.). Work with Agentic AI frameworks or other GenAI models/APIs (e.g., OpenAI, Hugging Face, Google Vertex AI). Excellent problem-solving and analytical skills. Strong communication and leadership abilities.
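The AWS Bedrock integration this role calls out usually reduces to invoking a hosted model through the bedrock-runtime client. A hedged sketch using boto3's Converse API; the region, model id, and prompt are assumptions, and the call requires AWS credentials plus model access to be enabled for the account:

```python
import boto3

# Requires AWS credentials and Bedrock model access enabled in the chosen region
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")  # region is an assumption

response = bedrock.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # model id is an assumption
    messages=[{"role": "user", "content": [{"text": "Draft a one-line release note for a login bug fix."}]}],
    inferenceConfig={"maxTokens": 200, "temperature": 0.2},
)
print(response["output"]["message"]["content"][0]["text"])
```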
Posted 3 weeks ago
0 years
0 Lacs
Gurugram, Haryana, India
On-site
🚀 AI Engineering Intern (SDE) – Founding Tech Interns | Opportunity of a Lifetime Location: Gurgaon (In-Office) Duration: 3–6 months (Flexible based on academic schedule) Start Date: Immediate openings Open to: Tier 1 college students graduating in 2025 only Compensation: Stipend + Pre-Placement Offer potential + Founders’ recommendation for global fellowships (Google, Meta, etc.) 🧠 About Us – Darwix AI Darwix AI is on a mission to solve a problem no one's cracked yet — building real-time, multilingual conversational intelligence for omnichannel enterprise sales teams using the power of Generative AI. We're building India’s answer to Gong + Refract + Harvey AI — trained on 1M+ hours of sales conversations, and packed with industry-first features like live agent coaching, speech-to-text in 11 Indic languages, and autonomous sales enablement nudges. We’ve got global clients, insane velocity, and a team of ex-operators from IIMs, IITs, and top-tier AI labs. 🌌 Why This Internship is Unlike Anything Else 💡 Work on a once-in-a-decade problem — pushing the boundaries of GenAI + Speech + Edge compute. 🛠️ Ship real products used by enterprise teams across India & the Middle East. 🧪 Experiment freely — train models, optimize pipelines, fine-tune LLMs, or build scrapers that work in 5 languages. 🚀 Move fast, learn faster — direct mentorship from the founding engineering and AI team. 🏆 Proof-of-excellence opportunity — stand out in every future job, B-school, or YC application. 💻 What You'll Do Build and optimize core components of our real-time agent assist engine (Python + FastAPI + Kafka + Redis). Train, evaluate, and integrate whisper, wav2vec, or custom STT models on diverse datasets. Work on LLM/RAG pipelines, prompt engineering, or vector DB integrations. Develop internal tools to analyze, visualize, and scale insights from conversations across languages. Optimize for latency, reliability, and multilingual accuracy in dynamic customer environments. 🌟 Who You Are Pursuing a B.Tech/B.E. or dual degree from IITs, IIITs, BITS, NIT Trichy/Warangal/Surathkal, or other top-tier institutes. Comfortable with Python, REST APIs, and database operations. Bonus: familiarity with FastAPI, Langchain, or HuggingFace. Passionate about AI/ML, especially NLP, GenAI, ASR, or multimodal systems. Always curious, always shipping, always pushing yourself beyond the brief. Looking for an internship that actually matters — not one where you're just fixing CSS. 🌐 Tech You’ll Touch Python, FastAPI, Kafka, Redis, MongoDB, Postgres Whisper, Deepgram, Wav2Vec, HuggingFace Transformers OpenAI, Anthropic, Gemini APIs LangChain, FAISS, Pinecone, LlamaIndex Docker, GitHub Actions, Linux environments 🎯 What’s in it for you A pre-placement offer for the best performers. A chance to be a founding engineer post-graduation. Exposure to the VC ecosystem, client demos, and GTM strategies. Stipend + access to tools/courses/compute resources you need to thrive. 🚀 Ready to Build the Future? If you’re one of those rare folks who can combine deep tech with deep curiosity, this is your call to adventure. Join us in building something that’s never been done before. Apply now at careers@cur8.in Attach your CV + GitHub/Portfolio + a line on why this excites you. Bonus points if you share a project you’ve built or an AI problem you’re obsessed with. Darwix AI | GenAI for Revenue Teams | Built from India for the World
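The speech-to-text work mentioned here (Whisper, Wav2Vec) can be sketched in a few lines with the open-source openai-whisper package; the model size, audio file name, and language hint are placeholders, and a real-time pipeline would stream audio rather than transcribe a finished file:

```python
import whisper  # pip install openai-whisper; ffmpeg must be available on the system

# "base" is a small multilingual checkpoint; larger sizes trade speed for accuracy
model = whisper.load_model("base")

# File name and language hint are placeholders for illustration
result = model.transcribe("sales_call.mp3", language="hi")
print(result["text"])
```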
Posted 3 weeks ago
0 years
0 Lacs
Hyderabad, Telangana, India
Remote
About Us We’re a stealth-mode startup building India’s smartest fashion search and discovery engine. Today, online fashion search is broken — dominated by SEO tricks, ad-first results, and fragmented marketplaces that bury the best options, especially from indie brands. We’re changing that. Our vision is simple but bold: enable anyone to find exactly what they’re looking for — whether it’s “that yellow dress Lily Collins wore in Emily in Paris” or a vibe from a Pinterest board — using cutting-edge AI, LLMs, and visual search. Think “Google for fashion,” but actually intuitive and personalized. Backed by investors and led by a team from Georgia Tech, we’re building a deep-tech stack that indexes every major Indian fashion retailer and brand. Our tools go beyond literal search — they understand inspiration, moodboards, and context. Whether you’re a consumer looking for affordable originality or a brand looking to integrate smarter discovery, we’re reimagining how fashion is found in India. Role: AI Engineer Intern (LLM + Backend) Location: Remote Stipend: ₹10,000–₹20,000 per month What You'll Do We’re looking for an AI Engineering Intern who’s excited to build real tools with large language models and backend infrastructure. You won’t be writing toy code — you’ll help shape the backend of AI features used by real people. From chaining LLMs and crafting clever prompts to building robust APIs, you’ll be part of the core engine that makes the product tick. You’ll work directly with the founding team and have ownership from day one. If you’re someone who enjoys messy, fast-moving environments and loves to learn by building — this is for you. Your Responsibilities Build and maintain backend services for AI features using JavaScript/TypeScript (Node.js preferred) Work with LLMs (OpenAI, Claude, open-source models, etc.) to create intelligent workflows Design and test REST APIs for features like AI search, query rewriting, and personalization Integrate multiple AI models and tools (e.g. LangChain, LlamaIndex, vector DBs) into working prototypes Collaborate with designers and frontend developers to ship product features Research and experiment with new model capabilities and toolchains You might be a fit if you Have hands-on experience building with LLMs or AI APIs (even side projects count!) Are confident writing backend code in JavaScript or TypeScript Understand REST API design, authentication, and basic deployment practices (Vercel/Render/Fly.io/etc.) Are curious, scrappy, and able to figure things out fast Love the idea of blending fashion, search, and AI to solve real problems Bonus Points for: Experience with LangChain, LlamaIndex, or similar orchestration frameworks Familiarity with full-stack frameworks like Next.js or React Knowledge of NLP, embeddings, or neural networks Prior startup experience or shipped personal AI tools/projects Why Join Us Build something from 0 to 1 that could shape how people shop across India Work closely with the founding team in a fast-paced, high-ownership environment Get exposure to real product decisions and learn how AI and design intersect Play with the latest AI tech, experiment freely, and ship quickly
Posted 3 weeks ago
10.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Job Position: Tools and Automation Architect Location: Greater Noida Experience: 8+ to 10+ years Must have: It’s a Tools and Automation Architect role with AIOps, GenAI, or other AI tools. Note: It’s a Tools and Automation Architect role with AIOps, GenAI, or other AI tools. Tools & Automation for Cloud and Infrastructure Management Services. Keywords: Enterprise IT Infrastructure Automation Architect. GenAI, AI and ML Enterprise IT Infrastructure Automation Architect. TOGAF or equivalent architecture framework certified professional. In-depth hands-on experience in Cloud, Compute, Storage, Directory Services, OS, Network, and Web & Middleware task automation. Role Description We are looking for an Enterprise IT Infrastructure Automation Architect with hands-on experience in the design, deployment, configuration, and management of an IT Infrastructure Automation Platform, who is a self-starter and can work in a dynamic environment. Prepare high-level and low-level design documentation. Use Agile Infrastructure Technologies, Automation & Autonomics. Strong understanding of core infrastructure and cloud concepts. Expertise in enterprise automation platforms including GenAI, AI & ML (technical expertise areas). Hands-on experience with one or more of the following tools and technologies: Configuration Management: Ansible. Provisioning Management: Terraform. Orchestration: Camunda Orchestrator, ServiceNow Orchestrator, vRealize Orchestrator. Container Orchestration Platforms: Kubernetes, OpenShift. Open-Source Monitoring Tools: Nagios, Prometheus, Elasticsearch. GenAI Platforms: OpenAI.
Posted 3 weeks ago
3.0 years
0 Lacs
Gurugram, Haryana, India
On-site
About the Company Re:Sources is the backbone of Publicis Groupe, the world’s third-largest communications group. Formed in 1998 as a small team to service a few Publicis Groupe firms, Re:Sources has grown to 5,000+ people servicing a global network of prestigious advertising, public relations, media, healthcare, and digital marketing agencies. We provide technology solutions and business services including finance, accounting, legal, benefits, procurement, tax, real estate, treasury, and risk management to help Publicis Groupe agencies do their best: create and innovate for their clients. In addition to providing essential, everyday services to our agencies, Re:Sources develops and implements platforms, applications, and tools to enhance productivity, encourage collaboration, and enable professional and personal development. We continually transform to keep pace with our ever-changing communications industry and thrive on a spirit of innovation felt around the globe. With our support, Publicis Groupe agencies continue to create and deliver award-winning campaigns for their clients. About the Role The main purpose of this role is to advance the application of business intelligence, advanced data analytics, and machine learning for Marcel. The Data Scientist will work with other data scientists, engineers, and product owners to ensure the delivery of all commitments on time and with high quality. Responsibilities Design and develop advanced data science and machine learning algorithms, with a strong emphasis on Natural Language Processing (NLP) for personalized content, user understanding, and recommendation systems. Work on end-to-end LLM-driven features, including fine-tuning pre-trained models (e.g., BERT, GPT), prompt engineering, vector embeddings, and retrieval-augmented generation (RAG). Build robust models on diverse datasets to solve for semantic similarity, user intent detection, entity recognition, and content summarization/classification. Analyze user behaviour through data and derive actionable insights for platform feature improvements using experimentation (A/B testing, multivariate testing). Architect scalable solutions for deploying and monitoring language models within platform services, ensuring performance and interpretability. Collaborate cross-functionally with engineers, product managers, and designers to translate business needs into NLP/ML solutions. Regularly assess and maintain model accuracy and relevance through evaluation, retraining, and continuous improvement processes. Write clean, well-documented code in notebooks and scripts, following best practices for version control, testing, and deployment. Communicate findings and solutions effectively across stakeholders — from technical peers to executive leadership. Contribute to a culture of innovation and experimentation, continuously exploring new techniques in the rapidly evolving NLP/LLM space. Qualifications Minimum Experience (relevant): 3 years Maximum Experience (relevant): 5 years Required Skills Proficiency in Python and NLP frameworks: spaCy, NLTK, Hugging Face Transformers, OpenAI, LangChain. Strong understanding of LLMs, embedding techniques (e.g., SBERT, FAISS), RAG architecture, prompt engineering, and model evaluation. Experience in text classification, summarization, topic modeling, named entity recognition, and intent detection. Experience deploying ML models in production and working with orchestration tools such as Airflow, MLflow.
Comfortable working in cloud environments (Azure preferred) and with tools such as Docker, Kubernetes (AKS), and Git. Strong experience working with data science/ML libraries in Python (SciPy, NumPy, TensorFlow, scikit-learn, etc.) Strong experience working in cloud development environments (especially Azure, ADF, PySpark, Databricks, SQL) Experience building data science models for use on front end, user facing applications, such as recommendation models Experience with REST APIs, JSON, streaming datasets Understanding of Graph data, Neo4j is a plus Strong understanding of RDBMS data structure, Azure Tables, Blob, and other data sources Understanding of Jenkins, CI/CD processes using Git, for cloud configs and standard code repositories such as ADF configs and Databricks Preferred Skills Bachelor's degree in engineering, computer science, statistics, mathematics, information systems, or a related field from an accredited college or university; Master's degree from an accredited college or university is preferred. Or equivalent work experience. Advanced knowledge of data science techniques, and experience building, maintaining, and documenting models Advanced working SQL knowledge and experience working with relational databases, query authoring (SQL) as well as working familiarity with a variety of databases preferably Graph DB. Experience building and optimizing ADF and PySpark based data pipelines, architectures and data sets on Graph and Azure Datalake. Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement. Strong analytic skills related to working with unstructured datasets. Build processes supporting data transformation, data structures, metadata, dependency and workload management. A successful history of manipulating, processing and extracting value from large disconnected datasets. Strong project management and organizational skills. Experience supporting and working with cross-functional teams in a dynamic environment.
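Among the NLP tasks this listing names (classification, summarization, topic modeling, named entity recognition, intent detection), named entity recognition is the quickest to show end to end. A short sketch with spaCy; the pipeline name is the standard small English model and the example sentence is invented for illustration:

```python
import spacy

# Assumes the small English pipeline is installed:  python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

doc = nlp(
    "Publicis Groupe partnered with a retail client in Paris to launch "
    "a campaign worth 2 million euros in March 2024."
)
for ent in doc.ents:
    print(ent.text, ent.label_)  # e.g. ORG, GPE, MONEY, DATE
```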
Posted 3 weeks ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About The Company iLink Digital is a Global Software Solution Provider and Systems Integrator that delivers next-generation technology solutions to help clients solve complex business challenges, improve organizational effectiveness, increase business productivity, realize sustainable enterprise value, and transform your business inside-out. iLink integrates software systems and develops custom applications, components, and frameworks on the latest platforms for IT departments, commercial accounts, application services providers (ASP) and independent software vendors (ISV). iLink solutions are used in a broad range of industries and functions, including healthcare, telecom, government, oil and gas, education, and life sciences. iLink’s expertise includes Cloud Computing & Application Modernization, Data Management & Analytics, Enterprise Mobility, Portal, Collaboration & Social Employee Engagement, Embedded Systems, and User Experience Design. What makes iLink's offerings unique is the fact that we use pre-created frameworks, designed to accelerate software development and implementation of business processes for our clients. iLink has over 60 frameworks (solution accelerators), both industry-specific and horizontal, that can be easily customized and enhanced to meet your current business challenges. Requirements Job Summary: We are looking for a skilled and motivated Python and AWS Developer to join our dynamic engineering team. The ideal candidate will have strong expertise in Python (3.x) and modern web frameworks, AWS cloud services, and solid knowledge of relational and graph databases. You’ll be responsible for designing, developing, and deploying scalable cloud-native applications and serverless architectures. Key Responsibilities - Develop, test, and maintain Python-based backend systems with a focus on object-oriented design, coding standards, and scalability. - Build and maintain RESTful APIs using frameworks such as Django and FastAPI. - Design and implement AWS cloud solutions leveraging services like Lambda, Step Functions, ECS, API Gateway, CloudWatch, and S3. - Write clean, maintainable, and efficient code following industry best practices. - Participate in architecture discussions, code reviews, and agile development cycles. - Collaborate with cross-functional teams to define, design, and ship new features. - Monitor and troubleshoot performance issues in production environments. - Maintain CI/CD pipelines. - Design and optimize relational databases (RDS) such as PostgreSQL and MySQL, and data warehousing solutions like Snowflake. Preferred/Bonus Skills - Experience with OpenAI APIs, Large Language Models (LLMs), or Prompt Engineering. - Familiarity with CI/CD pipelines using tools like GitHub Actions or GitLab CI. - Exposure to asynchronous programming and message queues. Benefits Competitive salaries Medical Insurance Employee Referral Bonuses Performance Based Bonuses Flexible Work Options & Fun Culture Robust Learning & Development Programs In-House Technology Training
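The serverless side of this role (Lambda behind API Gateway) follows a standard handler shape. A minimal sketch, assuming an API Gateway proxy integration; the function name and response payload are illustrative:

```python
import json

def lambda_handler(event, context):
    """Minimal AWS Lambda handler for an API Gateway proxy integration (names are illustrative)."""
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```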
Posted 3 weeks ago
5.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Full Stack Developer - AI-Powered Systems Location: In-Person | HSR Layout, Bengaluru Type: Full-Time What We’re Building At Sapience1, we’re on a mission to transform how families discover and access youth services — from academics and enrichment to life skills and care — using behavioral AI, intelligent design, and seamless technology. We’re not building just another app. We’re engineering the future of Human Experience Tech — where services feel smart, personal, and human. We’re looking for a builder — someone who thrives in fast-moving environments and loves turning complex challenges into real-world products. You’ll work directly with our CTO and core product team to build powerful, scalable, and AI-integrated features. What you'll do: Build full-stack features using React.js, Node.js, MongoDB/PostgreSQL Work on core AI-powered experiences like: ▸ Smart service matching ▸ Personalized dashboards ▸ Behavioral insights & summaries Write clean, modular code with an eye for scale, speed, and user delight Collaborate with UI/UX, AI, and data teams to turn ideas into reality Influence product architecture and roadmap through hands-on contribution You’ll Thrive Here If You… Have 2–5 years of hands-on full-stack experience Know your way around JavaScript/TypeScript, React.js, and Node.js Are curious about (or have worked with) AI tools and APIs Understand DB design and API integration (MongoDB/PostgreSQL) Want to build fast, experiment often, and deploy continuously Value clarity, ownership, and momentum over fluff and hierarchy Our Stack Includes Frontend: React.js, Next.js Backend: Node.js, Express Database: MongoDB, PostgreSQL DevOps: Azure, GitHub, CI/CD AI: OpenAI, LangChain Tools: MS Teams, Figma, Jira Compensation ₹80,000 – ₹1,20,000/month, depending on skillset and performance Why Now Is the Best Time to Join You’ll be shaping the core product alongside the founding team, with mentorship from our CTO and exposure to rapid cycles of build, test, and deploy.
Posted 3 weeks ago
0 years
0 Lacs
Pune, Maharashtra, India
Remote
About 23 Ventures 23 Ventures specializes in building technology to help startups and early-stage ideas achieve product-market fit, scale, and stay focused. We partner with startups and early-stage ideas to provide resources, practical advice, and expertise needed to scale. From navigating product-market fit to leveraging fractional resources, we ensure agility and efficiency, empowering founders to focus on their vision while staying adaptable. About this job We are seeking a talented and motivated QA Engineer with strong expertise in both manual and automation testing to join our dynamic team. In this role, you will be responsible for ensuring the quality of our software products through thorough testing, both manual and automated, as well as working with Jenkins for continuous integration and deployment. You will work closely with the development team to build, execute, and maintain comprehensive test cases that ensure the reliability and stability of our applications. Key responsibilities Develop and execute test cases for both manual and automated testing of our applications. Design and implement REST API test cases, using Java or Python tools Write automated tests using Selenium WebDriver, Java, Python, and other related frameworks. Perform manual testing of web applications across multiple browsers and platforms. Identify and report bugs/issues, track them, and verify the resolution. Integrate automated test cases with Jenkins for continuous integration and deployment. Collaborate with the development team to ensure seamless integration of testing into the development pipeline. Participate in sprint planning, reviews, and retrospectives. Qualifications Proficient to fluent in Java/Python for API development and scripting Proficient in AWS services like Lambda, API Gateway, DynamoDB, S3, etc. Understanding of distributed systems and REST API design Experience writing unit and integration tests Experience working with LLMs like OpenAI or Gemini Preferred Skills Strong knowledge of REST API testing, using tools in Python or similar. Expertise in Selenium WebDriver with Python for automated testing. Solid understanding of the software development lifecycle and agile methodologies. Strong debugging and problem-solving skills. Good communication skills and the ability to work collaboratively within a team. Job location Hybrid (Remote with proximity to Pune for on-site) Other benefits Work on exciting projects in diverse domains such as Healthtech and AI. Flexible remote work environment Opportunity to collaborate with talented teams and grow professionally Why 23 Ventures? Inclusive Culture: We value diversity and strive to create an inclusive environment for all employees. Professional Growth: Opportunities for learning and development to help you grow in your career. Comprehensive Benefits: Competitive salary, health insurance, and more. Work-Life Balance: Flexible working arrangements to help you balance your personal and professional life.
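REST API test automation of the kind described here typically combines pytest with the requests library. A small hedged sketch, with the public JSONPlaceholder demo API standing in for the real service under test:

```python
import requests

BASE_URL = "https://jsonplaceholder.typicode.com"  # public demo API standing in for the real service

def test_get_post_has_expected_fields():
    resp = requests.get(f"{BASE_URL}/posts/1", timeout=10)
    assert resp.status_code == 200
    assert {"userId", "id", "title", "body"} <= resp.json().keys()

def test_create_post_echoes_payload():
    payload = {"title": "smoke", "body": "qa check", "userId": 1}
    resp = requests.post(f"{BASE_URL}/posts", json=payload, timeout=10)
    assert resp.status_code == 201
    assert resp.json()["title"] == "smoke"
```

The same test files can be wired into a Jenkins stage that simply runs `pytest` and fails the build on any assertion error.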
Posted 3 weeks ago
2.0 years
0 Lacs
Surat, Gujarat, India
On-site
Openings: 01 Experience: 2 years Location: Surat - Varachha Benefits 5-Days Working Paid Leaves Complimentary Health Insurance Overtime Pay Fun Activities Personal Loan Employee Training Positive Work Environment Professional Developments We are looking for an experienced AI/ML cum Python Developer with 2 years of hands-on work in machine learning, Python development, and API integration. The ideal candidate should also have experience building AI agents: smart systems that can plan tasks, make decisions, and work independently using tools like LangChain, AutoGPT, or similar frameworks. You'll be part of a collaborative team, working on real-world AI projects and helping us build intelligent, scalable solutions. Job Responsibility Key Responsibilities Develop, train, and deploy machine learning models using frameworks such as TensorFlow, PyTorch, or Scikit-learn. Develop AI agents capable of decision-making and multi-step task execution. Write efficient and maintainable Python code for data processing, automation, and backend services. Design and implement REST APIs or backend services for model integration. Handle preprocessing, cleaning, and transformation of large datasets. Evaluate model accuracy and performance, and make necessary optimizations. Collaborate with cross-functional teams including UI/UX, QA, and product managers. Stay updated with the latest trends and advancements in AI/ML. Key Performance Areas (KPAs): Development of AI/ML algorithms and backend services. AI agent development and performance. Model evaluation, testing, and optimization. Seamless deployment and integration of models in production. Technical documentation and project support. Research and implementation of emerging AI technologies. Key Performance Indicators (KPIs): Accuracy and efficiency of AI models delivered. Clean, reusable, and well-documented Python code. Timely delivery of assigned tasks and milestones. Issue resolution and minimal bugs in production. Contribution to innovation and internal R&D efforts. Required Skills & Qualifications: Bachelor's or Master's degree in Computer Science, IT, or related field. Minimum 2 years of experience in Python and machine learning. Hands-on with AI agent tools like LangChain, AutoGPT, OpenAI APIs, Pinecone, etc. Strong foundation in algorithms, data structures, and mathematics. Experience with Flask, FastAPI, or Django for API development. Good understanding of model evaluation and optimization techniques. Familiarity with version control tools like Git. Strong communication and team collaboration skills. Interview Process: HR Round Technical Round Practical Round Salary Negotiation Offer Release
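The core ML loop this role describes (train a model, evaluate its accuracy, hand the artifact to an API for serving) can be sketched briefly with scikit-learn; the dataset and file name are placeholders, assuming the saved model would later sit behind a Flask/FastAPI endpoint:

```python
import joblib
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Built-in dataset used as a stand-in for project data
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=42)

model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)

print(classification_report(y_test, model.predict(X_test)))
joblib.dump(model, "model.joblib")  # artifact a Flask/FastAPI endpoint could later load and serve
```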
Posted 3 weeks ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Role – Senior Gen AI Engineer
Location - Coimbatore
Mode of Interview - In Person
Job Description
Collect and prepare data for training and evaluating multimodal foundation models. This may involve cleaning and processing text data or creating synthetic data.
Develop and optimize large-scale generative models such as GANs (Generative Adversarial Networks) and VAEs (Variational Autoencoders).
Work on tasks involving language modeling, text generation, understanding, and contextual comprehension.
Regularly review and fine-tune large language models to ensure maximum accuracy and relevance for custom datasets.
Build and deploy AI applications on cloud platforms – any hyperscaler (Azure, GCP, or AWS).
Integrate AI models with our company's data to enhance and augment existing applications.
Role & Responsibilities
Handle data preprocessing, augmentation, and generation of synthetic data.
Design and develop backend services using Python or .NET to support OpenAI-powered solutions (or any other LLM solution).
Develop and maintain AI pipelines.
Work with custom datasets, utilizing techniques like chunking and embeddings, to train and fine-tune models.
Integrate Azure Cognitive Services (or equivalent platform services) to extend functionality and improve AI solutions.
Collaborate with cross-functional teams to ensure smooth deployment and integration of AI solutions.
Ensure the robustness, efficiency, and scalability of AI systems.
Stay updated with the latest advancements in AI and machine learning technologies.
Skills & Experience
Strong foundation in machine learning, deep learning, and computer science.
Expertise in generative AI models and techniques (e.g., GANs, VAEs, Transformers).
Experience with natural language processing (NLP) and computer vision is a plus.
Ability to work independently and as part of a team.
Knowledge of advanced programming in Python, especially AI-centric libraries like TensorFlow, PyTorch, and Keras, including the ability to implement and manipulate the complex algorithms fundamental to developing generative AI models.
Knowledge of natural language processing (NLP) for text generation projects, such as text parsing, sentiment analysis, and the use of transformers like GPT (generative pre-trained transformer) models.
Experience in data management, including data preprocessing, augmentation, and generation of synthetic data. This involves cleaning, labeling, and augmenting data to train and improve AI models.
Experience in developing and deploying AI models in production environments.
Knowledge of cloud services (AWS, Azure, GCP) and understanding of containerization technologies like Docker and orchestration tools like Kubernetes for deploying, managing, and scaling AI solutions.
Should be able to bring new ideas and innovative solutions to our clients.
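The responsibilities above mention working with custom datasets via chunking and embeddings. A minimal sketch of that step, assuming the OpenAI Python SDK and an illustrative input file (the model choice, chunk sizes, and file name are assumptions, not requirements from the posting):

```python
# Rough sketch of chunking a document and embedding the chunks with the
# OpenAI embeddings API. Chunk size, overlap, and the input file are assumed.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def chunk_text(text: str, size: int = 500, overlap: int = 50) -> list[str]:
    # Simple fixed-width character chunking with overlap between windows.
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + size])
        start += size - overlap
    return chunks


def embed_chunks(chunks: list[str]) -> list[list[float]]:
    response = client.embeddings.create(
        model="text-embedding-3-small",
        input=chunks,
    )
    return [item.embedding for item in response.data]


if __name__ == "__main__":
    document = open("custom_dataset.txt", encoding="utf-8").read()
    vectors = embed_chunks(chunk_text(document))
    print(f"Embedded {len(vectors)} chunks")
```

The resulting vectors would then be written to a vector store so downstream retrieval can find the most relevant chunks for a query.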
Posted 3 weeks ago
0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
About Us
Zycus is a pioneer in Cognitive Procurement software and has been a trusted partner of choice for large global enterprises for two decades. Zycus has been consistently recognized by Gartner, Forrester, and other analysts for its Source to Pay integrated suite. Zycus powers its S2P software with the revolutionary Merlin AI Suite. Merlin AI takes over the tactical tasks and empowers procurement and AP officers to focus on strategic projects; offers data-driven actionable insights for quicker and smarter decisions; and its conversational AI offers a B2C-type user experience to the end users. Zycus helps enterprises drive real savings, reduce risks, and boost compliance, and its seamless, intuitive, and easy-to-use user interface ensures high adoption and value across the organization. Start your #CognitiveProcurement journey with us, as you are #MeantforMore.
We Are An Equal Opportunity Employer
Zycus is committed to providing equal opportunities in employment and creating an inclusive work environment. We do not discriminate against applicants on the basis of race, color, religion, gender, sexual orientation, national origin, age, disability, or any other legally protected characteristic. All hiring decisions will be based solely on qualifications, skills, and experience relevant to the job requirements.
Job Description
Zycus is looking for a passionate and curious AI Intern to join our innovative team. If you're eager to work with cutting-edge technologies like LLMs (Large Language Models), open-source AI frameworks, and advanced NLP systems, this internship is your gateway to a high-impact career in Artificial Intelligence.
What You Will Learn And Work On
Assist in building AI solutions using open-source models (e.g., Llama 2, Mistral, Hugging Face) and third-party LLM APIs like OpenAI and Anthropic.
Contribute to research and experimentation on advanced techniques such as Retrieval-Augmented Generation (RAG), GraphRAG, and agent systems using tools like LangChain or LlamaIndex.
Support the team in deploying AI models using scalable tools like vLLM, FastAPI, or Flask.
Help integrate AI functionalities into real-world enterprise applications.
Participate in developing data pipelines for AI projects, from data preprocessing to model evaluation.
Stay updated on the latest AI trends and assist the team in identifying areas for innovation and improvement.
Job Requirement
Experience & Qualifications
Bachelor's/Master's in Computer Science, AI, or a related field.
Strong interest in AI/ML, with some exposure to Python and frameworks such as PyTorch, TensorFlow, or Hugging Face.
Familiarity with at least one web framework (FastAPI or Flask preferred).
Understanding of how LLMs work, including concepts like embeddings, fine-tuning, and prompt engineering (project work or self-learning is welcome).
Good problem-solving skills and willingness to learn complex AI workflows in a production setting.
Why Intern with Zycus?
Real-world Projects: Work on live AI initiatives and contribute to Zycus' AI-driven products.
Mentorship: Learn from top AI professionals and collaborate with experienced engineers.
Innovation Culture: Get hands-on experience with the latest in GenAI and open-source advancements.
Global Exposure: Collaborate with teams working on international deployments and enterprise-grade solutions.
Growth Potential: Many of our interns are offered full-time roles based on performance and fit.
About Zycus
Zycus is a pioneer in Cognitive Procurement software and a global leader in Source-to-Pay solutions. Powered by our Merlin AI Suite, Zycus automates tactical work, surfaces insights, and transforms enterprise procurement experiences. Join us and be part of the next generation of AI innovation. Start your #CognitiveProcurement journey with us – you are #MeantforMore.
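The internship above involves open-source models from Hugging Face served with tools like vLLM, FastAPI, or Flask. A small, hedged sketch of local inference with the transformers pipeline API (distilgpt2 is a tiny stand-in chosen only so the example runs quickly; it is not a model named in the posting):

```python
# Minimal sketch, assuming the transformers library is installed, of running an
# open-source text-generation model locally. distilgpt2 is an illustrative
# stand-in for larger models such as Llama 2 or Mistral.
from transformers import pipeline

generator = pipeline("text-generation", model="distilgpt2")

prompt = "Procurement teams can reduce cycle time by"
outputs = generator(prompt, max_new_tokens=40, num_return_sequences=1)
print(outputs[0]["generated_text"])
```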
Posted 3 weeks ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Location: Chennai, TN / Hyderabad, TG / Bangalore, KA / Pune, MH
Must-Have Skills:
Backend Development: Python (FastAPI / Flask)
AI Integration: OpenAI API (GPT-4o)
Cloud Services: Azure Functions, Azure Storage Account, Cosmos DB
Database & Caching: NoSQL (Cosmos DB)
APIs & Authentication: RESTful APIs, OAuth, JWT
DevOps & Deployment: CI/CD pipelines, Azure DevOps
Code Quality: Unit Testing, Debugging, Performance Optimization
Good-to-Have Skills:
Vector Databases for AI: Azure AI Search
Logging & Monitoring: Application Insights, Azure Monitor
Security: Azure Active Directory (AAD), Role-Based Access Control (RBAC)
Retrieval-Augmented Generation with vector databases
LLM Orchestration: LangChain or Semantic Kernel
Experience in OCR, automation, natural-language data analytics, and AI agents is preferred.
Experience in CRM, personalization, virtual agents, and self-service AI apps is preferred.
Work Hours: 04:00 AM - 12:00 PM PST
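Three items from the must-have list (a Python/FastAPI backend, JWT-based authentication, and the OpenAI GPT-4o API) commonly come together in a single protected endpoint. The sketch below is illustrative only: the secret key, token claims, and route name are placeholders, and PyJWT is just one of several libraries that could handle token verification.

```python
# Hypothetical sketch of a JWT-protected FastAPI route that forwards a question
# to GPT-4o. Secret key and claims are placeholders; keep real secrets in a vault.
import jwt  # PyJWT
from fastapi import Depends, FastAPI, HTTPException
from fastapi.security import HTTPAuthorizationCredentials, HTTPBearer
from openai import OpenAI

app = FastAPI()
security = HTTPBearer()
client = OpenAI()  # reads OPENAI_API_KEY from the environment
SECRET_KEY = "change-me"  # placeholder


def verify_token(credentials: HTTPAuthorizationCredentials = Depends(security)) -> dict:
    try:
        return jwt.decode(credentials.credentials, SECRET_KEY, algorithms=["HS256"])
    except jwt.PyJWTError:
        raise HTTPException(status_code=401, detail="Invalid token")


@app.post("/ask")
def ask(question: str, claims: dict = Depends(verify_token)):
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": question}],
    )
    return {"user": claims.get("sub"), "answer": response.choices[0].message.content}
```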
Posted 3 weeks ago
5.0 - 10.0 years
16 - 27 Lacs
Hyderabad, Chennai, Bengaluru
Hybrid
We're looking for a skilled Generative AI Engineer to build and deploy AI-powered applications using OpenAI models and Azure Cloud services. The ideal candidate will have experience in Python, RAG architectures, and OCR integration.
Key Responsibilities:
Develop GenAI apps using OpenAI APIs (ChatGPT, GPT-4, Embeddings).
Deploy and manage solutions on Azure (App Services, Azure Functions, Cognitive Services).
Build RAG pipelines with vector databases (e.g., Azure Cognitive Search).
Integrate OCR using tools like Azure Vision or Tesseract.
Automate workflows using Python and Azure SDKs.
Ensure performance, scalability, and security in production environments.
Skills Required:
OpenAI (ChatGPT, GPT-4, Embeddings)
Azure Cloud (Azure OpenAI, Azure Functions, Cognitive Services)
Python (LangChain, FastAPI, SDKs)
RAG frameworks & vector databases
OCR tools (Tesseract, Azure OCR)
Docker, Git, CI/CD pipelines
Preferred:
Experience with LangChain or LLM orchestration
Familiarity with Azure DevOps or GitHub Actions
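Two of the responsibilities above, OCR integration and RAG pipelines over a vector database, can be prototyped end to end in a few lines. The sketch below is an assumption-laden illustration only: it uses pytesseract and a local ChromaDB collection rather than the Azure services named in the posting, and the file name and query are made up.

```python
# Minimal sketch: extract text from a scanned page with Tesseract OCR, index it
# in a local Chroma collection, then retrieve it for a RAG-style query.
import chromadb
import pytesseract
from PIL import Image

# 1. OCR step: pull raw text out of a scanned document image (placeholder file).
page_text = pytesseract.image_to_string(Image.open("scanned_invoice.png"))

# 2. Indexing step: store the text in a vector collection for later retrieval.
chroma = chromadb.Client()
collection = chroma.get_or_create_collection("documents")
collection.add(documents=[page_text], ids=["invoice-001"])

# 3. Retrieval step: fetch the most relevant chunk for a user query.
results = collection.query(query_texts=["What is the invoice total?"], n_results=1)
print(results["documents"][0][0])
```

In a production setup the retrieved chunk would be passed to an LLM as context, which is the retrieval-augmented generation pattern the posting refers to.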
Posted 3 weeks ago