
80 LangGraph Jobs

JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

1.0 - 5.0 years

0 Lacs

Jaipur, Rajasthan

On-site

As an AI/ML Engineer (Python) at Telepathy Infotech, you will be responsible for building and deploying machine learning and GenAI applications in real-world scenarios. You will be part of a passionate team of technologists working on innovative digital solutions for clients across industries. We value continuous learning, ownership, and collaboration in our work culture.

To excel in this role, you should have strong Python skills and experience with libraries like Pandas, NumPy, Scikit-learn, and TensorFlow/PyTorch. Experience in GenAI development using APIs such as Google Gemini, Hugging Face, Grok, etc. is highly desirable. A solid understanding of ML, DL, NLP, and LLM concepts is essential, along with hands-on experience in Docker, Kubernetes, and CI/CD pipeline creation. Familiarity with Streamlit, Flask, FastAPI, MySQL/PostgreSQL, AWS services (EC2, Lambda, RDS, S3, API Gateway), LangGraph, serverless architectures, and vector databases like FAISS and Pinecone will be advantageous. Proficiency in version control using Git is also required.

Ideally, you should have a B.Tech/M.Tech/MCA degree in Computer Science, Data Science, AI, or a related field with at least 1-5 years of relevant experience or a strong project/internship background in AI/ML. Strong communication skills, problem-solving abilities, self-motivation, and a willingness to learn emerging technologies are key qualities we are looking for in candidates.

Working at Telepathy Infotech will provide you with the opportunity to contribute to impactful AI/ML and GenAI solutions while collaborating in a tech-driven and agile work environment. You will have the chance to grow your career in one of India's fastest-growing tech companies with a transparent and supportive company culture. To apply for this position, please send your CV to hr@telepathyinfotech.com or contact us at +91-8890559306 for any queries. Join us on our journey of innovation and growth in the field of AI and ML at Telepathy Infotech.

Posted 8 hours ago

Apply

11.0 - 15.0 years

0 Lacs

Karnataka

On-site

As an AI Research Scientist, your role will involve developing the overarching technical vision for AI systems that cater to both current and future business needs. You will be responsible for architecting end-to-end AI applications, ensuring seamless integration with legacy systems, enterprise data platforms, and microservices. Collaborating closely with business analysts and domain experts, you will translate business objectives into technical requirements and AI-driven solutions. Working in partnership with product management, you will design agile project roadmaps that align technical strategy with market needs. Additionally, you will coordinate with data engineering teams to guarantee smooth data flows, quality, and governance across various data sources. Your responsibilities will also include leading the design of reference architectures, roadmaps, and best practices for AI applications. You will evaluate emerging technologies and methodologies, recommending innovations that can be integrated into the organizational strategy. Identifying and defining system components such as data ingestion pipelines, model training environments, CI/CD frameworks, and monitoring systems will be crucial aspects of your role. Leveraging containerization (Docker, Kubernetes) and cloud services, you will streamline the deployment and scaling of AI systems. Implementing robust versioning, rollback, and monitoring mechanisms to ensure system stability, reliability, and performance will also be part of your duties. Project management will be a key component of your role, overseeing the planning, execution, and delivery of AI and ML applications within budget and timeline constraints. You will be responsible for the entire lifecycle of AI application development, from conceptualization and design to development, testing, deployment, and post-production optimization. Enforcing security best practices throughout each phase of development, with a focus on data privacy, user security, and risk mitigation, will be essential. Furthermore, providing mentorship to engineering teams and fostering a culture of continuous learning will play a significant role in your responsibilities. In terms of mandatory technical and functional skills, you should possess a strong background in working with or developing agents using langgraph, autogen, and CrewAI. Proficiency in Python, along with robust knowledge of machine learning libraries such as TensorFlow, PyTorch, and Keras, is required. You should also have proven experience with cloud computing platforms (AWS, Azure, Google Cloud Platform) for building and deploying scalable AI solutions. Hands-on skills with containerization (Docker), orchestration frameworks (Kubernetes), and related DevOps tools like Jenkins and GitLab CI/CD are necessary. Experience using Infrastructure as Code (IaC) tools such as Terraform or CloudFormation to automate cloud deployments is essential. Additionally, proficiency in SQL and NoSQL databases (e.g., PostgreSQL, MongoDB, Cassandra) and expertise in designing distributed systems, RESTful APIs, GraphQL integrations, and microservices architecture are vital for this role. Knowledge of event-driven architectures and message brokers (e.g., RabbitMQ, Apache Kafka) is also required to support robust inter-system communications. Preferred technical and functional skills include experience with monitoring and logging tools (e.g., Prometheus, Grafana, ELK Stack) to ensure system reliability and operational performance. 
Familiarity with cutting-edge libraries such as Hugging Face Transformers, OpenAI's API integrations, and other domain-specific tools is advantageous. Experience in large-scale deployment of ML projects, along with a good understanding of DevOps/MLOps/LLMOps and training and fine-tuning of Large Language Models (LLMs) such as PaLM 2, GPT-4, LLaMA, etc., is beneficial. Key behavioral attributes for this role include the ability to mentor junior developers, take ownership of project deliverables, contribute to risk mitigation, and understand business objectives and functions to support data needs. If you have a Bachelor's or Master's degree in Computer Science, certifications in cloud technologies (AWS, Azure, GCP), and TOGAF certification (good to have), along with 11 to 14 years of relevant work experience, this role might be the perfect fit for you.

Posted 9 hours ago

Apply

4.0 - 5.0 years

7 - 8 Lacs

Ahmedabad

Work from Office

Summary: We are seeking an experienced AI Engineer with 4 to 5 years of experience to join our team. The ideal candidate will have a strong background in artificial intelligence and machine learning technologies, as well as a proven track record of developing innovative solutions.

Roles and Responsibilities:
- Design and develop AI algorithms and models to solve complex business problems
- Implement and optimize machine learning algorithms for real-time applications
- Collaborate with cross-functional teams to integrate AI solutions into existing systems
- Conduct research and stay up-to-date on the latest advancements in AI technology
- Provide technical guidance and mentorship to junior team members
- Participate in code reviews and contribute to the overall software development process

Qualifications:
- Bachelor's degree in Computer Science, Engineering, or a related field
- 4 to 5 years of experience in AI development
- Strong programming skills in languages such as Python, Java, or C++
- Experience working with LangChain, LangGraph, and vector databases (e.g., Pinecone, ChromaDB, FAISS)
- Hands-on experience with the Hugging Face ecosystem, including Transformers
- Proficiency in model serving and deployment, using tools like FastAPI, Hugging Face Inference Endpoints, or AWS SageMaker for scalable and production-grade AI applications
- Strong understanding of prompt engineering, retrieval-augmented generation (RAG), and tokenization/token-level optimization
- Proven experience with fine-tuning and deploying LLMs (e.g., LLaMA, Mistral, Falcon, OpenAI, or Hugging Face Transformers)
- Experience with machine learning frameworks such as TensorFlow, PyTorch, or scikit-learn
- Knowledge of cloud computing platforms such as AWS or Azure
- Excellent problem-solving and analytical skills
- Strong communication and teamwork abilities
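For context on the model-serving requirement above, here is a minimal sketch of exposing a text-generation model behind a FastAPI endpoint. The generate_text helper, endpoint path, and request fields are illustrative assumptions, not part of the posting; in practice the stub would be replaced by the team's real model client.

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class GenerateRequest(BaseModel):
    prompt: str
    max_tokens: int = 256

def generate_text(prompt: str, max_tokens: int) -> str:
    # Hypothetical placeholder: swap in a real client (OpenAI, Hugging Face
    # Inference Endpoints, a SageMaker endpoint, etc.) in production.
    return f"[model output for: {prompt[:40]}...]"

@app.post("/generate")
def generate(req: GenerateRequest) -> dict:
    # Keep the HTTP layer thin: validate input with pydantic, delegate to the model helper.
    return {"completion": generate_text(req.prompt, req.max_tokens)}

# Run locally with: uvicorn app:app --reload  (assuming this file is saved as app.py)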

Posted 21 hours ago

Apply

3.0 - 8.0 years

10 - 20 Lacs

Chennai

Work from Office

We are seeking a Software Engineer with expertise in AI/ML and full-stack development to contribute to the development of core platform features. This role involves designing, developing, and optimizing high-performance AI-driven applications, building scalable microservices, and ensuring seamless integration across AI, backend, and frontend systems. You will play a key role in developing workflow automation modules, AI-powered search engines, and scalable enterprise solutions.

Skills:
P1 (must-have skills): Generative AI expertise / AI Agents, Agentic Workflows / RAG / Prompt Engineering / Advanced Python Programming / LangChain / LangGraph / Vector Databases.
P2 (need-to-have skills): FastAPI, NodeJS / Intuitive UI/UX design: either React, TypeScript, or Next.js.
P3 (nice-to-have skills): LlamaIndex / Integration of SLMs/LLMs.

Required Skills:
Generative AI Expertise: Advanced knowledge in AI Agents, Agentic Workflows, Retrieval-Augmented Generation (RAG), and Prompt Engineering.
Advanced Python Programming: Proficiency in Python, particularly for AI-driven applications, with experience in frameworks like LangChain and LangGraph.
Vector Databases: Strong skills in managing and utilizing vector databases for AI solutions.
FastAPI & NodeJS: Experience in building backend services using FastAPI or NodeJS.
UI/UX Design: Ability to design and implement intuitive user interfaces, with beginner to intermediate proficiency in React.
LlamaIndex & SLM/LLM Integration: Familiarity with LlamaIndex and expertise in integrating Small Language Models (SLMs) and Large Language Models (LLMs) into applications.

Preferred Education and Experience: Bachelor's or Master's degree in Computer Science, AI, Machine Learning, or a related field. At least 3 years in full-stack development with a focus on AI. Proven track record leading small engineering teams and delivering complex AI-driven products. Direct client-facing experience, from requirements gathering to final delivery.
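To make the RAG skill above concrete, here is a minimal retrieval-and-prompt-assembly sketch. It uses a tiny in-memory store and a hypothetical embed() stand-in for a real embedding model; the sample documents, dimension, and scoring are illustrative assumptions only.

import numpy as np

def embed(text: str) -> np.ndarray:
    # Hypothetical stand-in for a real embedding model
    # (e.g., a sentence-transformers model or a provider embedding API call).
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.standard_normal(384)

# Tiny in-memory "vector store": list of (chunk, embedding) pairs.
docs = [
    "Invoices are processed within 3 business days.",
    "Refund requests must include the original order ID.",
]
index = [(d, embed(d)) for d in docs]

def retrieve(query: str, k: int = 1) -> list[str]:
    q = embed(query)
    # Rank chunks by cosine similarity to the query embedding.
    scored = sorted(
        index,
        key=lambda item: float(np.dot(q, item[1]) / (np.linalg.norm(q) * np.linalg.norm(item[1]))),
        reverse=True,
    )
    return [chunk for chunk, _ in scored[:k]]

def build_prompt(query: str) -> str:
    context = "\n".join(retrieve(query))
    return f"Answer using only the context below.\n\nContext:\n{context}\n\nQuestion: {query}"

print(build_prompt("How long does invoice processing take?"))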

Posted 22 hours ago

Apply

6.0 - 8.0 years

25 - 27 Lacs

Hyderabad

Work from Office

We seek a Senior Full Stack AI Engineer passionate about rapidly developing and demonstrating Agentic AI and workflow solutions. This role focuses on quickly translating innovative ideas into tangible Proofs of Concept (POCs), leveraging full-stack capabilities to build attractive user interfaces, integrate sophisticated AI agents, and deploy working solutions for impactful demonstrations.

Key Responsibilities:
Rapid Agentic AI POC Development: Lead agile development of end-to-end Agentic AI and complex workflow POCs from concept to demonstration, ensuring rapid iteration and delivery. Design and implement multi-turn conversational Agents, Tools, and Chains.
Full Stack Application Development: Develop attractive and intuitive front-end user interfaces (e.g., React, Next.js) and robust back-end APIs (e.g., FastAPI, Flask). Implement comprehensive testing strategies across the full stack.
AI Orchestration & Integration: Utilize frameworks like LangGraph (or equivalents such as CrewAI, AutoGen) to orchestrate complex AI agent systems and multi-step workflows. Implement Agent-to-Agent (A2A) communication and adhere to the Model Context Protocol (MCP) for inter-model interactions. Apply advanced prompt engineering techniques.
Deployment & Demonstration: Containerize applications (Docker) and deploy on Kubernetes for scalable POCs. Establish and utilize CI/CD pipelines for continuous integration and delivery. Prepare and deliver compelling technical demonstrations.

Experience: 6+ years in ML and data engineering, with 2+ years in LLM/GenAI projects. Proven rapid prototyping track record.

Technical Skills: Programming: Python, JavaScript/TypeScript (React, Next.js). AI/ML Fundamentals: LLMs, prompt engineering, Agentic AI architectures. Agentic AI & LLM Frameworks: LangGraph, CrewAI, AutoGen, A2A, MCP, Chains, Tools, Agents. Front-end: React, Next.js. Back-end: FastAPI, Flask. Databases: Vector databases (Pinecone, Weaviate), SQL/NoSQL. DevOps: Docker, Kubernetes (K8s), CI/CD, testing. Cloud Platforms: AWS/Azure/GCP.

Good to have: Experience with cloud-native AI/ML services (e.g., AWS SageMaker, Azure ML, GCP Vertex AI). Familiarity with UI/UX design principles for creating engaging interfaces. Experience with asynchronous programming patterns in Python. Knowledge of caching mechanisms (e.g., Redis) for optimizing application performance. Understanding of general machine learning concepts and common algorithms beyond LLMs.
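As an illustration of the multi-step orchestration this posting describes, below is a minimal LangGraph sketch: a two-node graph that drafts a reply and then reviews it. It assumes the langgraph package is installed and stubs out model calls with plain Python functions; node names and state fields are illustrative only, not taken from the posting.

from typing import TypedDict
from langgraph.graph import StateGraph, END

class State(TypedDict):
    question: str
    draft: str
    final: str

def draft_node(state: State) -> dict:
    # In a real agent this step would call an LLM; here it is a stub.
    return {"draft": f"Draft answer to: {state['question']}"}

def review_node(state: State) -> dict:
    # A second step that could critique or refine the draft.
    return {"final": state["draft"] + " (reviewed)"}

graph = StateGraph(State)
graph.add_node("draft", draft_node)
graph.add_node("review", review_node)
graph.set_entry_point("draft")
graph.add_edge("draft", "review")
graph.add_edge("review", END)

app = graph.compile()
print(app.invoke({"question": "What does this POC demonstrate?"}))

The same shape extends to conditional edges and tool-calling nodes, which is where the A2A and MCP integration mentioned above would typically attach.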

Posted 1 day ago

Apply

12.0 - 16.0 years

0 Lacs

Hyderabad, Telangana

On-site

The Senior Technical Architect - Generative AI and Agent Factory is accountable for overseeing the overall architecture, design, and strategic advancement of PepsiCo's enterprise-level GenAI platforms, namely PepGenX, Agent Factory, and PepVigil. Your primary objective will be to establish scalable, event-driven agent orchestration frameworks, modular agent templates, and integration strategies that empower intelligent, governed, and reusable agent ecosystems, thus expediting the implementation of AI-driven solutions across various commercial, reporting, and enterprise automation scenarios.

You will be responsible for architecting and managing the development of scalable, modular AI agent frameworks (such as Agent Mesh, Orchestrator, Memory, Canvas) for broad organizational utility. Your role will also involve defining event-driven orchestration and agentic execution patterns (e.g., Temporal, LangGraph, AST-RAG, reflection) to facilitate intelligent, context-aware workflows. Additionally, you will drive platform integration across PepGenX, Agent Factory, and PepVigil to ensure uniformity in observability, security, and orchestration approaches. Developing reusable agent templates, blueprints, and context frameworks (e.g., MCP, semantic caching) to expedite use case onboarding across various domains will also fall under your purview. Moreover, you will be instrumental in setting up architecture standards to incorporate Responsible AI (RAI), data privacy, and policy enforcement within GenAI agents, and lead technical architecture planning for delivery sprints, vendor integration tracks, and GenAI product releases.

To be considered for this role, you should possess a Bachelor's or Master's degree in Computer Science, Engineering, or a related field, along with a minimum of 12 years of experience in enterprise software or AI architecture roles, with recent specialization in LLMs, Generative AI, or intelligent agents. You must have a proven track record in designing and implementing agent-based frameworks or AI orchestration platforms, and a solid understanding of technologies like LangGraph, Temporal, vector databases, multimodal RAG, event-driven systems, and memory management. Hands-on experience with Kubernetes, Azure AI/AKS, REST APIs, and observability stacks is essential, as is the ability to influence enterprise architecture and provide guidance to cross-functional engineering teams. Excellent communication skills are a must, allowing you to effectively convey intricate technical concepts to a diverse set of stakeholders.

Posted 1 day ago

Apply

6.0 - 10.0 years

0 Lacs

Pune, Maharashtra

On-site

We are looking for a skilled, motivated, and quick-learning Full Stack Developer to join our team dedicated to cutting-edge Gen AI development work. As a Full Stack Developer, you will be responsible for creating innovative applications and solutions that encompass both frontend and backend technologies. While our solutions often involve the use of Retrieval Augmented Generation (RAG) and agentic frameworks, your role will extend beyond these technologies to encompass a variety of AI tools and techniques.

Your responsibilities will include developing and maintaining web applications using Angular, NDBX frameworks, and other modern technologies. You will design and implement databases in Postgres DB, employ ingestion and retrieval pipelines utilizing pgvector and neo4j, and ensure the implementation of efficient and secure data practices. Additionally, you will work with various generative AI models and frameworks such as LangChain, Haystack, and LlamaIndex for tasks like chunking, embeddings, chat completions, and integration with different data sources. You will collaborate with team members to integrate GenAI capabilities into applications, write clean and efficient code adhering to company standards, conduct testing to identify and fix bugs, and utilize collaboration tools like GitHub for effective teamwork and code management. Staying updated with emerging technologies and applying them to operations will be essential, showcasing a strong desire for continuous learning.

Qualifications and Experience:
- Bachelor's degree in Computer Science, Information Technology, or a related field with at least 6 years of working experience.
- Proven experience as a Full Stack Developer with a focus on designing, developing, and deploying end-to-end applications.
- Knowledge of front-end languages and libraries such as HTML/CSS, JavaScript, XML, and jQuery.
- Experience with Angular and NDBX frameworks, as well as database technologies like Postgres DB and vector databases.
- Proficiency in developing APIs following OpenAPI standards.
- Familiarity with generative AI models on cloud platforms like Azure and AWS, including techniques like Retrieval Augmented Generation, prompt engineering, agentic RAG, and Model Context Protocols.
- Experience with collaboration tools like GitHub and Docker images for packaging applications.

At Allianz, we believe in fostering a diverse and inclusive workforce. We are proud to be an equal opportunity employer that values bringing your authentic self to work, regardless of background, appearance, preferences, or beliefs. Together, we can create an environment where everyone feels empowered to explore, grow, and contribute to a better future for our customers and the global community. Join us at Allianz and let's work together to care for tomorrow.
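Since the role above centers on ingestion pipelines with chunking and embeddings, here is a minimal sketch of a sliding-window chunker of the kind such a pipeline might start from. The window and overlap sizes are arbitrary illustrative assumptions, not values from the posting.

def chunk_text(text: str, max_words: int = 120, overlap: int = 20) -> list[str]:
    """Split text into overlapping word-window chunks for embedding."""
    words = text.split()
    chunks = []
    step = max_words - overlap
    for start in range(0, len(words), step):
        window = words[start:start + max_words]
        if not window:
            break
        chunks.append(" ".join(window))
        if start + max_words >= len(words):
            break
    return chunks

# Example: each chunk would then be embedded and written to pgvector or another store.
sample = "word " * 300
print(len(chunk_text(sample)))  # number of chunks produced for a 300-word document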

Posted 1 day ago

Apply

5.0 - 10.0 years

25 - 35 Lacs

Bengaluru

Remote

Title: GenAI Engineer
Experience: 5+ years (relevant)
No. of positions: 5
Location: Remote
Mandatory Skills: Python, LangGraph/LangChain, AWS, EKS, CI/CD, RAG, LLMs.
5x GenAI Platform: Support BAU (recently onboarded projects) and build out the platform; experienced engineer with platform thinking; Python, Java, EKS, ideally GenAI.
- Understands RAG, LLMs, etc., to the level of being able to help build them.
- Experience working on GenAI platform projects.

Posted 1 day ago

Apply

6.0 - 11.0 years

9 - 18 Lacs

Gurugram

Hybrid

About Nirvana: Nirvana Solutions is a financial technology & services provider that delivers integrated and modular front, middle, and back-office solutions to a wide array of financial firms, including hedge funds, private equity firms, asset managers, prime brokers, and fund administrators. Nirvana's ability to electronically ingest data from the inception of a portfolio and seamlessly integrate its day-to-day workflow from front to back office makes it stand out from the crowd. The complexity of the application poses interesting challenges and offers a multitude of learning opportunities to anyone who wants to dive in. At Nirvana, we strive to build a close-knit, competitive team environment. We believe in team players; a successful team gives better results than an accomplished individual. For further information about us, please visit our website www.nirvanasolutions.com. Nirvana Solutions is headquartered in the financial capital of the world - Manhattan, NY, USA. Our offshore development centre is in Gurugram and is a wholly owned subsidiary of the U.S. entity. The offshore development and client service centre is a critical piece of the company's overall success and will continue to play an increasingly important role in the future.

We are hiring for a Senior Python Developer.

Job Description Summary: We are seeking an experienced Senior Python Developer (6+ years) to join our team in building and scaling our AI-driven Copilot platform. This platform leverages LangChain, LangGraph, LangMem, vector databases, and advanced agentic flows to provide intelligent assistance to users across our investment management solutions. The role focuses on designing, developing, and implementing modern AI techniques to build and maintain multi-modal conversational systems enabling voice, text, and potentially image-based interactions, while planning future enhancements with MCP (Model Context Protocol) to deliver exceptional user experiences that simplify workflows and fulfill user queries with engaging UX.

Employment Type: Permanent
Job Location: Gurugram (Hybrid Work Model)
Salary Offered: As per industry standard
Qualifications: BE/B.Tech in Computer Science from a top IT college, or a Master's degree in a relevant field such as Statistics, Data Science, or Applied Mathematics.
Experience: 6+ years of professional Python development experience in complex applications or AI/ML platforms.

Job Responsibilities: Build and refine the design and development of scalable, modular Python applications for the Copilot platform. Architect, implement, and refine LangChain/LangGraph agentic flows and long-term memory structures (using LangMem). Build robust API layers using FastAPI and integrate with internal/external APIs (e.g., NirvanaOne API, MCP servers). Collaborate with product managers, UX designers, and engineering teams to translate requirements into technical designs. Provide technical mentoring and code reviews for developers. Design and document clear architecture diagrams and technical specifications. Refine and integrate secure Azure-based authentication mechanisms across tenants. Ensure code quality through automated testing, CI/CD, and adherence to secure coding practices. Monitor system performance, identify bottlenecks, and proactively propose improvements. Stay updated with AI/ML advancements and advise on their integration into our architecture.

Requirements: Strong expertise in Python and related libraries. Experience with LangChain, LangGraph, LangMem, vector DBs, and embeddings-based search.
Solid knowledge of backend API frameworks like FastAPI for building high-performance APIs. Hands-on experience designing agentic or conversational AI architectures. Experience integrating with identity providers (Azure AD preferred) and working with refresh/access tokens. Proficiency with RDBMS (PostgreSQL, SQL Server) and NoSQL databases. Familiarity with cloud services (Azure preferred) and containerization (Docker). Knowledge of modern AI/ML techniques, including LLM-based applications. Experience working on multi-tenant architectures and secure API integrations. Strong software engineering principles: version control, automated testing, CI/CD. Excellent problem-solving skills and ability to think strategically. Strong communication skills and ability to work in cross-functional teams.

Nice to Have: Exposure to the investment management domain or financial services applications. Experience with MCP (or equivalent frameworks for multi-modal/agentic systems). Familiarity with Kafka-based event-driven architectures.

Why Nirvana: Work on a test environment that gives you the opportunity to enhance and modify it with respect to your desired skills. Opportunity to become a subject matter expert by way of certifications and relevant assignments. Early opportunities to take product ownership for fast-paced growth. Latest software engineering practices. Opportunity to work directly with top leadership (including the CEO) and be recognized for good work. Take the initiative to implement new technology, frameworks, and processes to delight our clients with a wonderful product. Exposure to the finance domain (securities markets) is one of our distinctive advantages. A conducive working environment with several employee benefits. Friendly culture. 5 days working with the flexibility of a work-from-home / hybrid working model.

Posted 1 day ago

Apply

6.0 - 8.0 years

25 - 27 Lacs

Hyderabad

Work from Office

We seek a Senior Gen AI Engineer with strong ML fundamentals and data engineering expertise to lead scalable AI/LLM solutions. This role focuses on integrating AI models into production, optimizing machine learning workflows, and creating scalable AI-driven systems. You will design, fine-tune, and deploy models (e.g., LLMs, RAG architectures) while ensuring robust data pipelines and MLOps practices.

Key Responsibilities:
Agentic AI & Workflow Design: Lead design and implementation of Agentic AI systems and multi-step AI workflows. Build AI orchestration systems using frameworks like LangGraph. Utilize Agents, Tools, and Chains for complex task automation. Implement Agent-to-Agent (A2A) communication and the Model Context Protocol (MCP) for inter-model interactions.
Production MLOps & Deployment: Develop, train, and deploy ML models optimized for production. Implement CI/CD pipelines (GitHub), automated testing, and robust observability (monitoring, logging, tracing) for Gen AI solutions. Containerize models (Docker) and deploy on cloud (AWS/Azure/GCP) using Kubernetes. Implement robust AI/LLM security measures and adhere to Responsible AI principles.
AI Model Integration: Integrate LLMs and models from Hugging Face. Apply deep learning concepts with PyTorch or TensorFlow.
Data & Prompt Engineering: Build scalable data pipelines for unstructured/text data. Design and implement embedding/chunking strategies for scalable data processing. Optimize storage/retrieval for embeddings (e.g., Pinecone, Weaviate). Utilize prompt engineering techniques to fine-tune AI model performance.
Solution Development: Develop GenAI-driven Text-to-SQL solutions.

Programming: Python.
Foundation Model APIs: Azure OpenAI, OpenAI, Gemini, Anthropic, or AWS Bedrock.
Agentic AI & LLM Frameworks: LangChain, LangGraph, A2A, MCP, Chains, Tools, Agents. Ability to design multi-agent systems, autonomous reasoning pipelines, and tool-calling capabilities for AI agents.
MLOps/LLMOps: Docker, Kubernetes (K8s), CI/CD, automated testing, monitoring, observability, model registries, data versioning.
Cloud Platforms: AWS/Azure/GCP.
Vector Databases: Pinecone, Weaviate, or similar leading platforms.
Prompt Engineering.
Security & Ethics: AI/LLM solution security, Responsible AI principles.
Version Control: GitHub.
Databases: SQL/NoSQL.
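For the embedding storage/retrieval responsibility above, here is a minimal FAISS sketch under stated assumptions: the faiss-cpu package is installed, and random vectors stand in for real embeddings. Dimension and corpus size are illustrative, not from the posting.

import numpy as np
import faiss  # assumes the faiss-cpu package is installed

dim = 384  # embedding dimension; depends on the embedding model actually used
rng = np.random.default_rng(0)

# Stand-in embeddings; in practice these come from an embedding model over document chunks.
chunk_vectors = rng.standard_normal((1000, dim)).astype("float32")

index = faiss.IndexFlatIP(dim)       # exact inner-product search
faiss.normalize_L2(chunk_vectors)    # normalize so inner product equals cosine similarity
index.add(chunk_vectors)

query = rng.standard_normal((1, dim)).astype("float32")
faiss.normalize_L2(query)
scores, ids = index.search(query, 5)  # top-5 nearest chunks
print(ids[0], scores[0])

A managed vector database (Pinecone, Weaviate) replaces the local index with an API, but the embed-normalize-search flow stays the same.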

Posted 1 day ago

Apply

4.0 - 9.0 years

6 - 11 Lacs

Pune

Work from Office

To ensure you're set up for success, you will bring the following skillset and experience: You have 4+ years of experience with application development using Python/FastAPI, Java, RESTful services, high performance, and multi-threading. Bachelor's or Master's degree in Computer Science, Statistics, Mathematics, Data Science, or a related field. Full Stack Developer with emphasis on Python, relational (PostgreSQL) and vector databases (FAISS, Weaviate, Milvus), and cloud platforms/services (AWS, GCP, Azure). Good to have: frontend development (Angular, React, and SSR) experience. Experience with machine learning frameworks and libraries (PyTorch, TensorFlow, scikit-learn). Experience with LangChain, LangGraph, LlamaIndex, and LiteLLM for building LLM-powered applications. Understanding of time-series modeling, anomaly detection, or clustering techniques. Experience with data engineering (ETL pipelines, data warehouses, etc.) and hands-on exposure to model tracking (MLflow), serving (FastAPI, vLLM), and container orchestration frameworks (Kubernetes, Docker Swarm).

Posted 2 days ago

Apply

5.0 - 9.0 years

0 Lacs

Pune, Maharashtra

On-site

As a GenAI Developer at Vipracube Tech Solutions, you will be responsible for developing and optimizing AI models, implementing AI algorithms, collaborating with cross-functional teams, conducting research on emerging AI technologies, and deploying AI solutions. This full-time role requires 5 to 6 years of experience and is based in Pune, with the flexibility of some work from home.

Your key responsibilities will include fine-tuning large language models tailored to marketing and operational use cases, and building Generative AI solutions using platforms like OpenAI (GPT, DALL-E, Whisper) and agentic AI platforms such as LangGraph and AWS Bedrock. You will also build robust pipelines using Python, NumPy, and Pandas, apply traditional ML techniques, handle CI/CD and MLOps, use AWS cloud services, collaborate using tools like Cursor, and communicate effectively with stakeholders and clients.

To excel in this role, you should have 5+ years of relevant AI/ML development experience, a strong portfolio of AI projects in marketing or operations domains, and a proven ability to work independently and meet deadlines. Join our dynamic team and contribute to creating smart, efficient, and future-ready digital products for businesses and startups.

Posted 2 days ago

Apply

11.0 - 15.0 years

0 Lacs

Hyderabad, Telangana

On-site

As an AI Azure Architect, your primary responsibility will be to develop the technical vision for AI systems that cater to existing and future business requirements. This involves architecting end-to-end AI applications and ensuring seamless integration with legacy systems, enterprise data platforms, and microservices. Collaborating closely with business analysts and domain experts, you will translate business objectives into technical requirements and AI-driven solutions. Additionally, you will partner with product management to design agile project roadmaps aligning technical strategies with market needs. Coordinating with data engineering teams is essential to ensure smooth data flows, quality, and governance across different data sources.

Your role will also involve leading the design of reference architectures, roadmaps, and best practices for AI applications. Evaluating emerging technologies and methodologies to recommend suitable innovations for integration into the organizational strategy is a crucial aspect of your responsibilities. You will be required to identify and define system components such as data ingestion pipelines, model training environments, CI/CD frameworks, and monitoring systems. Leveraging containerization (Docker, Kubernetes) and cloud services will streamline the deployment and scaling of AI systems. Implementation of robust versioning, rollback, and monitoring mechanisms to ensure system stability, reliability, and performance will be part of your duties.

Moreover, you will oversee the planning, execution, and delivery of AI and ML applications, ensuring they are completed within budget and timeline constraints. Managing project goals, allocating resources, and mitigating risks will fall under your project management responsibilities. You will be responsible for overseeing the complete lifecycle of AI application development, from conceptualization and design to development, testing, deployment, and post-production optimization. Emphasizing security best practices during each development phase, focusing on data privacy, user security, and risk mitigation, is crucial.

In addition to technical skills, the ideal candidate for this role should possess key behavioral attributes such as the ability to mentor junior developers, take ownership of project deliverables, and contribute towards risk mitigation. Understanding business objectives and functions to support data needs is also essential.

Mandatory technical skills for this position include a strong background in working with agents using LangGraph, AutoGen, and CrewAI. Proficiency in Python, along with knowledge of machine learning libraries like TensorFlow, PyTorch, and Keras, is required. Experience with cloud computing platforms (AWS, Azure, Google Cloud Platform), containerization tools (Docker), orchestration frameworks (Kubernetes), and DevOps tools (Jenkins, GitLab CI/CD) is essential. Proficiency in SQL and NoSQL databases, designing distributed systems, RESTful APIs, GraphQL integrations, and event-driven architectures is also necessary.

Preferred technical skills include experience with monitoring and logging tools, cutting-edge libraries like Hugging Face Transformers, and large-scale deployment of ML projects. Training and fine-tuning of Large Language Models (LLMs) is an added advantage.

Educational qualifications for this role include a Bachelor's/Master's degree in Computer Science, along with certifications in cloud technologies (AWS, Azure, GCP) and TOGAF certification. The ideal candidate should have 11 to 14 years of relevant work experience in this field.

Posted 2 days ago

Apply

2.0 - 4.0 years

2 - 7 Lacs

Noida

Work from Office

At InnovationM, we're shaping the future of enterprise AI and are looking for a Senior Agentic AI Engineer to lead the design and development of cutting-edge agentic frameworks. You'll help build a scalable AI foundation that empowers multiple teams to deploy intelligent agents with autonomy, memory, and reasoning capabilities.

Key Skills: Python | LangGraph | CrewAI | RAG | LLMs | Agentic Frameworks | Tool Integration | Vector Databases | Autonomous Decision-Making | Human-in-the-Loop | Secure, Scalable Architectures

What You'll Do: Architect enterprise-grade AI frameworks. Build reusable agentic patterns using LangGraph, CrewAI, and AutoGPT. Enable tools like APIs, vector stores, RPA, and internal services. Collaborate across teams to bring autonomy into complex workflows. Mentor and guide teams on the future of agentic AI.

Why Join Us? Work on the latest in AI/ML and agentic systems. Engage with large-scale clients and impactful projects. Thrive in a culture of innovation, learning, and development.

Location: Noida Sec-126 | 5 days work from office

Apply now: neha.sharma@innovationm.com

Posted 2 days ago

Apply

3.0 - 6.0 years

15 - 19 Lacs

Bengaluru

Remote

Job Title: AI Ops Engineer
Experience: 3–6 years

About the Role: We are seeking a hands-on and proactive AI Ops Engineer to operationalize and support the deployment of large language model (LLM) workflows, including agentic AI applications, across Marvell's enterprise ecosystem. This role requires strong prompt engineering capabilities, the ability to triage AI pipeline issues, and a deep understanding of how LLM-based agents interact with tools, memory, and APIs. You will be expected to diagnose and remediate real-time problems, from prompt quality issues to model behavior anomalies.

Key Responsibilities: Design, fine-tune, and manage prompts for various LLM use cases tailored to Marvell's enterprise operations. Operate, monitor, and troubleshoot agentic AI applications, including identifying whether issues stem from prompt quality or structure, model configuration or performance, or tool usage, API failures, and memory/recall issues. Build diagnostics and playbooks to triage LLM-driven failures, including handling fallback strategies, retries, or re-routing to human workflows. Collaborate with architects, ML engineers, and DevOps to optimize agent orchestration across platforms like LangGraph, CrewAI, AutoGen, or similar. Support integration of agentic systems with enterprise apps like Jira, ServiceNow, Glean, or Confluence using REST APIs, webhooks, and adapters. Implement observability and logging best practices for model outputs, latency, and agent performance metrics. Contribute to building self-healing mechanisms and alerting strategies for production-grade AI workflows.

Required Qualifications: 3–6 years of experience in software engineering, DevOps, or MLOps with exposure to AI/LLM workflows. Strong foundation in prompt engineering and experience with LLMs like GPT, Claude, LLaMA, etc. Practical understanding of AIOps platforms or operational AI use cases (incident triage, log summarization, root cause analysis, etc.). Exposure to agentic AI architectures such as LangGraph, AutoGen, CrewAI, etc. Familiarity with scripting (Python), RESTful APIs, and basic system debugging. Strong analytical skills and the ability to trace issues across multi-step pipelines and asynchronous agents.

Good to Have: Glean, DevRev, Codium, Cursor, Atlassian AI, Databricks Mosaic AI.
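To illustrate the retry-and-fallback pattern described above, here is a minimal sketch of a wrapper with exponential backoff and a human-review fallback. The call_llm and route_to_human functions are hypothetical placeholders, not tools named in the posting.

import time
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("llm-ops")

def call_llm(prompt: str) -> str:
    # Hypothetical model call; replace with the real client used in production.
    raise TimeoutError("simulated upstream timeout")

def route_to_human(prompt: str) -> str:
    # Fallback path: hand the request to a human workflow (e.g., a ticket queue).
    log.warning("Falling back to human review for prompt: %.40s", prompt)
    return "queued-for-human-review"

def call_with_retries(prompt: str, max_attempts: int = 3, base_delay: float = 0.5) -> str:
    for attempt in range(1, max_attempts + 1):
        try:
            return call_llm(prompt)
        except (TimeoutError, ConnectionError) as exc:
            log.info("Attempt %d/%d failed: %s", attempt, max_attempts, exc)
            if attempt == max_attempts:
                break
            time.sleep(base_delay * 2 ** (attempt - 1))  # exponential backoff
    return route_to_human(prompt)

print(call_with_retries("Summarize today's incident log"))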

Posted 2 days ago

Apply

3.0 - 4.0 years

10 - 14 Lacs

Pune

Work from Office

Role & responsibilities:
Design and implement AI agent workflows. Develop end-to-end intelligent pipelines and multi-agent systems (e.g., LangGraph/LangChain workflows) that coordinate multiple LLM-powered agents to solve complex tasks. Create graph-based or state-machine architectures for AI agents, chaining prompts and tools as needed.
Build and fine-tune generative models. Develop, train, and fine-tune advanced generative models (transformers, diffusion models, VAEs, GANs, etc.) on domain-specific data. Deploy and optimize foundation models (such as GPT, LLaMA, Mistral) in production, adapting them to our use cases through prompt engineering and supervised fine-tuning.
Develop data pipelines. Build robust data collection, preprocessing, and synthetic data generation pipelines to feed training and inference workflows. Implement data cleansing, annotation, and augmentation processes to ensure high-quality inputs for model training and evaluation.
Implement LLM-based agents and automation. Integrate generative AI agents (e.g., chatbots, AI copilots, content generators) into business processes to automate data processing and decision-making tasks. Use Retrieval-Augmented Generation (RAG) pipelines and external knowledge sources to enhance agent capabilities. Leverage multimodal inputs when applicable.
Optimize performance and safety. Continuously evaluate and improve model/system performance. Use GenAI-specific benchmarks and metrics (e.g., BLEU, ROUGE, TruthfulQA) to assess results, and iterate to optimize accuracy, latency, and resource efficiency. Implement safeguards and monitoring to mitigate issues like bias, hallucination, or inappropriate outputs.
Collaborate and document. Work closely with product managers, engineers, and other stakeholders to gather requirements and integrate AI solutions into production systems. Document data workflows, model architectures, and experimentation results. Maintain code and tooling (prompt libraries, model registries) to ensure reproducibility and knowledge sharing.

Education: Bachelor's or Master's degree in Computer Science, Data Science, or Artificial Intelligence.
Programming proficiency: Expert-level skills in Python and experience with machine learning and deep learning frameworks (PyTorch, TensorFlow).
Generative model expertise: Demonstrated ability to build, fine-tune, and deploy large-scale generative models. Familiarity with transformer architectures and generative techniques (LLMs, diffusion models, GANs). Experience working with model repositories and fine-tuning frameworks (Hugging Face, etc.).
LLM and agent frameworks: Strong understanding of LLM-based systems and agent-oriented AI patterns. Experience with frameworks like LangGraph/LangChain or similar multi-agent platforms.
AI integration and MLOps: Experience integrating AI components with existing systems via APIs and services. Proficiency in retrieval-augmented generation (RAG) setups and vector databases. Familiarity with machine learning deployment and MLOps tools (Docker, Kubernetes, MLflow, KServe, etc.).
Familiarity with GenAI tools: Hands-on experience with state-of-the-art GenAI models and APIs (OpenAI GPT, Anthropic Claude, etc.) and with popular libraries (Hugging Face Transformers, LangChain, etc.). Awareness of the current GenAI tooling ecosystem and best practices.
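The "chaining prompts and tools" responsibility above can be made concrete with a minimal tool-dispatch loop. The fake_llm planner, the calculator tool, and the JSON plan format below are illustrative stand-ins; in a real agent an LLM would choose the tool and arguments, and its observation would be fed back for a final answer.

import json

def calculator(expression: str) -> str:
    # Very restricted evaluator for the demo (digits and + - * / . ( ) only).
    allowed = set("0123456789+-*/. ()")
    if not set(expression) <= allowed:
        return "error: unsupported expression"
    return str(eval(expression))

TOOLS = {"calculator": calculator}

def fake_llm(question: str) -> str:
    # Stand-in planner: a real agent would have an LLM emit this JSON plan.
    return json.dumps({"tool": "calculator", "input": "19 * 23"})

def run_agent(question: str) -> str:
    plan = json.loads(fake_llm(question))
    tool = TOOLS.get(plan["tool"])
    if tool is None:
        return f"unknown tool: {plan['tool']}"
    observation = tool(plan["input"])
    # A production loop would pass the observation back to the LLM before answering.
    return f"Tool {plan['tool']} returned {observation}"

print(run_agent("What is 19 times 23?"))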

Posted 2 days ago

Apply

10.0 - 18.0 years

20 - 35 Lacs

Chennai

Work from Office

Must have: Strong proficiency in Python with the ability to write clean, modular, and efficient code for AI/ML workflows and backend services. Hands-on experience building AI agents and pipelines using LangChain and LangGraph to orchestrate LLM-based reasoning, tool usage, and decision making. Familiarity with using, fine-tuning, and evaluating popular foundation models (e.g., OpenAI, Claude, Mistral, LLaMA) for text generation, classification, and summarisation tasks. Practical experience with vector databases like Pinecone, Qdrant, Weaviate, or FAISS for semantic search, retrieval-augmented generation (RAG), and embedding management. Proficiency with IaC tools such as Terraform, AWS CloudFormation, or Pulumi to automate infrastructure provisioning and deployment in cloud environments. Strong understanding of cloud-native architecture patterns and experience deploying AI workloads on AWS services such as Lambda, ECS, S3, and SageMaker. Experience designing and consuming RESTful APIs and integrating AI models into applications via APIs (e.g., OpenAI, Bedrock, Hugging Face). Knowledge of prompt design, few-shot learning, prompt templates, and optionally fine-tuning or adapters like LoRA. Exposure to modern DevOps practices, containerisation (Docker), and CI/CD pipelines for deploying ML/AI systems. Strong analytical mindset, ability to work independently, and ability to collaborate across teams including data scientists, engineers, and product managers.

Good to have: Experience with deep learning frameworks like PyTorch or TensorFlow to build and train custom AI/ML models. Familiarity with ML model tracking tools such as MLflow or SageMaker Experiments for managing experiments and monitoring model performance. Understanding of model deployment at scale, including real-time inference using tools like SageMaker, TorchServe, or ONNX. Exposure to MLOps practices, including pipeline automation, model versioning, and integration with CI/CD tools for seamless deployment and monitoring.

Posted 2 days ago

Apply

6.0 - 10.0 years

6 - 10 Lacs

Bengaluru, Karnataka, India

On-site

What You'll Do:
Design & Build: Develop multi-agent AI systems for the UCaaS platform, focusing on NLP, speech recognition, audio intelligence, and LLM-powered interactions.
Rapid Experiments: Prototype with open-weight models (Mistral, LLaMA, Whisper, etc.) and scale what works.
Code for Excellence: Write robust code for AI/ML libraries and champion software best practices.
Optimize for Scale & Cost: Engineer scalable AI pipelines, focusing on latency, throughput, and cloud costs.
Innovate with LLMs: Fine-tune and deploy LLMs for summarization, sentiment and intent detection, RAG pipelines, multi-modal inputs, and multi-agentic task automation.
Own the Stack: Lead multi-agentic environments from data to deployment and scale.
Collaborate & Lead: Integrate AI with cross-functional teams and mentor junior engineers.

What You Bring:
Experience: 6-10 years of professional experience, with a mandatory minimum of 2 years dedicated to a hands-on role in a real-world, production-level AI/ML project.
Coding & Design: Expert-level programming skills in Python and proficiency in designing and building scalable, distributed systems.
ML/AI Expertise: Deep, hands-on experience with core ML/AI libraries and frameworks, agentic systems, and RAG pipelines. Hands-on experience in using vector DBs.
LLM Proficiency: Proven experience working with and fine-tuning Large Language Models (LLMs).
Scalability & Optimization Mindset: Demonstrated experience in building and scaling AI services in the cloud, with a strong focus on performance tuning and cost optimization of agents specifically.

Nice to Have:
You've tried out agent frameworks like LangGraph, CrewAI, or AutoGen and can explain the pros and cons of autonomous vs. orchestrated agents.
Experience with MLOps tools and platforms (e.g., Kubeflow, MLflow, SageMaker).
Real-time streaming AI experience: token-level generation, WebRTC integration, or live transcription systems.
Contributions to open-source AI/ML projects or a strong public portfolio (GitHub, Kaggle).

Posted 3 days ago

Apply

3.0 - 8.0 years

12 - 22 Lacs

Gurugram, Bengaluru, Mumbai (All Areas)

Work from Office

Key Skills:
Large Language Models (LLMs): Experience with LangChain and LangGraph. Proficiency in building agentic patterns like ReAct, ReWOO, and LLMCompiler.
Multi-modal Retrieval-Augmented Generation (RAG): Expertise in multi-modal AI systems (text, images, audio, video). Designing and optimizing chunking strategies and clustering for large-scale data processing.
Streaming & Real-time Processing: Experience in audio/video streaming and real-time data pipelines. Low-latency inference and deployment architectures.
NL2SQL: Natural-language-driven SQL generation for databases. Experience with natural language interfaces to databases and query optimization.
API Development: Building scalable APIs with FastAPI for AI model serving.
Containerization & Orchestration: Proficient with Docker for containerized AI services. Experience with orchestration tools for deploying and managing services.
AI Frameworks & Tools: Experience with AI/ML frameworks like TensorFlow and PyTorch. Proficiency in LangChain, LangGraph, and other LLM-related technologies.
Prompt Engineering: Expertise in advanced prompting techniques like Chain-of-Thought (CoT) prompting, LLM-as-a-judge, and self-reflection prompting. Experience with prompt compression and optimization using tools like LLMLingua, AdaFlow, TextGrad, and DSPy. Strong understanding of context window management and optimizing prompts for performance and efficiency.
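As a small illustration of the Chain-of-Thought prompting listed above, here is a sketch of a prompt template with one worked few-shot example. The template wording and the ask helper are illustrative assumptions, not taken from the posting; the helper only formats the prompt so the sketch stays self-contained.

COT_TEMPLATE = """You are a careful analyst. Think step by step before answering.

Example:
Q: A service handles 120 requests/min and each request averages 50 ms of CPU. How many CPU-seconds per minute?
Reasoning: 120 requests * 0.05 s = 6 CPU-seconds per minute.
A: 6 CPU-seconds per minute.

Q: {question}
Reasoning:"""

def ask(question: str) -> str:
    # Hypothetical: format the prompt here; a real pipeline would send it to an LLM client.
    return COT_TEMPLATE.format(question=question)

print(ask("A node emits 30 tokens/s for 4 minutes. How many tokens in total?"))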

Posted 5 days ago

Apply

5.0 - 9.0 years

15 - 18 Lacs

Bangalore Rural, Gurugram, Delhi / NCR

Work from Office

AI Architect, C-level stakeholders, AI/ML, modern AI frameworks (DSPy, LangGraph, or similar), AI/ML architecture roles with enterprise clients, customer relationship management and sentiment analysis

Posted 5 days ago

Apply

12.0 - 16.0 years

0 Lacs

Karnataka

On-site

As a skilled AI expert, you will be responsible for partnering with Product Managers, Engineers, and other key stakeholders to deeply understand business requirements and translate them into actionable technical roadmaps. You will identify and prioritize AI use cases aligned with organizational goals, develop a scalable and sustainable implementation roadmap, and conduct ROI analysis for on-prem LLM deployments.

Your role will involve creating sophisticated software designs driven by AI-powered experiences, focusing on performance, scalability, security, reliability, and ease of maintenance. You will define and develop complex enterprise applications through AI agentic frameworks, ensuring responsiveness, responsibility, traceability, and reasoning. Utilizing modeling techniques like UML and Domain-Driven Design, you will visualize intricate relationships between components and ensure seamless integrations. Leading large-scale platform projects, you will deliver no-code workflow management, HRMS, collaboration, search engine, document management, and other services for employees. Championing automated testing, continuous integration/delivery pipelines, MLOps, and agile methodologies across multiple teams will also be a key aspect of your role.

To excel in this position, you should hold a B.Tech/M.Tech/PhD in Computer Science with specialization in AI/ML. A proven track record of leading large-scale digital transformation projects is required, along with a minimum of 3+ years of hands-on experience in building AI-based applications using agentic frameworks. With a minimum of 12-14 years of experience in software design and development, you should have expertise in designing and developing applications and workflows using the ACE framework.

Your skillset should include hands-on experience in developing AI agents based on heterogeneous frameworks such as LangGraph, AutoGen, Crew AI, and others. You should also be proficient in selecting and fine-tuning LLMs for enterprise needs and designing efficient inference pipelines for system integration. Expertise in Python programming, developing agents/tools in an AI agentic framework, and building data pipelines for structured and unstructured data is essential. Additionally, experience in leveraging technologies like RAG (Retrieval Augmented Generation), vector databases, and other tools to enhance AI models is crucial.

Your ability to quickly learn and adapt to the changing technology landscape, combined with past experience in the .NET Core ecosystem and front-end development featuring Angular/React/JavaScript/HTML/CSS, will be beneficial. Hands-on experience managing full-stack web applications built upon Graph/RESTful APIs and microservice-oriented architectures, as well as familiarity with large-scale data ecosystems, is also required. Additionally, you should be skilled in platform telemetry capture, ingestion, and intelligence derivation, with a track record of effectively mentoring peers and maintaining exceptional attention to detail throughout the SDLC. Excellent verbal and written communication abilities, as well as outstanding presentation and public speaking talents, are necessary to excel in this role.

Please note: Beware of recruitment scams.

Posted 6 days ago

Apply

0.0 - 1.0 years

1 - 3 Lacs

Bengaluru

Work from Office

Role & responsibilities:
Experience: 0-1 year
Location: Bengaluru
Mode of work: Work from office
Mode of interview: Face to face

We are looking for a highly motivated and passionate AI/ML Engineer with a strong foundation in machine learning and hands-on project experience in Generative AI. As an early-career team member, you will work on cutting-edge technologies involving Large Language Models (LLMs), Agentic AI, and toolkits like LangChain, LangGraph, and PhiData. This role is ideal for someone who has recently completed academic or personal projects and is eager to apply their knowledge to real-world use cases.

Key Responsibilities: Build, experiment with, and fine-tune Generative AI models using popular LLMs (e.g., OpenAI, LLaMA, Mistral, etc.). Develop agent-based workflows using tools such as LangChain, LangGraph, and PhiData. Create interactive and dynamic AI pipelines for tasks like summarization, Q&A, sentiment analysis, and more. Collaborate with the AI/ML team to design and deploy prototypes and PoCs. Write clean, modular Python code and maintain version control via Git/GitHub. Document projects clearly and effectively for internal and external stakeholders.

Required Skills: Solid understanding of machine learning, deep learning fundamentals, and model evaluation. Experience with Generative AI projects (transformers, text generation, prompt engineering, etc.). Familiarity with frameworks like LangChain, LangGraph, and PhiData. Proficiency in Python, along with common ML libraries (e.g., PyTorch, Hugging Face Transformers, scikit-learn). Good understanding of LLM APIs (OpenAI, Anthropic, Cohere, etc.). Strong problem-solving ability and communication skills. An active GitHub profile showcasing relevant projects is mandatory.

Good to Have: Knowledge of REST APIs and basic backend frameworks like Flask or FastAPI. Familiarity with vector databases (e.g., FAISS, Pinecone, Weaviate). Basic understanding of RAG (Retrieval-Augmented Generation) pipelines. Knowledge of prompt chaining and tool-calling via LLM agents. Knowledge of web application development.
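For the summarization pipelines mentioned above, a common starter pattern is a simple map-reduce summarizer. The summarize_chunk stub below stands in for an LLM call and the chunk size is an arbitrary assumption; neither is specified in the posting.

def summarize_chunk(chunk: str) -> str:
    # Hypothetical LLM call; here we just keep the first sentence as a stand-in.
    return chunk.split(".")[0].strip() + "."

def map_reduce_summarize(document: str, chunk_chars: int = 500) -> str:
    # Map: summarize each chunk independently.
    chunks = [document[i:i + chunk_chars] for i in range(0, len(document), chunk_chars)]
    partial = [summarize_chunk(c) for c in chunks if c.strip()]
    # Reduce: summarize the concatenated partial summaries into one result.
    return summarize_chunk(" ".join(partial))

doc = ("LangGraph lets you express agent workflows as graphs. "
       "Each node updates a shared state. ") * 40
print(map_reduce_summarize(doc))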

Posted 1 week ago

Apply

8.0 - 10.0 years

0 Lacs

Noida, Uttar Pradesh, India

Remote

Senior Manager - Senior Data Scientist (NLP & Generative AI)
Location: PAN India / Remote
Employment Type: Full-time

About the Role: We are seeking a highly experienced Senior Data Scientist with 8+ years of expertise in machine learning, focusing on NLP, Generative AI, and advanced LLM ecosystems. This role demands leadership in designing and deploying scalable AI systems leveraging the latest advancements such as Google ADK, Agent Engine, and the Gemini LLM. You will spearhead building real-time inference pipelines and agentic AI solutions that power complex, multi-user applications with cutting-edge technology.

Key Responsibilities: Lead the architecture, development, and deployment of scalable machine learning and AI systems centered on real-time LLM inference for concurrent users. Design, implement, and manage agentic AI frameworks leveraging Google ADK, LangGraph, or custom-built agents. Integrate foundation models (GPT, LLaMA, Claude, Gemini) and fine-tune them for domain-specific intelligent applications. Build robust MLOps pipelines for end-to-end lifecycle management of models: training, testing, deployment, and monitoring. Collaborate with DevOps teams to deploy scalable serving infrastructures using containerization (Docker), orchestration (Kubernetes), and cloud platforms. Drive innovation by adopting new AI capabilities and tools, such as Google Gemini, to enhance AI model performance and interaction quality. Partner cross-functionally to understand traffic patterns and design AI systems that handle real-world scale and complexity.

Required Skills & Qualifications: Bachelor's or Master's degree in Computer Science, AI, Machine Learning, or related fields. 7+ years in ML engineering, applied AI, or senior data scientist roles. Strong programming expertise in Python and frameworks including PyTorch, TensorFlow, and Hugging Face Transformers. Deep experience with NLP, Transformer models, and generative AI techniques. Practical knowledge of LLM inference scaling with tools like vLLM, Groq, Triton Inference Server, and Google ADK. Hands-on experience deploying AI models to concurrent users with high throughput and low latency. Skilled in cloud environments (AWS, GCP, Azure) and container orchestration (Docker, Kubernetes). Familiarity with vector databases (FAISS, Pinecone, Weaviate) and retrieval-augmented generation (RAG). Experience with agentic AI using ADK, LangChain, LangGraph, and Agent Engine.

Preferred Qualifications: Experience with Google Gemini and other advanced LLM innovations. Contributions to open-source AI/ML projects or participation in applied AI research. Knowledge of hardware acceleration and GPU/TPU-based inference optimization. Exposure to event-driven architectures or streaming pipelines (Kafka, Redis).

Posted 1 week ago

Apply

5.0 - 7.0 years

0 Lacs

Noida, Uttar Pradesh, India

Remote

Lead Assistant Manager - Data Scientist (NLP & Generative AI)
Location: PAN India / Remote
Employment Type: Full-time

About the Role: We are looking for a motivated Data Scientist with 5+ years of experience in machine learning and data science, focusing on NLP and Generative AI. You will contribute to the design, development, and deployment of AI solutions centered on Large Language Models (LLMs) and agentic AI technologies, including Google ADK, Agent Engine, and Gemini. This role involves working closely with senior leadership to build scalable, real-time inference systems and intelligent applications.

Key Responsibilities: Lead the architecture, development, and deployment of scalable machine learning and AI systems centered on real-time LLM inference for concurrent users. Design, implement, and manage agentic AI frameworks leveraging Google ADK, LangGraph, or custom-built agents. Integrate foundation models (GPT, LLaMA, Claude, Gemini) and fine-tune them for domain-specific intelligent applications. Build robust MLOps pipelines for end-to-end lifecycle management of models: training, testing, deployment, and monitoring. Collaborate with DevOps teams to deploy scalable serving infrastructures using containerization (Docker), orchestration (Kubernetes), and cloud platforms. Drive innovation by adopting new AI capabilities and tools, such as Google Gemini, to enhance AI model performance and interaction quality. Partner cross-functionally to understand traffic patterns and design AI systems that handle real-world scale and complexity.

Required Skills & Qualifications: Bachelor's or Master's degree in Computer Science, AI, Machine Learning, or related fields. 5+ years in ML engineering, applied AI, or data scientist roles. Strong programming expertise in Python and frameworks including PyTorch, TensorFlow, and Hugging Face Transformers. Deep experience with NLP, Transformer models, and generative AI techniques. Hands-on experience deploying AI models to concurrent users with high throughput and low latency. Skilled in cloud environments (AWS, GCP, Azure) and container orchestration (Docker, Kubernetes). Familiarity with vector databases (FAISS, Pinecone, Weaviate) and retrieval-augmented generation (RAG). Experience with agentic AI using ADK, LangChain, LangGraph, and Agent Engine.

Preferred Qualifications: Experience with Google Gemini and other advanced LLM innovations. Contributions to open-source AI/ML projects or participation in applied AI research. Knowledge of hardware acceleration and GPU/TPU-based inference optimization.

Posted 1 week ago

Apply

7.0 - 10.0 years

10 - 15 Lacs

Faridabad

Work from Office

Position:
Experience: 3 to 5 Years
Location: Mohan Corporate Office (Work from Office Only)
Job Type: Full-Time
Salary: To be discussed during the interview

Key Responsibilities:
- Design, develop, and deploy AI/ML models for real-world applications.
- Work with NLP, deep learning, and traditional ML algorithms to solve complex business problems.
- Develop end-to-end ML pipelines, including data preprocessing, feature engineering, model training, and deployment.
- Optimize model performance using hyperparameter tuning and model evaluation techniques.
- Implement AI-driven solutions using TensorFlow, PyTorch, Scikit-learn, OpenAI APIs, Hugging Face, and similar frameworks.
- Work with structured and unstructured data, performing data wrangling, transformation, and feature extraction.
- Deploy models in cloud environments (AWS, Azure, or GCP) using SageMaker, Vertex AI, or Azure ML.
- Collaborate with cross-functional teams to integrate AI models into production systems.
- Ensure scalability, performance, and efficiency of AI/ML solutions.
- Stay updated with emerging AI trends and technologies to drive innovation.

Required Skills:
- Strong experience in machine learning, deep learning, NLP, and AI model development.
- Implement Retrieval-Augmented Generation (RAG) using vector databases.
- Proficiency in Python, TensorFlow, PyTorch, Scikit-learn, and OpenAI GPT models.
- Expertise in NLP techniques (Word2Vec, BERT, transformers, LLMs, text classification).
- Hands-on experience with computer vision (CNNs, OpenCV, YOLO, custom object detection models).
- Solid understanding of ML model deployment and MLOps (Docker, Kubernetes, CI/CD for ML models).
- Experience in working with cloud platforms (AWS, Azure, GCP) for AI/ML model deployment.
- Strong knowledge of SQL, NoSQL databases, and big data processing tools (PySpark, Databricks, Hadoop, Kafka, etc.).
- Familiarity with API development using Django, Flask, or FastAPI for AI solutions.
- Strong problem-solving, analytical, and communication skills.

Preferred Skills:
- Experience with AI-powered chatbots and OpenAI API integration.
- Exposure to LLMs (GPT, LLaMA, Falcon, etc.) for real-world applications.
- Hands-on experience in generative AI models.

Posted 1 week ago

Apply