Jobs
Interviews

2959 Openai Jobs - Page 20

Set up a Job Alert
JobPe aggregates results for easy application access, but you actually apply on the job portal directly.

8.0 years

0 Lacs

Kochi, Kerala, India

On-site

Job Title: AI Lead – Generative AI & ML Systems

Key Responsibilities
· Generative AI Development: Design and implement LLM-powered solutions and generative AI models for use cases such as predictive analytics, automation workflows, anomaly detection, and intelligent systems.
· RAG & LLM Applications: Build and deploy Retrieval-Augmented Generation (RAG) pipelines, structured generation systems, and chat-based assistants tailored to business operations.
· Full AI Lifecycle Management: Lead the complete AI lifecycle, from data ingestion and preprocessing to model design, training, testing, deployment, and continuous monitoring.
· Optimization & Scalability: Develop high-performance AI/LLM inference pipelines, applying techniques such as quantization, pruning, batching, and model distillation to support real-time and memory-constrained environments.
· MLOps & CI/CD Automation: Automate training and deployment workflows using Terraform, GitLab CI, GitHub Actions, or Jenkins, integrating model versioning, drift detection, and compliance monitoring.
· Cloud & Deployment: Deploy and manage AI solutions on AWS, Azure, or GCP with containerization tools such as Docker and Kubernetes.
· AI Governance & Compliance: Ensure model/data governance and adherence to regulatory and ethical standards in production AI deployments.
· Stakeholder Collaboration: Work cross-functionally with product managers, data scientists, and engineering teams to align AI outputs with real-world business goals.

Required Skills & Qualifications
· Bachelor's degree (B.Tech or higher) in Computer Science, IT, or a related field.
· 8-12 years of overall experience in Artificial Intelligence (AI), Machine Learning (ML), and Deep Learning (DL) solution development.
· Minimum 2+ years of hands-on experience in Generative AI and LLM-based solutions, including prompt engineering, fine-tuning, and Retrieval-Augmented Generation (RAG) pipelines with full CI/CD integration, monitoring, and observability, delivered as an independent contributor.
· Proven expertise in both open-source and proprietary Large Language Models (LLMs), including LLaMA, Mistral, Qwen, GPT, Claude, and BERT.
· Expertise in C/C++ and Python programming with relevant ML/DL libraries, including TensorFlow, PyTorch, and Hugging Face Transformers.
· Experience deploying scalable AI systems in containerized environments using Docker and Kubernetes.
· Deep understanding of the MLOps/LLMOps lifecycle, including model versioning, deployment automation, performance monitoring, and drift detection.
· Familiarity with CI/CD pipelines (GitHub Actions, GitLab CI, Jenkins) and DevOps for ML workflows.
· Working knowledge of Infrastructure-as-Code (IaC) tools such as Terraform for cloud resource provisioning and reproducible ML pipelines.
· Hands-on experience with cloud platforms (AWS, GCP, Azure) and container orchestration (Docker, Kubernetes).
· Experience designing and documenting High-Level Design (HLD) and Low-Level Design (LLD) for ML/GenAI systems, covering data pipelines, model serving, vector search, and observability layers, with component diagrams, network architecture, CI/CD workflows, and tabulated system designs.
· Experience provisioning and managing ML infrastructure using Terraform, including compute clusters, vector databases, and LLM inference endpoints across AWS, GCP, and Azure.
· Experience beyond notebooks: shipped models with logging, tracing, rollback mechanisms, and cost-control strategies, with hands-on ownership of production-grade LLM workflows rather than experimentation alone.
Preferred Qualifications (Good to Have)
· Experience with LangChain, LlamaIndex, AutoGen, CrewAI, OpenAI APIs, or building modular LLM agent workflows.
· Exposure to multi-agent orchestration, tool-augmented reasoning, or autonomous AI agents and agentic communication patterns with orchestration.
· Experience deploying ML/GenAI systems in regulated environments, with established governance, compliance, and Responsible AI frameworks.
· Familiarity with AWS data and machine learning services, including Amazon SageMaker, AWS Bedrock, ECS/EKS, and AWS Glue, for building scalable, secure data pipelines and deploying end-to-end AI/ML workflows.
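The RAG pipelines this role centers on pair a retriever with prompt assembly before the LLM call. A minimal sketch, with a toy term-overlap scorer standing in for a real vector store; the corpus, scoring rule, and prompt template are illustrative assumptions, and the actual LLM API call is omitted:

```python
# Minimal RAG sketch: retrieve the most relevant documents for a query,
# then pack them into a grounded prompt. The term-overlap scorer is a toy
# stand-in for embedding similarity against a real vector store.

def score(query: str, doc: str) -> int:
    """Count query terms that appear in the document (toy relevance score)."""
    terms = set(query.lower().split())
    return sum(1 for t in terms if t in doc.lower())

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Return the k highest-scoring documents for the query."""
    return sorted(corpus, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query: str, corpus: list[str]) -> str:
    """Assemble retrieved context and the user query into one LLM prompt."""
    context = "\n".join(retrieve(query, corpus))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

corpus = [
    "Invoices are processed within 3 business days.",
    "The cafeteria opens at 8am.",
    "Refunds for invoices require manager approval.",
]
prompt = build_prompt("How are invoices processed?", corpus)
```

In a production pipeline the retriever would query a vector database (e.g., FAISS or Pinecone, as listed elsewhere on this page) and the prompt would be sent to an LLM endpoint with monitoring around the call.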

Posted 1 week ago

Apply

7.0 - 12.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Job Title: Full Stack Developer

Overview
Are you passionate about crafting intuitive, high-performance, end-to-end intelligent products (full stack) with impactful user interfaces that deliver a unique experience? Do you thrive in translating complex functionality into elegant, pixel-perfect design systems? At the Automation & AI Center of Excellence (CoE), we're reimagining enterprise platforms with a user-first mindset, placing intuitive products at the heart of our technology. Our team develops full-stack products with modern, responsive web interfaces that bring automation and AI capabilities to life. Join us as we lead the evolution of digital experiences through seamless UI, thoughtful design, and scalable front-end engineering.

About the Role
We are looking for a dedicated Full Stack Developer with 7 to 12 years of experience and deep expertise in both front-end and back-end development to design and deliver modern user interfaces and robust server-side solutions for enterprise-grade applications. This role focuses on crafting immersive UIs using frameworks like React.js/Angular and TypeScript, while architecting scalable and secure backend systems using technologies such as Node.js and Python. You will collaborate with cross-functional teams to integrate functionality through APIs and cloud services, ensuring seamless interaction between front-end and back-end components. Additionally, you will be responsible for managing and optimizing database solutions, leveraging technologies like SQL, NoSQL, and cloud-based databases to ensure efficient data storage and retrieval. You'll work across a range of projects, from dashboards and real-time interaction layers to workflows powered by automation and Generative AI. While deep AI expertise is not required, a willingness to explore and integrate with AI-driven services is highly encouraged.
This role offers the opportunity to influence how thousands of users experience our tools, combining visual design, performance engineering, scalable architecture, and efficient data management into one impactful craft.

Key Responsibilities

Frontend Development
· Develop responsive and performant UI using React.js (preferred)/Angular, TypeScript, HTML5, CSS3, and Bootstrap.
· Ensure seamless user interactions with accessibility, cross-browser compatibility, and responsive layouts.
· Utilize advanced CSS frameworks or preprocessors like Sass or Less for enhanced styling capabilities.
· Implement state management libraries such as Redux or MobX to maintain efficient UI architecture.

Backend & API Integration
· Integrate the frontend with Python-based APIs (e.g., FastAPI, Flask) or services built in Node.js/Django.
· Consume and manage RESTful APIs, handling asynchronous/synchronous data flow effectively.
· Support building lightweight backend logic for UI-specific use cases when required.
· Explore the use of GraphQL for more flexible data querying and integration.

Cloud, DevOps & Deployment
· Host and deploy full-stack applications using Azure App Service, Static Web Apps, or Docker/Kubernetes.
· Work with CI/CD tools such as GitHub Actions, Azure DevOps/Jenkins, or equivalent to streamline delivery pipelines.
· Apply DevOps experience spanning container technologies like Docker and Kubernetes, as well as the cloud-native stack, such as Argo, Helm, etcd, and Envoy.
· Monitor app performance and debug issues across environments.
· Implement infrastructure as code using tools like Terraform or AWS CloudFormation for automated deployments.

Enterprise-Ready Development of Full-Stack Applications
· Architect and manage Virtual Private Clouds (VPCs) to establish secure, scalable, and efficient network environments.
· Design and configure load balancers and application gateways to ensure optimal traffic distribution and high availability across server clusters.
· Ensure robust network security through strategic firewall configurations, network security groups, and VPN implementations to safeguard data and maintain compliance.

Agile Collaboration
· Participate in Agile ceremonies and contribute to sprint-based development cycles.
· Collaborate across teams, communicate technical insights, and maintain clean documentation.
· Write reusable frontend components and enforce best practices in UI architecture and state management.

Required Technical Skills
· Strong experience with React.js (preferred)/Angular, JavaScript/TypeScript, HTML/CSS, and Bootstrap.
· Hands-on experience with Python APIs (FastAPI/Flask) and REST architecture.
· Strong hands-on experience with backend ecosystems like Node.js (preferred)/Django.
· Experience with Azure hosting/Jenkins, Git, CI/CD pipelines, and container tools like Docker/Kubernetes.
· Solid grasp of responsive design, cross-browser optimization, and modular code structures.
· Experience with SQL/NoSQL databases for frontend data access and visualization.
· Understanding of security best practices and tools (e.g., OWASP, JWT) for secure application development.
· Familiarity with advanced CSS frameworks or preprocessors like Sass or Less.
· Proficiency in state management libraries such as Redux or MobX.
· Knowledge of GraphQL for flexible data querying and integration.
· Experience with infrastructure-as-code tools like Terraform or AWS CloudFormation.
· Ability to architect and manage Virtual Private Clouds (VPCs).
· Skills in designing and configuring load balancers and application gateways.
· Proficiency in implementing robust network security through firewalls, network security groups, and VPNs.

Bonus Skills: Generative AI Knowledge
· Experience integrating AI-driven services into web applications, particularly using Azure OpenAI Service.
· Familiarity with Generative AI models such as GPT for enhancing user interfaces.
· Ability to leverage AI tools like Azure Cognitive Services to automate workflows and improve data interaction layers.
· Good-to-have experience with frameworks such as TensorFlow or PyTorch for developing and deploying AI models.
· Understanding of AI ethics and best practices in application development, ensuring responsible use of AI technologies.

Soft Skills
· Problem Solver: Capable of analyzing UI performance or integration issues and resolving them with precision.
· Adaptable & Curious: Eager to explore automation and Generative AI technologies alongside core UI responsibilities.
· Agile-Minded: Comfortable working in Scrum-based teams and iterative development lifecycles.
· Effective Communicator: Articulates technical topics clearly to stakeholders and collaborates fluidly with designers, developers, and analysts.
· Accountable & Quality-Driven: Takes ownership of components from development through deployment with attention to scalability and maintainability.
· Basic understanding of or experience with AI tools.

Allianz Group is one of the most trusted insurance and asset management companies in the world. Caring for our employees, their ambitions, dreams and challenges, is what makes us a unique employer. Together we can build an environment where everyone feels empowered and has the confidence to explore, to grow and to shape a better future for our customers and the world around us. We at Allianz believe in a diverse and inclusive workforce and are proud to be an equal opportunity employer. We encourage you to bring your whole self to work, no matter where you are from, what you look like, who you love or what you believe in. We therefore welcome applications regardless of ethnicity or cultural background, age, gender, nationality, religion, disability or sexual orientation. Join us. Let's care for tomorrow.
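The front-end/back-end integration this role describes ultimately means the UI consuming a JSON API over REST. A minimal sketch using only the Python standard library in place of FastAPI or Flask; the /api/status route and its payload are illustrative assumptions, not an actual CoE endpoint:

```python
# Minimal JSON API sketch: a stdlib HTTP server exposes one REST route,
# and a client fetches and parses it, the way a React/Angular front end
# would via fetch(). Stands in for a FastAPI/Flask service.
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

class ApiHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/api/status":
            body = json.dumps({"service": "automation-coe", "healthy": True})
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body.encode())
        else:
            self.send_error(404)

    def log_message(self, *args):
        pass  # silence per-request logging for this demo

# Bind port 0 so the OS picks a free port, then serve in the background.
server = HTTPServer(("127.0.0.1", 0), ApiHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Client side: fetch and decode the JSON payload.
with urlopen(f"http://127.0.0.1:{server.server_port}/api/status") as resp:
    payload = json.loads(resp.read())

server.shutdown()
```

A real deployment would put this behind the load balancers and application gateways the posting mentions, with authentication (e.g., JWT) on the route.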

Posted 1 week ago

Apply

0.0 - 2.0 years

0 Lacs

Vijay Nagar, Indore, Madhya Pradesh

On-site

Hiring for AI Engineer - Python Developer

Job Description:
We are seeking a talented Python Developer with hands-on experience in AI chatbot development and familiarity with Model Context Protocol (MCP) to join our AI team. You will be responsible for developing intelligent, context-aware conversational systems that integrate seamlessly with our internal knowledge base and enterprise services. The ideal candidate is technically proficient, proactive, and capable of translating complex AI interactions into scalable backend solutions.

Key Responsibilities
1. Design and develop robust AI chatbots using Python and integrate them with LLM APIs (e.g., OpenAI, Google AI, etc.).
2. Implement and manage Model Context Protocol (MCP) to optimize context injection, session management, and model-aware interactions.
3. Build and maintain secure pipelines for knowledge base access that allow the chatbot to accurately respond to internal queries.
4. Work with internal teams to define and evolve the contextual metadata strategy (roles, user state, query history, etc.).
5. Contribute to internal tooling and framework development for contextual AI applications.

Required Skills & Experience
1. 3+ years of professional Python development experience.
2. Proven track record in AI chatbot development, particularly using LLMs.
3. Understanding of Model Context Protocol (MCP) and its role in enhancing AI interaction fidelity and relevance.
4. Strong experience integrating with AI APIs (e.g., OpenAI, Azure OpenAI).
5. Familiarity with Retrieval-Augmented Generation (RAG) pipelines and vector-based search (e.g., Pinecone, Weaviate, FAISS).
6. Experience designing systems that ingest and structure unstructured knowledge (e.g., PDF, Confluence, Google Drive docs).
7. Comfortable working with RESTful APIs, event-driven architectures, and context-aware services.
8. Good understanding of data handling, privacy, and security standards related to enterprise AI use.
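The context injection described here (roles, user state, query history) can be sketched as packing session metadata into a structured object before each model call. The field names and history cap below are illustrative assumptions for this posting, not the actual Model Context Protocol schema:

```python
# Sketch of context injection for one chatbot turn: session history and
# user metadata are assembled into a model-ready context object. In a real
# system this object would accompany the LLM API request.

def build_context(session: dict, query: str, max_history: int = 3) -> dict:
    """Assemble a context object for the current turn, capping injected
    history so the prompt stays within the model's context budget."""
    return {
        "role": session["role"],                       # e.g. "employee"
        "history": session["history"][-max_history:],  # most recent turns only
        "query": query,
    }

session = {
    "role": "employee",
    "history": ["Hi", "What is the leave policy?", "Thanks", "And sick leave?"],
}
ctx = build_context(session, "How many days remain?")
```

Capping history per turn is one simple session-management choice; production systems typically also summarize older turns rather than dropping them outright.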
Job Location: Indore
Joining: Immediate
Share your resume at talent@jstechalliance.com or contact: 0731-3122400 | WhatsApp: 8224006397
Job Type: Full-time
Application Questions:
- Are you an immediate joiner?
- Have you completed your Bachelor's/Master's degree?
Experience:
- Python: 3 years (required)
- Model Context Protocol (MCP): 3 years (required)
- LLM APIs: 3 years (required)
- Artificial Intelligence: 2 years (required)
Location: Indore, Madhya Pradesh (required)
Work Location: In person

Posted 1 week ago

Apply

25.0 years

0 Lacs

Tondiarpet, Tamil Nadu, India

On-site

The Company PayPal has been revolutionizing commerce globally for more than 25 years. Creating innovative experiences that make moving money, selling, and shopping simple, personalized, and secure, PayPal empowers consumers and businesses in approximately 200 markets to join and thrive in the global economy. We operate a global, two-sided network at scale that connects hundreds of millions of merchants and consumers. We help merchants and consumers connect, transact, and complete payments, whether they are online or in person. PayPal is more than a connection to third-party payment networks. We provide proprietary payment solutions accepted by merchants that enable the completion of payments on our platform on behalf of our customers. We offer our customers the flexibility to use their accounts to purchase and receive payments for goods and services, as well as the ability to transfer and withdraw funds. We enable consumers to exchange funds more safely with merchants using a variety of funding sources, which may include a bank account, a PayPal or Venmo account balance, PayPal and Venmo branded credit products, a credit card, a debit card, certain cryptocurrencies, or other stored value products such as gift cards, and eligible credit card rewards. Our PayPal, Venmo, and Xoom products also make it safer and simpler for friends and family to transfer funds to each other. We offer merchants an end-to-end payments solution that provides authorization and settlement capabilities, as well as instant access to funds and payouts. We also help merchants connect with their customers, process exchanges and returns, and manage risk. We enable consumers to engage in cross-border shopping and merchants to extend their global reach while reducing the complexity and friction involved in enabling cross-border trade. Our beliefs are the foundation for how we conduct business every day. 
We live each day guided by our core values of Inclusion, Innovation, Collaboration, and Wellness. Together, our values ensure that we work together as one global team with our customers at the center of everything we do – and they push us to ensure we take care of ourselves, each other, and our communities. Job Description Summary: What you need to know about the role- Data scientists are highly motivated team players with strong analytical skills who specialize in creating, driving and executing initiatives to mitigate fraud on PayPal’s platform and improve the experience for PayPal’s hundreds of millions of customers, while guaranteeing compliance with regulations. Meet our team Data scientists in the Fraud Risk team are problem solvers suited to approach varied challenges in complex big data environments. Our core goals are to enable seamless and delightful experiences to our customers, while preventing threat actors from accessing customers’ financial instruments and personal information. As part of our day-to-day job, we are collaborating with a wide variety of partners: product owners, data scientists, security experts, legal consults, and engineers, to bring our data science insights to life, impacting the experience and security of millions of customers around the globe. Job Description: Your way to impact Data scientists deeply understand PayPal’s business objectives, as their impact on PayPal’s top and bottom lines is immense. As a data scientist, you will develop key AIML capabilities, tools, and insights with the aim of adapting PayPal’s advanced proprietary fraud prevention and experience mechanisms and enabling growth. 
Your day to day
Day-to-day duties include data analysis, monitoring and forecasting, creating the logic for and implementing risk rules and strategies, providing requirements to data scientists and technology teams on attribute, model, and platform needs, and communicating with global stakeholders to ensure we deliver the best possible customer experience while meeting loss-rate targets.

What do you need to bring?
· Strong proficiency in Python for data analysis, machine learning, and automation.
· Solid understanding of supervised and unsupervised AI/machine learning methods (e.g., XGBoost, LightGBM, Random Forest, clustering, isolation forests, autoencoders, neural networks, transformer-based architectures).
· Experience in payment fraud, AML, KYC, or broader risk modeling within fintech or financial institutions.
· Experience developing and deploying ML models in production using frameworks such as scikit-learn, TensorFlow, PyTorch, or similar.
· Hands-on experience with LLMs (e.g., OpenAI, LLaMA, Claude, Mistral), including prompt engineering, retrieval-augmented generation (RAG), and agentic AI to support internal automation and risk workflows.
· Ability to work cross-functionally with engineering, product, compliance, and operations teams.
· Proven track record of translating complex ML insights into business actions or policy decisions.
· BS/BA degree with 5+ years of related professional experience, or a master's degree with 4+ years of related experience.

We know the confidence gap and imposter syndrome can get in the way of meeting spectacular candidates. Please don't hesitate to apply. For the majority of employees, PayPal's balanced hybrid work model offers 3 days in the office for effective in-person collaboration and 2 days at your choice of either the PayPal office or your home workspace, ensuring that you equally have the benefits and conveniences of both locations.
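As a toy illustration of the anomaly-detection side of fraud risk, outlier transaction amounts can be flagged with a simple z-score rule; production systems use the supervised models and isolation forests listed above. The data and the 2-sigma threshold are illustrative assumptions only:

```python
# Toy fraud-signal sketch: flag amounts that deviate sharply from a
# customer's transaction history using a z-score style rule. A stand-in
# for real methods such as isolation forests or gradient-boosted models.
from statistics import mean, stdev

def flag_anomalies(amounts: list[float], threshold: float = 2.0) -> list[float]:
    """Return amounts more than `threshold` standard deviations from the mean."""
    mu, sigma = mean(amounts), stdev(amounts)
    return [a for a in amounts if abs(a - mu) > threshold * sigma]

# Typical small purchases plus one extreme outlier.
history = [20.0, 25.0, 22.0, 19.0, 24.0, 21.0, 23.0, 500.0]
suspicious = flag_anomalies(history)
```

One known weakness this sketch exposes: a single extreme value inflates both the mean and the standard deviation (masking), which is part of why robust statistics and tree-based detectors are preferred in practice.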
Our Benefits: At PayPal, we’re committed to building an equitable and inclusive global economy. And we can’t do this without our most important asset—you. That’s why we offer benefits to help you thrive in every stage of life. We champion your financial, physical, and mental health by offering valuable benefits and resources to help you care for the whole you. We have great benefits including a flexible work environment, employee shares options, health and life insurance and more. To learn more about our benefits please visit https://www.paypalbenefits.com Who We Are: To learn more about our culture and community visit https://about.pypl.com/who-we-are/default.aspx Commitment to Diversity and Inclusion PayPal provides equal employment opportunity (EEO) to all persons regardless of age, color, national origin, citizenship status, physical or mental disability, race, religion, creed, gender, sex, pregnancy, sexual orientation, gender identity and/or expression, genetic information, marital status, status with regard to public assistance, veteran status, or any other characteristic protected by federal, state, or local law. In addition, PayPal will provide reasonable accommodations for qualified individuals with disabilities. If you are unable to submit an application because of incompatible assistive technology or a disability, please contact us at paypalglobaltalentacquisition@paypal.com. Belonging at PayPal: Our employees are central to advancing our mission, and we strive to create an environment where everyone can do their best work with a sense of purpose and belonging. Belonging at PayPal means creating a workplace with a sense of acceptance and security where all employees feel included and valued. We are proud to have a diverse workforce reflective of the merchants, consumers, and communities that we serve, and we continue to take tangible actions to cultivate inclusivity and belonging at PayPal. 
For general requests for consideration of your skills, please join our Talent Community. We know the confidence gap and imposter syndrome can get in the way of meeting spectacular candidates. Please don't hesitate to apply. REQ ID R0127046

Posted 1 week ago

Apply

15.0 years

3 - 10 Lacs

Gurgaon

On-site

EPAM is a leading global provider of digital platform engineering and development services. We are committed to having a positive impact on our customers, our employees, and our communities. We embrace a dynamic and inclusive culture. Here you will collaborate with multi-national teams, contribute to a myriad of innovative projects that deliver the most creative and cutting-edge solutions, and have an opportunity to continuously learn and grow. No matter where you are located, you will join a dedicated, creative, and diverse community that will help you discover your fullest potential. We are seeking a highly skilled and passionate Senior Solution Architect – Generative AI to join our team and lead the design and implementation of cutting-edge AI solutions. This role is ideal for a seasoned professional with extensive experience in AI/ML development and architecture, a deep understanding of generative AI technologies, and a strategic mindset to align innovations with business requirements. 
Responsibilities
· Develop and oversee architectural designs for generative AI models, frameworks, and solutions tailored to business needs
· Design scalable pipelines for integrating and deploying generative AI solutions in alignment with enterprise architecture
· Perform in-depth research to stay current on advancements in generative AI, including GPT, DALL-E, and Stable Diffusion, to evaluate their applicability
· Collaborate with stakeholders to assess business requirements and translate them into concrete AI strategies and actionable implementation plans
· Lead the end-to-end development, testing, and deployment of generative AI systems, acting as a technical guide for teams
· Advocate for the adoption of best practices, tools, and frameworks to enhance enterprise AI capabilities
· Create APIs and services to integrate generative AI tools within business workflows or customer-facing platforms
· Work closely with data engineers and scientists to ensure high-quality data preprocessing for AI model training
· Implement responsible AI protocols to address governance, ethical usage, and regulatory compliance
· Identify and mitigate data biases, ensuring data privacy and security concerns are addressed
· Act as a mentor for junior AI developers and other cross-functional team members in understanding generative AI technologies
· Facilitate cross-disciplinary collaboration with data scientists, engineers, product managers, and business stakeholders to drive project success
· Benchmark emerging generative AI tools such as OpenAI models, Hugging Face, and custom-built frameworks for potential integration and improvements
· Conduct periodic evaluations of deployed AI systems, recommending adjustments and enhancements for improved operational efficiency

Requirements
· 15-23 years of overall IT experience with at least 5+ years of proven experience in AI/ML development or architecture roles
· Background in designing and implementing generative AI solutions, including areas like NLP, computer vision, or code generation
· Familiarity with foundational models such as GPT and BERT, and their customization for enterprise use cases
· Knowledge of AI/ML frameworks/tools such as TensorFlow, PyTorch, or Hugging Face models
· Skills in cloud platforms (AWS, Azure, Google Cloud) and container platforms like Docker or Kubernetes
· Proficiency in Python, R, or similar programming languages for building generative AI solutions
· Understanding of MLOps principles, pipeline orchestration tools (Kubeflow, Airflow), and CI/CD practices
· Expertise in responsible AI governance, ethical frameworks, and compliance with data privacy regulations
· Capability to collaborate with multidisciplinary teams to align AI strategies with organizational goals

We offer
· Opportunity to work on technical challenges that may have impact across geographies
· Vast opportunities for self-development: online university, global knowledge-sharing opportunities, learning through external certifications
· Opportunity to share your ideas on international platforms
· Sponsored Tech Talks & Hackathons
· Unlimited access to LinkedIn learning solutions
· Possibility to relocate to any EPAM office for short- and long-term projects
· Focused individual development
· Benefit package: health benefits, retirement benefits, paid time off, flexible benefits
· Forums to explore passions beyond work (CSR, photography, painting, sports, etc.)

Posted 1 week ago

Apply

4.0 years

0 Lacs

Gurgaon

On-site

Our Purpose
Mastercard powers economies and empowers people in 200+ countries and territories worldwide. Together with our customers, we're helping build a sustainable economy where everyone can prosper. We support a wide range of digital payments choices, making transactions secure, simple, smart and accessible. Our technology and innovation, partnerships and networks combine to deliver a unique set of products and services that help people, businesses and governments realize their greatest potential.

Title and Summary
Data Scientist II – Data & Analytics

Who is Mastercard?
Mastercard is a global technology company in the payments industry. Our mission is to connect and power an inclusive, digital economy that benefits everyone, everywhere by making transactions safe, simple, smart, and accessible. Using secure data and networks, partnerships, and passion, our innovations and solutions help individuals, financial institutions, governments, and businesses realize their greatest potential. Our decency quotient, or DQ, drives our culture and everything we do inside and outside of our company. We cultivate a culture of inclusion for all employees that respects their individual strengths, views, and experiences. We believe that our differences enable us to be a better team, one that makes better decisions, drives innovation, and delivers better business results.
With connections across more than 210 countries and territories, we are building a sustainable world that unlocks priceless possibilities for all.

Our Team
We are an Artificial Intelligence Centre of Excellence working on initiatives in Core and Commercial Payments. Our focus is to create value and improvements through digital intervention inspired by state-of-the-art AI and Machine Learning. As part of the team, you will play a key role in building new AI/ML models, monitoring long-term performance, and innovating through research, all while creating significant business impact. Are you excited about coding and the solutions we can build with it? Do you love doing hands-on work with opportunities to learn new tech? Do you believe that AI has huge potential to improve business processes? Do you enjoy Mathematics and Statistics? If yes, then this role is for you!

The Role
The candidate will work on numerous AI and ML initiatives spanning different use cases and stages of delivery. You will be expected to work and code hands-on, keeping up to date with the latest best practices and advances in the field of AI. You will be required to work in close collaboration with multiple internal business groups across Mastercard. You are also responsible for creating design documents, including data models, data flow diagrams, and system architecture diagrams.

All about You
· Major in Computer Science, Data Science, Analytics, Mathematics, Statistics, or a related engineering field, or equivalent work experience
· 4+ years of experience using Python, with knowledge of client-server architecture
· 2+ years of experience building, deploying, and maintaining ML models
· 1+ years of experience working on Gen AI projects, including knowledge of modern frameworks like LangChain, LangGraph, and OpenAI Chat Completion APIs
· Demonstrated success interacting with stakeholders to understand technical needs and ensuring analyses and solutions meet their needs effectively
· Able to work in a fast-paced, deadline-driven environment as part of a team and as an individual contributor
· Ability to move easily between business, analytical, and technical teams and articulate solution requirements for each group
· Experience with an Enterprise Business Intelligence/Data platform (e.g., Tableau, Power BI, Streamlit) is a plus
· Experience with cloud-based (SaaS) solutions, ETL processes, or API integrations is a plus
· Experience with cloud data platforms (Azure/AWS/Databricks) is a plus

Additional Competencies
· Excellent English, quantitative, technical, and communication (oral/written) skills
· Analytical/problem-solving skills
· Strong attention to detail and quality
· Creativity/innovation
· Self-motivated; operates with a sense of urgency
· Project management/risk mitigation
· Able to prioritize and perform multiple tasks simultaneously

Corporate Security Responsibility
All activities involving access to Mastercard assets, information, and networks come with an inherent risk to the organization; therefore, every person working for, or on behalf of, Mastercard is responsible for information security and must:
· Abide by Mastercard's security policies and practices;
· Ensure the confidentiality and integrity of the information being accessed;
· Report any suspected information security violation or breach; and
· Complete all periodic mandatory security trainings in accordance with Mastercard's guidelines.

Posted 1 week ago

Apply

5.0 years

6 - 7 Lacs

Hyderābād

On-site

Our Company: At Teradata, we're not just managing data; we're unleashing its full potential. Our ClearScape Analytics™ platform and pioneering Enterprise Vector Store are empowering the world's largest enterprises to derive unprecedented value from their most complex data. We're rapidly pushing the boundaries of what's possible with Artificial Intelligence, especially in the exciting realm of autonomous and agentic systems.

We're building intelligent systems that go far beyond automation: they observe, reason, adapt, and drive complex decision-making across large-scale enterprise environments. As a member of our AI engineering team, you'll play a critical role in designing and deploying advanced AI agents that integrate deeply with business operations, turning data into insight, action, and measurable outcomes. You'll work alongside a high-caliber team of AI researchers, engineers, and data scientists tackling some of the hardest problems in AI and enterprise software, from scalable multi-agent coordination and fine-tuned LLM applications to real-time monitoring, drift detection, and closed-loop retraining systems. If you're passionate about building intelligent systems that are not only powerful but observable, resilient, and production-ready, this role offers the opportunity to shape the future of enterprise AI from the ground up.

We are seeking a highly skilled Senior AI Engineer to drive the development and deployment of agentic AI systems with a strong emphasis on AI observability and data platform integration. You will work at the forefront of cutting-edge AI research and its practical application: designing, implementing, and monitoring intelligent agents capable of autonomous reasoning, decision-making, and continuous learning. Ignite the Future of AI at Teradata!
What You'll Do: Shape the Way the World Understands Data
As a Senior Agentic AI Engineer at Teradata, you'll build cutting-edge intelligent agents that transform how users explore data, derive insights, and automate workflows across industries such as healthcare, finance, and telecommunications. You will:
Design and implement autonomous AI agents for semantic search, text-to-SQL translation, and analytical task execution.
Develop modular prompts, reasoning chains, and decision graphs tailored to complex enterprise use cases.
Enhance agent performance through experimentation with LLMs, prompt tuning, and advanced reasoning workflows.
Integrate agents with Teradata's Model Context Protocol (MCP) to enable seamless interaction with model development pipelines.
Build tools that allow agents to monitor training jobs, evaluate models, and interact with unstructured and structured data sources.
Work on retrieval-augmented generation (RAG) pipelines and extend agents to downstream ML systems.

Who You'll Work With: Join Forces with the Best
You'll collaborate with a world-class team of AI architects, ML engineers, and domain experts in Silicon Valley, working together to build the next generation of enterprise AI systems. You'll also work cross-functionally with:
Product managers and UX designers to craft agentic workflows that are intuitive and impactful.
Domain specialists to ensure solutions align with real-world business problems in regulated industries.
Infrastructure and platform teams responsible for training, evaluation, and scaling AI workloads.
This is a rare opportunity to shape foundational AI capabilities within a global, data-driven company, in a deeply collaborative environment where technical innovation meets real-world application and your ideas are not only heard but implemented to shape the next generation of data interaction.
What Makes You a Qualified Candidate: Skills in Action
5+ years of product engineering experience in AI/ML, with strong software development fundamentals.
Proficiency with LLM APIs (e.g., OpenAI, Claude, Gemini) and agent frameworks such as AutoGen, LangGraph, AgentBuilder, or CrewAI.
Experience designing multi-step reasoning chains, prompt pipelines, or intelligent workflows.
Familiarity with agent evaluation metrics: correctness, latency, failure modes.
Passion for building production-grade systems that bring AI to life.

What You Bring: Passion and Potential
Master's or Ph.D. in Computer Science, AI, or a related field, or equivalent industry experience.
Experience working with multimodal inputs, retrieval systems, or structured knowledge sources.
Deep understanding of enterprise data workflows and scalable AI architectures.
Prior exposure to MCP or similar orchestration/protocol systems.
#LI-VB1
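The retrieval-augmented generation (RAG) pipelines this role mentions reduce, in miniature, to retrieve-then-prompt. Below is a deliberately toy sketch using a bag-of-words "embedding" and cosine similarity; real systems (including Teradata's Enterprise Vector Store) use dense vector embeddings, and every document and function name here is invented for illustration:

```python
import math
import re
from collections import Counter

def embed(text: str) -> Counter:
    # Toy "embedding": lowercase bag-of-words term counts.
    # Production RAG would call a dense embedding model instead.
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    # Rank documents by similarity to the query and keep the top k.
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    # Ground the LLM call by pasting retrieved context into the prompt.
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Quarterly revenue grew 12 percent in APAC.",
    "The office cafeteria menu changes on Mondays.",
    "Churn in the APAC region fell after the loyalty launch.",
]
print(build_prompt("What happened to APAC revenue?", docs))
```

The resulting prompt string is what would be sent to an LLM API; the retrieval step is what distinguishes RAG from plain prompting.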

Posted 1 week ago

Apply

8.0 years

6 - 7 Lacs

Hyderābād

On-site

Our Company: Ignite the Future of AI at Teradata!
At Teradata, we're not just managing data; we're unleashing its full potential. Our ClearScape Analytics™ platform and pioneering Enterprise Vector Store are empowering the world's largest enterprises to derive unprecedented value from their most complex data. We're rapidly pushing the boundaries of what's possible with Artificial Intelligence, especially in the exciting realm of autonomous and agentic systems.

We're building intelligent systems that go far beyond automation: they observe, reason, adapt, and drive complex decision-making across large-scale enterprise environments. As a member of our AI engineering team, you'll play a critical role in designing and deploying advanced AI agents that integrate deeply with business operations, turning data into insight, action, and measurable outcomes.

In this role, you'll architect foundational components for production-grade AI systems, from agent frameworks and LLM pipelines to observability and evaluation layers that ensure reliability, accountability, and performance. You'll be responsible not just for building models, but for making them measurable, debuggable, and trustworthy in real-world, high-stakes deployments. You'll work alongside a high-caliber team of AI researchers, engineers, and data scientists tackling some of the hardest problems in AI and enterprise software, from scalable multi-agent coordination and fine-tuned LLM applications to real-time monitoring, drift detection, and closed-loop retraining systems. If you're passionate about building intelligent systems that are not only powerful but observable, resilient, and production-ready, this role offers the opportunity to shape the future of enterprise AI from the ground up.
What You'll Do: Shape the Way the World Understands Data
As a Staff Agentic AI Engineer at Teradata, you'll build cutting-edge intelligent agents that transform how users explore data, derive insights, and automate workflows across industries such as healthcare, finance, and telecommunications. You will:
Design and implement autonomous AI agents for semantic search, text-to-SQL translation, and analytical task execution.
Develop modular prompts, reasoning chains, and decision graphs tailored to complex enterprise use cases.
Enhance agent performance through experimentation with LLMs, prompt tuning, and advanced reasoning workflows.
Integrate agents with Teradata's Model Context Protocol (MCP) to enable seamless interaction with model development pipelines.
Build tools that allow agents to monitor training jobs, evaluate models, and interact with unstructured and structured data sources.
Work on retrieval-augmented generation (RAG) pipelines and extend agents to downstream ML systems.

Who You'll Work With: Join Forces with the Best
You'll collaborate with a world-class team of AI architects, ML engineers, and domain experts in Silicon Valley, working together to build the next generation of enterprise AI systems. You'll also work cross-functionally with:
Product managers and UX designers to craft agentic workflows that are intuitive and impactful.
Domain specialists to ensure solutions align with real-world business problems in regulated industries.
Infrastructure and platform teams responsible for training, evaluation, and scaling AI workloads.
This is a rare opportunity to shape foundational AI capabilities within a global, data-driven company, in a deeply collaborative environment where technical innovation meets real-world application and your ideas are not only heard but implemented to shape the next generation of data interaction.
What Makes You a Qualified Candidate: Skills in Action
8+ years of software engineering experience, with 5+ years focused on AI/ML, intelligent systems, or agent-based architectures.
Deep understanding of software design principles and scalable architecture patterns.
Strong experience with LLM APIs (e.g., OpenAI, Claude, Gemini) and agentic frameworks (e.g., AutoGen, LangGraph, AgentBuilder, CrewAI).
Proven ability to build complex multi-step workflows using prompt pipelines, tools, and adaptive reasoning.
Proficiency in Python and experience with vector databases, API integration, and orchestration tools.
Familiarity with agent evaluation metrics: correctness, latency, grounding, and tool-use accuracy.
Experience leading AI projects from inception to deployment in a production setting.

What You Bring: Passion and Potential
Master's or Ph.D. in Computer Science, AI, or a related field, or equivalent industry experience.
Experience working with multimodal inputs, retrieval systems, or structured knowledge sources.
Hands-on experience with prompt engineering, function-calling agents, RAG patterns, and evaluation harnesses.
Prior work with the Model Context Protocol (MCP) or similar orchestration frameworks is a strong plus.
Excellent cross-team communication and stakeholder engagement skills.
Passion for shipping high-quality AI products that are safe, explainable, and valuable.
#LI-VB1
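The multi-step tool-use workflows this posting asks for all share one shape: plan an action, call a tool, feed the observation back, repeat. A deliberately tiny sketch follows, in which a scripted `plan` function stands in for the LLM and every tool, table, and number is invented; frameworks such as AutoGen or LangGraph generalize exactly this loop:

```python
# Minimal agent tool-use loop. A scripted "planner" stands in for the LLM;
# all tools and data below are fabricated for illustration.

TOOLS = {
    "row_count": lambda table: {"orders": 1042, "users": 387}.get(table, 0),
    "describe": lambda table: f"{table}: fact table, refreshed nightly",
}

def plan(goal: str, observations: list) -> dict:
    # Stand-in for an LLM call: choose the next action from what we know so far.
    if not observations:
        return {"tool": "describe", "arg": "orders"}
    if len(observations) == 1:
        return {"tool": "row_count", "arg": "orders"}
    return {"finish": f"orders has {observations[-1]} rows"}

def run_agent(goal: str, max_steps: int = 5) -> str:
    observations = []
    for _ in range(max_steps):
        action = plan(goal, observations)
        if "finish" in action:
            return action["finish"]
        result = TOOLS[action["tool"]](action["arg"])
        observations.append(result)  # feed the observation back into planning
    return "step budget exhausted"

print(run_agent("How big is the orders table?"))  # → orders has 1042 rows
```

The `max_steps` budget and the observation list are the two pieces a production agent would instrument for the evaluation metrics the posting names (correctness, latency, tool-use accuracy).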

Posted 1 week ago

Apply

15.0 years

3 - 10 Lacs

Hyderābād

On-site

EPAM is a leading global provider of digital platform engineering and development services. We are committed to having a positive impact on our customers, our employees, and our communities. We embrace a dynamic and inclusive culture. Here you will collaborate with multi-national teams, contribute to a myriad of innovative projects that deliver the most creative and cutting-edge solutions, and have an opportunity to continuously learn and grow. No matter where you are located, you will join a dedicated, creative, and diverse community that will help you discover your fullest potential. We are seeking a highly skilled and passionate Senior Solution Architect – Generative AI to join our team and lead the design and implementation of cutting-edge AI solutions. This role is ideal for a seasoned professional with extensive experience in AI/ML development and architecture, a deep understanding of generative AI technologies, and a strategic mindset to align innovations with business requirements. 
Responsibilities
Develop and oversee architectural designs for generative AI models, frameworks, and solutions tailored to business needs
Design scalable pipelines for integrating and deploying generative AI solutions in alignment with enterprise architecture
Perform in-depth research to stay current on advancements in generative AI, including GPT, DALL-E, and Stable Diffusion, to evaluate their applicability
Collaborate with stakeholders to assess business requirements and translate them into concrete AI strategies and actionable implementation plans
Lead the end-to-end development, testing, and deployment of generative AI systems, acting as a technical guide for teams
Advocate for the adoption of best practices, tools, and frameworks to enhance enterprise AI capabilities
Create APIs and services to integrate generative AI tools within business workflows or customer-facing platforms
Work closely with data engineers and scientists to ensure high-quality data preprocessing for AI model training
Implement responsible AI protocols to address governance, ethical usage, and regulatory compliance
Identify and mitigate data biases, ensuring data privacy and security concerns are addressed
Act as a mentor for junior AI developers and other cross-functional team members in understanding generative AI technologies
Facilitate cross-disciplinary collaboration with data scientists, engineers, product managers, and business stakeholders to drive project success
Benchmark emerging generative AI tools such as OpenAI models, Hugging Face, and custom-built frameworks for potential integration and improvements
Conduct periodic evaluations of deployed AI systems, recommending adjustments and enhancements for improved operational efficiency

Requirements
15-23 years of overall IT experience, with at least 5+ years of proven experience in AI/ML development or architecture roles
Background in designing and implementing generative AI solutions, including areas like NLP, computer vision, or code generation
Familiarity with foundational models such as GPT and BERT and their customization for enterprise use cases
Knowledge of AI/ML frameworks/tools such as TensorFlow, PyTorch, or Hugging Face models
Skills in cloud platforms (AWS, Azure, Google Cloud) and container platforms like Docker or Kubernetes
Proficiency in Python, R, or similar programming languages for building generative AI solutions
Understanding of MLOps principles, pipeline orchestration tools (Kubeflow, Airflow), and CI/CD practices
Expertise in responsible AI governance, ethical frameworks, and compliance with data privacy regulations
Capability to collaborate with multidisciplinary teams to align AI strategies with organizational goals

We offer
Opportunity to work on technical challenges with impact across geographies
Vast opportunities for self-development: online university, global knowledge sharing, and learning opportunities through external certifications
Opportunity to share your ideas on international platforms
Sponsored Tech Talks & Hackathons
Unlimited access to LinkedIn learning solutions
Possibility to relocate to any EPAM office for short- and long-term projects
Focused individual development
Benefit package: health benefits, retirement benefits, paid time off, flexible benefits
Forums to explore passions beyond work (CSR, photography, painting, sports, etc.)

Posted 1 week ago

Apply

4.0 years

0 Lacs

Chennai, Tamil Nadu, India

Remote

About Us: At Objectways, we are at the forefront of AI innovation, solving real-world problems using cutting-edge machine learning and reasoning technologies. Our team works on ambitious projects involving agent trajectory prediction, complex reasoning systems, multimodal intelligence, and preference-based learning. We're looking for a talented Data Scientist eager to push the boundaries of applied AI.

Role Overview: We are seeking an experienced Data Scientist (minimum 4 years) to work across a diverse range of AI projects, including:
Machine learning model development
Agent trajectory prediction and modeling
Complex reasoning and decision systems
Preference ranking & reinforcement learning
Multimodal reasoning (text, image, audio)
Prompt validation and evaluation for LLMs

Key Responsibilities:
Design and build ML models for structured and unstructured data
Conduct agent-based simulations and develop trajectory learning systems
Apply preference learning techniques for fine-tuned ranking outcomes
Handle multimodal datasets and develop reasoning pipelines across modalities
Validate and optimize prompts for large language model performance
Translate research into scalable, production-level implementations
Collaborate with cross-functional teams (Engineering, Product, Research)

Required Qualifications:
Minimum 4 years of hands-on experience in Data Science or Machine Learning roles
Strong foundation in Python, PyTorch/TensorFlow, scikit-learn, and ML lifecycle tools
Proven expertise in one or more of: trajectory modeling, preference ranking, or multimodal systems
Experience with LLM prompt engineering and evaluation techniques
Solid understanding of complex reasoning algorithms, graph-based methods, or causal inference
Strong problem-solving, analytical thinking, and communication skills

Preferred Skills:
Familiarity with tools like LangChain, Hugging Face, or OpenAI APIs
Exposure to RLHF (Reinforcement Learning from Human Feedback) or prompt tuning
Experience with deploying ML models in production environments (e.g., using Docker, MLflow)

Why Join Us:
Work on high-impact, next-generation AI challenges
Collaborate with top talent across domains
Competitive salary + benefits + learning budget
Flexible and remote-friendly work culture
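Preference ranking, one of the focus areas above, starts from pairwise judgments ("annotators preferred response A over B") and derives a global ordering. A minimal stdlib sketch using net win counts (a Copeland-style score) follows; the items and judgments are invented, and real preference-learning systems would fit a Bradley-Terry or reward model instead:

```python
from collections import defaultdict

def rank_from_preferences(pairs):
    """Rank items from pairwise (winner, loser) judgments by net wins."""
    score = defaultdict(int)
    for winner, loser in pairs:
        score[winner] += 1   # a win raises the item's score
        score[loser] -= 1    # a loss lowers it
    return sorted(score, key=lambda item: score[item], reverse=True)

# Hypothetical annotator judgments: response A beat B, A beat C, and so on.
prefs = [("A", "B"), ("A", "C"), ("B", "C"), ("C", "D"), ("B", "D")]
print(rank_from_preferences(prefs))  # → ['A', 'B', 'C', 'D']
```

The same winner/loser pairs are the raw material for RLHF reward modeling, where the score function is replaced by a learned model trained on the comparisons.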

Posted 1 week ago

Apply

6.0 - 10.0 years

0 Lacs

Hyderābād

On-site

Job Title: Senior Data Scientist
Location: Hyderabad/Bangalore
Department: Advanced Analytics & AI Solutions

About the Role
We are seeking a highly skilled Senior Data Scientist to lead the end-to-end design, enhancement, and deployment of a machine learning solution that predicts the virality of press articles. This role is part of a high-impact digital media analytics project designed to support brand reputation management by forecasting which headlines are likely to gain rapid traction. The ideal candidate brings deep expertise in natural language processing (NLP), deep learning (especially transformer models like BERT), and hands-on experience with Azure AI services. You will drive the evolution of a proof-of-concept Python model into a scalable, production-grade Azure-based solution integrated with downstream generative AI interfaces.

Key Responsibilities
Lead the development and optimization of a BERT-based pairwise comparison model for predicting press article virality.
Incorporate temporal dynamics into the modeling strategy to enhance prediction accuracy over time.
Design and implement scalable ML pipelines using Azure AI Hub, Foundry, Blob Storage, and Cosmos DB.
Collaborate with architecture and platform teams to ensure smooth lift-and-shift deployment of the solution into client environments.
Integrate outputs with Azure OpenAI (GPT-4o) to build conversational interfaces via Microsoft Copilot Studio or Power Apps.
Evaluate model performance using robust metrics (accuracy, AUC-ROC, ranking correlation).
Manage compute budgets (under $1,000/month) through cost-efficient compute planning and Azure monitoring tools.
Work closely with stakeholders to translate business requirements into ML model specifications and insights delivery.
Stay up to date with advances in NLP, ranking models, and the Azure ML ecosystem.
Required Skills & Experience
6–10 years of experience in Data Science or Machine Learning, with a strong focus on NLP or content-based recommendation systems.
Deep proficiency in Python, with expertise in transformers, PyTorch/TensorFlow, Hugging Face, and data science libraries (pandas, scikit-learn).
Hands-on experience with BERT, pairwise learning, and ranking models.
Prior exposure to temporal modeling techniques (e.g., sliding windows, time-based feature engineering).
Strong experience with the Azure ecosystem, including: Azure AI Hub and Foundry; Azure Machine Learning; Azure OpenAI; Azure Cosmos DB and Blob Storage; Azure Monitor and Application Insights.
Experience designing or integrating generative AI solutions (e.g., GPT-4, Claude, Gemini).
Familiarity with the CRISP-DM methodology or other structured data science lifecycles.

Preferred Qualifications
Experience working with media or digital marketing datasets (clickstream, headlines, user interactions).
Knowledge of semantic search or vector databases.
Understanding of Copilot/chatbot development on Microsoft platforms.
Experience managing data science projects with budget constraints and deployment pipelines.
Published research or prototypes in NLP or content virality modeling is a strong plus.

Certifications
DP-203: Azure Data Engineer Associate
AI-102: Azure AI Engineer Associate
DP-600: Fabric Analytics Engineer Associate
Hugging Face Transformers Certification
TensorFlow Developer Certificate or PyTorch Fundamentals by Meta
Google Cloud Certified Professional Data Engineer (if cross-cloud experience is relevant)
Certified Specialist in NLP (e.g., DeepLearning.AI NLP Specialization on Coursera)
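One evaluation metric named above, ranking correlation, measures how well the model's predicted virality ordering matches the observed one. A stdlib-only Kendall tau sketch follows (in practice one would reach for `scipy.stats.kendalltau`; the headline scores below are invented):

```python
from itertools import combinations

def kendall_tau(pred, actual):
    """Kendall rank correlation between two equal-length score lists."""
    assert len(pred) == len(actual)
    concordant = discordant = 0
    for i, j in combinations(range(len(pred)), 2):
        sign = (pred[i] - pred[j]) * (actual[i] - actual[j])
        if sign > 0:
            concordant += 1   # the pair is ordered the same way in both lists
        elif sign < 0:
            discordant += 1   # the pair is ordered oppositely
        # sign == 0: tied pair, counted in neither bucket in this sketch
    n_pairs = concordant + discordant
    return (concordant - discordant) / n_pairs if n_pairs else 0.0

# Hypothetical model virality scores vs. observed share counts for 4 headlines:
print(kendall_tau([0.9, 0.4, 0.7, 0.1], [1200, 300, 800, 50]))  # → 1.0
```

A value of 1.0 means the model ranks every pair of headlines in the same order as reality; -1.0 means every pair is reversed.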

Posted 1 week ago

Apply

0 years

3 - 9 Lacs

Hyderābād

On-site

We are seeking a skilled Agentic AI Developer to design and implement intelligent agent systems powered by Large Language Models (LLMs). This role involves developing LLM-based pipelines that can ingest transcripts, documents, or business narratives and generate structured artifacts such as workflows, decision trees, action plans, or contextual recommendations. You will collaborate with cross-functional teams to deploy autonomous AI agents capable of reasoning, planning, memory, and tool usage in enterprise environments, primarily within the Microsoft ecosystem (Azure, Power Platform, Copilot, and M365 integrations).

Key Responsibilities
Build and deploy autonomous agent systems using frameworks such as LangChain, AutoGen, CrewAI, or Semantic Kernel.
Develop pipelines to process natural language input and generate structured outputs tailored to business needs.
Implement agentic features such as task orchestration, memory storage, tool integration, and feedback loops.
Fine-tune LLMs or apply prompt engineering to optimize accuracy, explainability, and responsiveness.
Integrate agents with Microsoft 365 services (Teams, Outlook, SharePoint) and Power Platform components (Dataverse, Power Automate).
Collaborate with business and product teams to define use cases, test scenarios, and performance benchmarks.
Participate in scenario-based UAT testing, risk evaluation, and continuous optimization.

Must-Have Skills
Proficiency in Python and hands-on experience with ML/AI libraries and frameworks (Transformers, PyTorch, LangChain).
Strong understanding of LLMs (e.g., GPT, Claude, LLaMA, Mistral) and prompt engineering principles.
Experience developing agent workflows using ReAct, AutoGen, CrewAI, or OpenAI function calling.
Familiarity with vector databases (FAISS, Pinecone, Qdrant) and RAG-based architectures.
Skills in Natural Language Processing (NLP): summarization, entity recognition, intent classification.
Integration experience with APIs, SDKs, and enterprise tools (preferably the Microsoft stack).

Preferred Certifications (candidates with the following certifications will have a strong advantage):
✅ Microsoft Certified: Azure AI Engineer Associate (AI-102)
✅ Microsoft Certified: Power Platform App Maker (PL-100)
✅ Microsoft 365 Certified: Developer Associate (MS-600)
✅ OpenAI Developer Certifications or Prompt Engineering Badge
✅ Google Cloud Certified: Professional Machine Learning Engineer
✅ NVIDIA Deep Learning Institute Certifications
✅ Databricks Generative AI Pathway (optional)
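The transcript-to-artifact pipelines this role describes usually end with a validation step: the LLM is asked for structured JSON, and the pipeline rejects replies that do not match the expected shape. A minimal stdlib sketch follows; the JSON string stands in for an LLM response, and the workflow schema is invented for illustration:

```python
import json

REQUIRED_STEP_KEYS = {"id", "action", "next"}  # invented workflow-step schema

def parse_workflow(llm_response: str) -> list[dict]:
    """Parse an LLM's JSON reply into workflow steps, rejecting malformed ones."""
    data = json.loads(llm_response)
    steps = data.get("steps", [])
    bad = [s for s in steps if not REQUIRED_STEP_KEYS <= s.keys()]
    if bad:
        raise ValueError(f"{len(bad)} step(s) missing required keys")
    return steps

# Stand-in for a model reply generated from a support-call transcript:
reply = '''{"steps": [
  {"id": 1, "action": "collect customer complaint", "next": 2},
  {"id": 2, "action": "route to billing team", "next": null}
]}'''
steps = parse_workflow(reply)
print(f"{len(steps)} valid steps; first action: {steps[0]['action']}")
```

In production, this gate is typically enforced with OpenAI function calling or JSON-schema-constrained decoding, with a retry loop on validation failure.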

Posted 1 week ago

Apply

10.0 years

0 Lacs

Delhi

On-site

About the Role: We are looking for a passionate and skilled Full Stack .NET Developer to join our growing engineering team. You'll be responsible for designing and developing scalable web applications and APIs using .NET Core on the backend and modern JavaScript frameworks on the frontend. You'll play a key role in implementing end-to-end features, optimizing performance, and ensuring system robustness across the stack.

Key Responsibilities:
Develop, test, and deploy backend APIs using .NET Core / ASP.NET Core
Build dynamic and responsive frontend interfaces using React, Angular, or Blazor
Design and implement RESTful or gRPC services for scalable microservice-based systems
Integrate with databases such as SQL Server, PostgreSQL, or MongoDB
Write clean, maintainable, and well-documented code following SOLID principles and design patterns
Work with event-driven and microservices architecture
Collaborate closely with product managers, UI/UX designers, QA, and other engineers in an Agile/Scrum team
Implement CI/CD pipelines for building, testing, and deploying code (e.g., using Azure DevOps, GitHub Actions, or Jenkins)
Contribute to code reviews, architecture discussions, and technical documentation
Ensure security, performance, and scalability across the entire stack

Nice to have:
.NET 9 with latest runtime optimizations
AI/ML integration (Azure AI Services, OpenAI APIs)
Modern frontend frameworks (Next.js 14+, Nuxt.js, React)
Server-Side Rendering (SSR) and Static Site Generation (SSG)
Observability and distributed tracing

Technical Skills Required:
Backend: C# and .NET Core (3.1, 5/6/7/8/9); ASP.NET Core Web API; Entity Framework Core or Dapper; RESTful APIs, gRPC (optional), GraphQL (optional)
Frontend: JavaScript / TypeScript; frameworks: Angular, React, Blazor, or Vue.js; HTML5, CSS3, Bootstrap, or Tailwind CSS
Databases: Microsoft SQL Server, PostgreSQL, or MongoDB; writing complex queries, stored procedures, and performance tuning
DevOps & Cloud (Preferred): Azure or AWS (App Services, Functions, Storage, etc.); Docker, Kubernetes (AKS/EKS); CI/CD tools (Azure DevOps, GitHub Actions)
Others: Git, GitFlow branching; unit testing frameworks (xUnit, NUnit, Moq, Jest, etc.); familiarity with Clean Architecture, DDD, or CQRS is a plus

Nice to Have:
Experience with microservices and distributed systems
Familiarity with authentication protocols (OAuth2, JWT, IdentityServer)
Experience working in Agile teams using Scrum or Kanban
Exposure to message queues (RabbitMQ, Azure Service Bus)
Experience with AI-powered development tools (GitHub Copilot, etc.)
Knowledge of WebAssembly (WASM) and Progressive Web Apps
Familiarity with vector databases and semantic search
Experience with real-time applications using SignalR
Understanding of Zero Trust security principles
Test-Driven Development (TDD)
Feature flags and A/B testing
DevSecOps and security-first mindset

Qualifications:
Bachelor's or Master's degree in Computer Science, Engineering, or a related field
10+ years of experience in full-stack development
Excellent problem-solving and communication skills

Why Join Us:
Work with modern tools, frameworks, and cloud platforms
Flat team structure and open, collaborative culture
Opportunities to grow as a tech lead or architect
Competitive salary, benefits, and learning support

Job Types: Full-time, Permanent
Location Type: In-person
Schedule: Day shift
Work Location: In person
Expected Start Date: 21/07/2025

Posted 1 week ago

Apply

3.0 years

3 - 6 Lacs

Delhi

On-site

Job Title: Data Science Trainer
Location: Ducat India – [Delhi & Noida]
Job Type: Full-Time
Experience Required: Minimum 3+ Years in Training or Industry Projects

About Ducat India: Ducat India, a leading name in IT training and development, has been shaping tech professionals for over two decades. With 9 branches across the NCR, including Noida, New Delhi, Gurugram, Faridabad, and Noida Extension, we offer cutting-edge training in the latest technologies. To know more, visit: www.ducatindia.com

Job Description: We are looking for a passionate and experienced Data Science Trainer to deliver high-quality training sessions and workshops. The ideal candidate should have deep knowledge of data science concepts and practical experience in machine learning, artificial intelligence, and associated technologies.

Key Responsibilities:
Conduct training sessions on Data Science, Machine Learning, Deep Learning, Gen AI, Python, and AI tools.
Design and deliver engaging training content, assessments, and projects.
Guide students on capstone projects and industry-based use cases.
Keep updated with the latest trends and advancements in the data and AI ecosystem.
Evaluate students' performance and provide constructive feedback.
Collaborate with the internal team on curriculum development and improvements.

Required Skillsets:
Strong expertise in Python programming
Hands-on experience with Data Science libraries (Pandas, NumPy, Matplotlib, etc.)
Proficiency in Machine Learning algorithms and model building
Knowledge of Deep Learning frameworks
Experience with Gen AI tools such as OpenAI, LLMs, ChatGPT, LangChain, etc.
Understanding of AI concepts, NLP, CV, and deployment techniques
Good presentation and communication skills
Prior teaching or mentoring experience is preferred

Qualifications:
Bachelor's/Master's degree in Computer Science, Data Science, Engineering, or a related field
Professional certifications in AI/ML/Data Science will be a plus

Perks & Benefits:
Exposure to real-time projects and tools
Opportunity to train aspiring professionals and freshers
Flexible working options (for part-time/freelancers)
Dynamic and growth-driven environment

How to Apply: Interested candidates can call +91-9205783661 or share their resume at hr@ducatindia.com with the subject line "Application for Data Science Trainer".

Job Type: Full-time
Pay: ₹300,000.00 - ₹600,000.00 per year
Schedule: Day shift
Experience: Training: 1 year (Required)
Work Location: In person

Posted 1 week ago

Apply

2.0 - 3.0 years

1 - 7 Lacs

Mohali

On-site

We are looking for a highly motivated AI Developer with 2 to 3 years of experience in building AI-powered chatbot agents and strong proficiency in Python. The ideal candidate should have hands-on experience with conversational AI platforms, natural language processing (NLP), and deploying intelligent virtual assistants that provide seamless user experiences.

Requirements:
Design, develop, and deploy intelligent chatbot agents using Python and AI frameworks.
Build and train NLP models for intent classification, entity extraction, and response generation.
Work with platforms such as Rasa, Dialogflow, Microsoft Bot Framework, or OpenAI GPT-based tools.
Integrate chatbots with web/mobile applications, CRMs, or third-party APIs.
Write clean, scalable, and well-documented Python code to support AI logic and backend functionalities.
Continuously evaluate and improve chatbot performance through analytics and feedback loops.
Collaborate with product managers, designers, and developers to define bot workflows and use cases.
Ensure data privacy, security, and compliance in AI-driven solutions.

Benefits:
5-day working week
On-time salary
Inter-tech work
Skill-upgradation trainings
Soft-skills trainings
Rewards and recognitions
Weekly de-stressing activities

Job Types: Part-time, Contractual / Temporary
Pay: ₹14,645.88 - ₹60,195.54 per month
Schedule: Day shift, Monday to Friday
Work Location: In person
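Intent classification, listed in the requirements above, is often prototyped with simple keyword scoring before any model is trained in Rasa or Dialogflow. A toy stdlib sketch follows; the intents and keyword sets are invented for illustration:

```python
import re

# Toy intent lexicon; a real bot would train a classifier on labeled utterances.
INTENTS = {
    "check_order": {"order", "delivery", "shipped", "tracking"},
    "refund": {"refund", "return", "money"},
    "greeting": {"hi", "hello", "hey"},
}

def classify(utterance: str) -> str:
    # Score each intent by keyword overlap with the user's tokens.
    tokens = set(re.findall(r"[a-z]+", utterance.lower()))
    scores = {intent: len(tokens & kws) for intent, kws in INTENTS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "fallback"

print(classify("Where is my order? Tracking shows nothing."))  # → check_order
print(classify("I want my money back, please refund"))         # → refund
print(classify("ok thanks bye"))                               # → fallback
```

The fallback branch is the part worth keeping even after swapping in a trained model: routing low-confidence utterances to a human or a clarifying question is a standard conversational-AI pattern.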

Posted 1 week ago

Apply

15.0 years

0 Lacs

Chennai

On-site

Project Role : AI / ML Engineer Project Role Description : Develops applications and systems that utilize AI tools, Cloud AI services, with proper cloud or on-prem application pipeline with production ready quality. Be able to apply GenAI models as part of the solution. Could also include but not limited to deep learning, neural networks, chatbots, image processing. Must have skills : Large Language Models Good to have skills : NA Minimum 12 year(s) of experience is required Educational Qualification : 15 years full time education Summary: As an AI / ML Engineer, you will engage in the development of applications and systems that leverage artificial intelligence tools and cloud AI services. Your typical day will involve designing and implementing production-ready solutions, ensuring that they meet quality standards. You will work with various AI models, including generative AI, and may also explore deep learning, neural networks, chatbots, and image processing technologies. Collaboration with cross-functional teams will be essential to integrate these solutions effectively into existing systems and workflows. Roles & Responsibilities: - Expected to be an SME. - Collaborate and manage the team to perform. - Responsible for team decisions. - Engage with multiple teams and contribute on key decisions. - Expected to provide solutions to problems that apply across multiple teams. - Facilitate knowledge sharing and training sessions to enhance team capabilities. - Monitor project progress and ensure alignment with strategic goals. Professional & Technical Skills: - Must To Have Skills: Proficiency in Large Language Models. - Strong understanding of cloud-based AI services and deployment strategies. - Experience with deep learning frameworks such as TensorFlow or PyTorch. - Familiarity with natural language processing techniques and tools. - Ability to design and implement scalable AI solutions. 
- Strong programming skills in Python; hands-on experience with RAG, prompt engineering, Responsible AI (RAI), OpenAI APIs, and LangChain; expert working knowledge of OpenAI / Gemini models and agentic frameworks; code-conversion experience for reverse and forward engineering. Additional Information: - The candidate should have a minimum of 12 years of experience in Large Language Models. - This position is based at our Chennai office. - A 15 years full time education is required.

Posted 1 week ago

Apply

7.0 years

0 Lacs

Noida

On-site

Lead Full Stack Developer (Azure + AI + LLaMA Integration) Location: Noida Experience: 7–10 Years Type: Full-Time | Immediate Joiner Industry: AI-Powered Platforms | Web & Mobile Development | Cloud Applications About the Role Servoedge Technologies is building next-gen AI-powered digital platforms with high user engagement. We are looking for a hands-on Lead Developer to architect and deliver scalable, intelligent web and mobile apps. This role requires expertise in Microsoft technologies, cloud-native development, and LLM integration (OpenAI / LLaMA). You’ll lead a cross-functional team and collaborate directly with the founders to bring innovation to life. Key Responsibilities Architect full-stack platforms using .NET Core / Python / JavaScript. Develop cloud-native apps on Microsoft Azure (App Services, Azure Functions, Blob Storage). Integrate LLMs like OpenAI GPT and Meta's LLaMA models into business workflows for chat, summarization, automation, etc. Build scalable APIs and microservices using REST and GraphQL, and integrate AI pipelines. Lead and mentor a team of developers through Agile sprints and code reviews. Own deployment pipelines, security best practices, and performance tuning. Must-Have Skills 7+ years of development experience across .NET Core and/or Python. Proven experience building SaaS or high-traffic apps on Azure Cloud. Strong understanding of LLM integration (OpenAI, LLaMA 2/3, or similar). Ability to work with Hugging Face, Ollama, LangChain, or open-source AI libraries. Familiar with REST APIs, authentication protocols (OAuth2/JWT), and DevOps pipelines. Good to Have Familiarity with containerized LLaMA deployments (e.g., Ollama, Docker, Kubernetes). Experience using LangChain, Transformers, or llama.cpp to embed AI into apps. Previous experience with cross-platform mobile frameworks (React Native, Flutter). Experience working in startup or product-led environments. 
What’s in it for You Ownership of core AI platform modules Freedom to propose and implement AI use cases Competitive pay + leadership track Work with a team deeply focused on Enterprise AI Adoption Job Type: Full-time Schedule: Day shift Monday to Friday Work Location: In person

Posted 1 week ago

Apply

5.0 - 8.0 years

5 - 8 Lacs

Noida

On-site

Position: Senior Data Scientist / Data Analyst (Noida) (CE58SF RM 3385) Education Required: Bachelor’s / Master’s / PhD: Bachelor’s degree in Computer Science, Data Science, or a related field. Strong understanding of machine learning, deep learning, and Generative AI concepts. Must have skills: Experience in machine learning techniques such as Regression, Classification, Predictive modeling, Clustering, Deep Learning stack, and NLP using Python. Strong knowledge and experience in Generative AI / LLM-based development. Strong experience working with key LLM model APIs (e.g. AWS Bedrock, Azure OpenAI / OpenAI) and LLM frameworks (e.g. LangChain, LlamaIndex). Experience with cloud infrastructure for AI/Generative AI/ML on AWS, Azure. Expertise in building enterprise-grade, secure data ingestion pipelines for unstructured data, including indexing, search, and advanced retrieval patterns. Knowledge of effective text chunking techniques for optimal processing and indexing of large documents or datasets. Proficiency in generating and working with text embeddings, with an understanding of embedding spaces and their applications in semantic search and information retrieval. Experience with RAG concepts and fundamentals (VectorDBs, AWS OpenSearch, semantic search, etc.). Expertise in implementing RAG systems that combine knowledge bases with Generative AI models. Knowledge of training and fine-tuning Foundation Models (Anthropic, Claude, Mistral, etc.), including multimodal inputs and outputs. Proficiency in Python, TypeScript, NodeJS, ReactJS (and equivalent) and frameworks (e.g., pandas, NumPy, scikit-learn), Glue crawler, ETL. Experience with data visualization tools (e.g., Matplotlib, Seaborn, Quicksight). Knowledge of deep learning frameworks (e.g., TensorFlow, Keras, PyTorch). Experience with version control systems (e.g., Git, CodeCommit). Good to have: Knowledge and experience in building knowledge graphs in production. 
Understanding of multi-agent systems and their applications in complex problem-solving scenarios. Job Description: Develop and implement machine learning models and algorithms. Work closely with project stakeholders to understand requirements and translate them into deliverables. Utilize statistical and machine learning techniques to analyze and interpret complex data sets. Stay updated with the latest advancements in AI/ML technologies and methodologies. Collaborate with cross-functional teams to support various AI/ML initiatives. Job Category: Embedded HW_SW Job Type: Full Time Job Location: Noida Experience: 5 - 8 Years Notice period: 0-15 days
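The text-chunking requirement in the posting above (splitting large documents into overlapping windows before embedding and indexing) can be sketched in a few lines. This is a word-based toy version; production pipelines typically chunk by tokens or sentence boundaries, and the parameter values here are illustrative:

```python
def chunk_text(text: str, chunk_size: int = 200, overlap: int = 50) -> list[str]:
    """Split text into word windows of `chunk_size`, repeating `overlap`
    words between consecutive chunks so context at boundaries survives."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    words = text.split()
    step = chunk_size - overlap
    chunks = [" ".join(words[i:i + chunk_size]) for i in range(0, len(words), step)]
    # Drop a trailing chunk that is fully contained in the previous one.
    if len(chunks) > 1 and chunks[-1] in chunks[-2]:
        chunks.pop()
    return chunks
```

The overlap is the "effective" part: without it, a sentence split across two chunks is invisible to semantic search on either chunk, which degrades the advanced retrieval patterns the posting describes.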

Posted 1 week ago

Apply

1.0 - 2.0 years

4 - 9 Lacs

Noida

On-site

Foxit is remaking the way the world interacts with documents through advanced PDF technology and tools. We are a leading global software provider of fast, affordable, and secure PDF solutions that are used by millions of people worldwide. Winner of numerous awards, Foxit has customers in more than 200 countries and global operations. We have a complete product line and an exciting and aggressive development schedule. Our proven PDF technology is disrupting the status quo establishment and has accelerated our company growth. We are proud to list as customers Google, Amazon, and NASDAQ, and with your skills and help, we plan to add many more. Foxit has offices all over the world, including locations in the US, Asia, Europe, and Australia. For more information, please visit https://www.foxit.com About the Role: As a Junior Product Manager for Foxit's Fusion Platform, you will play a key role in shaping product enhancements and AI-driven capabilities within our integrated document and workflow ecosystem. Leveraging your experience with widely adopted AI tools, you'll collaborate with product teams, engineering, and data science to deliver innovative, AI-powered user experiences. Responsibilities: Work closely with senior product owners and cross-functional teams to identify and prioritize enhancements leveraging AI for Foxit Fusion. Develop and refine user stories, acceptance criteria, and product documentation that clearly articulate Fusion's integrated AI features. Support product backlog management aligned with strategic objectives, particularly focusing on AI integration and usability. Assist with user and market research to uncover new opportunities for integrating AI within the Fusion ecosystem. Facilitate clear and effective communication among technical and non-technical stakeholders to align on AI initiatives. Track product metrics and user feedback to inform continuous improvements and iterations. 
Qualifications: Bachelor’s degree or relevant practical experience. 1-2 years in product management or product ownership roles, with hands-on experience using widely adopted AI tools. Practical knowledge of artificial intelligence, specifically as integrated into contemporary SaaS products. Familiarity with agile methodologies and the product lifecycle, especially within integrated software ecosystems like Fusion. Strong communication, collaboration, and analytical skills. Preferred Skills: Hands-on experience with popular AI platforms and frameworks (OpenAI APIs, TensorFlow, GPT models). Competency with data analytics tools (SQL, Excel, Tableau) for extracting insights and supporting decisions. Enthusiasm for staying updated on emerging AI technologies and industry trends.

Posted 1 week ago

Apply

3.0 years

16 - 20 Lacs

India

Remote

Experience : 3.00 + years Salary : INR 1600000-2000000 / year (based on experience) Expected Notice Period : 15 Days Shift : (GMT+05:30) Asia/Kolkata (IST) Opportunity Type : Remote Placement Type : Full Time Permanent position (Payroll and Compliance to be managed by: SenseCloud) (*Note: This is a requirement for one of Uplers' clients - A Seed-Funded B2B SaaS Company – Procurement Analytics) What do you need for this opportunity? Must have skills required: open-source, Palantir, privacy techniques, RAG, Snowflake, LangChain, LLM, MLOps, AWS, Docker, Python A Seed-Funded B2B SaaS Company – Procurement Analytics is looking for: Join the team revolutionizing procurement analytics at SenseCloud. Imagine working at a company where you get the best of all worlds: the fast-paced execution of a startup and the guidance of leaders who’ve built things that actually work at scale. We’re not just rethinking how procurement analytics is done — we’re redefining it. At SenseCloud, we envision a future where procurement data management and analytics is as intuitive as your favorite app. No more complex spreadsheets, no more waiting in line to get IT and analytics teams’ attention, no more clunky dashboards — just real-time insights, smooth automation, and a frictionless experience that helps companies make fast decisions. If you’re ready to help us build the future of procurement analytics, come join the ride. You'll work alongside the brightest minds in the industry, learn cutting-edge technologies, and be empowered to take on challenges that will stretch your skills and your thinking. About The Role We’re looking for an AI Engineer who can design, implement, and productionize LLM-powered agents that solve real-world enterprise problems—think automated research assistants, data-driven copilots, and workflow optimizers. 
You’ll own projects end-to-end: scoping, prototyping, evaluating, and deploying scalable agent pipelines that integrate seamlessly with our customers’ ecosystems. What you'll do: Architect & build multi-agent systems using frameworks such as LangChain, LangGraph, AutoGen, Google ADK, Palantir Foundry, or custom orchestration layers. Fine-tune and prompt-engineer LLMs (OpenAI, Anthropic, open-source) for retrieval-augmented generation (RAG), reasoning, and tool use. Integrate agents with enterprise data sources (APIs, SQL/NoSQL DBs, vector stores like Pinecone, Elasticsearch) and downstream applications (Snowflake, ServiceNow, custom APIs). Own the MLOps lifecycle: containerize (Docker), automate CI/CD, monitor drift & hallucinations, set up guardrails, observability, and rollback strategies. Collaborate cross-functionally with product, UX, and customer teams to translate requirements into robust agent capabilities and user-facing features. Benchmark & iterate on latency, cost, and accuracy; design experiments, run A/B tests, and present findings to stakeholders. Stay current with the rapidly evolving GenAI landscape and champion best practices in ethical AI, data privacy, and security. Must-Have Technical Skills 3–5 years of software engineering or ML experience in production environments. Strong Python skills (async I/O, typing, testing); familiarity with TypeScript/Node or Go is a bonus. Hands-on with at least one LLM/agent framework or platform (LangChain, LangGraph, Google ADK, LlamaIndex, Emma, etc.). Solid grasp of vector databases (Pinecone, Weaviate, FAISS) and embedding models. Experience building and securing REST/GraphQL APIs and microservices. Cloud skills on AWS, Azure, or GCP (serverless, IAM, networking, cost optimization). Proficient with Git, Docker, CI/CD (GitHub Actions, GitLab CI, or similar). Knowledge of MLOps tooling (Kubeflow, MLflow, SageMaker, Vertex AI) or equivalent custom pipelines. 
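The retrieval half of the RAG work described above reduces to: embed the query, rank stored chunks by similarity, and prepend the top hits to the LLM prompt. A self-contained toy sketch, using bag-of-words vectors in place of a real embedding model and vector store (Pinecone, FAISS); the example documents are invented:

```python
import math

# Toy RAG retrieval: cosine similarity over hand-rolled bag-of-words
# vectors, standing in for an embedding model + vector DB (Pinecone/FAISS).
def embed(text: str) -> dict[str, float]:
    vec: dict[str, float] = {}
    for word in text.lower().split():
        vec[word] = vec.get(word, 0.0) + 1.0
    return vec

def cosine(a: dict[str, float], b: dict[str, float]) -> float:
    dot = sum(a[w] * b.get(w, 0.0) for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

docs = [
    "invoice processing and payment terms",
    "supplier onboarding checklist",
    "office holiday party planning",
]
context = retrieve("payment terms for a supplier invoice", docs, k=2)
# The retrieved chunks would be prepended to the LLM prompt (the "A" in RAG).
```

Swapping `embed` for a real embedding model and `retrieve` for a vector-DB query is exactly the integration work the role describes; the ranking-then-augmentation shape stays the same.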
Core Soft Skills Product mindset: translate ambiguous requirements into clear deliverables and user value. Communication: explain complex AI concepts to both engineers and executives; write crisp documentation. Collaboration & ownership: thrive in cross-disciplinary teams, proactively unblock yourself and others. Bias for action: experiment quickly, measure, iterate—without sacrificing quality or security. Growth attitude: stay curious, seek feedback, mentor juniors, and adapt to the fast-moving GenAI space. Nice-to-Haves Experience with RAG pipelines over enterprise knowledge bases (SharePoint, Confluence, Snowflake). Hands-on with MCP servers/clients, MCP Toolbox for Databases, or similar gateway patterns. Familiarity with LLM evaluation frameworks (LangSmith, TruLens, Ragas). Familiarity with Palantir/Foundry. Knowledge of privacy-enhancing techniques (data anonymization, differential privacy). Prior work on conversational UX, prompt marketplaces, or agent simulators. Contributions to open-source AI projects or published research. Why Join Us? Direct impact on products used by Fortune 500 teams. Work with cutting-edge models and shape best practices for enterprise AI agents. Collaborative culture that values experimentation, continuous learning, and work–life balance. Competitive salary, equity, remote-first flexibility, and professional development budget. How to apply for this opportunity? Step 1: Click On Apply! And Register or Login on our portal. Step 2: Complete the Screening Form & Upload updated Resume Step 3: Increase your chances to get shortlisted & meet the client for the Interview! About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. 
Depending on the assessments you clear, you can apply for them as well). So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!

Posted 1 week ago

Apply

3.0 years

0 Lacs

Indore

On-site

Hiring For AI Engineer - Python Developer: Job Description: We are seeking a talented Python Developer with hands-on experience in AI chatbot development and familiarity with Model Context Protocol (MCP) to join our AI team. You will be responsible for developing intelligent, context-aware conversational systems that integrate seamlessly with our internal knowledge base and enterprise services. The ideal candidate is technically proficient, proactive, and capable of translating complex AI interactions into scalable backend solutions. Key Responsibilities: 1. Design and develop robust AI chatbots using Python and integrate them with LLM APIs (e.g., OpenAI, Google AI, etc.). 2. Implement and manage Model Context Protocol (MCP) to optimize context injection, session management, and model-aware interactions. 3. Build and maintain secure pipelines for knowledge base access that allow the chatbot to accurately respond to internal queries. 4. Work with internal teams to define and evolve the contextual metadata strategy (roles, user state, query history, etc.). 5. Contribute to internal tooling and framework development for contextual AI applications. Required Skills & Experience: 1. 3+ years of professional Python development experience. 2. Proven track record in AI chatbot development, particularly using LLMs. 3. Understanding of Model Context Protocol (MCP) and its role in enhancing AI interaction fidelity and relevance. 4. Strong experience integrating with AI APIs (e.g., OpenAI, Azure OpenAI). 5. Familiarity with Retrieval-Augmented Generation (RAG) pipelines and vector-based search (e.g., Pinecone, Weaviate, FAISS). 6. Experience designing systems that ingest and structure unstructured knowledge (e.g., PDF, Confluence, Google Drive docs). 7. Comfortable working with RESTful APIs, event-driven architectures, and context-aware services. 8. Good understanding of data handling, privacy, and security standards related to enterprise AI use. 
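The context-injection and session-management duties above amount to assembling a bounded prompt from system instructions, retrieved knowledge, and recent conversation turns. A hypothetical sketch of that assembly step; the class name, bracketed markers, and example strings are all illustrative, and this is not the actual MCP wire format:

```python
from collections import deque

class SessionContext:
    """Keeps a rolling window of conversation turns and assembles the
    prompt text sent to an LLM API. Illustrative only; a real system
    would follow the provider's message schema (or the MCP spec)."""
    def __init__(self, system: str, max_turns: int = 6):
        self.system = system
        # deque(maxlen=...) silently drops the oldest turn when full,
        # which is the simplest possible session-window policy.
        self.turns: deque[tuple[str, str]] = deque(maxlen=max_turns)

    def add_turn(self, role: str, text: str) -> None:
        self.turns.append((role, text))

    def build_prompt(self, retrieved: list[str]) -> str:
        parts = [f"[system] {self.system}"]
        parts += [f"[context] {doc}" for doc in retrieved]
        parts += [f"[{role}] {text}" for role, text in self.turns]
        return "\n".join(parts)

ctx = SessionContext("You answer internal IT questions.", max_turns=2)
ctx.add_turn("user", "How do I reset my VPN password?")
prompt = ctx.build_prompt(["VPN resets are self-service at it.example.com"])
```

The posting's "contextual metadata strategy" (roles, user state, query history) slots in naturally here: each item becomes another labeled section of the assembled prompt, trimmed to fit the model's context window.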
Job Location: Indore Joining: Immediate Share resume at talent@jstechalliance.com or contact: 0731-3122400, WhatsApp: 8224006397 Job Type: Full-time Application Question(s): Immediate Joiner? Have you completed your Bachelor's/Master's Degree? Experience: Python: 3 years (Required) Model Context Protocol (MCP): 3 years (Required) LLM APIs: 3 years (Required) Artificial Intelligence: 2 years (Required) Location: Indore, Madhya Pradesh (Required) Work Location: In person

Posted 1 week ago

Apply

4.0 - 5.0 years

0 Lacs

Kochi, Kerala, India

On-site

Job Description We are looking for an Arabic-speaking GenAI Specialist to join our global team. Responsibilities Develop Next-Gen Generative AI Models: Research, design, and fine-tune state-of-the-art models like Llama, GPT, Stable Diffusion, and VAEs to drive business impact. Enhance AI Capabilities: Implement and optimize LLM fine-tuning, prompt engineering, and AI safety mechanisms to improve model efficiency and usability. Integrate AI into Products/Services: Work closely with business teams and engineers to embed generative AI solutions into real-world applications, ensuring seamless user experiences. Optimize for Scale & Performance: Build and refine scalable ML pipelines for efficient model deployment, inference, and real-time adaptation. Stay Ahead of AI Trends: Keep pace with emerging technologies, model architectures, and industry breakthroughs to drive continuous innovation. Communicate complex AI concepts to technical and non-technical audiences, ensuring alignment with business strategies. Key Skills Required At least 4-5 years of experience in the data science domain with strong exposure to Generative AI. Proficiency in Arabic with strong speaking and writing skills. Experience using machine learning and deep learning languages, preferably Python and R, to manipulate data and draw insights from large data sets. Hands-on experience with LLMs, multimodal AI, and diffusion models for applications in text, image, or speech generation. Strong proficiency in TensorFlow, PyTorch, Hugging Face Transformers, and OpenAI APIs, custom fine-tuning of LLMs, and AIOps toolkits. Familiarity with cloud platforms (AWS, GCP, Azure), model deployment strategies, and GPU optimization. Experience building data pipelines and data-centric applications using distributed storage platforms in a production setting. Ability to work with large-scale datasets, vector embeddings, and retrieval-based AI systems.

Posted 1 week ago

Apply

10.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Overview We are seeking an experienced Reporting GenAI Consultant with a strong background in developing AI-driven reporting solutions. This role focuses on building and integrating Generative AI capabilities into BI platforms to enable natural language insights, automated report generation, and interactive dialogue with data. The ideal candidate will have hands-on experience working with LLMs, prompt engineering, and modern data visualization tools. Responsibilities Design, develop, and deploy GenAI-based reporting solutions that generate insights summaries, dashboards, and narrative analytics using structured and unstructured data. Build natural language interfaces and conversational agents for querying data (Dialogue with Data), enabling users to interact with reports through plain English. Integrate GenAI features (like ChatGPT, Azure OpenAI, or Vertex AI) with enterprise BI platforms (Power BI, Tableau, Qlik, ThoughtSpot, etc.). Implement automated insight generation using LLMs to summarize trends, detect anomalies, and generate key takeaways. Collaborate with data engineering and BI teams to optimize data models and ensure clean, prompt-ready datasets. Design and fine-tune prompts and templates for contextual report summarization and storytelling. Conduct POCs and pilots to evaluate the feasibility and impact of GenAI-driven reporting use cases. Ensure solutions are secure, scalable, and compliant with enterprise governance policies. Qualifications 10+ years of experience in Business Intelligence/Analytics with 1-2 years in Generative AI implementations. Strong experience in Power BI with exposure to augmented analytics features. Experience working with LLMs (OpenAI, Azure OpenAI, Hugging Face, Google PaLM, etc.) for natural language understanding and summarization. Expertise in prompt engineering, few-shot learning, and custom summarization models. Good understanding of data storytelling, narrative generation, and auto-generated insights. 
Experience in integrating APIs for AI models into web or reporting tools. Familiarity with Python or JavaScript for model integration and backend logic. Excellent communication and stakeholder management skills. Preferred Qualifications: Experience with RAG (Retrieval-Augmented Generation), LangChain, or similar frameworks. Exposure to voice-based analytics or speech-to-insight solutions. Knowledge of data governance, privacy (GDPR/CPRA), and enterprise security standards. Familiarity with cloud platforms: Azure
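The automated insight generation described in the posting above usually pairs a deterministic statistics pass (trend direction, anomaly flags) with an LLM that turns the findings into narrative text. A toy sketch of the statistics pass, assuming a simple z-score anomaly rule; the metric name and numbers are invented:

```python
import statistics

def insight_summary(label: str, values: list[float], z_thresh: float = 2.0) -> str:
    """Turn a metric series into a one-line finding: overall trend plus
    any points that deviate strongly from the mean (z-score check).
    A GenAI reporting stack would hand this structured finding to an
    LLM prompt for fluent narrative generation."""
    mean = statistics.mean(values)
    stdev = statistics.pstdev(values)
    anomalies = [i for i, v in enumerate(values)
                 if stdev and abs(v - mean) / stdev > z_thresh]
    direction = ("up" if values[-1] > values[0]
                 else "down" if values[-1] < values[0] else "flat")
    note = f"; anomalies at periods {anomalies}" if anomalies else ""
    return f"{label} trended {direction} ({values[0]:g} -> {values[-1]:g}){note}"

print(insight_summary("Revenue", [100, 102, 101, 180, 104, 106]))
```

Keeping the numeric reasoning outside the LLM and prompting it only with the computed findings is one common way to reduce hallucinated statistics in generated report narratives.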

Posted 1 week ago

Apply

0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Opening for - Noida Sec 62, Mumbai-Andheri East & Bangalore-Whitefield Key Responsibilities: • Actively sell M365, with a focus on security via the E5 license, and promote Azure consumption to exceed sales objectives. • Develop and maintain strategic relationships with Microsoft's field sales representatives, fostering mutual collaboration to source and share business opportunities. • Present CloudFirst's suite of solutions, emphasizing cloud, cybersecurity, digital engineering, and industry solutions, while aligning with Microsoft's product strategies and modernization goals. • Engage in continuous learning to stay ahead of technology trends, obtaining relevant Microsoft sales certifications to enhance expertise and credibility. • Utilize in-depth knowledge of Microsoft products, including Azure, M365, Microsoft Copilot, Copilot Studio, and Azure OpenAI, to drive customer modernization initiatives securely and efficiently. • Demonstrate a successful sales track record, with evidence of closing at least one million dollars in business in the last 12 months. Share profiles at hrd@cloudfirst.tech, mentioning CTC, notice period, monthly revenue driven, and location applied for.

Posted 1 week ago

Apply

4.0 years

0 Lacs

India

Remote

Procedure is hiring for WorkHero. Across the U.S., thousands of small HVAC businesses are drowning in admin. WorkHero pairs seasoned office managers with AI workflows to free up their time, unlock growth, and modernize the trades. We handle everything that touches the back office of a small HVAC operation: billing, rebates, permits, warranties, marketing, and more. Our customers save dozens of hours every week, allowing them to focus on growing their business. We are building at the intersection of human and AI automation, and that’s where you come in. How you will help us achieve it As an Applied AI Engineer, you will collaborate with the company founders, product team, and office managers to develop agentic workflows for many of their daily processes. You will ideate, design, implement, and maintain the software side of the product. You will design the UX, develop the front-end interfaces, build backend routes, and design database models. The bulk of this role is interacting with office managers to understand their workflows and then creating and managing AI agents & workflows to make them more efficient. 
In your day to day, you will: Partner with our frontline staff to understand their work so you can automate it with AI and save them time Live on the cutting edge of Human & AI interactions, keeping up to date with the latest patterns and how they apply to WorkHero and our workflows Contribute to exceptional code quality and performance of the code base Mentor young engineer(s) to be great at their craft, as our team grows Key Details You can work remotely from anywhere with an overlap of at least four hours with the product team (6 pm to 10 pm IST) This role involves participating in calls with US-based stakeholders The compensation comes with a salary and an equity component You’ll thrive if you Care about your craft and the mission of the software you are developing Are enterprising and can figure out how to convert workflows/playbooks to AI agents Work with cutting-edge AI tools and APIs Don’t just complete tickets that are assigned to you; you think deeply about the product and how you can make it great, then put that into action Enjoy developing 0-1 software products iteratively Are proficient in the TypeScript ecosystem, including monorepos. Are skilled in React and Express (Koa is a plus) or similar tech Are pragmatic and understand trade-offs in software engineering Requirements 4+ years of experience in building software products Have created applications using AI and LLMs. Can understand when to prompt, use RAG, create agents, or fine-tune an LLM Have solid experience with the MERN stack Ability to debug and optimize complex systems Excellent communication and collaboration skills Bonus points if you have Experience in mentoring junior team members. 
Better still if your mentees grew into your role, freeing you to take up a more meaningful one Familiarity with our tech stack (React, Koa, Mongo, AWS, React Native) A good understanding of UX best practices for SaaS-based interfaces & AI Built apps using React Native A deep understanding of infrastructure and how to optimize for cost Set up evaluation frameworks for agentic workflows Contributed to open-source projects Your First Six Months You will be working on creating AI Agents/Workflows based on the playbooks for some of the crucial workflows of HVAC office managers You will work on building the frontend, APIs and apply AI knowledge to add features to the office manager and admin portal, as well as the mobile app You will be using Vercel AI SDK to create abstracted AI agents and workflows with the capability of falling back to different providers You will demo the agentic workflows to US-based stakeholders—small HVAC business owners, product team and office managers You will monitor and manage the AI Workflows, ensuring quality and accuracy, improving them over time About engineering at WorkHero We value: High ownership and fast iteration Deep understanding of our tools and systems Continuous improvement, both technical and interpersonal Our technology stack: Full Stack TypeScript (React, React Native, KoaJS Backend, CRXJS Browser Extension) NX Monorepo with shared typing & libraries Infrastructure: AWS (ECS, Hosted RMQ) MongoDB Atlas Firebase & GCP services AI Tooling (Anthropic, OpenAI, Gemini, Deepgram) Our team The Company is funded by Workshop Ventures Partners, a US-based venture studio focused on building great companies in Climate Tech The team consists of founding members from Harvard Business School and the Workshop Ventures team — Kyler, Furman, Chad, Jason, and Ulhas Interview Process: Screening: Review of your open source contributions and past projects. 
First Round of Technical Evaluation: Assessing your problem-solving and AI integration skills. Technical Evaluation + Cultural Fit: Understanding your alignment with our team.

Posted 1 week ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot


Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies