5.0 - 9.0 years
0 Lacs
Haryana
On-site
The Spec Analytics Intmd Analyst role is a developing professional position where you deal with most problems independently and have the latitude to solve complex issues. By integrating in-depth specialty area knowledge with a solid understanding of industry standards and practices, you contribute to achieving the objectives of the sub function/job family. Your analytical thinking and knowledge of data analysis tools and methodologies are crucial, requiring attention to detail to make informed judgments and recommendations based on factual information. You will typically handle variable issues with potential broader business impact, applying professional judgment when interpreting data and results in a systematic and communicable manner. Developed communication and diplomacy skills are essential to exchange potentially complex/sensitive information and have a moderate but direct impact on the businesses' core activities. The quality and timeliness of service you provide will affect the effectiveness of your team and other closely related teams.

Responsibilities:
- Work with large and complex data sets (both internal and external data) to evaluate, recommend, and support the implementation of business strategies
- Identify and compile data sets using various tools (e.g. SQL, Access) to predict, improve, and measure the success of key business outcomes
- Document data requirements, data collection/processing/cleaning, and exploratory data analysis, which may involve utilizing statistical models/algorithms and data visualization techniques
- Specialize in marketing, risk, digital, and AML fields
- Appropriately assess risk when making business decisions, ensuring compliance with applicable laws, rules, and regulations, safeguarding Citigroup, its clients, and assets

Skills and Experience:
- 5+ years of relevant experience in data science, machine learning, or a related field
- Advanced process management skills, organized, and detail-oriented
- Curiosity in learning and developing new skill sets, particularly in artificial intelligence
- Positive outlook with a can-do mindset
- Strong programming skills in Python and proficiency in relevant data science libraries like scikit-learn, TensorFlow, PyTorch, and Transformers
- Experience with statistical modeling techniques, including regression, classification, and clustering
- Experience building GenAI solutions using LLMs and vector databases
- Experience with agentic AI frameworks such as LangChain and LangGraph, and with MLOps
- Experience with data visualization tools like Tableau or Power BI
- Must-have skills: strong logical reasoning capabilities, willingness to learn new skills, good communication and presentation skills

Education:
- Bachelor's/University degree or equivalent experience

This job description offers a high-level overview of the work performed, and additional job-related duties may be assigned as required.
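As an illustration of the statistical modeling stack this posting lists (scikit-learn, regression, classification, clustering), here is a minimal sketch on synthetic data; the dataset and model choices are illustrative assumptions, not part of the role.

```python
# Minimal illustration of classification and clustering with scikit-learn.
# The synthetic dataset stands in for a labeled business outcome.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans
from sklearn.metrics import classification_report

X, y = make_classification(n_samples=1_000, n_features=10, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Supervised model: predict the business outcome and report precision/recall.
clf = LogisticRegression(max_iter=1_000).fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))

# Unsupervised segmentation of the same feature space.
segments = KMeans(n_clusters=4, random_state=42, n_init=10).fit_predict(X)
print("cluster sizes:", [int((segments == k).sum()) for k in range(4)])
```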
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
Thiruvananthapuram, Kerala
On-site
We are searching for a Senior Software Engineer in AI with more than 3 years of practical experience in Artificial Intelligence and Machine Learning and a strong enthusiasm for innovation. This position is ideal for someone who excels in a startup setting that is fast-paced, product-oriented, and filled with opportunities to make a meaningful difference. Your main responsibility will be to contribute to the development of intelligent, scalable, and production-ready AI systems, with a particular emphasis on Generative AI and Agentic AI technologies.

Key Responsibilities:
- Developing and implementing AI-powered applications and services, with a focus on Generative AI and Large Language Models (LLMs).
- Designing and executing Agentic AI systems consisting of autonomous agents capable of planning and executing multi-step tasks.
- Collaborating with product, design, and engineering teams to seamlessly integrate AI capabilities into products.
- Crafting clean, scalable code and establishing robust APIs and services to facilitate the deployment of AI models.
- Taking charge of feature delivery from start to finish, including research, experimentation, deployment, and monitoring.
- Keeping abreast of the latest AI frameworks, tools, and best practices and leveraging them in product development.
- Contributing to a culture of high performance within the team and mentoring junior team members when necessary.

Required Skills:
- 6 years of software development experience in total, with at least 3 years dedicated to AI/ML engineering.
- Proficiency in Python, along with hands-on experience in PyTorch, TensorFlow, and Transformers (Hugging Face).
- Demonstrated expertise in working with LLMs (e.g., GPT, Claude, Mistral) and various Generative AI models (text, image, or audio).
- Practical knowledge of Agentic AI frameworks such as LangChain, AutoGPT, and Semantic Kernel.
- Experience in constructing and deploying ML models in production environments.
- Familiarity with vector databases like Pinecone, Weaviate, and FAISS, and with prompt engineering concepts.
- Comfortable operating in a startup-like atmosphere: self-driven, adaptable, and eager to take on responsibilities.
- Sound understanding of API development, version control, and contemporary DevOps/MLOps practices.

This is a permanent position that requires in-person work.
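For reference, a minimal sketch of serving a prompt through an open LLM with Hugging Face Transformers, one of the libraries this posting names; the model checkpoint is an assumption chosen for illustration and can be swapped for any causal LM available to you.

```python
# Minimal sketch of text generation with a Hugging Face Transformers pipeline.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="mistralai/Mistral-7B-Instruct-v0.2",  # assumed checkpoint; use any causal LM you have access to
)

prompt = "Summarize the benefits of retrieval-augmented generation in two sentences."
result = generator(prompt, max_new_tokens=80, do_sample=False)
print(result[0]["generated_text"])
```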
Posted 1 week ago
2.0 - 6.0 years
0 Lacs
Noida, Uttar Pradesh
On-site
You are a dynamic and innovative AI Engineer at our technology company, leveraging cutting-edge cloud technologies and generative AI to create groundbreaking solutions across various industries. Your role involves collaborating closely with cross-functional teams to design, develop, and deploy innovative solutions that harness the power of AI and generative technologies. Your technical expertise, leadership skills, and passion for pushing boundaries will be instrumental in achieving our goals.

As an AI Engineer, your responsibilities include leading the design, architecture, and development of Gen AI applications using cloud technologies. You will collaborate with product managers, software engineers, and end-users to define application requirements, user experiences, and technical specifications. Your role involves driving technical excellence, code quality, and best practices within the team, staying up-to-date with the latest advancements in Gen AI, cloud technologies, and software engineering trends, and ensuring seamless deployment, scaling, and operation of applications in cloud environments. You will actively participate in code reviews, lead performance optimization efforts, address technical challenges, and contribute to the creation of technical documentation.

To qualify for this role, you should have a Bachelor's or Master's degree in Computer Science, Engineering, or a related field, along with 2+ years of experience in architecting and developing software applications, preferably chatbots, information retrieval, and enterprise search applications. Experience with extracting and organizing information from unstructured documents and structured datasets, working on information retrieval applications based on natural language queries, and knowledge or experience working with vector databases is required. Strong programming skills in languages such as Python, an understanding of LLMs and prompt engineering, experience with deploying Gen AI applications on the cloud using CI/CD tools, excellent problem-solving skills, and the ability to think critically and analytically are essential. You should also possess strong communication and collaboration skills, with the ability to work effectively across cross-functional teams and manage multiple priorities in a fast-paced environment.
Posted 1 week ago
15.0 - 19.0 years
0 Lacs
Karnataka
On-site
As a Senior Data Science Engineer Lead at our Bangalore location, you will be responsible for spearheading the development of intelligent, autonomous AI systems. Your role will involve leveraging technologies such as agentic AI, LLMs, SLMs, vector databases, and knowledge graphs to design and deploy innovative AI solutions, focusing on enhancing enterprise applications. You will work as part of a cross-functional agile delivery team, bringing an innovative approach to software development and contributing to all stages of software delivery.

In this position, you will design and develop agentic AI applications using frameworks like LangChain, CrewAI, and AutoGen to build autonomous agents capable of complex task execution. You will also implement RAG pipelines by integrating LLMs with vector databases and knowledge graphs to create dynamic retrieval systems. Additionally, you will fine-tune language models, train NER models, develop knowledge graphs, collaborate cross-functionally, and optimize AI workflows using MLOps practices.

To be successful in this role, you should have 15+ years of professional experience in AI/ML development, proficiency in Python, Python API frameworks, SQL, and AI/ML frameworks like TensorFlow or PyTorch. Experience with deploying AI models on cloud platforms, familiarity with semantic technologies, MLOps tools, vector databases, RAG architectures, and hybrid search methodologies is essential. Strong problem-solving abilities, analytical thinking, excellent communication skills, and the ability to work independently on multiple projects are also key requirements.

We offer a range of benefits including a best-in-class leave policy, parental leave, reimbursement under our childcare assistance benefit, sponsorship for certifications, an employee assistance program, hospitalization insurance, accident and term life insurance, health screening, training, coaching, and a culture of continuous learning to support your career progression. Join us at our Bangalore location to be part of a collaborative team that strives for excellence every day. Visit our company website for more information about Deutsche Bank Group and our inclusive work environment. We welcome applications from all individuals who share our values of responsibility, initiative, and collaboration.
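As a rough illustration of the retrieval step in the RAG pipelines described above, here is a minimal sketch that embeds a few documents, indexes them, and fetches the closest matches for a query; FAISS and the sentence-transformers model are assumed stand-ins, not the stack mandated by the role.

```python
# Minimal RAG retrieval sketch: embed documents, index them, retrieve by similarity.
import faiss
from sentence_transformers import SentenceTransformer

docs = [
    "Agentic frameworks coordinate autonomous agents for multi-step tasks.",
    "Vector databases store embeddings for semantic retrieval.",
    "Knowledge graphs capture entities and their relationships.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model
embeddings = model.encode(docs, normalize_embeddings=True)

index = faiss.IndexFlatIP(embeddings.shape[1])  # inner product == cosine on normalized vectors
index.add(embeddings)

query = model.encode(["How do agents execute complex tasks?"], normalize_embeddings=True)
scores, ids = index.search(query, 2)
for score, i in zip(scores[0], ids[0]):
    print(f"{score:.3f}  {docs[i]}")
```

In a full pipeline, the retrieved chunks would be inserted into the LLM prompt before generation; a production system would typically swap the in-memory index for a managed vector database.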
Posted 2 weeks ago
3.0 - 7.0 years
0 Lacs
Pune, Maharashtra
On-site
As an engineer specializing in Generative AI and AI agent development at PubMatic, you will play a key role in building and optimizing advanced AI agents that utilize cutting-edge technologies such as Retrieval-Augmented Generation (RAG), vector databases, and large language models (LLMs). Your primary responsibilities will involve developing state-of-the-art solutions to enhance Generative AI capabilities, enabling the platform to handle complex information retrieval, contextual generation, and adaptive interactions.

You will collaborate closely with a cross-functional team of engineers, architects, product managers, and UX designers to innovate AI solutions for new customer use-cases. Working independently, you will iterate rapidly based on customer feedback to refine product features. Your tasks will include implementing and optimizing LLMs for specific use cases, integrating RAG systems to enhance content accuracy, designing and optimizing vector databases for efficient search, and fine-tuning prompts to improve LLM performance.

In this role, you will leverage evaluation frameworks and metrics to assess and enhance the performance of generative models and AI systems. Additionally, you will work with data scientists, engineers, and product teams to integrate AI-driven capabilities into both customer-facing products and internal tools. Staying updated with the latest research and trends in LLMs, RAG, and generative AI technologies will be essential to driving innovation within the company's offerings.

To be successful in this role, you must possess a strong understanding of large language models, transformer architecture, and hyper-parameter tuning. You should have proven experience in building AI agents using Retrieval-Augmented Generation and working with external data sources. Proficiency in Python, experience with machine learning libraries such as TensorFlow and PyTorch, and familiarity with technologies like FAISS, Pinecone, and Weaviate are required. A bachelor's degree in engineering (CS/IT) or an equivalent degree from a reputable institute is necessary for this position. Additionally, experience with graph-based architectures, cloud platforms (AWS, GCP, Azure), and containerization technologies (Docker, Kubernetes) would be beneficial. Publication or contributions to research in AI-related fields are considered a plus.

PubMatic offers a hybrid work schedule allowing employees to work 3 days in the office and 2 days remotely, aiming to maximize collaboration and productivity. The benefits package includes paternity/maternity leave, healthcare insurance, broadband reimbursement, and other perks such as a kitchen stocked with snacks and catered lunches. Join PubMatic, a leading digital advertising platform, and contribute to driving better business outcomes through transparent advertising solutions on the open internet.
Posted 2 weeks ago
3.0 - 10.0 years
0 Lacs
Karnataka
On-site
As a Data Scientist specializing in GenAI within the Banking domain, you will draw on your more than 10 years of experience in Data Science, including 3+ years focused specifically on GenAI. Your expertise will be instrumental in developing, training, and fine-tuning GenAI models such as LLMs and GPT for various banking use cases. You will collaborate closely with business and product teams to design and implement predictive models, NLP solutions, and recommendation systems tailored to the financial industry. A key aspect of your role will involve working with large volumes of both structured and unstructured financial data to derive valuable insights.

Moreover, you will be responsible for ensuring the ethical and compliant use of AI by incorporating practices related to fairness, explainability, and compliance into the model outputs. Deployment of these models using MLOps practices like CI/CD pipelines and model monitoring will also be within your purview.

Your skill set must include strong proficiency in Python programming, along with a deep understanding of libraries such as Pandas, NumPy, Scikit-learn, TensorFlow, and PyTorch. Hands-on experience with GenAI tools like OpenAI, Hugging Face, LangChain, and Azure OpenAI will be crucial for success in this role. Furthermore, your expertise in NLP, prompt engineering, embeddings, and vector databases will play a pivotal role in building models for critical banking functions like credit risk assessment, fraud detection, and customer segmentation.

While not mandatory, it would be advantageous to possess knowledge of LLM fine-tuning and retrieval-augmented generation (RAG), along with familiarity with data privacy and compliance regulations such as GDPR and RBI guidelines as they pertain to AI systems. Your understanding of banking data, processes, and regulatory requirements will be key to delivering effective AI-driven solutions within the financial services industry.
Posted 2 weeks ago
5.0 - 9.0 years
0 Lacs
Karnataka
On-site
As a Senior Machine Learning Engineer with expertise in Large Language Models (LLMs) and Retrieval-Augmented Generation (RAG), you will play a critical role in supporting a high-impact Proof of Concept (PoC) focused on legacy code analysis, transformation, and modernization. Your primary responsibility will be to enable intelligent code migration and documentation through the implementation of advanced AI/ML tooling.

You will be expected to develop and integrate LLM-based pipelines, utilizing tools such as Claude Sonnet 3.7 or 4 on AWS Bedrock. Additionally, you will design and implement RAG-based systems for code understanding, leveraging vector databases like Milvus or Pinecone. Your expertise in Abstract Syntax Tree (AST) techniques will be crucial for parsing, analyzing, and transforming legacy code for migration and documentation purposes. You will also apply CodeRAG techniques to facilitate context-aware retrieval and transformation of code artifacts. Iterative validation and correction of AI-generated outputs will be part of your routine to ensure the production of high-quality code assets. Data preprocessing and metadata enrichment, including embeddings, structured knowledge, or fine-tuning for LLM input optimization, will also be within your scope of work.

Collaboration with domain experts and engineering teams is essential to ensure alignment with architecture and business logic. You will utilize version control systems like Git to manage changes, support collaboration, and ensure reproducibility. Additionally, you will contribute to QA strategies and help define testing protocols for model output validation.

To excel in this role, you must have at least 5 years of experience in Machine Learning, with a strong focus on LLM applications and code understanding. Proficiency in Python and solid software engineering principles are a must, as is experience working with AWS Bedrock for model deployment and orchestration. You should possess a strong understanding of and hands-on experience with AST-based code parsing and transformation, familiarity with RAG architectures, and experience working with vector databases like Milvus, Pinecone, or similar. Experience in preprocessing legacy codebases, enriching metadata for LLM consumption, and using Git or other version control systems in collaborative environments is also crucial. A solid understanding of code migration, modernization processes, and business logic documentation is required as well.

Nice-to-have qualities include experience ensuring compliance with architectural and code specifications, documenting code flows and aligning them with business requirements, and familiarity with QA and testing strategies in AI/ML or code-generation workflows. A collaborative mindset, strong communication skills, and a proactive attitude are essential for working in a fast-paced PoC environment with tight feedback loops.
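To illustrate the AST-based analysis this posting centers on, here is a minimal sketch using Python's standard-library ast module; the legacy snippet is invented for demonstration, and a real engagement would use a parser appropriate to the legacy language.

```python
# Minimal AST sketch: parse a source snippet, then inventory its functions and
# call sites so the metadata could feed an LLM prompt or a code-RAG index.
import ast

legacy_source = """
def calc_interest(balance, rate):
    return balance * rate / 100

def monthly_report(accounts):
    return [calc_interest(a["balance"], a["rate"]) for a in accounts]
"""

tree = ast.parse(legacy_source)

functions = [n.name for n in ast.walk(tree) if isinstance(n, ast.FunctionDef)]
calls = [
    n.func.id
    for n in ast.walk(tree)
    if isinstance(n, ast.Call) and isinstance(n.func, ast.Name)
]

print("defined:", functions)          # ['calc_interest', 'monthly_report']
print("called:", sorted(set(calls)))  # ['calc_interest']
```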
Posted 2 weeks ago
1.0 - 5.0 years
0 Lacs
Hyderabad, Telangana
On-site
We are seeking a skilled Senior Java Backend Engineer to join our team dedicated to constructing scalable and high-performance backend systems for Generative AI applications. You will play a pivotal role in the design of APIs, coordination of AI agents, and integration of Large Language Models (LLMs) into production-ready systems. This position is ideal for backend developers enthusiastic about modern AI technologies and distributed systems.

**Key Responsibilities:**
- Design, build, and maintain scalable backend services using Java and the Spring Boot framework
- Develop APIs to enable LLM integration and support AI agent orchestration workflows
- Architect microservices to power RAG (Retrieval-Augmented Generation) and other LLM-driven systems
- Optimize performance through efficient caching strategies and vector database interactions
- Integrate and manage connections with multiple LLM providers (e.g., OpenAI, Gemini, Claude), including rate limiting and failover handling
- Implement real-time streaming features for conversational AI systems
- Ensure robust system observability with logging, monitoring, and tracing

**Required Skills & Qualifications:**
- 3+ years of experience with Java and Spring Boot
- Strong understanding of RESTful API design principles and microservices architecture
- Proficiency with core Spring modules like Spring Security, Spring Data JPA, and Spring Cloud
- Experience with relational and NoSQL databases such as PostgreSQL, MongoDB, and Redis
- Familiarity with message brokers like RabbitMQ or Apache Kafka
- Expertise in caching mechanisms and system performance tuning
- Experience in integrating LLM APIs and vector databases into backend services
- Knowledge of AI agent orchestration frameworks and RAG systems
- Proficiency in developing streaming responses using WebSockets
- Working knowledge of prompt templating and management systems

**Nice to Have:**
- Experience in fine-tuning LLMs and managing model deployment pipelines
- Knowledge of self-hosted LLM environments and infrastructure management
- Exposure to observability tools such as LangSmith or custom monitoring setups
- Familiarity with natural language to SQL systems or BI applications powered by LLMs
Posted 2 weeks ago
6.0 - 10.0 years
0 Lacs
Haryana
On-site
The next step of your career starts here, where you can bring your own unique mix of skills and perspectives to a fast-growing team. Metyis is a global and forward-thinking firm operating across a wide range of industries, developing and delivering AI & Data, Digital Commerce, Marketing & Design solutions and Advisory services. At Metyis, our long-term partnership model brings long-lasting impact and growth to our business partners and clients through extensive execution capabilities. With our team, you can experience a collaborative environment with highly skilled multidisciplinary experts, where everyone has room to build bigger and bolder ideas. Being part of Metyis means you can speak your mind and be creative with your knowledge. Imagine the things you can achieve with a team that encourages you to be the best version of yourself. We are Metyis. Partners for Impact.

Responsibilities:
- Interact with C-level at our clients on a regular basis to drive their business towards impactful change.
- Lead your team in creating new business solutions.
- Seize opportunities at the client and at Metyis in our entrepreneurial environment.
- Become part of a fast-growing international and diverse team.
- Lead and manage the delivery of complex data science projects, ensuring quality and timelines.
- Engage with clients and business stakeholders to understand business challenges and translate them into analytical solutions.
- Design solution architectures and guide the technical approach across projects.
- Align technical deliverables with business goals, ensuring data products create measurable business value.
- Communicate insights clearly through presentations, visualizations, and storytelling for both technical and non-technical audiences.
- Promote best practices in coding, model validation, documentation, and reproducibility across the data science lifecycle.
- Collaborate with cross-functional teams to ensure smooth integration and deployment of solutions.
- Drive experimentation and innovation in AI/ML techniques, including newer fields such as Generative AI.

Qualifications:
- 6+ years of experience in delivering full-lifecycle data science projects.
- Proven ability to lead cross-functional teams and manage client interactions independently.
- Strong business understanding with the ability to connect data science outputs to strategic business outcomes.
- Experience with stakeholder management, translating business questions into data science solutions.
- Track record of mentoring junior team members and creating a collaborative learning environment.
- Familiarity with data productization and ML systems in production, including pipelines, monitoring, and scalability.
- Experience managing project roadmaps, resourcing, and client communication.
- Strong hands-on experience in Python/R and SQL.
- Good understanding of and experience with cloud platforms such as Azure, AWS, or GCP.
- Experience with data visualization tools in Python, such as Seaborn and Plotly.
- Good understanding of Git concepts.
- Good experience with data manipulation tools in Python, such as Pandas and NumPy.
- Must have worked with scikit-learn, NLTK, spaCy, and transformers.
- Experience with dashboarding tools such as Power BI and Tableau to create interactive and insightful visualizations.
- Proficient in using deployment and containerization tools like Docker and Kubernetes for building and managing scalable applications.

Core Competencies:
- Strong foundation in machine learning algorithms, predictive modeling, and statistical analysis.
- Good understanding of deep learning concepts, especially in NLP and Computer Vision applications.
- Proficiency in time-series forecasting and business analytics for functions like marketing, sales, operations, and CRM.
- Exposure to tools like MLflow, model deployment, API integration, and CI/CD pipelines.
- Hands-on experience with MLOps and model governance best practices in production environments.
- Experience in developing optimization and recommendation system solutions to enhance decision-making, user personalization, and operational efficiency across business functions.
- Generative AI experience with text and image data.
- Familiarity with LLM frameworks such as LangChain and hubs like Hugging Face.
- Exposure to vector databases (e.g., FAISS, Pinecone, Weaviate) for semantic search or retrieval-augmented generation (RAG).
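As a small illustration of the time-series forecasting and tooling listed above (Pandas, NumPy, scikit-learn), here is a minimal lag-feature forecast on a synthetic weekly sales series; the data and feature choices are assumptions for demonstration only.

```python
# Minimal time-series forecasting sketch: lag features + linear regression.
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
dates = pd.date_range("2023-01-01", periods=104, freq="W")
sales = 100 + 0.5 * np.arange(104) + 10 * np.sin(np.arange(104) * 2 * np.pi / 52) + rng.normal(0, 3, 104)
df = pd.DataFrame({"sales": sales}, index=dates)

# Lag features: last week and roughly one quarter back.
df["lag_1"] = df["sales"].shift(1)
df["lag_13"] = df["sales"].shift(13)
df = df.dropna()

train, test = df.iloc[:-8], df.iloc[-8:]
model = LinearRegression().fit(train[["lag_1", "lag_13"]], train["sales"])
pred = model.predict(test[["lag_1", "lag_13"]])
print("MAE:", float(np.mean(np.abs(pred - test["sales"].values))))
```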
Posted 2 weeks ago
3.0 - 10.0 years
0 Lacs
Karnataka
On-site
As a Data Scientist specializing in GenAI within the Banking domain, you will leverage your more than 10 years of experience in Data Science, including 3+ years specifically in GenAI. Your key responsibility will be developing, training, and fine-tuning GenAI models such as LLMs, GPT, etc., tailored for banking use cases. Additionally, you will be tasked with designing and implementing predictive models, NLP solutions, and recommendation systems while working with large volumes of structured and unstructured financial data.

Collaboration with business and product teams to define AI-driven solutions will be crucial in your role. You will play a pivotal part in ensuring responsible AI practices, fairness, explainability, and compliance in model outputs. Deployment of models using MLOps practices, including CI/CD pipelines and model monitoring, will also be within your purview.

Your skill set must include strong proficiency in Python programming, along with libraries like Pandas, NumPy, Scikit-learn, and TensorFlow/PyTorch. Hands-on experience with GenAI tools such as OpenAI, Hugging Face, LangChain, Azure OpenAI, etc., is essential, as is expertise in NLP, prompt engineering, embeddings, and vector databases. Furthermore, you should have experience in building models for credit risk assessment, fraud detection, customer segmentation, and similar use cases. A solid understanding of banking data, processes, and regulatory requirements is essential for success in this role.

Moreover, knowledge of LLM fine-tuning and retrieval-augmented generation (RAG) is considered a good-to-have skill. Exposure to data privacy and compliance standards, such as GDPR and RBI guidelines, in AI systems will be an added advantage.
Posted 2 weeks ago
7.0 - 11.0 years
0 Lacs
Hyderabad, Telangana
On-site
As a skilled and passionate Platform Architect specializing in GenAI/LLM Systems, you will be responsible for architecting scalable, cloud-native infrastructure to support enterprise-grade GenAI and LLM-powered applications. Your role will involve designing and deploying secure, reliable API gateways, orchestration layers (such as Airflow, Kubeflow), and CI/CD workflows for ML and LLM pipelines. Collaboration with data and ML engineering teams will be essential to enable low-latency LLM inference and vector-based search platforms across GCP (or multi-cloud).

You will define and implement a semantic layer and data abstraction strategy to facilitate consistent and governed consumption of data across LLM and analytics use cases. Additionally, implementing robust data governance frameworks, including role-based access control (RBAC), data lineage, cataloging, observability, and metadata management, will be part of your responsibilities. Your role will involve guiding architectural decisions around embedding stores, vector databases, LLM tooling, and prompt orchestration (e.g., LangChain, LlamaIndex), as well as establishing compliance and security standards to meet enterprise SLA, privacy, and auditability requirements.

To excel in this role, you should have at least 7 years of experience as a Platform/Cloud/Data Architect, ideally within GenAI, Data Platforms, or LLM systems. Strong cloud infrastructure experience on GCP (preferred), AWS, or Azure, including Kubernetes, Docker, and Terraform/IaC, is required. Demonstrated experience in building and scaling LLM-powered architectures using OpenAI, Vertex AI, LangChain, LlamaIndex, etc. will be a significant advantage. You should also have familiarity with semantic layers, data catalogs, lineage tracking, and governed data delivery across APIs and ML pipelines. A track record of deploying production-grade GenAI/LLM services that meet performance, compliance, and enterprise integration requirements is essential. Strong communication and cross-functional leadership skills are also desired, as you will be required to translate business needs into scalable architecture.
Posted 2 weeks ago
3.0 - 7.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
The role involves various responsibilities in the field of AI engineering, focusing on prompt engineering, API integration, model fine-tuning, data engineering, LangChain/LlamaIndex development, vector databases, and frontend + GenAI integration.

You will be responsible for crafting effective prompts for models like GPT, DALL-E, and Codex, along with understanding prompt tuning, chaining, and context management. Additionally, you will work on API integration, utilizing APIs from platforms such as OpenAI, Hugging Face, and Cohere. This will involve handling authentication, rate limits, and response parsing effectively.

Furthermore, you will be involved in model fine-tuning and customization, working with open-source models like LLaMA, Mistral, and Falcon. Tools such as LoRA, PEFT, and Hugging Face Transformers will be utilized in this process. In the realm of data engineering for AI, you will focus on collecting, cleaning, and preparing datasets for training or inference purposes. A good understanding of tokenization and embeddings will be crucial in this aspect.

You will also work on LangChain/LlamaIndex, where you will build AI-powered applications with memory, tools, and retrieval-augmented generation (RAG). This will involve connecting large language models (LLMs) to external data sources like PDFs, databases, or APIs. Additionally, you will work with vector databases such as Pinecone, Weaviate, FAISS, or Chroma for semantic search and RAG, requiring a solid grasp of embeddings and similarity search techniques.

Another aspect of the role involves frontend and GenAI integration, where you will build GenAI-powered user interfaces using technologies like React, Next.js, or Flutter. Integration of chatbots, image generators, or code assistants will also be part of this responsibility.

Overall, the role encompasses a diverse set of tasks in AI engineering, ranging from prompt engineering and model fine-tuning to data engineering and frontend development, all aimed at creating innovative AI solutions.
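To illustrate the API-integration work described above (authentication, rate limits, response parsing), here is a minimal sketch against the OpenAI Python SDK (v1+); the model name and retry policy are illustrative assumptions.

```python
# Minimal LLM API integration sketch: auth via environment, rate-limit retry,
# and response parsing.
import time
from openai import OpenAI, RateLimitError

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(prompt: str, retries: int = 3) -> str:
    for attempt in range(retries):
        try:
            response = client.chat.completions.create(
                model="gpt-4o-mini",  # assumed model id
                messages=[{"role": "user", "content": prompt}],
                temperature=0,
            )
            return response.choices[0].message.content
        except RateLimitError:
            time.sleep(2 ** attempt)  # simple exponential backoff
    raise RuntimeError("rate limit retries exhausted")

print(ask("Explain retrieval-augmented generation in one sentence."))
```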
Posted 2 weeks ago
5.0 - 9.0 years
0 Lacs
Noida, Uttar Pradesh
On-site
You will be joining a dynamic team as an AI Trainer in the AI Learning & Development / Technical Training department. Your primary role will involve designing, developing, and delivering high-quality AI education programs in collaboration with the software product engineering team. As an expert in AI and open-source technologies, you will have the opportunity to mentor and teach the next generation of AI professionals.

Your key responsibilities will include collaborating with the engineering team to create industry-aligned AI training programs, conducting training needs analysis, developing customized training roadmaps, delivering large-scale training sessions, workshops, and bootcamps, and incorporating open-source technologies for innovation. You will also mentor learners through projects and assignments to ensure practical skill acquisition, create training content, assessments, and feedback mechanisms for continuous improvement, and stay updated with emerging trends in AI/ML technologies to enhance training programs.

To qualify for this role, you should hold a Bachelor's or Master's degree in Computer Science, Artificial Intelligence, Data Science, or a related field, along with at least 5 years of experience in curating and delivering technical training programs. Strong expertise in open-source AI/ML tools like Python, TensorFlow, PyTorch, Hugging Face, and LangChain is essential. Additionally, you should have experience with LLMs, prompt engineering, vector databases, and AI agent frameworks, along with excellent communication skills to explain complex concepts effectively.

Preferred qualifications include familiarity with AI integration in enterprise software products, exposure to cloud platforms and containerization tools, experience in edtech or AI sectors, and contributions to open-source projects or community involvement. Prior experience with AI certification programs from leading providers like AWS, Nvidia, GCP, or Microsoft would be advantageous.

In this role, you will have the opportunity to work at the forefront of AI education and innovation, collaborate with a multidisciplinary product engineering team, and contribute to a mission-driven organization focused on real-world impact. You can expect opportunities for growth into strategic leadership, curriculum innovation, or AI evangelism roles, within a supportive and intellectually vibrant work culture that provides access to the latest tools and resources.
Posted 2 weeks ago
8.0 - 10.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Job Description

Oracle Health & AI (OHAI) is a newly formed business unit committed to transforming the healthcare industry through our expertise in IaaS and SaaS. Our mission is to deliver patient-centered care and make advanced clinical tools accessible globally. We're assembling a team of innovative technologists to build the next-generation health platform, a greenfield initiative driven by entrepreneurship, creativity, and energy. If you thrive in a fast-paced, innovative environment, we invite you to help us create a world-class engineering team with a meaningful impact.

The OHAI Patient Accounting Analytics Team focuses on delivering cutting-edge reporting metrics and visualizations for healthcare financial data. Our goal is to transform healthcare by automating insurance and patient billing processes, helping optimize operations and reimbursement. Our solutions leverage a blend of reporting platforms across both on-premises and cloud infrastructure. As we expand reporting capabilities on the cloud and make use of AI, we're looking for talented professionals to join us on this exciting journey.

Responsibilities

We are seeking a hands-on Principal Member of Technical Staff with proven experience in NoSQL databases, machine learning, and AI/LLM (Large Language Model) design and implementation. In this role, you will design and maintain complex data pipelines, enable scalable AI solutions, and collaborate with multidisciplinary teams to advance our enterprise analytics and AI capabilities.

Minimum Qualifications
- Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field.
- 8+ years of experience in data engineering, including 3+ years working with NoSQL databases.
- Experience designing, implementing, and optimizing data architectures for large-scale analytics or AI/ML solutions.
- Proficiency in programming languages commonly used for data engineering (e.g., Python, Java, Scala).
- Hands-on experience integrating and scaling machine learning models in production environments.
- Familiarity with concepts and frameworks for LLM construction and deployment.
- Solid understanding of data modeling, data structures, and distributed data systems.
- Knowledge of data security, compliance, and privacy best practices.
- Excellent problem-solving and communication skills.

Preferred Qualifications
- Experience with healthcare/financial systems.
- Experience with cloud platforms and managed data/ML services (e.g., Oracle Cloud, AWS, Azure, GCP).
- Experience working with vector databases and retrieval-augmented generation (RAG) for LLMs.
- Prior experience in enterprise, large-scale, or highly regulated environments.
- Certifications in Data Engineering, Cloud, or AI/ML (preferred but not required).

Qualifications

Career Level: IC4

About Us

As a world leader in cloud solutions, Oracle uses tomorrow's technology to tackle today's challenges. We've partnered with industry leaders in almost every sector, and continue to thrive after 40+ years of change by operating with integrity. We know that true innovation starts when everyone is empowered to contribute. That's why we're committed to growing an inclusive workforce that promotes opportunities for all. Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options.
We also encourage employees to give back to their communities through our volunteer programs. We're committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing [HIDDEN TEXT] or by calling +1 888 404 2494 in the United States. Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability, and protected veterans status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.
Posted 2 weeks ago
12.0 - 20.0 years
0 Lacs
India
On-site
Senior GenAI & Agentic AI Expert (Architect)
Relocation to Abu Dhabi, UAE
Location: Abu Dhabi
Client: Abu Dhabi Government

About The Role
Our client, a global consulting firm with distributed teams across the US, Canada, UAE, India, and PK, is hiring a high-caliber Senior Generative AI Expert with proven hands-on experience in building Agentic AI applications. This role is ideal for someone who has 12 to 20+ years of software engineering and AI/ML experience and is now focused on autonomous AI agents, tool-using LLMs, LangChain, AutoGPT, or similar frameworks.

Key Responsibilities
- Design and develop Agentic AI applications using LLM frameworks (LangChain, AutoGPT, CrewAI, Semantic Kernel, or similar)
- Architect and implement multi-agent systems for enterprise-grade solutions
- Integrate AI agents with APIs, databases, internal tools, and external SaaS products
- Lead and mentor a cross-functional team across global time zones
- Optimize performance, context retention, tool usage, and cost efficiency
- Build reusable pipelines and modules to support GenAI use cases at scale
- Ensure enterprise-grade security, privacy, and compliance standards in deployments
- Collaborate directly with clients and senior stakeholders

Ideal Candidate Profile
- 10 to 15+ years of professional experience in software engineering and AI/ML
- 3+ years of practical experience in LLM-based application development
- Strong track record of delivering Agentic AI systems (not just chatbot interfaces)
- Hands-on experience with: LangChain, AutoGPT, CrewAI, ReAct, Semantic Kernel; OpenAI, Claude, Gemini, Mistral, or Llama 2; embedding models and vector databases (FAISS, Pinecone, Weaviate, etc.); prompt engineering, RAG, and memory/context management; serverless, Python, Node.js, and AWS/GCP/Azure cloud
- Experience leading engineering teams and working with enterprise clients
- Excellent communication, documentation, and stakeholder management skills
- Must be open to relocation to the UAE

Why Join
- Work on UAE Government project(s)
- Lead cutting-edge Agentic AI projects at enterprise scale
- Collaborate with senior teams across US, Canada, UAE, India, and PK
- Competitive compensation + long-term career roadmap

Skills: memory/context management, API integration, enterprise-grade security, CrewAI, SaaS product integration, AWS, Semantic Kernel, prompt engineering, OpenAI, Node.js, multi-agent systems, Azure, database integration, Gemini, cost efficiency, embedding models, RAG, AutoGPT, performance optimization, LLM frameworks, agentic AI, generative AI, LangChain, GCP, Python, vector databases
Posted 2 weeks ago
3.0 - 8.0 years
3 - 8 Lacs
Hyderabad, Telangana, India
On-site
We are seeking an innovative AI/ML Engineer with a strong background in Python and backend development to build and deploy advanced AI solutions. You will be responsible for developing scalable APIs, implementing LLM-driven workflows, and building Retrieval-Augmented Generation (RAG) pipelines. This role requires hands-on experience with frameworks like LangChain, proficiency in vector databases, and collaboration with cross-functional teams to bring robust AI features to production.

Roles & Responsibilities:
- Develop and maintain scalable backend APIs and services in Python (3.11+) using FastAPI.
- Design and implement LLM-driven solutions, including prompt engineering and intelligent AI agent workflows.
- Build and deploy Retrieval-Augmented Generation (RAG) pipelines using vector databases such as Weaviate and Neo4j.
- Leverage LangChain and LangGraph for orchestrating LLM interactions and knowledge graph tasks.
- Integrate and manage data storage with Firebase Firestore or similar NoSQL databases on Google Cloud Platform (GCP).
- Process, clean, and analyze text data using NLP techniques to feed into AI pipelines.
- Collaborate with cross-functional teams to translate requirements into robust AI features.
- Create dashboards and developer tools using JavaScript/TypeScript for monitoring and analytics of AI systems.
- Ensure code quality and follow best practices (testing, CI/CD) for reliable production deployments.

Skills Required:
- Strong focus on Python (3.11+) and backend development.
- Proficiency with FastAPI or similar Python web frameworks.
- Hands-on experience with LLMs (Large Language Models) and Generative AI, including prompt engineering.
- Solid experience with LangChain and LangGraph (or equivalent frameworks).
- Proven track record in building RAG systems and deploying AI agents in production.
- Proficient in using vector databases like Weaviate.
- Strong NLP and text data processing skills.
- Experience with GCP services and Firebase Firestore.
- Basic proficiency in JavaScript/TypeScript.
- Excellent problem-solving, communication, and teamwork skills.
- Experience with Docker, Kubernetes, and CI/CD pipelines is a plus.

QUALIFICATION: Bachelor's degree in Computer Science, AI, or a related technical field, or equivalent practical experience.
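For illustration, here is a minimal sketch of a FastAPI endpoint fronting an LLM/RAG workflow of the kind described above; the answer_with_rag helper is a hypothetical placeholder for the real retrieval and generation pipeline.

```python
# Minimal FastAPI sketch: a single endpoint that delegates to a stubbed RAG helper.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="AI backend sketch")

class Query(BaseModel):
    question: str
    top_k: int = 3

def answer_with_rag(question: str, top_k: int) -> str:
    # Placeholder: a real implementation would retrieve top_k chunks from a
    # vector store and pass them to an LLM for generation.
    return f"(stub) would answer '{question}' using {top_k} retrieved chunks"

@app.post("/ask")
def ask(query: Query) -> dict:
    return {"answer": answer_with_rag(query.question, query.top_k)}

# Run locally with: uvicorn main:app --reload
```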
Posted 2 weeks ago
3.0 - 8.0 years
3 - 8 Lacs
Delhi, India
On-site
We are seeking an innovative AI/ML Engineer with a strong background in Python and backend development to build and deploy advanced AI solutions. You will be responsible for developing scalable APIs, implementing LLM-driven workflows, and building Retrieval-Augmented Generation (RAG) pipelines. This role requires hands-on experience with frameworks like LangChain, proficiency in vector databases, and collaboration with cross-functional teams to bring robust AI features to production.

Roles & Responsibilities:
- Develop and maintain scalable backend APIs and services in Python (3.11+) using FastAPI.
- Design and implement LLM-driven solutions, including prompt engineering and intelligent AI agent workflows.
- Build and deploy Retrieval-Augmented Generation (RAG) pipelines using vector databases such as Weaviate and Neo4j.
- Leverage LangChain and LangGraph for orchestrating LLM interactions and knowledge graph tasks.
- Integrate and manage data storage with Firebase Firestore or similar NoSQL databases on Google Cloud Platform (GCP).
- Process, clean, and analyze text data using NLP techniques to feed into AI pipelines.
- Collaborate with cross-functional teams to translate requirements into robust AI features.
- Create dashboards and developer tools using JavaScript/TypeScript for monitoring and analytics of AI systems.
- Ensure code quality and follow best practices (testing, CI/CD) for reliable production deployments.

Skills Required:
- Strong focus on Python (3.11+) and backend development.
- Proficiency with FastAPI or similar Python web frameworks.
- Hands-on experience with LLMs (Large Language Models) and Generative AI, including prompt engineering.
- Solid experience with LangChain and LangGraph (or equivalent frameworks).
- Proven track record in building RAG systems and deploying AI agents in production.
- Proficient in using vector databases like Weaviate.
- Strong NLP and text data processing skills.
- Experience with GCP services and Firebase Firestore.
- Basic proficiency in JavaScript/TypeScript.
- Excellent problem-solving, communication, and teamwork skills.
- Experience with Docker, Kubernetes, and CI/CD pipelines is a plus.

QUALIFICATION: Bachelor's degree in Computer Science, AI, or a related technical field, or equivalent practical experience.
Posted 2 weeks ago
5.0 - 9.0 years
0 Lacs
Hyderabad, Telangana
On-site
As a Senior Backend (API) Engineer with 5 to 8 years of experience, you will be responsible for designing and implementing scalable APIs for AI-powered enterprise pipelines. Your expertise in Python for backend API development will be crucial in this role. You should have a strong knowledge of Cloud (AWS) services like S3, Lambda, SQS, SNS, Bedrock, EKS, ECR, etc., and proficiency in developing and scaling distributed, loosely coupled APIs for text and image file ingestion, search, storage, and retrieval. It is essential to have a solid understanding of authentication & security, caching, event-driven processing, and logging/monitoring systems.

In this role, it would be beneficial to have experience with Vector Databases such as MongoDB, Pinecone, FAISS, or similar for API & search implementations, as well as hands-on experience with containerization & orchestration (EKS, Docker, Kubernetes). Exposure to end-to-end DevOps implementation in a cloud environment will also be a plus.

Your responsibilities will include developing high-performance backend systems for handling large-scale data ingestion and retrieval and working on distributed system architectures, including message queuing, event-driven processing, and caching mechanisms. You will need to optimize API performance, security, and cloud deployment strategies, and collaborate with cross-functional teams to integrate AI/LLM models into enterprise applications.

Overall, as a Senior Backend (API) Engineer, you will play a key role in the development and maintenance of efficient and secure APIs for AI-powered enterprise pipelines, leveraging your expertise in Python, Cloud services, distributed systems, and DevOps practices.
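As a small illustration of the loosely coupled, event-driven ingestion pattern this posting describes (S3 plus SQS), here is a minimal boto3 sketch; the bucket name and queue URL are hypothetical placeholders, and running it requires configured AWS credentials.

```python
# Minimal event-driven ingestion sketch: store a payload in S3, then enqueue an
# SQS message so a downstream worker can pick it up asynchronously.
import json
import boto3

s3 = boto3.client("s3")
sqs = boto3.client("sqs")

BUCKET = "example-ingestion-bucket"  # assumed bucket name
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/example-ingest-queue"  # assumed queue

def ingest(key: str, payload: bytes) -> None:
    s3.put_object(Bucket=BUCKET, Key=key, Body=payload)
    sqs.send_message(
        QueueUrl=QUEUE_URL,
        MessageBody=json.dumps({"bucket": BUCKET, "key": key}),
    )

ingest("docs/sample.txt", b"hello from the ingestion API")
```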
Posted 2 weeks ago
5.0 - 9.0 years
0 Lacs
Punjab
On-site
At Quark, we have been at the forefront of revolutionizing graphic design, digital publishing, and content automation since 1981. With over four decades of experience, we empower organizations to master their content lifecycle through cutting-edge design, automation, and intelligence. Our innovative software solutions allow customers to efficiently create, manage, publish, and analyze their content. As we enter a new phase of growth, we are seeking exceptional individuals to join our global team.

Quark serves as the foundation for all content, just as a quark forms the basis of all matter in science. Our commitment to excellence is encapsulated in our tagline, "brilliant content that works." With a diverse global workforce of approximately 250 professionals, we foster an inclusive culture that values and celebrates our team's diversity.

As a highly motivated Senior AI Engineer at Quark Inc., you will play a crucial role in our expanding AI team. Your primary responsibilities will include building scalable AI systems that focus on document conversion, domain-adaptive model training, and conversational AI. The ideal candidate will possess a strong background in Natural Language Processing (NLP), experience with both open-source and commercial AI platforms, and a proven track record of delivering AI solutions in production environments.

Your key responsibilities will include:
- Designing and implementing AI pipelines for converting unstructured documents to structured formats such as XML, JSON, or proprietary schemas.
- Developing and fine-tuning Conversational AI agents (chatbots, virtual assistants) for various use cases using Large Language Models (LLMs) or open-source models.
- Training, evaluating, and deploying domain-specific AI/ML/NLP models.
- Collaborating with product managers, domain experts, and backend/frontend engineers to integrate AI capabilities into production systems.
- Leveraging frameworks like LangChain, Spring AI, or RAG pipelines for building intelligent systems.
- Building tools and infrastructure to facilitate scalable and reusable AI model development.
- Conducting comprehensive data preprocessing, feature engineering, and model performance evaluations.
- Enabling Bring Your Own Model/AI capabilities for customers by designing pluggable AI interfaces.
- Keeping abreast of the latest research and developments in the AI space.

We are looking for candidates with the following qualifications:
- Bachelor's or Master's degree in Computer Science, Artificial Intelligence, Machine Learning, or a related field.
- 8+ years of experience in software development.
- 5+ years of professional experience in AI/ML/NLP.
- Proven experience in Document AI, Conversational AI, and model training and fine-tuning, with strong programming skills in Python and familiarity with Java/Kotlin-based backends.
- Hands-on experience with frameworks like Hugging Face Transformers, LangChain, Spring AI, TensorFlow, PyTorch, or the OpenAI API.
- Experience with vector databases and semantic search, as well as familiarity with cloud platforms like Azure, AWS, or GCP.
- Solid understanding of data privacy, model security, and ethical AI principles.

Join Quark, a pioneer in closed-loop content lifecycle management, and be part of a team that enables organizations to engage their audiences with precision and impact. We offer comprehensive benefits from day one and are committed to your growth and success. Together, we will harness the power of innovative and successful content.
Posted 2 weeks ago
8.0 - 10.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Teamwork makes the stream work.

Roku is changing how the world watches TV. Roku is the #1 TV streaming platform in the U.S., Canada, and Mexico, and we've set our sights on powering every television in the world. Roku pioneered streaming to the TV. Our mission is to be the TV streaming platform that connects the entire TV ecosystem. We connect consumers to the content they love, enable content publishers to build and monetize large audiences, and provide advertisers unique capabilities to engage consumers.

From your first day at Roku, you'll make a valuable - and valued - contribution. We're a fast-growing public company where no one is a bystander. We offer you the opportunity to delight millions of TV streamers around the world while gaining meaningful experience across a variety of disciplines.

About the Team
Our mission is to deliver a best-in-class user experience for internal and external advertising products that delights users and simplifies their tasks. The Ads Customer Interfaces team develops full-stack web applications and end-to-end revenue operations business tooling to provide a unified interface for Roku's suite of advertising products.

About the Role
We are looking for a seasoned AI Engineer to join our team to build intelligent, autonomous systems that unlock scale, productivity, and innovation across the business. We focus on building agentic AI capabilities, where smart agents can reason, plan, and act on behalf of users and teams, integrated seamlessly into business workflows. With a strong foundation in backend engineering and middleware architecture, our team sits at the intersection of cutting-edge, enterprise-scale software systems. We enable agents to interface with critical systems like CRMs, order management tools, financial platforms, and internal APIs, ensuring secure, resilient, and explainable automation.

What you will be doing
- Build intelligent agents that plan, reason, and take actions using LLMs (GPT, Gemini, Claude, LLaMA, etc.)
- Implement RAG pipelines, long-term memory, action orchestration, and multi-agent collaboration
- Translate business requirements into autonomous workflows and smart features that generate business value
- Apply advanced prompt engineering, tool use, and function-calling strategies to drive agent behavior
- Design and implement robust middleware that connects APIs, AI models, databases, and services in scalable ways
- Architect service-oriented and event-driven systems that enable real-time decision-making by agents
- Build secure, resilient integration layers that facilitate communication between business systems (CRM, ERP, OMS, etc.)
- Ensure observability, monitoring, and governance of agent operations and system interactions
- Partner with product managers, data engineers, and backend teams to align system capabilities with business goals
- Lead architectural discussions and make trade-off decisions regarding scalability, modularity, and performance
- Mentor engineers and contribute to architectural standards and AI enablement frameworks

We're excited if you have
- A Bachelor's or Master's degree in Computer Science, Computer Engineering, Machine Learning, or a related field, with 8+ years of overall industry experience
- Strong computer science fundamentals; able to design and implement efficient algorithms with ease
- 2+ years of hands-on experience with agentic AI and/or LLM-based agent frameworks
- Proven experience designing and building middleware platforms, REST APIs, and distributed systems at scale
- Advanced proficiency in Python and/or Java, with deep familiarity with AI/ML libraries (PyTorch, Transformers, LangChain, etc.)
- Solid understanding of microservices architecture, message queues (Kafka, Pub/Sub, etc.), and cloud-native development (AWS/GCP)
- Experience working with vector databases and retrieval-augmented generation (RAG) pipelines
- Demonstrated success building and deploying agentic AI use cases in production, such as sales automation, operations agents, or analytics bots
- Either tried Gen AI in your previous work or outside of work, or are curious about Gen AI and have explored it

Benefits
Roku is committed to offering a diverse range of benefits as part of our compensation package to support our employees and their families. Our comprehensive benefits include global access to mental health and financial wellness support and resources. Local benefits include statutory and voluntary benefits which may include healthcare (medical, dental, and vision), life, accident, disability, commuter, and retirement options (401(k)/pension). Our employees can take time off work for vacation and other personal reasons to balance their evolving work and life needs. It's important to note that not every benefit is available in all locations or for every role. For details specific to your location, please consult with your recruiter.

The Roku Culture
Roku is a great place for people who want to work in a fast-paced environment where everyone is focused on the company's success rather than their own. We try to surround ourselves with people who are great at their jobs, who are easy to work with, and who keep their egos in check. We appreciate a sense of humor. We believe a fewer number of very talented folks can do more for less cost than a larger number of less talented teams. We're independent thinkers with big ideas who act boldly, move fast and accomplish extraordinary things through collaboration and trust. In short, at Roku you'll be part of a company that's changing how the world watches TV.

We have a unique culture that we are proud of. We think of ourselves primarily as problem-solvers, which itself is a two-part idea. We come up with the solution, but the solution isn't real until it is built and delivered to the customer. That penchant for action gives us a pragmatic approach to innovation, one that has served us well since 2002.

To learn more about Roku, our global footprint, and how we've grown, visit https://www.weareroku.com/factsheet.

By providing your information, you acknowledge that you have read our Applicant Privacy Notice and authorize Roku to process your data subject to those terms.
Posted 2 weeks ago
2.0 - 4.0 years
0 Lacs
, India
Remote
About The Role
Masai, in academic collaboration with a premier institute, is seeking a Teaching Assistant (TA) for its New Age Software Engineering program. This advanced 90-hour course equips learners with Generative AI foundations, production-grade AI engineering, serverless deployments, agentic workflows, and vision-enabled AI applications. The TA will play a key role in mentoring learners, resolving queries, sharing real-world practices, and guiding hands-on AI engineering projects. This role is perfect for professionals who want to contribute to next-generation AI-driven software engineering education while keeping their technical skills sharp.

Key Responsibilities (KRAs)
Doubt-Solving Sessions
- Conduct or moderate weekly sessions to clarify concepts across Generative AI & Prompt Engineering, AI Lifecycle Management & Observability, Serverless & Edge AI Deployments, and Agentic Workflows and Vision-Language Models (VLMs); a minimal serverless example is sketched after this posting.
- Share industry insights and practical examples to reinforce learning.
Q&A and Discussion Forum Support
- Respond to student questions through forums, chat, or email with detailed explanations and actionable solutions.
- Facilitate peer-to-peer discussions on emerging tools, frameworks, and best practices in AI engineering.
Research & Project Support
- Assist learners in capstone project design and integration, including vector databases, agent orchestration, and performance tuning.
- Collaborate with the academic team to research emerging AI frameworks like LangGraph, CrewAI, Hugging Face models, and WebGPU deployments.
Learner Engagement
- Drive engagement via assignment feedback, interactive problem-solving, and personalized nudges to keep learners motivated.
- Encourage learners to adopt best practices for responsible and scalable AI engineering.
Content Feedback Loop
- Collect learner feedback and recommend updates to curriculum modules for continuous course improvement.

Candidate Requirements
- 2+ years of experience in Software Engineering, AI Engineering, or Full-Stack Development.
- Strong knowledge of Python/Node.js, cloud platforms (AWS Lambda, Vercel, Cloudflare Workers), and modern AI tools.
- Hands-on experience with LLMs, vector databases (Pinecone, Weaviate), agentic frameworks (LangGraph, ReAct), and AI observability tools.
- Understanding of AI deployment, prompt engineering, model fine-tuning, and RAG pipelines.
- Excellent communication and problem-solving skills; mentoring experience is a plus.
- Familiarity with online learning platforms or LMS tools is advantageous.

Engagement Details
- Time Commitment: 6 to 8 hours per week
- Location: Remote (online)
- Compensation: ₹8,000 to ₹10,000 per month

Why Join Us: Benefits and Perks
- Contribute to a cutting-edge AI & software engineering program with a leading ed-tech platform.
- Mentor learners on next-generation AI applications and engineering best practices.
- Engage in flexible remote working while influencing future technological innovations.
- Access to continuous professional development and faculty enrichment programs.
- Network with industry experts and professionals in the AI and software engineering domain.

Skills: LLMs, RAG pipelines, vector databases (Pinecone, Weaviate), prompt engineering, model fine-tuning, agentic frameworks, AI observability tools, AWS Lambda, Vercel, Cloudflare Workers, edge deployments, Python, Node.js, mentoring, communication, problem-solving, online learning platforms
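As an illustration of the serverless deployment topic referenced above, the sketch below shows a minimal AWS Lambda handler that assembles a RAG-style prompt. The `call_model` function is a placeholder for a hosted LLM API, and retrieval is assumed to happen upstream of this handler.

```python
# Minimal sketch of a serverless LLM inference endpoint on AWS Lambda.
# The model call is stubbed out; in practice it would hit a hosted LLM API
# over HTTPS with proper authentication.
import json

def build_prompt(question, context_chunks):
    # Simple RAG-style prompt assembly: retrieved context plus the user question.
    context = "\n\n".join(context_chunks)
    return (
        "Answer using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )

def call_model(prompt):
    # Placeholder for a hosted LLM provider call.
    return "Model response would appear here."

def lambda_handler(event, context):
    # Standard AWS Lambda entry point, typically behind an API Gateway route.
    body = json.loads(event.get("body") or "{}")
    question = body.get("question", "")
    chunks = body.get("context", [])  # retrieval is assumed to happen upstream
    answer = call_model(build_prompt(question, chunks))
    return {"statusCode": 200, "body": json.dumps({"answer": answer})}
```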
Posted 2 weeks ago
5.0 - 7.0 years
0 Lacs
Pune, Maharashtra, India
On-site
STAND 8 provides end-to-end IT solutions to enterprise partners across the United States, with offices in Los Angeles, New York, New Jersey, Atlanta, and more, including internationally in Mexico and India.

We are seeking a Senior AI Engineer / Data Engineer to join our engineering team and help build the future of AI-powered business solutions. In this role, you'll be developing intelligent systems that leverage advanced large language models (LLMs), real-time AI interactions, and cutting-edge retrieval architectures. Your work will directly contribute to products that are reshaping how businesses operate, particularly in recruitment, data extraction, and intelligent decision-making. This is an exciting opportunity for someone who thrives in building production-grade AI systems and working across the full stack of modern AI technologies.

Responsibilities
- Design, build, and optimize AI-powered systems using multi-modal architectures (text, voice, visual).
- Integrate and deploy LLM APIs from providers such as OpenAI, Anthropic, and AWS Bedrock.
- Build and maintain RAG (Retrieval-Augmented Generation) systems with hybrid search, re-ranking, and knowledge graphs.
- Develop real-time AI features using streaming analytics and voice interaction tools (e.g., ElevenLabs).
- Build APIs and pipelines using FastAPI or similar frameworks to support AI workflows.
- Process and analyze unstructured documents with layout and semantic understanding.
- Implement predictive models that power intelligent business recommendations.
- Deploy and maintain scalable solutions using AWS services (EC2, S3, RDS, Lambda, Bedrock, etc.).
- Use Docker for containerization and manage CI/CD workflows and version control via Git.
- Debug, monitor, and optimize performance for large-scale data pipelines.
- Collaborate cross-functionally with product, data, and engineering teams.

Qualifications
- 5+ years of experience in AI/ML or data engineering with Python in production environments.
- Hands-on experience with LLM APIs and frameworks such as OpenAI, Anthropic, Bedrock, or LangChain.
- Production experience using vector databases like PGVector, Weaviate, FAISS, or Pinecone.
- Strong understanding of NLP, document extraction, and text processing.
- Proficiency in AWS cloud services including Bedrock, EC2, S3, Lambda, and monitoring tools.
- Experience with FastAPI or similar frameworks for building AI/ML APIs.
- Familiarity with embedding models, prompt engineering, and RAG systems.
- Asynchronous programming knowledge for high-throughput pipelines.
- Experience with Docker, Git workflows, CI/CD pipelines, and testing best practices.

Preferred
- Background in HRTech or ATS integrations (e.g., Greenhouse, Workday, Bullhorn).
- Experience working with knowledge graphs (e.g., Neo4j) for semantic relationships.
- Real-time AI systems (e.g., WebRTC, OpenAI Realtime API) and voice AI tools (e.g., ElevenLabs).
- Advanced Python development skills using design patterns and clean architecture.
- Large-scale data processing experience (1-2M+ records) with cost optimization techniques for LLMs.
- Event-driven architecture experience using AWS SQS, SNS, or EventBridge.
- Hands-on experience with fine-tuning, evaluating, and deploying foundation models.
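To ground the RAG and FastAPI responsibilities above, here is a small illustrative endpoint. The in-memory document list and the `retrieve` placeholder stand in for a real vector database (PGVector, Weaviate, FAISS, or Pinecone) and a re-ranking step; the route name and corpus are invented for the example.

```python
# Illustrative FastAPI endpoint for a RAG query service. Retrieval is faked
# with an in-memory list; production code would embed the query and search a
# vector index, then pass the top passages to an LLM for grounded generation.
from typing import List

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

# Tiny stand-in corpus; a real system stores embeddings in a vector DB.
DOCUMENTS = [
    "Candidates are screened within two business days of application.",
    "Offer letters are generated from the ATS template library.",
]

class Query(BaseModel):
    question: str
    top_k: int = 2

def retrieve(question: str, top_k: int) -> List[str]:
    # Placeholder ranking: real code would run a similarity search and re-rank.
    return DOCUMENTS[:top_k]

@app.post("/rag/query")
def rag_query(query: Query) -> dict:
    passages = retrieve(query.question, query.top_k)
    # The retrieved passages would normally be fed to an LLM; here we return them.
    return {"question": query.question, "passages": passages}
```

Saved as app.py, it can be served locally with `uvicorn app:app` for testing.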
Posted 2 weeks ago
10.0 - 12.0 years
29 - 39 Lacs
Pune, Maharashtra, India
On-site
Job Requisition ID # 25WD90121

Position Overview
Are you a problem solver who thrives on building real-world AI applications? Do you geek out over LLMs, RAG, MCP, and agentic architectures? Want to help shape a brand-new team and build cool stuff that actually ships? If so, read on.

We're building a new Applied AI team within Autodesk's Data and Process Management (DPM) group. As a Founding Principal Engineer, you'll be at the heart of this initiative: working in a highly dynamic environment, designing, building, and scaling AI-powered experiences across our diverse portfolio, which provides critical Product Lifecycle Management (PLM) and Product Data Management (PDM) capabilities to our customers. You'll work on real production systems, solve hard problems, and help define the future of AI at Autodesk.

Responsibilities
- Build AI-powered Experiences: Architect and develop production-grade AI applications that are scalable, resilient, and secure
- Shape AI Strategy: Help define the AI roadmap for DPM by identifying opportunities, evaluating emerging technologies, and guiding long-term direction
- Operationalize LLMs: Fine-tune, evaluate, and deploy large language models in production environments. Balance performance, cost, and user experience while working with real-world data and constraints
- Build for Builders: Design frameworks and tools that make it easier for other teams to develop AI-powered experiences
- Guide Engineering Practices: Collaborate with other engineering teams to define and evolve best practices for AI experimentation, evaluation, and optimization. Provide technical guidance and influence decisions across teams
- Drive Innovation: Stay on top of the latest AI technologies (e.g., LLMs, VLMs, foundation models) and architecture patterns such as fine-tuning, RAG, function calling, MCP, and more, and bring these innovations to production effectively
- Optimize for Scale: Ensure AI applications are resilient, performant, and can scale well in production
- Collaborate Across Functions: Partner with product managers, architects, engineers, and data scientists to bring AI features to life in Autodesk products

Minimum Qualifications
- Master's in Computer Science, AI, Machine Learning, Data Science, or a related field
- 10+ years building scalable cloud-native applications, with 3+ years focused on production AI/ML systems
- Deep understanding of LLMs, VLMs, and foundation models, including their architecture, limitations, and practical applications
- Experience fine-tuning LLMs using real-world datasets and integrating them into production systems
- Experience with LLM-related technologies, including frameworks, embedding models, vector databases, Retrieval-Augmented Generation (RAG) systems, and MCP, in production settings
- Deep understanding of data modeling, system architectures, and processing techniques
- Experience with AWS cloud services and SageMaker Studio (or similar) for scalable data processing and model development
- Proven track record of building and deploying scalable cloud-native AI applications using platforms like AWS, Azure, or Google Cloud
- Proficiency in Python or TypeScript
- You love tackling complex challenges and delivering elegant, scalable solutions
- You can explain technical concepts clearly to both technical and non-technical audiences

Preferred Qualifications
- Experience building AI applications in the CAD or manufacturing domain
- Experience designing evaluation pipelines for LLM-based systems (e.g., prompt testing, hallucination detection, safety filters)
- Familiarity with tools and frameworks for LLM fine-tuning and orchestration (e.g., LoRA, QLoRA, AoT P-Tuning, etc.)
- A passion for mentoring and growing engineering talent
- Experience with emerging agentic AI solutions such as LangGraph, CrewAI, A2A, Opik Comet, or equivalents
- Contributions to open-source AI projects or publications in the field
- Bonus points if you've ever explained RAG to a non-technical friend, and they got it

Learn More About Autodesk
Welcome to Autodesk! Amazing things are created every day with our software, from the greenest buildings and cleanest cars to the smartest factories and biggest hit movies. We help innovators turn their ideas into reality, transforming not only how things are made, but what can be made. We take great pride in our culture here at Autodesk; it's at the core of everything we do. Our culture guides the way we work and treat each other, informs how we connect with customers and partners, and defines how we show up in the world. When you're an Autodesker, you can do meaningful work that helps build a better world designed and made for all. Ready to shape the world and your future? Join us!

Salary Transparency
Salary is one part of Autodesk's competitive compensation package. Offers are based on the candidate's experience and geographic location. In addition to base salaries, our compensation package may include annual cash bonuses, commissions for sales roles, stock grants, and a comprehensive benefits package.

Diversity & Belonging
We take pride in cultivating a culture of belonging where everyone can thrive. Learn more here: https://www.autodesk.com/company/diversity-and-belonging

Are you an existing contractor or consultant with Autodesk? Please search for open jobs and apply internally (not on this external site).
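For reference, the preferred LoRA/QLoRA experience typically looks something like the minimal sketch below using the Hugging Face `peft` library. The base model name and target modules are illustrative choices for the example, not Autodesk specifics.

```python
# Minimal sketch of parameter-efficient LLM fine-tuning with LoRA via peft.
# Target module names vary by architecture; q_proj/v_proj suit OPT- and
# LLaMA-style attention layers.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base_model_name = "facebook/opt-350m"  # illustrative small model
tokenizer = AutoTokenizer.from_pretrained(base_model_name)
model = AutoModelForCausalLM.from_pretrained(base_model_name)

lora_config = LoraConfig(
    r=8,                    # low-rank adapter dimension
    lora_alpha=16,          # scaling factor
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable

# Training would then proceed with a standard Trainer or custom loop over
# domain data (e.g., PLM/PDM documents), followed by merging or serving the
# adapters alongside the frozen base model.
```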
Posted 2 weeks ago
3.0 - 7.0 years
0 Lacs
karnataka
On-site
As a Machine Learning Engineer at Skyfall, you will play a crucial role in deploying and optimizing large language models (LLMs) in production environments. Your responsibilities will include deploying post-trained LLMs, optimizing inference for cost and latency, and building scalable training pipelines using cutting-edge technologies like DeepSpeed, Accelerate, and Ray.

The Skyfall team, comprising industry pioneers from Maluuba, is committed to revolutionizing the AI ecosystem by creating the first world model for the enterprise. By overcoming the limitations of existing LLMs, the Enterprise World Model aims to provide enterprises with a comprehensive understanding of the intricate relationships between data, people, and processes within organizations.

You will be part of a dynamic team spread across New York, Toronto, and Bangalore, working on designing distributed training infrastructure, managing multi-cloud ML deployments, and implementing advanced model compression techniques. Additionally, you will develop internal tools to facilitate multi-GPU training and large-scale experimentation, ensuring efficient resource allocation and continuous model evaluation.

To excel in this role, you should have a minimum of 3 years of experience in ML engineering, model deployment, and large-scale training. Proficiency in vector databases such as FAISS, Pinecone, and Weaviate for retrieval-augmented generation (RAG) is essential. Experience with multi-cloud ML deployment, hands-on deployment of large-scale models in production, and expertise in multi-GPU training and inference optimizations are key requirements.

Your strong knowledge of ML system performance tuning, latency optimization, and cost reduction strategies will be instrumental in developing cluster management tools for external compute infrastructure and implementing state-of-the-art model compression techniques. By staying abreast of LLM fine-tuning techniques, RLHF, and model evaluation metrics, you will contribute to Skyfall's mission of disrupting the AI landscape and providing enterprises with significant value through innovative solutions.
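A minimal sketch of the kind of distributed training scaffolding this role describes, using Hugging Face Accelerate; the model and dataset here are toy stand-ins, not Skyfall's actual pipelines.

```python
# Sketch of a distributed training loop with Hugging Face Accelerate.
# A real pipeline would swap in an LLM, sharded data, and mixed precision.
import torch
from torch.utils.data import DataLoader, TensorDataset
from accelerate import Accelerator

accelerator = Accelerator()  # handles device placement and multi-GPU setup

dataset = TensorDataset(torch.randn(256, 32), torch.randint(0, 2, (256,)))
dataloader = DataLoader(dataset, batch_size=16, shuffle=True)
model = torch.nn.Linear(32, 2)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
loss_fn = torch.nn.CrossEntropyLoss()

# prepare() wraps the objects for the current distributed configuration.
model, optimizer, dataloader = accelerator.prepare(model, optimizer, dataloader)

model.train()
for epoch in range(3):
    for features, labels in dataloader:
        optimizer.zero_grad()
        loss = loss_fn(model(features), labels)
        accelerator.backward(loss)  # replaces loss.backward() for distributed runs
        optimizer.step()
```

Launched with `accelerate launch train.py`, the same script scales from a single GPU to a multi-GPU node without code changes, which is the point of this kind of tooling.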
Posted 3 weeks ago
5.0 - 9.0 years
0 Lacs
pune, maharashtra
On-site
As an Azure AI Engineer at Optimum Data Analytics, you will be a key member of our team in Pune, contributing to the design and deployment of cutting-edge AI/ML solutions using Azure cloud services. You should have a minimum of 5 years of hands-on experience in the field and hold either the Microsoft Azure AI Engineer Associate or Microsoft Azure Data Scientist Associate certification. Your role will involve collaborating with business stakeholders, data engineers, and product teams to deliver scalable and production-ready AI solutions that drive business value and growth.

Key Responsibilities:
- Design and deploy AI/ML models using Azure AI/ML Studio, Azure Machine Learning, and Azure Cognitive Services
- Implement and manage data pipelines for model training workflows and the ML lifecycle within the Azure ecosystem
- Work closely with business stakeholders to gather requirements, analyze data, and provide predictive insights
- Collaborate with data engineers and product teams to ensure the delivery of scalable and production-ready AI solutions
- Establish and maintain best practices for model monitoring, versioning, governance, and responsible AI practices
- Contribute to solution documentation and technical architecture
- Design AI agents using Azure stack tools such as Autogen, PromptFlow, ML Studio, AI Foundry, AI Search, LangChain, and LangGraph

Required Skills & Qualifications:
- Minimum of 5 years of hands-on experience in AI/ML, data science, or machine learning engineering
- Mandatory certification: Microsoft Azure AI Engineer Associate or Microsoft Azure Data Scientist Associate
- Strong knowledge of Azure services including Azure Machine Learning, Cognitive Services, Azure Functions, AI Foundry, and Azure Storage
- Proficiency in Python and experience with ML libraries such as scikit-learn, TensorFlow, PyTorch, or similar
- Solid understanding of the data science lifecycle, model evaluation, and performance optimization
- Experience with version control tools like Git and deployment through CI/CD pipelines
- Excellent problem-solving and communication skills
- Strong experience in building AI agents
- Familiarity with LLMs, prompt engineering, or GenAI tools (Azure OpenAI, Hugging Face, vector databases, etc.)

Good To Have:
- Experience with Power BI or other data visualization tools
- Exposure to MLOps tools and practices

If you are passionate about leveraging AI/ML to drive business transformation and meet organizational objectives, we encourage you to apply for this position and be part of our dynamic team at Optimum Data Analytics.
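As a small illustration of the model training and evaluation step in the lifecycle this role references, the following scikit-learn sketch uses synthetic data; in practice the same pattern would typically run inside an Azure Machine Learning job with registered datasets and tracked metrics.

```python
# Train/evaluate sketch with scikit-learn on synthetic data, illustrating the
# model evaluation practices named in the requirements above.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.metrics import classification_report

X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = RandomForestClassifier(n_estimators=200, random_state=42)

# Cross-validated score guards against overfitting before the final fit.
cv_scores = cross_val_score(model, X_train, y_train, cv=5, scoring="f1")
print(f"CV F1: {cv_scores.mean():.3f} +/- {cv_scores.std():.3f}")

model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```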
Posted 3 weeks ago