1.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Dreaming big is in our DNA. It’s who we are as a company. It’s our culture. It’s our heritage. And more than ever, it’s our future. A future where we’re always looking forward. Always serving up new ways to meet life’s moments. A future where we keep dreaming bigger. We look for people with passion, talent, and curiosity, and provide them with the teammates, resources and opportunities to unleash their full potential. The power we create together – when we combine your strengths with ours – is unstoppable. Are you ready to join a team that dreams as big as you do?
AB InBev GCC was incorporated in 2014 as a strategic partner for Anheuser-Busch InBev. The center leverages the power of data and analytics to drive growth for critical business functions such as operations, finance, people, and technology. The teams are transforming Operations through Tech and Analytics.
Do You Dream Big? We Need You.
Job Description
Job Title: Junior Data Scientist
Location: Bangalore
Reporting to: Senior Manager – Analytics
Purpose of the role
The Global GenAI Team at Anheuser-Busch InBev (AB InBev) is tasked with constructing competitive solutions utilizing GenAI techniques. These solutions aim to extract contextual insights and meaningful information from our enterprise data assets. The derived data-driven insights play a pivotal role in empowering our business users to make well-informed decisions regarding their respective products.
In the role of a Machine Learning Engineer (MLE), you will operate at the intersection of:
- LLM-based frameworks, tools, and technologies
- Cloud-native technologies and solutions
- Microservices-based software architecture and design patterns
As an additional responsibility, you will be involved in the complete development cycle of new product features, encompassing tasks such as the development and deployment of new models integrated into production systems. Furthermore, you will have the opportunity to critically assess and influence the product engineering, design, architecture, and technology stack across multiple products, extending beyond your immediate focus.
Key tasks & accountabilities
Large Language Models (LLM):
- Experience with LangChain and LangGraph
- Proficiency in building agentic patterns such as ReAct, ReWOO, and LLMCompiler
Multi-modal Retrieval-Augmented Generation (RAG):
- Expertise in multi-modal AI systems (text, images, audio, video)
- Designing and optimizing chunking strategies and clustering for large-scale data processing
Streaming & Real-time Processing:
- Experience with audio/video streaming and real-time data pipelines
- Low-latency inference and deployment architectures
NL2SQL:
- Natural language-driven SQL generation for databases
- Experience with natural language interfaces to databases and query optimization
API Development:
- Building scalable APIs with FastAPI for AI model serving
Containerization & Orchestration:
- Proficiency with Docker for containerized AI services
- Experience with orchestration tools for deploying and managing services
Data Processing & Pipelines:
- Experience with chunking strategies for efficient document processing
- Building data pipelines to handle large-scale data for AI model training and inference
AI Frameworks & Tools:
- Experience with AI/ML frameworks such as TensorFlow and PyTorch
- Proficiency in LangChain, LangGraph, and other LLM-related technologies
Prompt Engineering:
- Expertise in advanced prompting techniques such as Chain-of-Thought (CoT) prompting, LLM-as-judge, and self-reflection prompting
- Experience with prompt compression and optimization using tools such as LLMLingua, AdaFlow, TextGrad, and DSPy
- Strong understanding of context window management and optimizing prompts for performance and efficiency
Qualifications, Experience, Skills
Level of educational attainment required (1 or more of the following):
- Bachelor's or master's degree in Computer Science, Engineering, or a related field.
Previous work experience required:
- Proven experience of 1+ years in developing and deploying applications utilizing Azure OpenAI and Redis as a vector database.
Technical skills required:
- Solid understanding of language model technologies, including LangChain, the OpenAI Python SDK, LlamaIndex, Ollama, etc.
- Proficiency in implementing and optimizing machine learning models for natural language processing.
- Experience with observability tools such as MLflow, LangSmith, Langfuse, Weights & Biases, etc.
- Strong programming skills in languages such as Python and proficiency in relevant frameworks.
- Familiarity with containerization and orchestration tools (e.g., Docker, Kubernetes).
And above all of this, an undying love for beer! We dream big to create a future with more cheer.
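To make the RAG chunking requirement above concrete, here is a minimal, hypothetical sketch of fixed-size chunking with overlap in plain Python; the function name, sizes, and toy document are illustrative and not taken from the posting.

```python
from typing import List

def chunk_text(text: str, chunk_size: int = 500, overlap: int = 50) -> List[str]:
    """Split text into overlapping character chunks for RAG-style indexing.

    Overlap preserves context that would otherwise be cut at chunk boundaries,
    which generally improves retrieval quality for question answering.
    """
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks = []
    start = 0
    while start < len(text):
        end = min(start + chunk_size, len(text))
        chunks.append(text[start:end])
        if end == len(text):
            break
        start = end - overlap  # step back so consecutive chunks share context
    return chunks

if __name__ == "__main__":
    doc = "Anheuser-Busch InBev brews beer. " * 100  # toy document
    pieces = chunk_text(doc, chunk_size=200, overlap=20)
    print(f"{len(pieces)} chunks, first chunk length {len(pieces[0])}")
```

In practice, chunk size and overlap would be tuned against retrieval quality rather than hard-coded.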
Posted 1 week ago
3.0 years
0 Lacs
Hyderabad, Telangana, India
Remote
AI is transforming the way businesses operate, yet most AI-powered products fail to deliver real, measurable impact. Companies struggle to bridge the gap between cutting-edge models and practical applications, leading to AI features that are difficult to use, expensive to run, and misaligned with real business needs. Despite rapid advancements, most AI products still suffer from poor adoption, high inference costs, and limited integration into existing workflows. At IgniteTech, we are solving this problem by focusing on AI that delivers tangible improvements in customer engagement, retention, and efficiency. We don't just build prototypes; we bring AI-powered products to market, integrating them directly into high-value workflows. Our approach prioritizes business outcomes over research experiments, ensuring that every AI-driven feature is optimized for usability, performance, and long-term sustainability. This is an opportunity to work on AI that is actively reshaping how businesses operate.
This role is not a high-level strategy position focused on product roadmaps without execution. It is a hands-on product management role where you will define, build, and ship AI-powered features that customers actually use. You will work closely with ML engineers to translate business needs into technical requirements, making decisions about model performance, trade-offs between accuracy and speed, and the real-world costs of AI inference. The ideal candidate understands both the business impact of AI and the technical challenges of deploying it at scale. If your experience is limited to general AI awareness without direct involvement in shipping AI-powered products, this role is not the right fit. If you thrive on solving hard problems at the intersection of AI, product, and business, and you're eager to bring AI to market in a way that truly matters, then we want to hear from you!
What You Will Be Doing
- Identifying specific applications of GenAI technology within IgniteTech's product range
- Creating detailed roadmaps for each product and creating POCs that simulate the AI vision for the new features
- Rolling out AI-driven functionalities, addressing any blockers to customer adoption, and ensuring smooth integration into the product suite
What You Won’t Be Doing
- Anything related to software engineering or technical support
Product Manager Key Responsibilities
- Designing high-quality, customer-centric AI solutions that enhance product adoption, engagement, and retention
Basic Requirements
- 3+ years of product management experience in the B2B software industry
- Professional experience using generative AI tools, such as ChatGPT, Claude, or Gemini, to automate repetitive tasks
About IgniteTech
If you want to work hard at a company where you can grow and be a part of a dynamic team, join IgniteTech! Through our portfolio of leading enterprise software solutions, we ignite business performance for thousands of customers globally. We’re doing it in an entirely remote workplace that is focused on building teams of top talent and operating in a model that provides challenging opportunities and personal flexibility. A career with IgniteTech is challenging and fast-paced. We are always looking for energetic and enthusiastic employees to join our world-class team. We offer opportunities for personal contribution and promote career development. IgniteTech is an Affirmative Action, Equal Opportunity Employer that values the strength that diversity brings to the workplace.
There is so much to cover for this exciting role, and space here is limited. Hit the Apply button if you found this interesting and want to learn more. We look forward to meeting you!
Working with us
This is a full-time (40 hours per week), long-term position. The position is immediately available and requires entering into an independent contractor agreement with Crossover as a Contractor of Record. The compensation level for this role is $100 USD/hour, which equates to $200,000 USD/year assuming 40 hours per week and 50 weeks per year. The payment period is weekly. Consult www.crossover.com/help-and-faqs for more details on this topic.
Crossover Job Code: LJ-5438-IN-Hyderaba-ProductManager
Posted 1 week ago
4.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Job Category: AIML
Job Type: Full Time
Job Location: Bengaluru / Mangalore
Experience: 4-8 Years
Skills: AI, AWS/Azure/GCP, Azure ML, C, computer vision, data analytics, data modeling, data visualization, deep learning, descriptive analytics, GenAI, image processing, Java, LLM models, ML, ONNX, predictive analytics, Python, R, regression/classification models, SageMaker, SQL, TensorFlow
Position Overview
We are looking for an experienced AI/ML Engineer to join our team in Bengaluru. The ideal candidate will bring a deep understanding of machine learning, artificial intelligence, and big data technologies, with proven expertise in developing scalable AI/ML solutions. You will lead technical efforts, mentor team members, and collaborate with cross-functional teams to design, develop, and deploy cutting-edge AI/ML applications.
Job Details
Job Category: AI/ML Engineer
Job Type: Full-Time
Job Location: Bengaluru
Experience Required: 4-8 Years
About Us
We are a multi-award-winning creative engineering company. Since 2011, we have worked with our customers as a design and technology enablement partner, guiding them on their digital transformation journeys.
Roles And Responsibilities
- Design, develop, and deploy deep learning models for object classification, detection, and segmentation using CNNs and transfer learning.
- Implement image preprocessing and advanced computer vision pipelines.
- Optimize deep learning models using pruning, quantization, and ONNX for deployment on edge devices.
- Work with PyTorch, TensorFlow, and ONNX frameworks to develop and convert models.
- Accelerate model inference using GPU programming with CUDA and cuDNN.
- Port and test models on embedded and edge hardware platforms (Orin, Jetson, Hailo).
- Conduct research and experiments to evaluate and integrate GenAI technologies in computer vision tasks.
- Explore and implement cloud-based AI workflows, particularly using AWS/Azure AI/ML services.
- Collaborate with cross-functional teams for data analytics, data processing, and large-scale model training.
Required Skills
- Strong programming experience in Python.
- Solid background in deep learning, CNNs, transfer learning, and machine learning fundamentals.
- Expertise in object detection, classification, and segmentation.
- Proficiency with PyTorch, TensorFlow, and ONNX.
- Experience with GPU acceleration (CUDA, cuDNN).
- Hands-on knowledge of model optimization (pruning, quantization).
- Experience deploying models to edge devices (e.g., Jetson, Orin, Hailo, mobile).
- Understanding of image processing techniques.
- Familiarity with data pipelines, data preprocessing, and data analytics.
- Willingness to explore and contribute to Generative AI and cloud-based AI solutions.
- Good problem-solving and communication skills.
Preferred (Nice-to-Have)
- Experience with C/C++.
- Familiarity with AWS Cloud AI/ML tools (e.g., SageMaker, Rekognition).
- Exposure to GenAI frameworks such as OpenAI, Stable Diffusion, etc.
- Knowledge of real-time deployment systems and streaming analytics.
Qualifications
Graduation/Post-graduation in Computers, Engineering, or Statistics from a reputed institute.
What We Offer
- Competitive salary and benefits package.
- Opportunity to work in a dynamic and innovative environment.
- Professional development and learning opportunities.
Visit us on LinkedIn and Instagram: CodeCraft Technologies
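As an illustration of the model-optimization work listed above (pruning/quantization and ONNX export for edge devices), here is a hedged sketch using a toy PyTorch model; the model, file name, and settings are placeholders and do not describe CodeCraft's actual pipeline.

```python
import torch
import torch.nn as nn

class TinyClassifier(nn.Module):
    """Toy CNN standing in for a production detection/classification model."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(16, num_classes)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = TinyClassifier().eval()

# Dynamic quantization converts Linear layers to int8 weights, shrinking the
# model and speeding up CPU inference on edge hardware.
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

# Export the float model to ONNX so it can run under ONNX Runtime / TensorRT.
dummy = torch.randn(1, 3, 224, 224)
torch.onnx.export(model, dummy, "tiny_classifier.onnx",
                  input_names=["image"], output_names=["logits"],
                  opset_version=17)
print("Exported tiny_classifier.onnx; quantized model ready for CPU inference.")
```

A real workflow would follow this with accuracy checks on a validation set and, where needed, quantization-aware training before deployment to Jetson/Orin-class devices.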
Posted 1 week ago
3.0 years
0 Lacs
Gurgaon, Haryana, India
Remote
AI is transforming the way businesses operate, yet most AI-powered products fail to deliver real, measurable impact. Companies struggle to bridge the gap between cutting-edge models and practical applications, leading to AI features that are difficult to use, expensive to run, and misaligned with real business needs. Despite rapid advancements, most AI products still suffer from poor adoption, high inference costs, and limited integration into existing workflows. At IgniteTech, we are solving this problem by focusing on AI that delivers tangible improvements in customer engagement, retention, and efficiency. We don't just build prototypes; we bring AI-powered products to market, integrating them directly into high-value workflows. Our approach prioritizes business outcomes over research experiments, ensuring that every AI-driven feature is optimized for usability, performance, and long-term sustainability. This is an opportunity to work on AI that is actively reshaping how businesses operate.
This role is not a high-level strategy position focused on product roadmaps without execution. It is a hands-on product management role where you will define, build, and ship AI-powered features that customers actually use. You will work closely with ML engineers to translate business needs into technical requirements, making decisions about model performance, trade-offs between accuracy and speed, and the real-world costs of AI inference. The ideal candidate understands both the business impact of AI and the technical challenges of deploying it at scale. If your experience is limited to general AI awareness without direct involvement in shipping AI-powered products, this role is not the right fit. If you thrive on solving hard problems at the intersection of AI, product, and business, and you're eager to bring AI to market in a way that truly matters, then we want to hear from you!
What You Will Be Doing
- Identifying specific applications of GenAI technology within IgniteTech's product range
- Creating detailed roadmaps for each product and creating POCs that simulate the AI vision for the new features
- Rolling out AI-driven functionalities, addressing any blockers to customer adoption, and ensuring smooth integration into the product suite
What You Won’t Be Doing
- Anything related to software engineering or technical support
Senior Product Manager Key Responsibilities
- Designing high-quality, customer-centric AI solutions that enhance product adoption, engagement, and retention
Basic Requirements
- 3+ years of product management experience in the B2B software industry
- Professional experience using generative AI tools, such as ChatGPT, Claude, or Gemini, to automate repetitive tasks
About IgniteTech
If you want to work hard at a company where you can grow and be a part of a dynamic team, join IgniteTech! Through our portfolio of leading enterprise software solutions, we ignite business performance for thousands of customers globally. We’re doing it in an entirely remote workplace that is focused on building teams of top talent and operating in a model that provides challenging opportunities and personal flexibility. A career with IgniteTech is challenging and fast-paced. We are always looking for energetic and enthusiastic employees to join our world-class team. We offer opportunities for personal contribution and promote career development. IgniteTech is an Affirmative Action, Equal Opportunity Employer that values the strength that diversity brings to the workplace.
There is so much to cover for this exciting role, and space here is limited. Hit the Apply button if you found this interesting and want to learn more. We look forward to meeting you!
Working with us
This is a full-time (40 hours per week), long-term position. The position is immediately available and requires entering into an independent contractor agreement with Crossover as a Contractor of Record. The compensation level for this role is $100 USD/hour, which equates to $200,000 USD/year assuming 40 hours per week and 50 weeks per year. The payment period is weekly. Consult www.crossover.com/help-and-faqs for more details on this topic.
Crossover Job Code: LJ-5438-IN-Gurgaon-SeniorProductM
Posted 1 week ago
3.0 years
0 Lacs
Chennai, Tamil Nadu, India
Remote
AI is transforming the way businesses operate, yet most AI-powered products fail to deliver real, measurable impact. Companies struggle to bridge the gap between cutting-edge models and practical applications, leading to AI features that are difficult to use, expensive to run, and misaligned with real business needs. Despite rapid advancements, most AI products still suffer from poor adoption, high inference costs, and limited integration into existing workflows. At IgniteTech, we are solving this problem by focusing on AI that delivers tangible improvements in customer engagement, retention, and efficiency. We don't just build prototypes; we bring AI-powered products to market, integrating them directly into high-value workflows. Our approach prioritizes business outcomes over research experiments, ensuring that every AI-driven feature is optimized for usability, performance, and long-term sustainability. This is an opportunity to work on AI that is actively reshaping how businesses operate.
This role is not a high-level strategy position focused on product roadmaps without execution. It is a hands-on product management role where you will define, build, and ship AI-powered features that customers actually use. You will work closely with ML engineers to translate business needs into technical requirements, making decisions about model performance, trade-offs between accuracy and speed, and the real-world costs of AI inference. The ideal candidate understands both the business impact of AI and the technical challenges of deploying it at scale. If your experience is limited to general AI awareness without direct involvement in shipping AI-powered products, this role is not the right fit. If you thrive on solving hard problems at the intersection of AI, product, and business, and you're eager to bring AI to market in a way that truly matters, then we want to hear from you!
What You Will Be Doing
- Identifying specific applications of GenAI technology within IgniteTech's product range
- Creating detailed roadmaps for each product and creating POCs that simulate the AI vision for the new features
- Rolling out AI-driven functionalities, addressing any blockers to customer adoption, and ensuring smooth integration into the product suite
What You Won’t Be Doing
- Anything related to software engineering or technical support
Senior Product Manager Key Responsibilities
- Designing high-quality, customer-centric AI solutions that enhance product adoption, engagement, and retention
Basic Requirements
- 3+ years of product management experience in the B2B software industry
- Professional experience using generative AI tools, such as ChatGPT, Claude, or Gemini, to automate repetitive tasks
About IgniteTech
If you want to work hard at a company where you can grow and be a part of a dynamic team, join IgniteTech! Through our portfolio of leading enterprise software solutions, we ignite business performance for thousands of customers globally. We’re doing it in an entirely remote workplace that is focused on building teams of top talent and operating in a model that provides challenging opportunities and personal flexibility. A career with IgniteTech is challenging and fast-paced. We are always looking for energetic and enthusiastic employees to join our world-class team. We offer opportunities for personal contribution and promote career development. IgniteTech is an Affirmative Action, Equal Opportunity Employer that values the strength that diversity brings to the workplace.
There is so much to cover for this exciting role, and space here is limited. Hit the Apply button if you found this interesting and want to learn more. We look forward to meeting you!
Working with us
This is a full-time (40 hours per week), long-term position. The position is immediately available and requires entering into an independent contractor agreement with Crossover as a Contractor of Record. The compensation level for this role is $100 USD/hour, which equates to $200,000 USD/year assuming 40 hours per week and 50 weeks per year. The payment period is weekly. Consult www.crossover.com/help-and-faqs for more details on this topic.
Crossover Job Code: LJ-5438-IN-Chennai-SeniorProductM
Posted 1 week ago
3.0 years
0 Lacs
Mumbai Metropolitan Region
Remote
AI is transforming the way businesses operate, yet most AI-powered products fail to deliver real, measurable impact. Companies struggle to bridge the gap between cutting-edge models and practical applications, leading to AI features that are difficult to use, expensive to run, and misaligned with real business needs. Despite rapid advancements, most AI products still suffer from poor adoption, high inference costs, and limited integration into existing workflows. At IgniteTech, we are solving this problem by focusing on AI that delivers tangible improvements in customer engagement, retention, and efficiency. We don't just build prototypes; we bring AI-powered products to market, integrating them directly into high-value workflows. Our approach prioritizes business outcomes over research experiments, ensuring that every AI-driven feature is optimized for usability, performance, and long-term sustainability. This is an opportunity to work on AI that is actively reshaping how businesses operate.
This role is not a high-level strategy position focused on product roadmaps without execution. It is a hands-on product management role where you will define, build, and ship AI-powered features that customers actually use. You will work closely with ML engineers to translate business needs into technical requirements, making decisions about model performance, trade-offs between accuracy and speed, and the real-world costs of AI inference. The ideal candidate understands both the business impact of AI and the technical challenges of deploying it at scale. If your experience is limited to general AI awareness without direct involvement in shipping AI-powered products, this role is not the right fit. If you thrive on solving hard problems at the intersection of AI, product, and business, and you're eager to bring AI to market in a way that truly matters, then we want to hear from you!
What You Will Be Doing
- Identifying specific applications of GenAI technology within IgniteTech's product range
- Creating detailed roadmaps for each product and creating POCs that simulate the AI vision for the new features
- Rolling out AI-driven functionalities, addressing any blockers to customer adoption, and ensuring smooth integration into the product suite
What You Won’t Be Doing
- Anything related to software engineering or technical support
Product Manager Key Responsibilities
- Designing high-quality, customer-centric AI solutions that enhance product adoption, engagement, and retention
Basic Requirements
- 3+ years of product management experience in the B2B software industry
- Professional experience using generative AI tools, such as ChatGPT, Claude, or Gemini, to automate repetitive tasks
About IgniteTech
If you want to work hard at a company where you can grow and be a part of a dynamic team, join IgniteTech! Through our portfolio of leading enterprise software solutions, we ignite business performance for thousands of customers globally. We’re doing it in an entirely remote workplace that is focused on building teams of top talent and operating in a model that provides challenging opportunities and personal flexibility. A career with IgniteTech is challenging and fast-paced. We are always looking for energetic and enthusiastic employees to join our world-class team. We offer opportunities for personal contribution and promote career development. IgniteTech is an Affirmative Action, Equal Opportunity Employer that values the strength that diversity brings to the workplace.
There is so much to cover for this exciting role, and space here is limited. Hit the Apply button if you found this interesting and want to learn more. We look forward to meeting you!
Working with us
This is a full-time (40 hours per week), long-term position. The position is immediately available and requires entering into an independent contractor agreement with Crossover as a Contractor of Record. The compensation level for this role is $100 USD/hour, which equates to $200,000 USD/year assuming 40 hours per week and 50 weeks per year. The payment period is weekly. Consult www.crossover.com/help-and-faqs for more details on this topic.
Crossover Job Code: LJ-5438-IN-Mumbai-ProductManager
Posted 1 week ago
3.0 years
0 Lacs
Rawatsar, Rajasthan, India
Remote
AI is transforming the way businesses operate, yet most AI-powered products fail to deliver real, measurable impact. Companies struggle to bridge the gap between cutting-edge models and practical applications, leading to AI features that are difficult to use, expensive to run, and misaligned with real business needs. Despite rapid advancements, most AI products still suffer from poor adoption, high inference costs, and limited integration into existing workflows. At IgniteTech, we are solving this problem by focusing on AI that delivers tangible improvements in customer engagement, retention, and efficiency. We don't just build prototypes; we bring AI-powered products to market, integrating them directly into high-value workflows. Our approach prioritizes business outcomes over research experiments, ensuring that every AI-driven feature is optimized for usability, performance, and long-term sustainability. This is an opportunity to work on AI that is actively reshaping how businesses operate.
This role is not a high-level strategy position focused on product roadmaps without execution. It is a hands-on product management role where you will define, build, and ship AI-powered features that customers actually use. You will work closely with ML engineers to translate business needs into technical requirements, making decisions about model performance, trade-offs between accuracy and speed, and the real-world costs of AI inference. The ideal candidate understands both the business impact of AI and the technical challenges of deploying it at scale. If your experience is limited to general AI awareness without direct involvement in shipping AI-powered products, this role is not the right fit. If you thrive on solving hard problems at the intersection of AI, product, and business, and you're eager to bring AI to market in a way that truly matters, then we want to hear from you!
What You Will Be Doing
- Identifying specific applications of GenAI technology within IgniteTech's product range
- Creating detailed roadmaps for each product and creating POCs that simulate the AI vision for the new features
- Rolling out AI-driven functionalities, addressing any blockers to customer adoption, and ensuring smooth integration into the product suite
What You Won’t Be Doing
- Anything related to software engineering or technical support
Senior Product Manager Key Responsibilities
- Designing high-quality, customer-centric AI solutions that enhance product adoption, engagement, and retention
Basic Requirements
- 3+ years of product management experience in the B2B software industry
- Professional experience using generative AI tools, such as ChatGPT, Claude, or Gemini, to automate repetitive tasks
About IgniteTech
If you want to work hard at a company where you can grow and be a part of a dynamic team, join IgniteTech! Through our portfolio of leading enterprise software solutions, we ignite business performance for thousands of customers globally. We’re doing it in an entirely remote workplace that is focused on building teams of top talent and operating in a model that provides challenging opportunities and personal flexibility. A career with IgniteTech is challenging and fast-paced. We are always looking for energetic and enthusiastic employees to join our world-class team. We offer opportunities for personal contribution and promote career development. IgniteTech is an Affirmative Action, Equal Opportunity Employer that values the strength that diversity brings to the workplace.
There is so much to cover for this exciting role, and space here is limited. Hit the Apply button if you found this interesting and want to learn more. We look forward to meeting you!
Working with us
This is a full-time (40 hours per week), long-term position. The position is immediately available and requires entering into an independent contractor agreement with Crossover as a Contractor of Record. The compensation level for this role is $100 USD/hour, which equates to $200,000 USD/year assuming 40 hours per week and 50 weeks per year. The payment period is weekly. Consult www.crossover.com/help-and-faqs for more details on this topic.
Crossover Job Code: LJ-5438-LK-COUNTRY-SeniorProductM
Posted 1 week ago
5.0 years
0 Lacs
Vadodara, Gujarat, India
On-site
About Loti AI, Inc.
Loti AI specializes in protecting major celebrities, public figures, and corporate IP from online threats, focusing on deepfake and impersonation detection. Founded in 2022, Loti offers likeness protection, content location and removal, and contract enforcement across various online platforms including social media and adult sites. The company's mission is to empower individuals to control their digital identities and privacy effectively.
We are seeking a highly skilled and experienced Senior Deep Learning Engineer to join our team. This individual will lead the design, development, and deployment of cutting-edge deep learning models and systems. The ideal candidate is passionate about leveraging state-of-the-art machine learning techniques to solve complex real-world problems, thrives in a collaborative environment, and has a proven track record of delivering impactful AI solutions.
Key Responsibilities
- Model Development and Optimization: Design, train, and deploy advanced deep learning models for various applications such as computer vision, natural language processing, speech recognition, and recommendation systems. Optimize models for performance, scalability, and efficiency on various hardware platforms (e.g., GPUs, TPUs).
- Research and Innovation: Stay updated with the latest advancements in deep learning, AI, and related technologies. Develop novel architectures and techniques to push the boundaries of what’s possible in AI applications.
- System Design and Deployment: Architect and implement scalable and reliable machine learning pipelines for training and inference. Collaborate with software and DevOps engineers to deploy models into production environments.
- Collaboration and Leadership: Work closely with cross-functional teams, including data scientists, product managers, and software engineers, to define project goals and deliverables. Provide mentorship and technical guidance to junior team members and peers.
- Data Management: Collaborate with data engineering teams to preprocess, clean, and augment large datasets. Develop tools and processes for efficient data handling and annotation.
- Performance Evaluation: Define and monitor key performance indicators (KPIs) to evaluate model performance and impact. Conduct rigorous A/B testing and error analysis to continuously improve model outputs.
Qualifications And Skills
- Education: Bachelor’s or Master’s degree in Computer Science, Electrical Engineering, or a related field. PhD preferred.
- Experience: 5+ years of experience in developing and deploying deep learning models. Proven track record of delivering AI-driven products or research with measurable impact.
- Technical Skills: Proficiency in deep learning frameworks such as TensorFlow, PyTorch, or JAX. Strong programming skills in Python, with experience in libraries like NumPy, Pandas, and Scikit-learn. Familiarity with distributed computing frameworks such as Spark or Dask. Hands-on experience with cloud platforms (AWS or GCP) and containerization tools (Docker, Kubernetes).
- Domain Expertise: Experience with at least one specialized domain, such as computer vision, NLP, or time-series analysis. Familiarity with reinforcement learning, generative models, or other advanced AI techniques is a plus.
- Soft Skills: Strong problem-solving skills and the ability to work independently. Excellent communication and collaboration abilities. Commitment to fostering a culture of innovation and excellence.
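For the performance-evaluation responsibility above, a minimal sketch of the metrics a binary detector (for example, a deepfake/impersonation classifier) might report is shown below; the labels and scores are synthetic and purely illustrative.

```python
import numpy as np
from sklearn.metrics import precision_score, recall_score, f1_score, roc_auc_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=1000)                                  # ground-truth labels
y_score = np.clip(y_true * 0.6 + rng.normal(0.2, 0.25, 1000), 0, 1)    # synthetic model scores
y_pred = (y_score >= 0.5).astype(int)                                   # threshold at 0.5

print("precision:", precision_score(y_true, y_pred))
print("recall   :", recall_score(y_true, y_pred))
print("f1       :", f1_score(y_true, y_pred))
print("roc_auc  :", roc_auc_score(y_true, y_score))
```

Thresholds would normally be chosen per use case, for example favoring precision when automated takedowns are triggered.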
Posted 1 week ago
5.0 years
0 Lacs
India
Remote
ORANTS AI is a cutting-edge technology company at the forefront of AI and Big Data innovation. We specialize in developing advanced marketing and management platforms, leveraging data mining, data integration, and artificial intelligence to deliver efficient and impactful solutions for our corporate clients. We're a dynamic, remote-first team committed to fostering a collaborative and flexible work environment.
Salary: 40 - 43 LPA + Variable
Location: Remote (India)
Work Schedule: Flexible Working Hours
Join ORANTS AI as a Senior AI Engineer and contribute to the development of our intelligent marketing and management platforms. We're looking for an experienced professional who can design, implement, and deploy advanced AI models and algorithms to solve complex business problems.
Responsibilities:
- Design, develop, and deploy machine learning and deep learning models for various applications (e.g., natural language processing, predictive analytics, recommendation systems).
- Collaborate with data scientists to translate research prototypes into production-ready solutions.
- Optimize AI models for performance, scalability, and efficiency.
- Implement robust data pipelines for training and inference.
- Stay current with the latest advancements in AI/ML research and technologies.
- Participate in the entire AI lifecycle, from data collection and preparation to model deployment and monitoring.
Requirements:
- 5+ years of experience as an AI/ML Engineer.
- Strong proficiency in Python and relevant AI/ML libraries (e.g., TensorFlow, PyTorch, Scikit-learn).
- Experience with various machine learning algorithms and techniques.
- Solid understanding of data structures, algorithms, and software design principles.
- Experience with cloud platforms (AWS, Azure, GCP) and MLOps practices.
- Familiarity with big data technologies (e.g., Spark, Hadoop) is a plus.
- Excellent problem-solving skills and a strong analytical mindset.
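As a small illustration of the training-to-inference pipelines mentioned above, here is a hedged scikit-learn sketch with synthetic data; the names, parameters, and file path are illustrative assumptions, not ORANTS AI's stack.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
import joblib

X, y = make_classification(n_samples=2000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Bundling preprocessing and the model keeps training and inference consistent.
pipeline = Pipeline([
    ("scaler", StandardScaler()),
    ("clf", LogisticRegression(max_iter=1000)),
])
pipeline.fit(X_train, y_train)
print(classification_report(y_test, pipeline.predict(X_test)))

# Persist the whole pipeline so the serving side applies identical preprocessing.
joblib.dump(pipeline, "model_pipeline.joblib")
```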
Posted 1 week ago
3.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Lead Backend Engineer
About Us
FICO, originally known as Fair Isaac Corporation, is a leading analytics and decision management company that empowers businesses and individuals around the world with data-driven insights. Known for pioneering the FICO® Score, a standard in consumer credit risk assessment, FICO combines advanced analytics, machine learning, and sophisticated algorithms to drive smarter, faster decisions across industries. From financial services to retail, insurance, and healthcare, FICO's innovative solutions help organizations make precise decisions, reduce risk, and enhance customer experiences. With a strong commitment to ethical use of AI and data, FICO is dedicated to improving financial access and inclusivity, fostering trust, and driving growth for a digitally evolving world.
The Opportunity
As a Lead Backend Engineer on our Generative AI team, you will work at the frontier of language model applications, developing novel solutions for various areas of the FICO platform, including fraud investigation, decision automation, process flow automation, and optimization. We seek a highly skilled engineer with a strong foundation in digital product development and a zeal for innovation, who will be responsible for deploying product updates, identifying production issues, and implementing integrations. The backend engineer should thrive in agile, fast-paced environments, champion DevOps and CI/CD best practices, and consistently deliver scalable, customer-focused backend solutions. You will have the opportunity to make a meaningful impact on FICO’s platform by infusing it with next-generation AI capabilities. You’ll work with a team, leveraging skills to build solutions and drive innovation forward.
What You’ll Contribute
- Design, develop, and maintain high-performance, scalable Python-based backend systems powering ML and Generative AI products.
- Collaborate closely with ML engineers, data scientists, and product managers to build reusable APIs and services that support the full ML lifecycle—from data ingestion to inference and monitoring.
- Take end-to-end ownership of backend services, including design, implementation, testing, deployment, and maintenance.
- Implement product changes across the SDLC: detailed design, unit/integration testing, documentation, deployment, and support.
- Contribute to architecture discussions and enforce coding best practices and design patterns across the engineering team.
- Participate in peer code reviews and PR approvals, and mentor junior developers by removing technical blockers and sharing expertise.
- Work with the QA and DevOps teams to enable CI/CD, build pipelines, and ensure product quality through automated testing and performance monitoring.
- Translate business and product requirements into robust engineering deliverables and detailed technical documentation.
- Build backend infrastructure that supports ML pipelines, model versioning, performance monitoring, and retraining loops.
- Engage in prototyping efforts, collaborating with internal and external stakeholders to design PoVs and pilot solutions.
What We’re Seeking
- 8+ years of software development experience, with at least 3 years in a technical or team leadership role.
- Deep expertise in Python, including design and development of reusable, modular API packages for ML and data science use cases.
- Strong understanding of REST and gRPC APIs, including schema design, authentication, and versioning.
- Familiarity with ML workflows, MLOps, and tools such as MLflow, FastAPI, TensorFlow, PyTorch, or similar.
- Strong experience building and maintaining microservices and distributed backend systems in production environments.
- Solid knowledge of cloud-native development and experience with platforms like AWS, GCP, or Azure.
- Familiarity with Kubernetes, Docker, Helm, and deployment strategies for scalable AI systems.
- Proficient in SQL and NoSQL databases and experienced in designing performant database schemas.
- Experience with messaging and streaming platforms like Kafka is a plus.
- Understanding of software engineering best practices, including unit testing, integration testing, TDD, code reviews, and performance tuning.
- Exposure to frontend technologies such as React or Angular is a bonus, though not mandatory.
- Experience integrating with LLM APIs and understanding of prompt engineering and vector databases.
- Exposure to Java or Spring Boot in hybrid technology environments will be a bonus.
- Excellent collaboration and communication skills, with a proven ability to work effectively in cross-functional, globally distributed teams.
- A bachelor’s degree in Computer Science, Engineering, or a related discipline, or equivalent hands-on industry experience.
Our Offer to You
- An inclusive culture strongly reflecting our core values: Act Like an Owner, Delight Our Customers and Earn the Respect of Others.
- The opportunity to make an impact and develop professionally by leveraging your unique strengths and participating in valuable learning experiences.
- Highly competitive compensation, benefits and rewards programs that encourage you to bring your best every day and be recognized for doing so.
- An engaging, people-first work environment offering work/life balance, employee resource groups, and social events to promote interaction and camaraderie.
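To ground the Python/FastAPI model-serving skills listed above, here is a minimal, assumed sketch of an inference endpoint; the routes, schemas, and stubbed model are illustrative only and do not describe FICO's APIs.

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="inference-service")

class ScoreRequest(BaseModel):
    features: list[float]

class ScoreResponse(BaseModel):
    score: float
    model_version: str

def run_model(features: list[float]) -> float:
    # Placeholder for a real model loaded at startup (e.g., from a model registry).
    return min(1.0, sum(abs(f) for f in features) / (len(features) or 1))

@app.post("/v1/score", response_model=ScoreResponse)
def score(req: ScoreRequest) -> ScoreResponse:
    return ScoreResponse(score=run_model(req.features), model_version="0.1.0")

# Run locally with: uvicorn service:app --reload
```

In a real service the model would be loaded once at startup (for example from an MLflow registry), and the endpoint would add authentication, input validation, and monitoring.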
Posted 1 week ago
3.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Lead Backend Engineer
About Us
FICO, originally known as Fair Isaac Corporation, is a leading analytics and decision management company that empowers businesses and individuals around the world with data-driven insights. Known for pioneering the FICO® Score, a standard in consumer credit risk assessment, FICO combines advanced analytics, machine learning, and sophisticated algorithms to drive smarter, faster decisions across industries. From financial services to retail, insurance, and healthcare, FICO's innovative solutions help organizations make precise decisions, reduce risk, and enhance customer experiences. With a strong commitment to ethical use of AI and data, FICO is dedicated to improving financial access and inclusivity, fostering trust, and driving growth for a digitally evolving world.
The Opportunity
As a Lead Backend Engineer on our Generative AI team, you will work at the frontier of language model applications, developing novel solutions for various areas of the FICO platform, including fraud investigation, decision automation, process flow automation, and optimization. We seek a highly skilled engineer with a strong foundation in digital product development and a zeal for innovation, who will be responsible for deploying product updates, identifying production issues, and implementing integrations. The backend engineer should thrive in agile, fast-paced environments, champion DevOps and CI/CD best practices, and consistently deliver scalable, customer-focused backend solutions. You will have the opportunity to make a meaningful impact on FICO’s platform by infusing it with next-generation AI capabilities. You’ll work with a team, leveraging skills to build solutions and drive innovation forward.
What You’ll Contribute
- Design, develop, and maintain high-performance, scalable Python-based backend systems powering ML and Generative AI products.
- Collaborate closely with ML engineers, data scientists, and product managers to build reusable APIs and services that support the full ML lifecycle—from data ingestion to inference and monitoring.
- Take end-to-end ownership of backend services, including design, implementation, testing, deployment, and maintenance.
- Implement product changes across the SDLC: detailed design, unit/integration testing, documentation, deployment, and support.
- Contribute to architecture discussions and enforce coding best practices and design patterns across the engineering team.
- Participate in peer code reviews and PR approvals, and mentor junior developers by removing technical blockers and sharing expertise.
- Work with the QA and DevOps teams to enable CI/CD, build pipelines, and ensure product quality through automated testing and performance monitoring.
- Translate business and product requirements into robust engineering deliverables and detailed technical documentation.
- Build backend infrastructure that supports ML pipelines, model versioning, performance monitoring, and retraining loops.
- Engage in prototyping efforts, collaborating with internal and external stakeholders to design PoVs and pilot solutions.
What We’re Seeking
- 8+ years of software development experience, with at least 3 years in a technical or team leadership role.
- Deep expertise in Python, including design and development of reusable, modular API packages for ML and data science use cases.
- Strong understanding of REST and gRPC APIs, including schema design, authentication, and versioning.
- Familiarity with ML workflows, MLOps, and tools such as MLflow, FastAPI, TensorFlow, PyTorch, or similar.
- Strong experience building and maintaining microservices and distributed backend systems in production environments.
- Solid knowledge of cloud-native development and experience with platforms like AWS, GCP, or Azure.
- Familiarity with Kubernetes, Docker, Helm, and deployment strategies for scalable AI systems.
- Proficient in SQL and NoSQL databases and experienced in designing performant database schemas.
- Experience with messaging and streaming platforms like Kafka is a plus.
- Understanding of software engineering best practices, including unit testing, integration testing, TDD, code reviews, and performance tuning.
- Exposure to frontend technologies such as React or Angular is a bonus, though not mandatory.
- Experience integrating with LLM APIs and understanding of prompt engineering and vector databases.
- Exposure to Java or Spring Boot in hybrid technology environments will be a bonus.
- Excellent collaboration and communication skills, with a proven ability to work effectively in cross-functional, globally distributed teams.
- A bachelor’s degree in Computer Science, Engineering, or a related discipline, or equivalent hands-on industry experience.
Our Offer to You
- An inclusive culture strongly reflecting our core values: Act Like an Owner, Delight Our Customers and Earn the Respect of Others.
- The opportunity to make an impact and develop professionally by leveraging your unique strengths and participating in valuable learning experiences.
- Highly competitive compensation, benefits and rewards programs that encourage you to bring your best every day and be recognized for doing so.
- An engaging, people-first work environment offering work/life balance, employee resource groups, and social events to promote interaction and camaraderie.
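Since the requirements above mention vector databases alongside LLM APIs, here is a small, illustrative sketch of the underlying similarity-search operation using NumPy; the dimensions and data are made up, and this is not FICO's implementation.

```python
import numpy as np

def cosine_top_k(query: np.ndarray, corpus: np.ndarray, k: int = 3) -> list:
    """Return indices of the k corpus vectors most similar to the query."""
    q = query / np.linalg.norm(query)
    c = corpus / np.linalg.norm(corpus, axis=1, keepdims=True)
    sims = c @ q                      # cosine similarity against every stored vector
    return np.argsort(-sims)[:k].tolist()

rng = np.random.default_rng(7)
corpus = rng.normal(size=(1000, 384))   # e.g., 384-dim sentence embeddings
query = rng.normal(size=384)
print("nearest documents:", cosine_top_k(query, corpus, k=5))
```

Dedicated vector stores add approximate-nearest-neighbor indexes (e.g., HNSW or IVF) so this lookup stays fast at production scale.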
Posted 1 week ago
3.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
This role is for one of our clients
Industry: Technology, Information and Media
Seniority level: Mid-Senior level
Min Experience: 3 years
Location: Bengaluru
JobType: full-time
We are on the lookout for a Machine Intelligence Specialist to join our forward-thinking engineering team. This role is ideal for individuals passionate about building intelligent systems that transform raw data into real-world automation through advanced natural language processing (NLP), computer vision, and machine learning technologies. You will lead initiatives involving deep learning, model deployment, and system optimization, contributing to high-impact products across domains. If you're energized by turning cutting-edge AI research into practical solutions, this is your opportunity to make a difference.
Your Mission
- Design and Develop Intelligent Systems: Create robust machine learning models for tasks in NLP (e.g., text summarization, named entity recognition, semantic search) and computer vision (e.g., image classification, OCR, object detection).
- Deep Learning Engineering: Build and optimize models using TensorFlow, Keras, or PyTorch, leveraging neural architectures such as CNNs, RNNs, LSTMs, and Transformers.
- Collaborative Innovation: Partner with product, data, and engineering teams to align AI development with real business needs, ensuring seamless integration of intelligence into products and workflows.
- Cloud-Native Deployment: Implement and scale ML models on AWS, Azure, or GCP using native services and containerization tools (Docker, Kubernetes).
- AI at Scale: Drive performance tuning, data preprocessing, A/B testing, and continuous training to ensure accuracy and production reliability.
- Model Lifecycle Ownership: Implement MLOps best practices, including CI/CD pipelines for ML, model versioning, and monitoring using tools like MLflow or SageMaker.
- Ethical & Transparent AI: Prioritize explainability, fairness, and compliance throughout the model development lifecycle.
What You Bring
Essential Qualifications
- 3+ years of hands-on experience in designing, training, and deploying machine learning models.
- Strong background in Python and ML libraries: TensorFlow, Keras, OpenCV, scikit-learn, Hugging Face Transformers, etc.
- Experience in both NLP and computer vision use cases and toolkits.
- Proven ability to deploy models using cloud-native AI tools on AWS, Azure, or GCP.
- Familiarity with containerization (Docker) and orchestration (Kubernetes).
- Solid foundation in mathematics, statistics, and deep learning algorithms.
- Excellent communication skills and a collaborative mindset.
Bonus Points For
- Experience with MLOps workflows and tools like MLflow, TFX, or Kubeflow.
- Exposure to edge AI or streaming inference systems.
- Understanding of responsible AI principles and data governance.
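As a concrete illustration of one NLP task named above (named entity recognition), here is a hedged sketch using the Hugging Face Transformers pipeline API; the default public checkpoint downloads on first run and is an example, not the client's model.

```python
from transformers import pipeline

# Aggregation groups sub-word tokens into whole entities (PER, ORG, LOC, ...).
ner = pipeline("ner", aggregation_strategy="simple")

text = "Acme Robotics opened a new research lab in Bengaluru last March."
for entity in ner(text):
    print(f"{entity['entity_group']:<6} {entity['word']:<20} score={entity['score']:.2f}")
```

A production setup would pin a specific fine-tuned checkpoint and wrap this behind a monitored, containerized service rather than relying on the pipeline default.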
Posted 1 week ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Description
“When you attract people who have the DNA of pioneers and the DNA of explorers, you build a company of like-minded people who want to invent. And that’s what they think about when they get up in the morning: how are we going to work backwards from customers and build a great service or a great product” – Jeff Bezos
Amazon.com’s success is built on a foundation of customer obsession. Have you ever thought about what it takes to successfully deliver millions of packages to Amazon customers seamlessly every day, like clockwork? In order to make that happen, behind those millions of packages, billions of decisions get made by machines and humans. What is the accuracy of the customer-provided address? Do we know the exact location of the address on the map? Is there a safe place? Can we make an unattended delivery? Would a signature be required? Is the address a commercial property? Do we know the open business hours of the address? What if the customer is not home? Is there an alternate delivery address? Does the customer have any special preference? What are other addresses that also have packages to be delivered on the same day? Are we optimizing the delivery associate’s route? Does the delivery associate know the locality well enough? Is there an access code to get inside the building? And the list simply goes on. At the core of all of it lies the quality of the underlying data that can help make those decisions in time.
The person in this role will be a strong influencer who will ensure goal alignment with Technology, Operations, and Finance teams. This role will serve as the face of the organization to global stakeholders. This position requires a results-oriented, high-energy, dynamic individual with both stamina and mental quickness to be able to work and thrive in a fast-paced, high-growth global organization. Excellent communication skills and executive presence to get in front of VPs and SVPs across Amazon will be imperative.
Key Strategic Objectives
Amazon is seeking an experienced leader to own the vision for quality improvement through global address management programs. As a Business Intelligence Engineer on Amazon's last mile quality team, you will be responsible for shaping the strategy and direction of customer-facing products that are core to the customer experience. As a key member of the last mile leadership team, you will continually raise the bar on both quality and performance. You will bring innovation, a strategic perspective, a passionate voice, and an ability to prioritize and execute on a fast-moving set of priorities, competitive pressures, and operational initiatives. You will partner closely with product and technology teams to define and build innovative and delightful experiences for customers. You must be highly analytical, able to work extremely effectively in a matrix organization, and have the ability to break complex problems down into steps that drive product development at Amazon speed. You will set the tempo for defect reduction through continuous improvement and drive accountability across multiple business units in order to deliver large-scale, high-visibility, high-impact projects. You will lead by example to be just as passionate about operational performance and predictability as you will be about all other aspects of customer experience.
The Successful Candidate Will Be Able To
- Effectively manage customer expectations and resolve conflicts that balance client and company needs.
- Develop processes to effectively maintain and disseminate project information to stakeholders.
- Be successful in a delivery-focused environment and determine the right processes to make the team successful. This opportunity requires excellent technical, problem-solving, and communication skills. The candidate is not just a policy maker/spokesperson but drives to get things done.
- Possess superior analytical abilities and judgment. Use quantitative and qualitative data to prioritize and influence, show creativity, experimentation and innovation, and drive projects with urgency in this fast-paced environment.
- Partner with key stakeholders to develop the vision and strategy for customer experience on our platforms. Influence product roadmaps based on this strategy along with your teams.
- Support the scalable growth of the company by developing and enabling the success of the Operations leadership team.
- Serve as a role model for Amazon Leadership Principles inside and outside the organization.
- Actively seek to implement and distribute best practices across the operation.
- Devise and implement efficient and secure procedures for data management and analysis with attention to all technical aspects.
- Create and enforce policies for effective data management.
- Formulate management techniques for quality data collection to ensure adequacy, accuracy and legitimacy of data.
- Establish rules and procedures for data sharing with upper management, external stakeholders, etc.
Basic Qualifications
- Knowledge of SQL and Excel
- Experience hiring and leading a high-performance team
- Knowledge of data engineering pipelines, cloud solutions, ETL management, databases, visualizations and analytical platforms
- Knowledge of methods for statistical inference (e.g. regression, experimental design, significance testing)
Preferred Qualifications
- Knowledge of product experimentation (A/B testing)
- Knowledge of a scripting language (Python, R, etc.)
Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.
Company - ADCI HYD 13 SEZ
Job ID: A2974488
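To illustrate the statistical-inference and A/B-testing qualifications above, here is a small, hypothetical sketch of a two-proportion z-test with statsmodels; the delivery counts are invented for the example.

```python
from statsmodels.stats.proportion import proportions_ztest

# First-attempt delivery successes for control vs. treatment (hypothetical numbers).
successes = [9_620, 9_741]
trials = [10_000, 10_000]

stat, p_value = proportions_ztest(count=successes, nobs=trials)
print(f"z = {stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Difference is statistically significant at the 5% level.")
else:
    print("No significant difference detected.")
```

In practice the experiment design (sample size, guardrail metrics, multiple-comparison handling) matters as much as the test itself.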
Posted 1 week ago
0 years
0 Lacs
Hyderabad, Telangana, India
Remote
We are united in our mission to make a positive impact on healthcare. Join Us!
- South Florida Business Journal, Best Places to Work 2024
- Inc. 5000 Fastest-Growing Private Companies in America 2024
- 2024 Black Book Awards, ranked #1 EHR in 11 Specialties
- 2024 Spring Digital Health Awards, “Web-based Digital Health” category for EMA Health Records (Gold)
- 2024 Stevie American Business Award (Silver), New Product and Service: Health Technology Solution (Klara)
Who We Are
We Are Modernizing Medicine (WAMM)! We’re a team of bright, passionate, and positive problem-solvers on a mission to place doctors and patients at the center of care through an intelligent, specialty-specific cloud platform. Our vision is a world where the software we build increases medical practice success and improves patient outcomes. Founded in 2010 by Daniel Cane and Dr. Michael Sherling, we have grown to over 3400 combined direct and contingent team members serving eleven specialties, and we are just getting started! ModMed is based in Boca Raton, FL, with office locations in Santiago, Chile, Berlin, Germany, Hyderabad, India, and a robust remote workforce with team members across the US.
ModMed is hiring a driven ML Ops Engineer 2 to join our positive, passionate, and high-performing team focused on scalable ML systems. This is an exciting opportunity for you: you will collaborate with data scientists, engineers, and other cross-functional teams to ensure seamless model deployment, monitoring, and automation. If you're passionate about cloud infrastructure, automation, and optimizing ML pipelines, this is the role for you within a fast-paced Healthcare IT company that is truly Modernizing Medicine!
Key Responsibilities
- Model Deployment & Automation: Develop, deploy, and manage ML models on Databricks using MLflow for tracking experiments, managing models, and registering them in a centralized repository.
- Infrastructure & Environment Management: Set up scalable and fault-tolerant infrastructure to support model training and inference in cloud environments such as AWS, GCP, or Azure.
- Monitoring & Performance Optimization: Implement monitoring systems to track model performance, accuracy, and drift over time. Create automated systems for re-training and continuous learning to maintain optimal performance.
- Data Pipeline Integration: Collaborate with the data engineering team to integrate model pipelines with real-time and batch data processing frameworks, ensuring seamless data flow for training and inference.
Skillset & Qualifications
- Model Deployment: Experience with deploying models in production using cloud platforms like AWS SageMaker, GCP AI Platform, or Azure ML Studio.
- Version Control & Automation: Experience with MLOps tools such as MLflow, Kubeflow, or Airflow to automate and monitor the lifecycle of machine learning models.
- Cloud Expertise: Experience with cloud-based machine learning services on AWS, Google Cloud, or Azure, ensuring that models are scalable and efficient.
- Model Evaluation: Engineers must be skilled in measuring and optimizing model performance through metrics like AUC, precision, recall, and F1-score, ensuring that models are robust and reliable in production settings.
- Education: Bachelor’s or Master’s degree in Data Science, Statistics, Mathematics, or a related technical field.
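As an illustration of the MLflow tracking and registry workflow described above, here is a hedged sketch with a toy scikit-learn model; the experiment and model names are placeholders, and a real setup would point MLflow at a Databricks or other shared tracking server rather than a local SQLite file.

```python
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

# Local SQLite backend so the model registry works in this standalone sketch.
mlflow.set_tracking_uri("sqlite:///mlflow.db")
mlflow.set_experiment("demo-claims-classifier")

X, y = make_classification(n_samples=1000, n_features=15, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

with mlflow.start_run():
    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(X_train, y_train)
    f1 = f1_score(y_test, model.predict(X_test))
    mlflow.log_param("n_estimators", 100)
    mlflow.log_metric("f1", f1)
    # Registering the model makes a versioned entry that deployment jobs can pull.
    mlflow.sklearn.log_model(model, "model",
                             registered_model_name="demo-claims-classifier")
print(f"Run logged with f1={f1:.3f}")
```

Drift monitoring and automated re-training would sit on top of this, comparing live metrics against the values logged at training time.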
ModMed In India Benefit Highlights
High growth, collaborative, transparent, fun, and award-winning culture
Comprehensive benefits package including medical for you, your family, and your dependent parents
Company-supported community engagement opportunities along with a paid Voluntary Time Off day to use for volunteering in your community of interest
Global presence, and in-person collaboration opportunities; dog-friendly HQ (US), Hybrid office-based roles and remote availability
Company-sponsored Employee Resource Groups that provide engaged and supportive communities within ModMed
ModMed Benefits Highlight: At ModMed, we believe it’s important to offer a competitive benefits package designed to meet the diverse needs of our growing workforce. Eligible Modernizers can enroll in a wide range of benefits:
India: Meals & Snacks: Enjoy complimentary office lunches & dinners on select days and healthy snacks delivered to your desk, Insurance Coverage: Comprehensive health, accidental, and life insurance plans, including coverage for family members, all at no cost to employees, Allowances: Annual wellness allowance to support your well-being and productivity, Earned, casual, and sick leaves to maintain a healthy work-life balance, Bereavement leave for difficult times and extended medical leave options, Paid parental leaves, including maternity, paternity, adoption, surrogacy, and abortion leave, Celebration leave to make your special day even more memorable, and company-paid holidays to recharge and unwind.
United States: Comprehensive medical, dental, and vision benefits, including a company Health Savings Account contribution, 401(k): ModMed provides a matching contribution each payday of 50% of your contribution deferred on up to 6% of your compensation. After one year of employment with ModMed, 100% of any matching contribution you receive is yours to keep. Generous Paid Time Off and Paid Parental Leave programs, Company paid Life and Disability benefits, Flexible Spending Account, and Employee Assistance Programs, Company-sponsored Business Resource & Special Interest Groups that provide engaged and supportive communities within ModMed, Professional development opportunities, including tuition reimbursement programs and unlimited access to LinkedIn Learning, Global presence and in-person collaboration opportunities; dog-friendly HQ (US), Hybrid office-based roles and remote availability for some roles, Weekly catered breakfast and lunch, treadmill workstations, Zen, and wellness rooms within our BRIC headquarters.
PHISHING SCAM WARNING: ModMed is among several companies recently made aware of a phishing scam involving imposters posing as hiring managers recruiting via email, text and social media. The imposters are creating misleading email accounts, conducting remote "interviews," and making fake job offers in order to collect personal and financial information from unsuspecting individuals. Please be aware that no job offers will be made from ModMed without a formal interview process, and valid communications from our hiring team will come from our employees with a ModMed email address (first.lastname@modmed.com). Please check senders’ email addresses carefully. Additionally, ModMed will not ask you to purchase equipment or supplies as part of your onboarding process. If you are receiving communications as described above, please report them to the FTC website.
Posted 1 week ago
8.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Job Title: Senior Python Developer – Backend Engineering Company: Darwix AI Location: Gurgaon (On-site) Type: Full-Time Experience Required: 4–8 Years
About Darwix AI
Darwix AI is building India’s most advanced GenAI-powered platform for enterprise sales teams. We combine speech recognition, LLMs, vector databases, real-time analytics, and multilingual intelligence to power customer conversations across India, the Middle East, and Southeast Asia. We’re solving complex backend problems across speech-to-text pipelines, agent assist systems, AI-based real-time decisioning, and scalable SaaS delivery. Our engineering team sits at the core of our product and works closely with AI research, product, and client delivery to build the future of revenue enablement. Backed by top-tier VCs, AI advisors, and enterprise clients, this is a chance to build something foundational.
Role Overview
We are hiring a Senior Python Developer to architect, implement, and optimize high-performance backend systems that power our AI platform. You will take ownership of key backend services—from core REST APIs and data pipelines to complex integrations with AI/ML modules. This role is for builders. You’ll work closely with product, AI, and infra teams, write production-grade Python code, lead critical decisions on architecture, and help shape engineering best practices.
Key Responsibilities
1. Backend API Development: Design and implement scalable, secure RESTful APIs using FastAPI, Flask, or Django REST Framework. Architect modular services and microservices to support AI, transcription, real-time analytics, and reporting. Optimize API performance with proper indexing, pagination, caching, and load management strategies. Integrate with frontend systems, mobile clients, and third-party systems through clean, well-documented endpoints.
2. AI Integrations & Inference Orchestration: Work closely with AI engineers to integrate GenAI/LLM APIs (OpenAI, Llama, Gemini), transcription models (Whisper, Deepgram), and retrieval-augmented generation (RAG) workflows. Build services to manage prompt templates, chaining logic, and LangChain flows. Deploy and manage vector database integrations (e.g., FAISS, Pinecone, Weaviate) for real-time search and recommendation pipelines.
3. Database Design & Optimization: Model and maintain relational databases using MySQL or PostgreSQL; experience with MongoDB is a plus. Optimize SQL queries, schema design, and indexes to support low-latency data access. Set up background jobs for session archiving, transcript cleanup, and audio-data binding.
4. System Architecture & Deployment: Own backend deployments using GitHub Actions, Docker, and AWS EC2. Ensure high availability of services through containerization, horizontal scaling, and health monitoring. Manage staging and production environments, including DB backups, server health checks, and rollback systems.
5. Security, Auth & Access Control: Implement robust authentication (JWT, OAuth), rate limiting, and input validation. Build role-based access controls (RBAC) and audit logging into backend workflows. Maintain compliance-ready architecture for enterprise clients (data encryption, PII masking).
6. Code Quality, Documentation & Collaboration: Write clean, modular, extensible Python code with meaningful comments and documentation. Build test coverage (unit, integration) using PyTest, unittest, or Postman/Newman. Participate in pull requests, code reviews, sprint planning, and retrospectives with the engineering team.
Required Skills & Qualifications
Technical Expertise: 3–8 years of experience in backend development with Python and PHP. Strong experience with FastAPI, Flask, or Django (at least one in production-scale systems). Deep understanding of RESTful APIs, microservice architecture, and asynchronous Python patterns. Strong hands-on with MySQL (joins, views, stored procedures); bonus if familiar with MongoDB, Redis, or Elasticsearch. Experience with containerized deployment using Docker and cloud platforms like AWS or GCP. Familiarity with Git, GitHub, CI/CD pipelines, and Linux-based server environments.
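Since the role centers on building scalable APIs with FastAPI for AI model serving, here is a minimal, hedged sketch of such an endpoint. The route, request/response schema, and placeholder scoring logic are assumptions for illustration, not Darwix AI's actual service.

```python
# Minimal FastAPI model-serving sketch (illustrative; route and schema are assumed).
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="demo-inference-api")

class ScoreRequest(BaseModel):
    text: str

class ScoreResponse(BaseModel):
    label: str
    confidence: float

@app.post("/v1/score", response_model=ScoreResponse)
async def score(req: ScoreRequest) -> ScoreResponse:
    # Placeholder logic standing in for a real model call.
    label = "positive" if "good" in req.text.lower() else "negative"
    return ScoreResponse(label=label, confidence=0.5)

# Run locally with:  uvicorn main:app --reload   (assumes this file is saved as main.py)
```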
Posted 1 week ago
0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Years of exp: 10 - 15 yrs Location: Noida
Join us as Cloud Engineer at Dailoqa, where you will be responsible for operationalizing cutting-edge machine learning and generative AI solutions, ensuring scalable, secure, and efficient deployment across infrastructure. You will work closely with data scientists, ML engineers, and business stakeholders to build and maintain robust MLOps pipelines, enabling rapid experimentation and reliable production implementation of AI models, including LLMs and real-time analytics systems.
To be successful as Cloud Engineer you should have experience with: Cloud sourcing, networks, VMs, performance, scaling, availability, storage, security, access management. Deep expertise in one or more cloud platforms: AWS, Azure, GCP. Strong experience in containerization and orchestration (Docker, Kubernetes, Helm). Familiarity with CI/CD tools: GitHub Actions, Jenkins, Azure DevOps, ArgoCD, etc. Proficiency in scripting languages (Python, Bash, PowerShell). Knowledge of MLOps tools such as MLflow, Kubeflow, SageMaker, Vertex AI, or Azure ML. Strong understanding of DevOps principles applied to ML workflows.
Key Responsibilities may include: Design and implement scalable, cost-optimized, and secure infrastructure for AI-driven platforms. Implement infrastructure as code using tools like Terraform, ARM, or CloudFormation. Automate infrastructure provisioning, CI/CD pipelines, and model deployment workflows. Ensure version control, repeatability, and compliance across all infrastructure components. Set up monitoring, logging, and alerting frameworks using tools like Prometheus, Grafana, ELK, or Azure Monitor. Optimize performance and resource utilization of AI workloads, including GPU-based training/inference. Experience with Snowflake, Databricks for collaborative ML development and scalable data processing. Understanding of model interpretability, responsible AI, and governance. Contributions to open-source MLOps tools or communities. Strong leadership, communication, and cross-functional collaboration skills. Knowledge of data privacy, model governance, and regulatory compliance in AI systems. Exposure to LangChain, Vector DBs (e.g., FAISS, Pinecone), and retrieval-augmented generation (RAG) pipelines.
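The responsibilities mention setting up monitoring and alerting with tools like Prometheus and Grafana. Below is a small, hedged sketch of exposing custom inference metrics from a Python service with the prometheus_client library; the metric names, port, and simulated workload are assumptions.

```python
# Minimal Prometheus custom-metrics sketch (illustrative; metric names are assumed).
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

REQUESTS = Counter("inference_requests_total", "Total inference requests served")
LATENCY = Histogram("inference_latency_seconds", "Inference latency in seconds")

def fake_inference() -> None:
    # Stand-in for a real model call; records one request and its latency.
    with LATENCY.time():
        time.sleep(random.uniform(0.01, 0.1))
    REQUESTS.inc()

if __name__ == "__main__":
    start_http_server(8000)  # Metrics exposed at http://localhost:8000/metrics for scraping.
    while True:
        fake_inference()
```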
Posted 1 week ago
12.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Title: VP-Digital Expert Support Lead Experience: 12+ Years Location: Pune
Position Overview
The Digital Expert Support Lead is a senior-level leadership role responsible for ensuring the resilience, scalability, and enterprise-grade supportability of AI-powered expert systems deployed across key domains like Wholesale Banking, Customer Onboarding, Payments, and Cash Management. This role requires technical depth, process rigor, stakeholder fluency, and the ability to lead cross-functional squads that ensure seamless operational performance of GenAI and digital expert agents in production environments. The candidate will work closely with Engineering, Product, AI/ML, SRE, DevOps, and Compliance teams to drive operational excellence and shape the next generation of support standards for AI-driven enterprise systems.
Role-Level Expectations
Functionally accountable for all post-deployment support and performance assurance of digital expert systems. Operates at L3+ support level, enabling L1/L2 teams through proactive observability, automation, and runbook design. Leads stability engineering squads, AI support specialists, and DevOps collaborators across multiple business units. Acts as the bridge between operations and engineering, ensuring technical fixes feed into product backlog effectively. Supports continuous improvement through incident intelligence, root cause reporting, and architecture hardening. Sets the support governance framework (SLAs/OLAs, monitoring KPIs, downtime classification, recovery playbooks).
Position Responsibilities
Operational Leadership & Stability Engineering: Own the production health and lifecycle support of all digital expert systems across onboarding, payments, and cash management. Build and govern the AI Support Control Center to track usage patterns, failure alerts, and escalation workflows. Define and enforce SLAs/OLAs for LLMs, GenAI endpoints, NLP components, and associated microservices. Establish and maintain observability stacks (Grafana, ELK, Prometheus, Datadog) integrated with model behavior. Lead major incident response and drive cross-functional war rooms for critical recovery. Ensure AI pipeline resilience through fallback logic, circuit breakers, and context caching. Review and fine-tune inference flows, timeout parameters, latency thresholds, and token usage limits.
Engineering Collaboration & Enhancements: Drive code-level hotfixes or patches in coordination with Dev, QA, and Cloud Ops. Implement automation scripts for diagnosis, log capture, reprocessing, and health validation. Maintain well-structured GitOps pipelines for support-related patches, rollback plans, and enhancement sprints. Coordinate enhancement requests based on operational analytics and feedback loops. Champion enterprise integration and alignment with Core Banking, ERP, H2H, and transaction processing systems.
Governance, Planning & People Leadership: Build and mentor a high-caliber AI Support Squad – support engineers, SREs, and automation leads. Define and publish support KPIs, operational dashboards, and quarterly stability scorecards. Present production health reports to business, engineering, and executive leadership. Define runbooks, response playbooks, knowledge base entries, and onboarding plans for newer AI support use cases. Manage relationships with AI platform vendors, cloud ops partners, and application owners.
Must-Have Skills & Experience
12+ years of software engineering, platform reliability, or AI systems management experience. Proven track record of leading support and platform operations for AI/ML/GenAI-powered systems. Strong experience with cloud-native platforms (Azure/AWS), Kubernetes, and containerized observability. Deep expertise in Python and/or Java for production debugging and script/tooling development. Proficient in monitoring, logging, tracing, and alerts using enterprise tools (Grafana, ELK, Datadog). Familiarity with token economics, prompt tuning, inference throttling, and GenAI usage policies. Experience working with distributed systems, banking APIs, and integration with Core/ERP systems. Strong understanding of incident management frameworks (ITIL) and ability to drive postmortem discipline. Excellent stakeholder management, cross-functional coordination, and communication skills. Demonstrated ability to mentor senior ICs and influence product and platform priorities.
Nice-to-Haves
Exposure to enterprise AI platforms like OpenAI, Azure OpenAI, Anthropic, or Cohere. Experience supporting multi-tenant AI applications with business-driven SLAs. Hands-on experience integrating with compliance and risk monitoring platforms. Familiarity with automated root cause inference or anomaly detection tooling. Past participation in enterprise architecture councils or platform reliability forums.
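One responsibility above is ensuring AI pipeline resilience through fallback logic and circuit breakers. The sketch below shows one minimal, standard-library-only way such a breaker could wrap a flaky GenAI call; the thresholds, cool-down period, and fallback message are assumptions, not the actual platform's design.

```python
# Minimal circuit-breaker-with-fallback sketch (stdlib only; thresholds are assumed).
import time

class CircuitBreaker:
    def __init__(self, failure_threshold: int = 3, reset_after: float = 30.0):
        self.failure_threshold = failure_threshold
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = 0.0

    def call(self, fn, *args, fallback=None, **kwargs):
        # If the circuit is open and the cool-down has not elapsed, skip the call.
        if self.failures >= self.failure_threshold:
            if time.time() - self.opened_at < self.reset_after:
                return fallback
            self.failures = 0          # Half-open: allow one trial call.
        try:
            result = fn(*args, **kwargs)
            self.failures = 0
            return result
        except Exception:
            self.failures += 1
            self.opened_at = time.time()
            return fallback

def flaky_llm_call(prompt: str) -> str:
    raise TimeoutError("simulated upstream timeout")  # Stand-in for a GenAI endpoint.

breaker = CircuitBreaker()
for _ in range(5):
    print(breaker.call(flaky_llm_call, "summarize today's incidents",
                       fallback="Service degraded - using cached summary."))
```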
Posted 1 week ago
6.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Job Title: AI Engineer Location: Gurgaon (On-site) Type: Full-Time Experience: 2–6 Years
Role Overview
We are seeking a hands-on AI Engineer to architect and deploy production-grade AI systems that power our real-time voice intelligence suite. You will lead AI model development, optimize low-latency inference pipelines, and integrate GenAI, ASR, and RAG systems into scalable platforms. This role combines deep technical expertise with team leadership and a strong product mindset.
Key Responsibilities
Build and deploy ASR models (e.g., Whisper, wav2vec 2.0) and diarization systems for multi-lingual, real-time environments. Design and optimize GenAI pipelines using OpenAI, Gemini, LLaMA, and RAG frameworks (LangChain, LlamaIndex). Architect and implement vector database systems (FAISS, Pinecone, Weaviate) for knowledge retrieval and indexing. Fine-tune LLMs using SFT, LoRA, RLHF, and craft effective prompt strategies for summarization and recommendation tasks. Lead AI engineering team members and collaborate cross-functionally to ship robust, high-performance systems at scale.
Preferred Qualifications
2–6 years of experience in AI/ML, with demonstrated deployment of NLP, GenAI, or STT models in production. Proficiency in Python, PyTorch/TensorFlow, and real-time architectures (WebSockets, Kafka). Strong grasp of transformer models, MLOps, and low-latency pipeline optimization. Bachelor’s/Master’s in CS, AI/ML, or related field from a reputed institution (IITs, BITS, IIITs, or equivalent).
What We Offer
Compensation: Competitive salary + equity + performance bonuses. Ownership: Lead impactful AI modules across voice, NLP, and GenAI. Growth: Work with top-tier mentors, advanced compute resources, and real-world scaling challenges. Culture: High-trust, high-speed, outcome-driven startup environment.
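The role involves building and deploying ASR models such as Whisper. As a hedged starting point, the sketch below transcribes an audio file with the open-source openai-whisper package; the checkpoint size and file path are assumptions, and a production pipeline would add streaming, diarization, and batching.

```python
# Minimal speech-to-text sketch with openai-whisper (illustrative; path and model size assumed).
# pip install openai-whisper   (also requires ffmpeg on the system)
import whisper

model = whisper.load_model("base")               # Small multilingual checkpoint.
result = model.transcribe("call_recording.wav")  # Hypothetical audio file.

print(result["text"])                            # Full transcript.
for seg in result["segments"]:                   # Per-segment timestamps.
    print(f"[{seg['start']:.1f}s - {seg['end']:.1f}s] {seg['text']}")
```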
Posted 1 week ago
2.0 years
0 Lacs
Dwarka, Delhi, Delhi
Remote
Job Description: Embedded AI Engineer Location: Dwarka Sector 12, Delhi Job Type: Full-Time / Contract / Freelance
Job Description: We are seeking a skilled and motivated Embedded AI Engineer to develop and optimize a voice assistant system on resource-constrained devices like the ESP32-S3. You will be responsible for implementing wake word detection, voice activity detection (VAD), and basic speech command recognition using Espressif's ESP-SR framework, I2S microphones, and embedded ML models. You should also have experience with computer vision projects that monitor particular objects in real time using thermal and standard cameras.
Responsibilities:
- Design and implement embedded voice assistant pipelines using ESP-SR, ESP-IDF, or PlatformIO.
- Integrate I2S digital microphones (e.g., INMP441, DFPlayer) with ESP32 for real-time audio capture.
- Develop wake word detection, VAD, and command recognition using models like WakeNet, MultiNet, or TinyML-based solutions.
- Optimize AI models and inference for ultra-low-power operation.
- Manage real-time tasks using FreeRTOS on ESP32 platforms.
- Interface with peripherals like SD cards, LEDs, relays, and Wi-Fi/BLE modules.
- Debug, profile, and optimize memory and performance on constrained hardware.
Required Skills:
- Strong proficiency in C/C++ and embedded development for ESP32.
- Experience with ESP-IDF, PlatformIO, or Arduino ESP32 core.
- Practical knowledge of voice processing algorithms: VAD, wake word, STT.
- Experience using or modifying ESP-SR, ESP-Skainet, or custom keyword spotting models.
- Familiarity with I2S, DMA, and audio pre-processing (gain control, filtering).
- Understanding of FreeRTOS, low-power modes, and real-time audio handling.
Preferred/Bonus Skills:
- Experience with TinyML, TensorFlow Lite for Microcontrollers, or Edge Impulse.
- Knowledge of Python for data preprocessing and model training.
- Knowledge and experience in computer vision (OpenCV, image processing).
- Knowledge and experience in deep learning/AI for vision (e.g., CNNs, YOLO, Faster R-CNN, PyTorch, TensorFlow, Keras).
- Experience with NVIDIA Jetson Nano/Orin-based devices.
- Experience with Bluetooth (BLE) or Wi-Fi communication for IoT applications.
- Experience in noise reduction (e.g., NSNet), echo cancellation, or ESP-DSP.
Qualifications:
- Bachelor's or Master's degree in Electronics, Embedded Systems, Computer Engineering, or related field.
- 2+ years of experience in embedded firmware or AI on edge devices.
Why Join Us?
- Work on cutting-edge embedded AI products for consumer and industrial voice control.
- Opportunity to shape next-gen low-power voice assistant hardware.
- Flexible remote work options and tech ownership.
How to Apply: Send your resume, GitHub/portfolio, and any project demos to: Email: hr@gfofireequipments.com
Job Type: Full-time Pay: From ₹100,000.00 per month Schedule: Day shift Work Location: In person
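The bonus skills mention Python for model training and TensorFlow Lite for Microcontrollers. The sketch below trains a tiny keyword-spotting classifier on random stand-in features and converts it to a .tflite flatbuffer, the step that typically precedes deployment with TFLite Micro on a device like the ESP32-S3; the input shape, labels, and data are assumptions for illustration.

```python
# Tiny keyword-spotting model sketch + TFLite conversion (illustrative; data is random,
# input shape and labels are assumptions - real training would use MFCC features).
import numpy as np
import tensorflow as tf

NUM_CLASSES = 3          # e.g. "wake word", "stop", "unknown" (hypothetical labels)
X = np.random.rand(256, 49, 10, 1).astype("float32")   # Fake 49x10 MFCC "images".
y = np.random.randint(0, NUM_CLASSES, size=(256,))

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(8, (3, 3), activation="relu", input_shape=(49, 10, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=2, batch_size=32, verbose=0)

# Convert to a .tflite flatbuffer; a further int8-quantized conversion would then be
# compiled into firmware with TensorFlow Lite for Microcontrollers.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
with open("kws_model.tflite", "wb") as f:
    f.write(converter.convert())
```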
Posted 1 week ago
10.0 years
1 - 1 Lacs
Hyderābād
On-site
JOB DESCRIPTION Elevate your career as the Director of Machine Learning Engineering, where your technical expertise and visionary leadership will shape the future of AI and ML solutions. As a Director of Machine Learning Engineering at JPMorgan Chase within the Corporate Sector – Artificial Intelligence and Machine Learning (AIML) Data Platforms, you will lead a specialized technical area, driving impact across teams, technologies, and projects. In this role, you will leverage your deep knowledge of machine learning, software engineering, and product management to spearhead multiple complex ML projects and initiatives, serving as the primary decision-maker and a catalyst for innovation and solution delivery. You will be responsible for hiring, leading, and mentoring a team of Machine Learning and Software Engineers, focusing on best practices in ML engineering, with the goal of elevating team performance to produce high-quality, scalable ML solutions with operational excellence. You will engage deeply in technical aspects, reviewing code, mentoring engineers, troubleshooting production ML applications, and enabling new ideas through rapid prototyping. Your passion for parallel distributed computing, big data, cloud engineering, micro-services, automation, and operational excellence will be key. Job Responsibilities Lead and manage a team of machine learning engineers, ensuring the implementation, delivery, and support of high-quality ML solutions. Collaborate with product teams to deliver tailored, AI/ML-driven technology solutions. Architect and implement distributed AI/ML infrastructure, including inference, training, scheduling, orchestration, and storage. Develop advanced monitoring and management tools for high reliability and scalability in AI/ML systems. Optimize AI/ML system performance by identifying and resolving inefficiencies and bottlenecks. Drive the adoption and execution of AI/ML Platform tools across various teams. Integrate Generative AI and Classical AI within the ML Platform using state-of-the-art techniques. Lead the entire AI/ML product life cycle through planning, execution, and future development by continuously adapting, developing new AI/ML products and methodologies, managing risks, and achieving business targets like cost, features, reusability, and reliability to support growth. Manage, mentor, and develop a team of AI/ML professionals in a way that promotes a culture of excellence, continuous learning, and supports their professional goals. Required Qualifications, Capabilities, and Skills Formal training or certification in software engineering concepts and 10+ years applied experience. In addition, 5+ years of experience leading technologists to manage, anticipate and solve complex technical items within your domain of expertise 12+ years of experience in engineering management with a strong technical background in machine learning. Extensive hands-on experience with AI/ML frameworks (TensorFlow, PyTorch, JAX, scikit-learn). Deep expertise in Cloud Engineering (AWS, Azure, GCP) and Distributed Micro-service architecture. Experienced with Kubernetes ecosystem, including EKS, Helm, and custom operators. Background in High Performance Computing, ML Hardware Acceleration (e.g., GPU, TPU, RDMA), or ML for Systems. Strategic thinker with the ability to craft and drive a technical vision for maximum business impact. Demonstrated leadership in working effectively with engineers, data scientists, and ML practitioners. 
Preferred Qualifications, Capabilities, and Skills Strong coding skills and experience in developing large-scale AI/ML systems. Proven track record in contributing to and optimizing open-source ML frameworks. Recognized thought leader within the field of machine learning. Understanding & experience of AI/ML Platforms, LLMs, GenAI, and AI Agents. ABOUT US JPMorganChase, one of the oldest financial institutions, offers innovative financial solutions to millions of consumers, small businesses and many of the world’s most prominent corporate, institutional and government clients under the J.P. Morgan and Chase brands. Our history spans over 200 years and today we are a leader in investment banking, consumer and small business banking, commercial banking, financial transaction processing and asset management. We recognize that our people are our strength and the diverse talents they bring to our global workforce are directly linked to our success. We are an equal opportunity employer and place a high value on diversity and inclusion at our company. We do not discriminate on the basis of any protected attribute, including race, religion, color, national origin, gender, sexual orientation, gender identity, gender expression, age, marital or veteran status, pregnancy or disability, or any other basis protected under applicable law. We also make reasonable accommodations for applicants’ and employees’ religious practices and beliefs, as well as mental health or physical disability needs. Visit our FAQs for more information about requesting an accommodation. ABOUT THE TEAM Our professionals in our Corporate Functions cover a diverse range of areas from finance and risk to human resources and marketing. Our corporate teams are an essential part of our company, ensuring that we’re setting our businesses, clients, customers and employees up for success.
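The responsibilities above include architecting distributed AI/ML infrastructure for inference, training, and scheduling. Purely as an illustration of the distribution concept (not JPMorgan's platform or tooling), here is a minimal sketch that fans inference-style tasks out across workers with Ray; the stub scoring function and data are assumptions.

```python
# Minimal distributed-inference sketch with Ray (illustrative; the "model" is a stub).
# pip install ray
import ray

ray.init()  # Starts a local cluster; in production this would connect to a remote cluster.

@ray.remote
def score_batch(batch: list[float]) -> float:
    # Stand-in for model inference on one shard of data.
    return sum(x * 0.5 for x in batch)

batches = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
futures = [score_batch.remote(b) for b in batches]   # Scheduled across workers.
print(ray.get(futures))                              # Gather results.
ray.shutdown()
```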
Posted 1 week ago
3.0 - 5.0 years
0 Lacs
Delhi, India
On-site
Company Description
Monk Outsourcing is a digital marketing service provider company based in Delhi, India. We offer staffing solutions for US-based companies, software development, web development, content management solutions, and creative design services. Our team of experts works with modern technologies and tools to deliver web-based projects from concept to implementation. We are looking for a talented AI/ML Engineer to join our dynamic team and contribute to our exciting projects involving large language models (LLMs).
*Job Overview:* As an AI/ML Engineer specializing in generative AI applications, you will be responsible for developing and optimizing the entire machine learning pipeline. This includes data preprocessing, model training, fine-tuning, and deployment. You will work closely with data scientists, software engineers, and product managers to create efficient and scalable LLM models that meet our enterprise clients' needs.
*Key Responsibilities:*
• Design, implement, and maintain end-to-end machine learning pipelines for generative AI applications.
• Develop and fine-tune large language models (LLMs) to meet specific project requirements.
• Implement efficient data preprocessing and augmentation techniques to enhance model performance.
• Collaborate with cross-functional teams to define project requirements and deliver AI solutions that align with business objectives.
• Conduct experiments to evaluate model performance, using metrics and validation techniques to ensure high-quality results.
• Optimize model inference and deployment for scalability and efficiency in production environments.
• Stay updated with the latest advancements in AI/ML research and incorporate relevant innovations into our projects.
• Provide technical guidance and mentorship to junior team members.
*Required Skills and Qualifications:*
• Bachelor's or Master's degree in Computer Science, Data Science, Machine Learning, or a related field.
• 3-5 years of experience in machine learning, with a focus on generative AI and LLMs.
• Proficiency in programming languages such as Python, and experience with ML frameworks like TensorFlow, PyTorch, or similar.
• Strong understanding of NLP concepts, including text generation, prompting, and transformer-based architectures.
• Experience in building and deploying machine learning models in production environments.
• Knowledge of data preprocessing techniques, including text cleaning, tokenization, and augmentation.
• Familiarity with cloud platforms (AWS, GCP, Azure) and containerization technologies (Docker, Kubernetes) for scalable model deployment.
• Excellent problem-solving skills and the ability to work independently and collaboratively in a fast-paced environment.
• Strong communication skills, with the ability to explain complex technical concepts to non-technical stakeholders.
*Preferred Qualifications:*
• Experience with fine-tuning pre-trained LLMs such as GPT, BERT, or similar.
• Familiarity with MLOps practices and tools for continuous integration and deployment (CI/CD) of ML models.
• Understanding of ethical considerations and bias mitigation in AI models.
• Contributions to open-source projects or publications in AI/ML conferences/journals.
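The qualifications emphasize data preprocessing, text cleaning, and tokenization for LLM pipelines. Below is a hedged sketch of a cleaning step feeding a Hugging Face tokenizer; the cleaning rules and the bert-base-uncased checkpoint are assumptions chosen for illustration.

```python
# Minimal text-preprocessing + tokenization sketch (illustrative; cleaning rules
# and checkpoint name are assumptions). Requires transformers and torch installed.
import re
from transformers import AutoTokenizer

def clean(text: str) -> str:
    text = re.sub(r"<[^>]+>", " ", text)      # Strip HTML tags.
    text = re.sub(r"\s+", " ", text).strip()  # Collapse whitespace.
    return text.lower()

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

docs = ["  <p>Great product, fast delivery!</p> ", "Terrible support experience..."]
batch = tokenizer([clean(d) for d in docs],
                  padding=True, truncation=True, max_length=64, return_tensors="pt")

print(batch["input_ids"].shape)   # (batch_size, sequence_length)
```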
Posted 1 week ago
2.0 years
0 Lacs
Hyderābād
On-site
Company: Qualcomm India Private Limited Job Area: Engineering Group, Engineering Group > Software Engineering
General Summary: Job Description
Join the exciting Generative AI team at Qualcomm focused on integrating cutting-edge GenAI models on Qualcomm chipsets. The team uses Qualcomm chips’ extensive heterogeneous computing capabilities to allow inference of GenAI models on-device without a need for connection to the cloud. Our inference engine is designed to help developers run neural network models trained in a variety of frameworks on Snapdragon platforms at blazing speeds while still sipping the smallest amount of power. Utilize this power-efficient hardware and software stack to run Large Language Models (LLMs) and Large Vision Models (LVM) at near GPU speeds!
Responsibilities: In this role, you will spearhead the development and commercialization of the Qualcomm AI Runtime (QAIRT) SDK on Qualcomm SoCs. As an AI inferencing expert, you'll push the limits of performance from large models. Your mastery in deploying large C/C++ software stacks using best practices will be essential. You'll stay on the cutting edge of GenAI advancements, understanding LLMs/Transformers and the nuances of edge-based GenAI deployment. Most importantly, your passion for the role of edge in AI's evolution will be your driving force.
Requirements: Master’s/Bachelor’s degree in computer science or equivalent. 2-4 years of relevant work experience in software development. Strong understanding of Generative AI models – LLM, LVM, LMMs and building blocks (self-attention, cross attention, kv caching, etc.). Floating-point, fixed-point representations and quantization concepts. Experience with optimizing algorithms for AI hardware accelerators (like CPU/GPU/NPU). Strong in C/C++ programming, design patterns and OS concepts. Good scripting skills in Python. Excellent analytical and debugging skills. Good communication skills (verbal, presentation, written). Ability to collaborate across a globally diverse team and multiple interests.
Preferred Qualifications: Strong understanding of SIMD processor architecture and system design. Proficiency in object-oriented software development and familiarity with Linux and Windows environments. Strong background in kernel development for SIMD architectures. Familiarity with frameworks like llama.cpp, MLX, and MLC is a plus. Good knowledge of PyTorch, TFLite, and ONNX Runtime is preferred. Experience with parallel computing systems and languages like OpenCL and CUDA is a plus.
Minimum Qualifications: Bachelor's degree in Engineering, Information Systems, Computer Science, or related field and 2+ years of Software Engineering or related work experience. OR Master's degree in Engineering, Information Systems, Computer Science, or related field and 1+ year of Software Engineering or related work experience. OR PhD in Engineering, Information Systems, Computer Science, or related field. 2+ years of academic or work experience with a programming language such as C, C++, Java, Python, etc.
Applicants: Qualcomm is an equal opportunity employer. If you are an individual with a disability and need an accommodation during the application/hiring process, rest assured that Qualcomm is committed to providing an accessible process. You may e-mail disability-accomodations@qualcomm.com or call Qualcomm's toll-free number found here. Upon request, Qualcomm will provide reasonable accommodations to support individuals with disabilities to be able to participate in the hiring process. Qualcomm is also committed to making our workplace accessible for individuals with disabilities. (Keep in mind that this email address is used to provide reasonable accommodations for individuals with disabilities. We will not respond here to requests for updates on applications or resume inquiries). Qualcomm expects its employees to abide by all applicable policies and procedures, including but not limited to security and other requirements regarding protection of Company confidential information and other confidential and/or proprietary information, to the extent those requirements are permissible under applicable law.
To all Staffing and Recruiting Agencies: Our Careers Site is only for individuals seeking a job at Qualcomm. Staffing and recruiting agencies and individuals being represented by an agency are not authorized to use this site or to submit profiles, applications or resumes, and any such submissions will be considered unsolicited. Qualcomm does not accept unsolicited resumes or applications from agencies. Please do not forward resumes to our jobs alias, Qualcomm employees or any other company location. Qualcomm is not responsible for any fees related to unsolicited resumes/applications. If you would like more information about this role, please contact Qualcomm Careers.
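The requirements list floating-point and fixed-point representations and quantization concepts. As a conceptual, hedged sketch (NumPy only, unrelated to Qualcomm's runtime), the code below computes an asymmetric int8 scale and zero-point for a weight tensor and measures the round-trip error.

```python
# Conceptual int8 quantization sketch in NumPy (illustrates scale/zero-point math,
# not Qualcomm's inference engine).
import numpy as np

def quantize_int8(x: np.ndarray):
    # Asymmetric affine quantization: real_value = scale * (q - zero_point).
    x_min, x_max = float(x.min()), float(x.max())
    scale = (x_max - x_min) / 255.0 if x_max > x_min else 1.0
    zero_point = int(round(-x_min / scale)) - 128
    q = np.clip(np.round(x / scale) + zero_point, -128, 127).astype(np.int8)
    return q, scale, zero_point

def dequantize(q: np.ndarray, scale: float, zero_point: int) -> np.ndarray:
    return (q.astype(np.float32) - zero_point) * scale

weights = np.random.randn(4, 4).astype(np.float32)
q, scale, zp = quantize_int8(weights)
error = np.abs(weights - dequantize(q, scale, zp)).max()
print(f"scale={scale:.5f}, zero_point={zp}, max abs quantization error={error:.5f}")
```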
Posted 1 week ago
5.0 years
10 - 27 Lacs
India
On-site
About MostEdge
At MostEdge, our purpose is clear: Accelerate commerce and build sustainable, trusted experiences. With every byte of data, we strive to Protect Every Penny. Power Every Possibility. We empower retailers to make real-time, profitable decisions using cutting-edge AI, smart infrastructure, and operational excellence. Our platforms handle:
hundreds of thousands of sales transactions/hour
hundreds of vendor purchase invoices/hour
few hundred product updates/day
With systems built for 99.99999% uptime. We are building an AI-native commerce engine, and language models are at the heart of this transformation.
Role Overview
We are looking for an AI/ML Expert with deep experience in training and deploying Large Language Models (LLMs) to power MostEdge's next-generation operations, cost intelligence, and customer analytics platform. You will be responsible for fine-tuning domain-specific models using internal structured and unstructured data (product catalogs, invoices, chats, documents), embedding real-time knowledge through RAG pipelines, and enabling AI-powered interfaces that drive search, reporting, insight generation, and operational recommendations.
Scope & Accountability
What You Will Own: Fine-tune and deploy LLMs for product, vendor, and shopper-facing use cases. Design hybrid retrieval-augmented generation (RAG) pipelines with LangChain, FastAPI, and vector DBs (e.g., FAISS, Weaviate, Qdrant). Train models on internal datasets (sales, cost, product specs, invoices, support logs) using supervised fine-tuning and LoRA/QLoRA techniques. Orchestrate embedding pipelines, prompt tuning, and model evaluation across customer and field operations use cases. Deploy LLMs efficiently on RunPod, AWS, or GCP, optimizing for multi-GPU, low-latency inference. Collaborate with engineering and product teams to embed model outputs in dashboards, chat UIs, and retail systems.
What Success Looks Like: 90%+ accuracy on retrieval and reasoning tasks for product/vendor cost and invoice queries. <3s inference time across operational prompts, running on GPU-optimized containers. Full integration of LLMs with backend APIs, sales dashboards, and product portals. 75% reduction in manual effort across selected operational workflows.
Skills & Experience
Must-Have: 5+ years in AI/ML, with 2+ years working on LLMs or transformer architectures. Proven experience training or fine-tuning Mistral, LLaMA, Falcon, or similar open-source LLMs. Strong command over LoRA, QLoRA, PEFT, RAG, embeddings, and quantized inference. Familiarity with LangChain, HuggingFace Transformers, FAISS/Qdrant, and FastAPI for LLM orchestration. Experience deploying models on RunPod, AWS, or GCP using Docker + Kubernetes. Proficient in Python, PyTorch, and data preprocessing (structured and unstructured). Experience with ETL pipelines, multi-modal data, and real-time data integration.
Nice-to-Have: Experience with retail, inventory, or customer analytics systems. Knowledge of semantic search, OCR post-processing, or auto-tagging pipelines. Exposure to multi-tenant environments and secure model isolation for enterprise use.
How You Reflect Our Values
Lead with Purpose: You empower smarter decisions with AI-first operations. Build Trust: You make model behavior explainable, dependable, and fair. Own the Outcome: You train and optimize end-to-end pipelines from data to insights. Win Together: You partner across engineering, ops, and customer success teams. Keep It Simple: You design intuitive models, prompts, and outputs that drive action—not confusion.
Why Join MostEdge?
Shape how AI transforms commerce and operations at scale. Be part of a mission-critical, high-velocity, AI-first company. Build LLMs with purpose—connecting frontline data to real-time results.
Job Types: Full-time, Permanent Pay: ₹1,068,726.69 - ₹2,729,919.70 per year
Benefits: Health insurance, Life insurance, Paid sick time, Paid time off, Provident Fund. Schedule: Evening shift, Morning shift, US shift. Supplemental Pay: Performance bonus, Yearly bonus. Work Location: In person. Expected Start Date: 15/07/2025
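The must-have skills include LoRA/QLoRA fine-tuning with the Hugging Face ecosystem. The sketch below attaches a LoRA adapter to a small base model with the peft library; the gpt2 checkpoint and target modules are assumptions standing in for larger models like Mistral or LLaMA, and actual training with the transformers Trainer is omitted.

```python
# Minimal LoRA-adapter sketch with Hugging Face PEFT (illustrative; base checkpoint
# and target modules are assumptions, not MostEdge's production setup).
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("gpt2")   # Small stand-in for Mistral/LLaMA.

lora_cfg = LoraConfig(
    r=8,                       # Low-rank dimension.
    lora_alpha=16,             # Scaling factor.
    lora_dropout=0.05,
    target_modules=["c_attn"], # GPT-2's fused attention projection; differs per model.
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora_cfg)
model.print_trainable_parameters()   # Only the adapter weights are trainable.
# The wrapped model can now be fine-tuned with the standard transformers Trainer.
```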
Posted 1 week ago
0 years
0 Lacs
Delhi
On-site
SUMMARY
We are seeking a Machine Learning Analyst with a strong foundation in Engineering or a related Quantitative Sciences discipline. While prior experience in Machine Learning is not mandatory, candidates with exposure to Machine Learning and Deep Learning (if any) are expected to demonstrate a rigorous understanding of the concepts they are familiar with. The ideal candidate must be a quick learner and demonstrate strong analytical skills, clear thinking and structured problem-solving, strong quantitative aptitude, a willingness to learn, high self-motivation, and a diligent work ethic.
ABOUT US
Wadhwani AI is a nonprofit institute building and deploying applied AI solutions to solve critical issues in public health, agriculture, education, and urban development in underserved communities in the global south. We collaborate with governments, social sector organizations, academic and research institutions, and domain experts to identify real-world problems, and develop practical AI solutions to tackle these issues with the aim of making a substantial positive impact. We have over 30+ AI projects supported by leading philanthropies such as Bill & Melinda Gates Foundation and Google.org. With a team of over 200 professionals, our expertise encompasses AI/ML research and innovation, software engineering, domain knowledge, design and user research.
In the Press: Our Founder Donors are among the Top 100 AI Influencers. G20 India’s Presidency: AI Healthcare, Agriculture, & Education Solutions Showcased Globally. Unlocking the potentials of AI in Public Health. Wadhwani AI Takes an Impact-First Approach to Applying Artificial Intelligence - data.org. Winner of the H&M Foundation Global Change Award 2022. Indian Winners of the 2019 Google AI Impact Challenge, and the first in the Asia Pacific to host Google Fellow.
PRE-REQUISITES
ML Analyst position is open to all with prior training in Engineering or any related Quantitative Sciences discipline. No prior experience in Machine Learning or Deep Learning is required. Candidates with exposure to ML/DL (if any) are expected to have a clear and rigorous understanding of the concepts they are familiar with. Strong skills in data handling, and logical problem-solving. Demonstrates a quick learning ability, and a strong work ethic. Willingness to take on any task, learn new tools, and adapt to evolving project needs.
ROLES & RESPONSIBILITIES
Work closely with data to support the development of ML and DL solutions. Conduct experiments under guidance and report results reliably. Learn to derive insights from experimental outcomes and determine appropriate next steps. Prepare, curate, and analyse datasets for training and evaluation. Monitor incoming data streams and perform regular quality checks. Assist in training and inference of ML models, including deep learning architectures. Contribute to well-documented and maintainable codebases. Document work clearly and consistently with high standards. Communicate and present experimental findings and results clearly within the team. Learn and apply best practices across ML development, coding, documentation, and experimentation. Collaborate effectively with project teams to meet milestones and deliverables. Proactively seek help and feedback when needed. Work efficiently with tools like Unix, VS Code, GitHub, and Docker. Develop proficiency with common ML tools and libraries such as Pandas, Scikit-learn, PyTorch, Excel (pivot tables), Matplotlib, Weights & Biases.
DESIRED COMPETENCIES
Demonstrates curiosity, humility, and a strong motivation to learn and grow. Takes full ownership of tasks; highly diligent, detail-oriented, and accountable. Willing to engage in all types of work from data cleaning and exploration to debugging and tooling. Comfortable sitting with raw data to explore, understand and derive insights, and not just focused on modelling. Proactively seeks guidance and independently builds knowledge when needed. Approaches every task with a quality-first mindset; no task is considered beneath them. Identifies recurring patterns and abstracts them into reusable, generalisable workflows. Contributes across the entire ML lifecycle including data preparation, experimentation, and analysis. Selects and applies appropriate tools; builds efficient, reliable, and repeatable processes. Maintains a high standard of error-free work; reviews and validates work thoroughly. Collaborates effectively with cross-functional teams. Communicates clearly and constructively, with an emphasis on precision and clarity.
We are committed to promoting diversity and the principle of equal employment opportunity for all our employees and encourage qualified candidates to apply irrespective of religion or belief, ethnic or social background, gender, gender identity, and disability. If you have any questions, please email us at careers@wadhwaniai.org.
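The role expects growing proficiency with tools such as Pandas and Scikit-learn. Here is a minimal, hedged sketch of that everyday workflow on a synthetic tabular dataset: load data into a DataFrame, encode features, train a classifier, and report metrics. The columns and target are invented for illustration.

```python
# Minimal pandas + scikit-learn workflow sketch (illustrative; data is synthetic).
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "age": rng.integers(18, 70, 500),
    "visits": rng.integers(0, 20, 500),
    "region": rng.choice(["north", "south"], 500),
})
df["label"] = (df["visits"] > 10).astype(int)         # Toy target variable.

X = pd.get_dummies(df.drop(columns="label"))          # One-hot encode categoricals.
X_train, X_test, y_train, y_test = train_test_split(X, df["label"], random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```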
Posted 1 week ago
3.0 years
1 - 6 Lacs
Noida
On-site
Level AI was founded in 2019 and is a Series C startup headquartered in Mountain View, California. Level AI revolutionizes customer engagement by transforming contact centers into strategic assets. Our AI-native platform leverages advanced technologies such as Large Language Models to extract deep insights from customer interactions. By providing actionable intelligence, Level AI empowers organizations to enhance customer experience and drive growth. Consistently updated with the latest AI innovations, Level AI stands as the most adaptive and forward-thinking solution in the industry. Empowering contact center stakeholders with real-time insights, our tech facilitates data-driven decision-making for contact centers, enhancing service levels and agent performance. As a vital team member, you will work with cutting-edge technologies and play a high-impact role in shaping the future of AI-driven enterprise applications. You will work directly with people who've worked at Amazon, Facebook, Google, and other leading technology companies around the world. With Level AI, you will get to have fun, learn new things, and grow along with us. Ready to redefine possibilities? Join us!
We'd love to explore more about you if you have the following qualifications: B.E/B.Tech/M.E/M.Tech/PhD from tier 1 engineering institutes with relevant work experience with a top technology company in computer science or mathematics-related fields, with 3-5 years of experience in machine learning and NLP. Knowledge and practical experience in solving NLP problems in areas such as text classification, entity tagging, information retrieval, question-answering, natural language generation, clustering, etc. 3+ years of experience working with LLMs in large-scale environments. Expert knowledge of machine learning concepts and methods, especially those related to NLP, Generative AI, and working with LLMs. Knowledge and hands-on experience with Transformer-based Language Models like BERT, DeBERTa, Flan-T5, Mistral, Llama, etc. Deep familiarity with internals of at least a few Machine Learning algorithms and concepts. Experience with Deep Learning frameworks like Pytorch and common machine learning libraries like scikit-learn, numpy, pandas, NLTK, etc. Experience with ML model deployments using REST API, Docker, Kubernetes, etc. Knowledge of cloud platforms (AWS/Azure/GCP) and their machine learning services is desirable. Knowledge of basic data structures and algorithms. Knowledge of real-time streaming tools/architectures like Kafka, Pub/Sub is a plus.
Your role at Level AI includes, but is not limited to: Big picture: Understand customers’ needs, innovate and use cutting-edge Deep Learning techniques to build data-driven solutions. Work on NLP problems across areas such as text classification, entity extraction, summarization, generative AI, and others. Collaborate with cross-functional teams to integrate/upgrade AI solutions into the company’s products and services. Optimize existing deep learning models for performance, scalability, and efficiency. Build, deploy, and own scalable production NLP pipelines. Build post-deployment monitoring and continual learning capabilities. Propose suitable evaluation metrics and establish benchmarks. Keep abreast with SOTA techniques in your area and exchange knowledge with colleagues. Desire to learn, implement and work with the latest emerging model architectures, training and inference techniques, data curation pipelines, etc.
To learn more visit : https://thelevel.ai/ Funding : https://www.crunchbase.com/organization/level-ai LinkedIn : https://www.linkedin.com/company/level-ai/
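Much of the work described above involves NLP tasks like text classification on customer interactions. As a hedged illustration only (not Level AI's stack), the sketch below classifies two sample contact-center utterances with an off-the-shelf transformers pipeline; the checkpoint and example texts are assumptions.

```python
# Minimal transformer text-classification sketch (illustrative; the checkpoint is an
# assumed public model, not Level AI's production system).
from transformers import pipeline

classifier = pipeline("sentiment-analysis",
                      model="distilbert-base-uncased-finetuned-sst-2-english")

transcripts = [
    "The agent resolved my billing issue in two minutes, fantastic service.",
    "I was on hold for an hour and the problem is still not fixed.",
]
for text, pred in zip(transcripts, classifier(transcripts)):
    print(f"{pred['label']:<8} ({pred['score']:.2f})  {text}")
```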
Posted 1 week ago