35.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Job Description

F-Secure makes every digital moment more secure, for everyone. For over 35 years, we’ve led the cyber security industry, protecting tens of millions of people online together with our 200+ service provider partners. We value our Fellows' individuality, with an inclusive environment where diversity drives innovation and growth. What makes you unique is what we value – be yourself, that is (y)our greatest asset. Founded in Finland, F-Secure has offices in Europe, North America and Asia Pacific.

About The Role

We are looking for skilled Machine Learning Engineers to join our Technology team in Bengaluru! At F-Secure, we're developing cutting-edge AI-powered cybersecurity defenses that protect millions of users globally. Our ML models operate in dynamic environments where threat actors continuously evolve their techniques. We're seeking a motivated individual to perform in-depth analysis of data and machine learning models, develop and implement models using both classical and modern approaches, and optimize models for performance and latency. This is a fantastic opportunity to enhance your skills in a real-world cybersecurity context with significant impact.

This role is located in Bengaluru, India. You can choose whether you work at our Bengaluru office or in a hybrid mode from your home office. We hope you are able to join us for common gatherings at the Bengaluru office when needed.

Key Responsibilities

Perform in-depth analysis of data and machine learning models to identify insights and areas of improvement.
Develop and implement models using both classical machine learning techniques and modern deep learning approaches.
Deploy machine learning models into production, ensuring robust MLOps practices including CI/CD pipelines, model monitoring, and drift detection (a brief sketch follows this listing).
Conduct fine-tuning and integrate Large Language Models (LLMs) to meet specific business or product requirements.
Optimize models for performance and latency, including the implementation of caching strategies where appropriate.
Collaborate cross-functionally with data scientists, engineers, and product teams to deliver end-to-end ML solutions.

What are we looking for?

Prior experience using various statistical techniques to derive important insights and trends.
Proven experience in machine learning model development and analysis using classical and neural-network-based approaches.
Strong understanding of LLM architecture, usage, and fine-tuning techniques.
Solid understanding of statistics, data preprocessing, and feature engineering.
Proficiency in Python and popular ML libraries (scikit-learn, PyTorch, TensorFlow, etc.).
Strong debugging and optimization skills for both training and inference pipelines.
Familiarity with data formats and processing tools (Pandas, Spark, Dask).
Experience working with transformer-based models (e.g., BERT, GPT) and the Hugging Face ecosystem.

Additional Nice-to-haves

Experience with MLOps tools (e.g., MLflow, Kubeflow, SageMaker, or similar).
Experience with monitoring tools (Prometheus, Grafana, or custom solutions for ML metrics).
Familiarity with cloud platforms (SageMaker, AWS, GCP, Azure) and containerization (Docker, Kubernetes).
Hands-on experience with MLOps practices and tools for deployment, monitoring, and drift detection.
Exposure to distributed training and model parallelism techniques.
Prior experience in A/B testing ML models in production.

What will you get from us?
You will work together with experienced and enthusiastic colleagues, and within F-Secure you will find some of the best minds in the cyber security industry. We actively encourage our Fellows to grow and develop within F-Secure, and in your career here you can find yourself contributing to any number of our other products and teams. You decide what to make of this role, what your priorities are, and how you organize your work for the best benefit to us all. We offer interesting challenges and a competitive compensation model with a wide range of benefits. You get a chance to develop yourself professionally in an international and highly motivated team serving our customers by providing world-class security, privacy and uncensored access to information online. You get to work in a flexible, agile, and dynamic working environment that supports individual needs. Giving our people both support and the opportunity to be in charge of their own work is in our DNA. We are in a unique phase of our 30-year history, and with curiosity and excitement in the air we see no limits to building a strong and fruitful career with us!

A security vetting may be conducted for the selected candidate in accordance with our employment process.
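The drift-detection responsibility referenced in this listing is not described further in the posting. Purely as an illustration (the data, bin count and alert threshold below are assumptions, not F-Secure specifics), a Population Stability Index check over a model-score distribution is one common way to implement it:

```python
import numpy as np

def population_stability_index(expected, observed, bins=10):
    """Compare a score distribution seen at training time with the live one;
    PSI above roughly 0.2 is commonly treated as significant drift."""
    # Bin edges are taken from the reference (training) data.
    edges = np.histogram_bin_edges(expected, bins=bins)
    exp_cnt, _ = np.histogram(expected, bins=edges)
    obs_cnt, _ = np.histogram(observed, bins=edges)
    # Convert counts to proportions, avoiding zero divisions.
    exp_pct = np.clip(exp_cnt / exp_cnt.sum(), 1e-6, None)
    obs_pct = np.clip(obs_cnt / obs_cnt.sum(), 1e-6, None)
    return float(np.sum((obs_pct - exp_pct) * np.log(obs_pct / exp_pct)))

# Example: scores logged at training time vs. scores observed in production.
rng = np.random.default_rng(0)
train_scores = rng.normal(0.40, 0.10, 10_000)
live_scores = rng.normal(0.55, 0.12, 10_000)   # shifted distribution
psi = population_stability_index(train_scores, live_scores)
if psi > 0.2:  # assumed alerting threshold
    print(f"Drift suspected (PSI={psi:.3f}) - raise alert / trigger retraining")
```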
Posted 3 weeks ago
3.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Description

We are part of the India & Emerging Stores Customer Fulfilment Experience Org. The team's mission is to address unique customer requirements and the increasing associated costs/abuse of returns and rejects for Emerging Stores. Our team implements tech solutions that reduce the net cost of concessions/refunds - this includes buyer and seller abuse, costs associated with return/reject transportation, cost of contacts and operations cost at return centers. We have a huge opportunity to create a legacy, and our Legacy Statement is to “transform ease and quality of living in India, thereby enabling its potential in the 21st century”. We also believe that we have an additional responsibility to “help Amazon become truly global in its perspective and innovations” by creating global best-in-class products/platforms that can serve our customers worldwide.

This is an opportunity to join our mission to build tech solutions that empower sellers to delight the next billion customers. You will be responsible for building new system capabilities from the ground up for strategic business initiatives. If you feel excited by the challenge of setting the course for large company-wide initiatives, building and launching customer-facing products in IN and other emerging markets, this may be the next big career move for you. We are building systems which can scale across multiple marketplaces and are at the state of the art in automated, large-scale e-commerce business. We are looking for an SDE to deliver capabilities across marketplaces. We operate in a high-performance agile ecosystem where SDEs, Product Managers and Principals frequently connect with end customers of our products. Our SDEs stay connected with customers through seller/FC/Delivery Station visits and customer anecdotes. This allows our engineers to significantly influence the product roadmap, contribute to PRFAQs and create disproportionate impact through the tech they deliver. We offer technology leaders a once-in-a-lifetime opportunity to transform billions of lives across the planet through their tech innovation.

As an engineer, you will help with the design, implementation, and launch of many key product features. You will get an opportunity to work on a wide range of technologies (including AWS OpenSearch, Lambda, ECS, SQS, DynamoDB, Neptune, etc.) and apply new technologies to solving customer problems. You will have an influence on defining product features, drive operational excellence, and spearhead the best practices that enable a quality product. You will get to work with highly skilled and motivated engineers who are already contributing to building high-scale and highly available systems. If you are looking for an opportunity to work on world-leading technologies, would like to build creative technology solutions that positively impact hundreds of millions of customers, and relish large ownership and diverse technologies, join our team today!

As An Engineer You Will Be Responsible For

Ownership of product/feature end-to-end for all phases, from development to production.
Ensuring the developed features are scalable and highly available with no quality concerns.
Working closely with senior engineers to refine the design and implementation.
Management and execution against project plans and delivery commitments.
Assisting directly and indirectly in the continual hiring and development of technical talent.
Creating and executing appropriate quality plans, project plans, test strategies and processes for development activities in concert with business and project management efforts.
Contributing intellectual property through patents.

The candidate should be an engineer who is passionate about delivering experiences that delight customers and about creating robust solutions. They should be able to commit to and own deliveries end-to-end.

About The Team

Team: IES NCRC Tech

Mission: We own programs to prevent customer abuse for IN & emerging marketplaces. We detect abusive customers for known abuse patterns and apply interventions at different stages of the buyer's journey, such as checkout, pre-fulfillment, shipment and customer contact (customer service). We closely partner with the International Machine Learning team to build ML-based solutions for the above interventions.

Vision: Our goal is to automate detection of new abuse patterns and act quickly to minimize financial loss to Amazon. This acts as a deterrent for abusers while building trust for genuine customers. We use machine-learning-based models to automate abuse detection in a scalable & efficient manner.

Technologies: The ML models leveraged by the team range from regression-based (XGBoost) to deep-learning models (RNN, CNN) and use frameworks like PyTorch, TensorFlow and Keras for training & inference. Productionization of ML models for real-time, low-latency, high-traffic use cases poses unique challenges, which in turn makes the work exciting. In terms of tech stack, multiple AWS technologies are used, e.g. SageMaker, ECS, Lambda, Elasticsearch, Step Functions, AWS Batch, DynamoDB, S3, CDK (for infra) and graph databases, and the team is open to adopting new technologies as per use case.

Basic Qualifications

3+ years of non-internship professional software development experience
2+ years of non-internship design or architecture (design patterns, reliability and scaling) experience with new and existing systems
Experience programming with at least one software programming language

Preferred Qualifications

3+ years of full software development life cycle experience, including coding standards, code reviews, source control management, build processes, testing, and operations
Bachelor's degree in computer science or equivalent

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.

Company - ADCI - Haryana
Job ID: A3024781
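The team above describes XGBoost-based abuse detection served at low latency. As an illustrative sketch only (the features, data and threshold are invented for the example and are not taken from the posting), training and scoring such a classifier on an imbalanced label might look like this:

```python
import numpy as np
import xgboost as xgb
from sklearn.model_selection import train_test_split
from sklearn.metrics import average_precision_score

# Hypothetical order-level features; real features are not described in the posting.
rng = np.random.default_rng(42)
X = rng.normal(size=(20_000, 6))  # e.g. return rate, refund count, account age, ...
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=20_000) > 1.5).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

model = xgb.XGBClassifier(
    n_estimators=200, max_depth=4, learning_rate=0.1,
    scale_pos_weight=(y_tr == 0).sum() / max((y_tr == 1).sum(), 1),  # abuse is rare
    eval_metric="aucpr",
)
model.fit(X_tr, y_tr)

scores = model.predict_proba(X_te)[:, 1]
print("PR-AUC:", round(average_precision_score(y_te, scores), 3))

# At serving time a single order would be scored and an intervention applied
# above an assumed threshold (the real policy would be business-defined).
if model.predict_proba(X_te[:1])[0, 1] > 0.8:
    print("flag order for pre-fulfillment review")
```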
Posted 3 weeks ago
3.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Description

We are part of the India & Emerging Stores Customer Fulfilment Experience Org. The team's mission is to address unique customer requirements and the increasing associated costs/abuse of returns and rejects for Emerging Stores. Our team implements tech solutions that reduce the net cost of concessions/refunds - this includes buyer and seller abuse, costs associated with return/reject transportation, cost of contacts and operations cost at return centers. We have a huge opportunity to create a legacy, and our Legacy Statement is to “transform ease and quality of living in India, thereby enabling its potential in the 21st century”. We also believe that we have an additional responsibility to “help Amazon become truly global in its perspective and innovations” by creating global best-in-class products/platforms that can serve our customers worldwide.

This is an opportunity to join our mission to build tech solutions that empower sellers to delight the next billion customers. You will be responsible for building new system capabilities from the ground up for strategic business initiatives. If you feel excited by the challenge of setting the course for large company-wide initiatives, building and launching customer-facing products in IN and other emerging markets, this may be the next big career move for you. We are building systems which can scale across multiple marketplaces and are at the state of the art in automated, large-scale e-commerce business. We are looking for an SDE to deliver capabilities across marketplaces. We operate in a high-performance agile ecosystem where SDEs, Product Managers and Principals frequently connect with end customers of our products. Our SDEs stay connected with customers through seller/FC/Delivery Station visits and customer anecdotes. This allows our engineers to significantly influence the product roadmap, contribute to PRFAQs and create disproportionate impact through the tech they deliver. We offer technology leaders a once-in-a-lifetime opportunity to transform billions of lives across the planet through their tech innovation.

As an engineer, you will help with the design, implementation, and launch of many key product features. You will get an opportunity to work on a wide range of technologies (including AWS OpenSearch, Lambda, ECS, SQS, DynamoDB, Neptune, etc.) and apply new technologies to solving customer problems. You will have an influence on defining product features, drive operational excellence, and spearhead the best practices that enable a quality product. You will get to work with highly skilled and motivated engineers who are already contributing to building high-scale and highly available systems. If you are looking for an opportunity to work on world-leading technologies, would like to build creative technology solutions that positively impact hundreds of millions of customers, and relish large ownership and diverse technologies, join our team today!

As An Engineer You Will Be Responsible For

Ownership of product/feature end-to-end for all phases, from development to production.
Ensuring the developed features are scalable and highly available with no quality concerns.
Working closely with senior engineers to refine the design and implementation.
Management and execution against project plans and delivery commitments.
Assisting directly and indirectly in the continual hiring and development of technical talent.
Creating and executing appropriate quality plans, project plans, test strategies and processes for development activities in concert with business and project management efforts.
Contributing intellectual property through patents.

The candidate should be an engineer who is passionate about delivering experiences that delight customers and about creating robust solutions. They should be able to commit to and own deliveries end-to-end.

About The Team

Team: IES NCRC Tech

Mission: We own programs to prevent customer abuse for IN & emerging marketplaces. We detect abusive customers for known abuse patterns and apply interventions at different stages of the buyer's journey, such as checkout, pre-fulfillment, shipment and customer contact (customer service). We closely partner with the International Machine Learning team to build ML-based solutions for the above interventions.

Vision: Our goal is to automate detection of new abuse patterns and act quickly to minimize financial loss to Amazon. This acts as a deterrent for abusers while building trust for genuine customers. We use machine-learning-based models to automate abuse detection in a scalable & efficient manner.

Technologies: The ML models leveraged by the team range from regression-based (XGBoost) to deep-learning models (RNN, CNN) and use frameworks like PyTorch, TensorFlow and Keras for training & inference. Productionization of ML models for real-time, low-latency, high-traffic use cases poses unique challenges, which in turn makes the work exciting. In terms of tech stack, multiple AWS technologies are used, e.g. SageMaker, ECS, Lambda, Elasticsearch, Step Functions, AWS Batch, DynamoDB, S3, CDK (for infra) and graph databases, and the team is open to adopting new technologies as per use case.

Basic Qualifications

3+ years of non-internship professional software development experience
2+ years of non-internship design or architecture (design patterns, reliability and scaling) experience with new and existing systems
Experience programming with at least one software programming language

Preferred Qualifications

3+ years of full software development life cycle experience, including coding standards, code reviews, source control management, build processes, testing, and operations
Bachelor's degree in computer science or equivalent

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.

Company - ADCI - Haryana
Job ID: A3024851
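The stack described above also includes SQS, SageMaker and Lambda for real-time interventions. Purely as an assumed illustration of wiring such pieces together (the queue URL, endpoint name, payload format and threshold are hypothetical, not Amazon's actual design), a minimal boto3 consumer that scores queued order events against a deployed model could look like this:

```python
import json
import boto3

sqs = boto3.client("sqs")
smr = boto3.client("sagemaker-runtime")

QUEUE_URL = "https://sqs.ap-south-1.amazonaws.com/123456789012/order-events"  # hypothetical
ENDPOINT = "abuse-detection-prod"                                             # hypothetical

def poll_and_score():
    # Long-poll the queue for new order events.
    resp = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=10, WaitTimeSeconds=10)
    for msg in resp.get("Messages", []):
        event = json.loads(msg["Body"])
        # The endpoint is assumed to accept a CSV row of features and return a score.
        result = smr.invoke_endpoint(
            EndpointName=ENDPOINT,
            ContentType="text/csv",
            Body=",".join(str(v) for v in event["features"]),
        )
        score = float(result["Body"].read())
        if score > 0.8:  # assumed intervention threshold
            print(f"order {event['order_id']}: flag for review (score={score:.2f})")
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])

if __name__ == "__main__":
    poll_and_score()
```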
Posted 3 weeks ago
2.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
About Us

As a Fortune 50 company with more than 400,000 team members worldwide, Target is an iconic brand and one of America's leading retailers. At Target, we have a timeless purpose and a proven strategy, and that hasn’t happened by accident. Some of the best minds from diverse backgrounds come together at Target to redefine retail in an inclusive learning environment that values people and delivers world-class outcomes. That winning formula is especially apparent in Bengaluru, where Target in India operates as a fully integrated part of Target’s global team and has more than 4,000 team members supporting the company’s global strategy and operations.

Joining Target means promoting a culture of mutual care and respect and striving to make the most meaningful and positive impact. Becoming a Target team member means joining a community that values diverse backgrounds. We believe your unique perspective is important, and you'll build relationships by being authentic and respectful. At Target, inclusion is part of our core values. We aim to create equitable experiences for all, regardless of their dimensions of difference. As an equal opportunity employer, Target provides diverse opportunities for everyone to grow and win.

Behind one of the world’s best-loved brands is a uniquely capable and brilliant team of data scientists, engineers and analysts. The Target Data & Analytics team creates the tools and data products to sustainably educate and enable our business partners to make great data-based decisions at Target. We help develop the technology that personalizes the guest experience, from product recommendations to relevant ad content. We’re also the source of the data and analytics behind Target’s Internet of Things (IoT) applications, fraud detection, supply chain optimization and demand forecasting. We play a key role in identifying the test-and-measure or A/B test opportunities that continuously help Target improve the guest experience, whether guests love to shop in stores or at Target.com.

About This Career

This role is for someone passionate about data, analysis, metrics development and feature experimentation, and about applying them to improve business strategies and support the GSCL operations team.

Develop, model and apply analytical best practices while upskilling and coaching others on new and emerging technologies, raising the bar for performance in analysis by sharing well-documented analytical solutions with others (clients, peers, etc.).
Drive a continuous-improvement mindset by seeking out new ways to solve problems through formal trainings, peer interactions and industry publications, continually improving technical skills, implementing best practices and building analytical acumen.
Be an expert in a specific business domain; be self-directed and drive execution towards outcomes; understand business inter-dependencies; conduct detailed problem solving; remediate obstacles; use independent judgement and decision-making to deliver as per product scope; provide inputs to establish product/project timelines.
Participate in learning forums, or be a buddy, to help increase awareness and adoption of current technical topics relevant to the analytics competency, e.g.
tools (R, Python) and exploratory & descriptive techniques (basic statistics and modelling).
Champion participation in internal meetups and hackathons; present at internal conferences relevant to the analytics competency.
Contribute to the evaluation and design of relevant technical guides and tools to hire great talent, by partnering with talent acquisition.
Participate in Agile ceremonies to keep the team up to date on task progress, as needed.
Develop and analyse data reports/dashboards/pipelines; perform RCA and troubleshooting of issues that arise, using exploratory and systemic techniques.

About You

B.E/B.Tech (2-3 years of relevant experience), M.Tech, M.Sc., MCA (2+ years of relevant experience)
Candidates with strong domain knowledge and relevant experience in Supply Chain / Retail analytics would be highly preferred
Strong data understanding: inference of patterns, root cause, statistical analysis, forecasting/predictive modelling, etc.
Advanced SQL experience writing complex queries
Hands-on experience with analytics tools: Hadoop, Hive, Spark, Python, R, Domo and/or equivalent technologies
Experience working with product teams and business leaders to develop product roadmaps and feature development
Able to support conclusions with analytical evidence using descriptive stats, inferential stats and data visualizations
Strong analytical, problem-solving, and conceptual skills
Demonstrated ability to work with ambiguous problem definitions, recognize dependencies and deliver impactful solutions through logical problem solving and technical ideation
Excellent communication skills, with the ability to speak to both business and technical teams and translate ideas between them
Intellectually curious, high energy and a strong work ethic
Comfort with ambiguity and open-ended problems in support of supply chain operations

Useful Links
Life at Target - https://india.target.com/
Benefits - https://india.target.com/life-at-target/workplace/benefits
Culture - https://india.target.com/life-at-target/belonging
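The listing above asks for supporting conclusions with inferential statistics and for identifying A/B-test opportunities. As a generic illustration (the conversion counts are invented, not Target data), a two-proportion z-test comparing a control and a test experience could be sketched like this:

```python
from math import sqrt
from scipy.stats import norm

def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference in conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - norm.cdf(abs(z)))
    return p_a, p_b, z, p_value

# Invented example: 50,000 guests per experiment arm.
p_a, p_b, z, p = two_proportion_ztest(conv_a=2400, n_a=50_000, conv_b=2550, n_b=50_000)
print(f"control={p_a:.2%}, test={p_b:.2%}, z={z:.2f}, p-value={p:.4f}")
# A small p-value would usually be read as a significant lift, subject to the
# experiment design (sample-size planning, multiple-testing corrections, etc.).
```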
Posted 3 weeks ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Description and Requirements

"At BMC trust is not just a word - it's a way of life!"

Hybrid

We are an award-winning, equal opportunity, culturally diverse, fun place to be. Giving back to the community drives us to be better every single day. Our work environment allows you to balance your priorities, because we know you will bring your best every day. We will champion your wins and shout them from the rooftops. Your peers will inspire, drive, support you, and make you laugh out loud! We help our customers free up time and space to become an Autonomous Digital Enterprise that conquers the opportunities ahead - and we are relentless in the pursuit of innovation!

The IZOT product line includes BMC’s Intelligent Z Optimization & Transformation products, which help the world’s largest companies to monitor and manage their mainframe systems. The modernization of the mainframe is the beating heart of our product line, and we achieve this goal by developing products that improve the developer experience, mainframe integration, the speed of application development, code quality and application security, while reducing operational costs and risks. We acquired several companies along the way, and we continue to grow, innovate, and perfect our solutions on an ongoing basis.

BMC AMI Cost and Capacity Management - As digitization increases, so does the complexity of managing mainframe capacity and costs. The BMC AMI Cost and Capacity portfolio increases availability, predicts capacity bottlenecks before they occur, and optimizes mainframe software costs.

Here is how, through this exciting role, YOU will contribute to BMC's and your own success: We are seeking a Python AI/ML Developer to join a highly motivated team responsible for developing and maintaining innovation for mainframe capacity and cost management.

As an Application Developer at BMC, you will be responsible for:

Developing and integrating AI/ML models with a focus on Generative AI (GenAI), Retrieval-Augmented Generation (RAG), and vector databases to enhance intelligent decision-making.
Building scalable AI pipelines for real-time and batch inference, optimizing model performance, and deploying AI-driven applications.
Implementing RAG-based architectures using LLMs (Large Language Models) for intelligent search, chatbot development, and knowledge management (a sketch of this pattern follows the listing).
Utilizing vector databases (e.g., FAISS, ChromaDB, Weaviate, Pinecone) to enable efficient similarity search and AI-driven recommendations.
Developing modern web applications using Angular to create interactive and AI-powered user interfaces.
Developing APIs and microservices to expose AI/ML models for enterprise applications.
Processing and analyzing structured & unstructured data, including text, images, and time-series data for AI/ML applications.
Optimizing ML models for performance and scalability, ensuring low latency and high availability in production.
Staying updated with advancements in GenAI, NLP, transformers, and deep learning architectures to drive innovation.
Collaborating with cross-functional teams to integrate AI capabilities into existing applications and workflows.

To ensure you’re set up for success, you will bring the following skillset & experience:

Strong proficiency in Python and AI/ML frameworks like TensorFlow, PyTorch, Hugging Face Transformers, LangChain.
Experience with vector databases (FAISS, ChromaDB, Weaviate, Pinecone) for semantic search and embeddings.
Hands-on expertise in LLMs (GPT, LLaMA, Mistral, Claude, etc.) and fine-tuning/customizing models.
Proficiency in Retrieval-Augmented Generation (RAG) and prompt engineering for AI-driven applications.
Experience with Angular for developing interactive web applications.
Experience with RESTful APIs and FastAPI, Flask, or Django for AI model serving.
Working knowledge of SQL and NoSQL databases for AI/ML applications.
Hands-on experience with Git/GitHub, Docker, and Kubernetes for AI/ML model deployment.

Whilst these are nice to have, our team can help you develop the following skills:

Experience with knowledge graphs, semantic search, and enterprise AI applications.
Additional experience with .NET v7+ and cross-platform .NET development would be helpful.
Exposure to IBM z/OS mainframe environments and AI-driven optimization for legacy systems.
Background in statistical data analysis, reinforcement learning, or advanced ML techniques.

Our commitment to you!

BMC’s culture is built around its people. We have 6000+ brilliant minds working together across the globe. You won’t be known just by your employee number, but for your true authentic self. BMC lets you be YOU!

If, after reading the above, you’re unsure whether you meet the qualifications of this role but are deeply excited about BMC and this team, we still encourage you to apply! We want to attract talent from diverse backgrounds and experiences to ensure we face the world together with the best ideas!

BMC is committed to equal opportunity employment regardless of race, age, sex, creed, color, religion, citizenship status, sexual orientation, gender, gender expression, gender identity, national origin, disability, marital status, pregnancy, disabled veteran or status as a protected veteran. If you need a reasonable accommodation for any part of the application and hiring process, visit the accommodation request page.

BMC Software maintains a strict policy of not requesting any form of payment in exchange for employment opportunities, upholding a fair and ethical hiring process.

At BMC we believe in pay transparency and have set the midpoint of the salary band for this role at 4,166,900 INR. Actual salaries depend on a wide range of factors that are considered in making compensation decisions, including but not limited to skill sets; experience and training; licensure and certifications; and other business and organizational needs. The salary listed is just one component of BMC's employee compensation package. Other rewards may include a variable plan and country-specific benefits. We are committed to ensuring that our employees are paid fairly and equitably, and that we are transparent about our compensation practices.

(Returnship@BMC) Had a break in your career? No worries. This role is eligible for candidates who have taken a break in their career and want to re-enter the workforce. If your expertise matches the above job, visit https://bmcrecruit.avature.net/returnship to know more and how to apply.
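The RAG and vector-database responsibilities referenced in this listing can be illustrated with a minimal retrieval sketch. This is an assumption-laden toy (the embedding model, documents and prompt format are placeholders, and the generation call is stubbed out), not BMC's implementation:

```python
import faiss
from sentence_transformers import SentenceTransformer

# Toy corpus standing in for product documentation.
docs = [
    "Capacity reports are refreshed every night at 02:00 UTC.",
    "Software cost optimization uses monthly MSU consumption data.",
    "Alerts are raised when forecast utilization exceeds 85 percent.",
]

embedder = SentenceTransformer("all-MiniLM-L6-v2")   # assumed embedding model
doc_vecs = embedder.encode(docs, convert_to_numpy=True).astype("float32")
faiss.normalize_L2(doc_vecs)                         # cosine similarity via inner product

index = faiss.IndexFlatIP(doc_vecs.shape[1])
index.add(doc_vecs)

def retrieve(question: str, k: int = 2):
    q = embedder.encode([question], convert_to_numpy=True).astype("float32")
    faiss.normalize_L2(q)
    _, ids = index.search(q, k)
    return [docs[i] for i in ids[0]]

question = "When do capacity reports update?"
context = "\n".join(retrieve(question))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
# The prompt would then be passed to an LLM of choice; that call is omitted here.
print(prompt)
```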
Posted 3 weeks ago
0 years
0 Lacs
New Delhi, Delhi, India
Remote
AI Engineer Intern (Remote) — Optivus Technologies

Location: Remote
Type: Internship (Full-Time Preferred)
Duration: 3–6 months
Stipend: Competitive
Start Date: Immediate or August 2025

About Optivus Technologies

Optivus Technologies is a next-generation AI consulting and product firm dedicated to empowering businesses through intelligent, scalable, and practical AI solutions. Our mission is clear: to help organizations of all sizes — especially small and mid-sized enterprises — harness the full transformative power of artificial intelligence without complexity or prohibitive costs. We build plug-and-play GenAI products and automation tools that integrate seamlessly into existing business environments, driving real-world outcomes through cutting-edge technologies such as large language models, intelligent process automation, and advanced analytics. We believe innovation isn’t just for large enterprises — every business deserves access to technology that fuels smarter decisions and faster growth. Learn more: www.optivustechnologies.com

About the Role

We're looking for a highly driven AI Engineer Intern who thrives in startup environments and wants to build impactful, real-world AI applications. As an intern at Optivus, you’ll be an integral part of our engineering team — working hands-on with large language models (LLMs), machine learning pipelines, and scalable web technologies. You won’t just be building prototypes; you’ll help ship production-ready features for real clients and get deep exposure to the full AI product lifecycle.

What You’ll Do

Build and test ML/LLM pipelines for various business use cases
Integrate AI models and APIs into functional web applications (a minimal sketch follows this listing)
Develop and optimize backend services using Python, FastAPI, or Node.js
Work with vector databases, embedding models, and prompt orchestration tools
Contribute to end-to-end system architecture — from data handling to inference and UI integration
Write clean, modular, and well-documented code using Git workflows
Stay on top of new advancements in generative AI, LLMs, and MLOps
Collaborate closely with the founding team on building deployable, scalable AI tools

What We’re Looking For

Strong foundation in Python and machine learning fundamentals
Experience with frameworks like PyTorch, TensorFlow, or Hugging Face Transformers
Familiarity with LLM tools such as LangChain, OpenAI API, or similar
Ability to build and consume REST APIs
Understanding of vector stores (e.g., FAISS, Pinecone, Weaviate)
Bonus: Experience with full-stack frameworks (React/Next.js) or backend dev (FastAPI, Node.js)
Excellent problem-solving ability and willingness to learn fast
A builder mindset — you’re excited to ship, iterate, and improve

What You’ll Gain

Real-world experience building advanced AI products from the ground up
Direct mentorship from engineers and founders
High ownership and fast learning in a startup environment
Option for a pre-placement offer (PPO) based on performance
A chance to shape the future of AI with a company that’s just getting started

How to Apply

Apply with your resume + GitHub/portfolio link + a short note on why you want to join Optivus.
Send to: advik@optivustechnologies.com
Subject line: AI Engineer Internship Application – [Your Name]
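As referenced in the "What You'll Do" list, integrating an LLM API behind a small backend service is central to this internship. A minimal, hypothetical FastAPI sketch (the model name, prompt and route are placeholders; the API key is read from the environment) might look like this:

```python
import os
from fastapi import FastAPI
from pydantic import BaseModel
from openai import OpenAI

app = FastAPI()
client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])  # key supplied via environment

class AskRequest(BaseModel):
    question: str

@app.post("/ask")
def ask(req: AskRequest):
    # Forward the user question to a chat model and return the text answer.
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": "You are a concise assistant for business users."},
            {"role": "user", "content": req.question},
        ],
    )
    return {"answer": resp.choices[0].message.content}

# Run locally with:  uvicorn app:app --reload
```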
Posted 3 weeks ago
0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
About Us

Zycus is a pioneer in Cognitive Procurement software and has been a trusted partner of choice for large global enterprises for two decades. Zycus has been consistently recognized by Gartner, Forrester, and other analysts for its Source to Pay integrated suite. Zycus powers its S2P software with the revolutionary Merlin AI Suite. Merlin AI takes over the tactical tasks and empowers procurement and AP officers to focus on strategic projects; it offers data-driven actionable insights for quicker and smarter decisions, and its conversational AI offers a B2C-type user experience to end users. Zycus helps enterprises drive real savings, reduce risks, and boost compliance, and its seamless, intuitive, and easy-to-use user interface ensures high adoption and value across the organization. Start your #CognitiveProcurement journey with us, as you are #MeantforMore

We Are An Equal Opportunity Employer: Zycus is committed to providing equal opportunities in employment and creating an inclusive work environment. We do not discriminate against applicants on the basis of race, color, religion, gender, sexual orientation, national origin, age, disability, or any other legally protected characteristic. All hiring decisions will be based solely on qualifications, skills, and experience relevant to the job requirements.

Job Description

We seek an innovative AI Engineer (2-5 years of experience) to join our team and lead the development of scalable solutions using open-source technologies, LLM APIs, and advanced AI techniques. The ideal candidate will excel in designing RAG, Graph RAG, and Agent Systems with function calling, and in fine-tuning/customizing LLMs. Proficiency in hosting open-source models (e.g., Llama 2, Mistral) and integrating APIs (OpenAI, Anthropic, etc.) is critical, along with experience in Python frameworks like FastAPI/Flask for production-grade deployments.

Key Responsibilities:

Architect, build, and optimize AI solutions using open-source models (e.g., Hugging Face, Ollama) and third-party LLM APIs.
Design and implement advanced techniques including RAG, GraphRAG, Agent Systems with orchestration/function calling, and fine-tuning/prompt-tuning of LLMs.
Deploy and manage self-hosted open-source models (e.g., via vLLM, TensorRT-LLM) with scalable APIs.
Collaborate with teams to integrate AI/ML solutions into production systems using FastAPI, Flask, or similar frameworks.
Develop automation pipelines for data retrieval, preprocessing, and model evaluation, ensuring alignment with business use cases.
Stay ahead of AI trends (e.g., open-source LLM advancements, cost-efficient scaling) and drive strategic adoption.
Ensure robust monitoring, testing, and documentation of systems for reliability and reproducibility.

Five Reasons Why You Should Join Zycus

Cloud Product Company: We are a Cloud SaaS company, and our products are created using the latest technologies like ML and AI. Our UI is in AngularJS, and we are developing our mobile apps using React.
A Market Leader: Zycus is recognized by Gartner (the world’s leading market research analyst) as a Leader in Procurement Software Suites.
Move between Roles: We believe that change leads to growth, and therefore we allow our employees to shift careers and move to different roles and functions within the organization.
Get Global Exposure: You get to work and deal with our global customers.
Create an Impact: Zycus gives you the environment to create an impact on the product and transform your ideas into reality.
Even our junior engineers get the opportunity to work on different product features.

Job Requirement

Experience & Qualifications:

Bachelor’s/Master’s in Computer Science, AI, or a related field.
Expertise in Python and backend frameworks like FastAPI/Flask.
Hands-on experience with RAG architectures, Agent Systems (function calling/tool use), GraphRAG, or similar LLM-driven workflows.
Ability to fine-tune LLMs (LoRA, QLoRA) and host/deploy open-source models (Llama 2, Mistral, etc.).
Proficiency with LLM APIs (OpenAI, Anthropic, Groq) and vector databases (Pinecone, Qdrant, pgvector).
Familiarity with NLP/ML frameworks (PyTorch, Transformers, LangChain, LlamaIndex) and cloud platforms (AWS, Azure, GCP).
Skilled in building scalable APIs and microservices for AI applications.

Preferred Qualifications:

Experience optimizing inference for open-source models (quantization, distillation).
Familiarity with multi-agent systems, chain-of-thought prompting, or LLM eval frameworks.
Knowledge of distributed training, GPU optimization, and MLOps (MLflow, Kubeflow).
Contributions to open-source AI/ML projects.
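The requirements above mention LoRA/QLoRA fine-tuning of open models. As a rough sketch under stated assumptions (the base model, dataset file, and hyperparameters are illustrative placeholders, and a GPU plus the transformers/peft/datasets libraries are assumed), attaching LoRA adapters for causal-LM fine-tuning typically looks like this:

```python
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                          TrainingArguments, DataCollatorForLanguageModeling)
from peft import LoraConfig, get_peft_model

base = "mistralai/Mistral-7B-v0.1"          # placeholder base model
tok = AutoTokenizer.from_pretrained(base)
tok.pad_token = tok.eos_token
model = AutoModelForCausalLM.from_pretrained(base, device_map="auto")

# LoRA adapters on the attention projections; ranks/targets are illustrative.
lora = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05,
                  target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM")
model = get_peft_model(model, lora)
model.print_trainable_parameters()           # only the adapter weights will train

ds = load_dataset("text", data_files={"train": "procurement_corpus.txt"})  # hypothetical data
ds = ds.map(lambda ex: tok(ex["text"], truncation=True, max_length=512), batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", per_device_train_batch_size=1,
                           gradient_accumulation_steps=8, num_train_epochs=1,
                           learning_rate=2e-4, logging_steps=10),
    train_dataset=ds["train"],
    data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
)
trainer.train()
model.save_pretrained("mistral-lora-adapter")   # saves adapter weights only
```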
Posted 3 weeks ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
About Us

Zycus is a pioneer in Cognitive Procurement software and has been a trusted partner of choice for large global enterprises for two decades. Zycus has been consistently recognized by Gartner, Forrester, and other analysts for its Source to Pay integrated suite. Zycus powers its S2P software with the revolutionary Merlin AI Suite. Merlin AI takes over the tactical tasks and empowers procurement and AP officers to focus on strategic projects; it offers data-driven actionable insights for quicker and smarter decisions, and its conversational AI offers a B2C-type user experience to end users. Zycus helps enterprises drive real savings, reduce risks, and boost compliance, and its seamless, intuitive, and easy-to-use user interface ensures high adoption and value across the organization. Start your #CognitiveProcurement journey with us, as you are #MeantforMore

We Are An Equal Opportunity Employer: Zycus is committed to providing equal opportunities in employment and creating an inclusive work environment. We do not discriminate against applicants on the basis of race, color, religion, gender, sexual orientation, national origin, age, disability, or any other legally protected characteristic. All hiring decisions will be based solely on qualifications, skills, and experience relevant to the job requirements.

Job Description

We seek an innovative AI Engineer (2-5 years of experience) to join our team and lead the development of scalable solutions using open-source technologies, LLM APIs, and advanced AI techniques. The ideal candidate will excel in designing RAG, Graph RAG, and Agent Systems with function calling, and in fine-tuning/customizing LLMs. Proficiency in hosting open-source models (e.g., Llama 2, Mistral) and integrating APIs (OpenAI, Anthropic, etc.) is critical, along with experience in Python frameworks like FastAPI/Flask for production-grade deployments.

Key Responsibilities:

Architect, build, and optimize AI solutions using open-source models (e.g., Hugging Face, Ollama) and third-party LLM APIs.
Design and implement advanced techniques including RAG, GraphRAG, Agent Systems with orchestration/function calling, and fine-tuning/prompt-tuning of LLMs.
Deploy and manage self-hosted open-source models (e.g., via vLLM, TensorRT-LLM) with scalable APIs.
Collaborate with teams to integrate AI/ML solutions into production systems using FastAPI, Flask, or similar frameworks.
Develop automation pipelines for data retrieval, preprocessing, and model evaluation, ensuring alignment with business use cases.
Stay ahead of AI trends (e.g., open-source LLM advancements, cost-efficient scaling) and drive strategic adoption.
Ensure robust monitoring, testing, and documentation of systems for reliability and reproducibility.

Five Reasons Why You Should Join Zycus

Cloud Product Company: We are a Cloud SaaS company, and our products are created using the latest technologies like ML and AI. Our UI is in AngularJS, and we are developing our mobile apps using React.
A Market Leader: Zycus is recognized by Gartner (the world’s leading market research analyst) as a Leader in Procurement Software Suites.
Move between Roles: We believe that change leads to growth, and therefore we allow our employees to shift careers and move to different roles and functions within the organization.
Get Global Exposure: You get to work and deal with our global customers.
Create an Impact: Zycus gives you the environment to create an impact on the product and transform your ideas into reality.
Even our junior engineers get the opportunity to work on different product features.

Job Requirement

Experience & Qualifications:

Bachelor’s/Master’s in Computer Science, AI, or a related field.
Expertise in Python and backend frameworks like FastAPI/Flask.
Hands-on experience with RAG architectures, Agent Systems (function calling/tool use), GraphRAG, or similar LLM-driven workflows.
Ability to fine-tune LLMs (LoRA, QLoRA) and host/deploy open-source models (Llama 2, Mistral, etc.).
Proficiency with LLM APIs (OpenAI, Anthropic, Groq) and vector databases (Pinecone, Qdrant, pgvector).
Familiarity with NLP/ML frameworks (PyTorch, Transformers, LangChain, LlamaIndex) and cloud platforms (AWS, Azure, GCP).
Skilled in building scalable APIs and microservices for AI applications.

Preferred Qualifications:

Experience optimizing inference for open-source models (quantization, distillation).
Familiarity with multi-agent systems, chain-of-thought prompting, or LLM eval frameworks.
Knowledge of distributed training, GPU optimization, and MLOps (MLflow, Kubeflow).
Contributions to open-source AI/ML projects.
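The posting above also asks for experience with Agent Systems (function calling / tool use). As a hypothetical illustration only (the tool, model name and data are invented, not Zycus code), a single tool-calling round trip with an LLM API could look like this:

```python
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def get_supplier_spend(supplier: str) -> dict:
    """Stand-in for a real procurement data lookup."""
    return {"supplier": supplier, "ytd_spend_usd": 1_250_000}

tools = [{
    "type": "function",
    "function": {
        "name": "get_supplier_spend",
        "description": "Return year-to-date spend for a supplier.",
        "parameters": {
            "type": "object",
            "properties": {"supplier": {"type": "string"}},
            "required": ["supplier"],
        },
    },
}]

messages = [{"role": "user", "content": "How much have we spent with Acme Corp this year?"}]
resp = client.chat.completions.create(model="gpt-4o-mini", messages=messages, tools=tools)
call = resp.choices[0].message.tool_calls[0]          # the model chooses to call the tool

if call.function.name == "get_supplier_spend":
    args = json.loads(call.function.arguments)
    result = get_supplier_spend(**args)
    # Feed the tool result back so the model can phrase the final answer.
    messages += [resp.choices[0].message,
                 {"role": "tool", "tool_call_id": call.id, "content": json.dumps(result)}]
    final = client.chat.completions.create(model="gpt-4o-mini", messages=messages, tools=tools)
    print(final.choices[0].message.content)
```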
Posted 3 weeks ago
6.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Designation: Data Scientist II
Office Location: Gurgaon

Position Description:

The Data Scientist is crucial in leveraging data to derive meaningful insights and solutions for complex business problems. This individual will lead and guide the data science team in developing advanced analytical models, algorithms, and statistical analyses. They will collaborate with cross-functional teams to identify opportunities for leveraging data-driven solutions, making strategic decisions, and enhancing overall business performance. The Data Scientist will be responsible for designing and implementing machine learning models, conducting data exploration, and communicating findings to non-technical stakeholders.

Primary Responsibilities:

Collaborate with business stakeholders to understand and translate their goals into AI and data science initiatives.
Lead the development and implementation of LLM-based workflows and AI agents to drive automation, personalization, and intelligent decision-making.
Develop strategies to optimize budget allocation and campaign performance using AI-driven approaches in scenarios with high cardinality and uncertainty (a toy illustration follows this listing).
Conduct exploratory data analysis to uncover trends and insights from large, complex data sets.
Identify, evaluate, and deploy use-case-specific LLMs (e.g., OpenAI, Gemini, Claude) for summarization, retrieval, semantic search, tool use, and classification.
Design and implement retrieval-augmented generation (RAG) pipelines and memory-augmented AI agents.
Implement Groq or similar platforms for high-performance, low-latency inference and scalable AI deployment.
Communicate complex analytical and AI-driven findings in a clear, actionable manner to non-technical stakeholders.
Stay abreast of the latest advancements in AI, LLMs, and agentic systems.
Build AI-powered proofs of concept (PoCs) leveraging foundation models, vector databases, and orchestration frameworks.
Use SQL for data exploration, feature engineering, and prompt conditioning.

Required Skills:

A qualification in a quantitative field such as Computer Science, Artificial Intelligence, Statistics, Physics, or Mathematics.
Excellent problem-solving skills and strategic thinking, with a strong AI product mindset.
Strong coding skills, particularly in Python.
6+ years of relevant work experience in Data Science, with significant hands-on experience in LLM-based application development.
Solid foundation in statistical analysis, experimentation, and hypothesis testing.
Proficiency in Python and SQL; experience with the GCP platform preferred.
Proven experience with LLMs, including prompt engineering, fine-tuning, RAG, and evaluation.
Experience identifying and scaling LLM-based use cases across business functions.
Familiarity with building AI agents using LangChain, LangGraph, and agentic orchestration frameworks.
Experience with Groq or similar platforms for high-speed inference of LLMs.
Plus points for experience in AdTech or Meta Ads.
Effective communication skills, with the ability to convey technical concepts to non-technical audiences.

Work Environment Details:

About Affle: Affle is a global technology company with a proprietary consumer intelligence platform that delivers consumer engagement, acquisitions, and transactions through relevant mobile advertising. The platform aims to enhance returns on marketing investment through contextual mobile ads and also by reducing digital ad fraud.
While Affle's Consumer platform is used by online & offline companies for measurable mobile advertising, its Enterprise platform helps offline companies to go online through platform-based app development, enablement of O2O commerce and its customer data platform. Affle India successfully completed its IPO in India on 08 Aug 2019 and now trades on the stock exchanges (BSE: 542752 & NSE: AFFLE). Affle Holdings is the Singapore-based promoter of Affle India, and its investors include Microsoft and Bennett Coleman & Company (BCCL), amongst others. For more details: www.affle.com

About BU (Ultra): Ultra Prism is a cutting-edge AI-powered platform designed to empower performance marketers on walled-garden platforms like Meta and Google. With a focus on precision targeting, the platform streamlines the creative process, delivering state-of-the-art assets while providing actionable insights and recommendations for campaign optimization. By transforming guesswork into strategy and enabling smarter, data-driven decisions, Ultra Prism helps marketers unlock unmatched ROI and achieve extraordinary results with confidence and ease. For more details: https://www.ultraplatform.io/
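The budget-optimization responsibility flagged in the listing above is described only at a high level. As a toy illustration (the campaigns, conversion rates and budget are invented), a Thompson-sampling allocator that gradually shifts spend toward better-converting campaigns under uncertainty could be sketched like this:

```python
import numpy as np

rng = np.random.default_rng(7)
true_cvr = np.array([0.010, 0.014, 0.008, 0.020])   # hidden per-campaign conversion rates
successes = np.ones(4)                              # Beta(1, 1) priors
failures = np.ones(4)
daily_budget = 10_000                               # currency units per day (invented)

for day in range(30):
    # Sample a plausible CVR per campaign and spend in proportion to it.
    sampled = rng.beta(successes, failures)
    spend = (sampled / sampled.sum()) * daily_budget

    # Simulate observed conversions (one impression per currency unit, for simplicity).
    impressions = spend.astype(int)
    conversions = rng.binomial(impressions, true_cvr)
    successes += conversions
    failures += impressions - conversions

posterior_cvr = successes / (successes + failures)
print("posterior CVR estimates:", np.round(posterior_cvr, 4))
print("most budget should flow to campaign", int(np.argmax(posterior_cvr)))
```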
Posted 3 weeks ago
4.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
We are seeking a highly skilled and innovative AI Lead/Developer with proven experience in designing, developing, and deploying AI Agents and Conversational AI Chatbots using Azure Cloud Services. You will play a critical role in transforming enterprise workflows through intelligent automation, integrating with modern AI platforms, and delivering scalable solutions that drive business outcomes.

Requirements

Research, design and develop intelligent AI agents, AI/GenAI apps and chatbots using Azure OpenAI, Azure AI Foundry, Semantic Kernel, vector databases, Azure AI Agent Service, Azure AI Model Inference, Azure AI Search, Azure Bot Services, Cognitive Services, Azure Machine Learning, etc.
Lead architecture and implementation of AI workflows, including prompt engineering, RAG (Retrieval-Augmented Generation), and multi-turn conversational flows (a brief sketch follows this listing).
Build and fine-tune LLM-based applications using Azure OpenAI (GPT models) for various enterprise use cases (customer support, internal tools, etc.).
Integrate AI agents with backend services, APIs, databases, and third-party platforms via Azure Logic Apps, Azure Functions, and REST APIs.
Design secure and scalable cloud architecture using Azure App Services, Azure Kubernetes Service (AKS), Azure API Management, etc.
Collaborate with product managers, UX designers, and business stakeholders to define AI use cases and user interaction strategies.
Conduct performance tuning, A/B testing, and continuous improvement of AI agent responses and model accuracy.
Provide technical leadership, mentoring junior developers and contributing to architectural decisions.
Stay up to date with advancements in Generative AI, LLM orchestration, and the Azure AI ecosystem.

Required Skills & Experience:

4+ years of experience in AI/ML or software development, with at least 2+ years focused on Azure AI and chatbot development.
Strong knowledge of Azure OpenAI Service, Azure Bot Framework, Azure Cognitive Services (LUIS, QnA Maker, Speech).
Experience with Python, Node.js, or C# for bot development.
Familiarity with LangChain, Semantic Kernel, or other agent orchestration frameworks (preferred).
Hands-on experience deploying AI solutions using Azure ML, Azure DevOps, and containerization (Docker/Kubernetes).
Deep understanding of natural language processing (NLP), LLMs, and prompt engineering techniques.
Experience with RAG pipelines, vector databases (e.g., Azure Cognitive Search or Pinecone), and knowledge grounding.
Proven experience integrating chatbots with enterprise platforms (MS Teams, Slack, Web, CRM, etc.).
Strong problem-solving skills, an analytical mindset, and passion for emerging AI technologies.

Preferred Qualifications:

Microsoft Certified: Azure AI Engineer Associate or equivalent.
Familiarity with Ethical AI, responsible AI design principles, and data governance.
Prior experience in building multilingual and voice-enabled agents.
Experience with CI/CD for AI pipelines using Azure DevOps or GitHub Actions.

Benefits

Attractive salary packages with performance-based incentives.
Opportunities for professional certifications (e.g., AWS, Kubernetes, Terraform).
Access to training programs, workshops, and learning resources.
Comprehensive health insurance coverage for employees and their families.
Wellness programs and mental health support.
Hands-on experience with large-scale, innovative cloud solutions.
Opportunities to work with modern tools and technologies.
Inclusive, supportive, and team-oriented environment.
Opportunities to collaborate with global clients and cross-functional teams.
Regular performance reviews with rewards for outstanding contributions.
Employee appreciation events and programs.
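As referenced in the Requirements list, a minimal illustration of grounding an Azure OpenAI chat call on retrieved context might look like the sketch below. The endpoint, deployment name and API version are placeholders, and in a full RAG flow the context snippets would come from Azure AI Search or a vector store rather than being hard-coded:

```python
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],  # e.g. https://<resource>.openai.azure.com
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",                             # assumed API version
)

# In a real RAG pipeline these snippets would be retrieved at query time;
# here they are hard-coded purely for illustration.
retrieved_context = [
    "Refund requests above 10,000 INR require manager approval.",
    "Standard refunds are processed within 5 business days.",
]

response = client.chat.completions.create(
    model="enterprise-chatbot",  # the Azure *deployment* name, not the model family
    messages=[
        {"role": "system",
         "content": "Answer only from the provided context.\n" + "\n".join(retrieved_context)},
        {"role": "user", "content": "How long does a standard refund take?"},
    ],
    temperature=0.2,
)
print(response.choices[0].message.content)
```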
Posted 3 weeks ago
4.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Company
Qualcomm India Private Limited

Job Area
Engineering Group, Engineering Group > Software Engineering

General Summary

As a leading technology innovator, Qualcomm pushes the boundaries of what's possible to enable next-generation experiences and drives digital transformation to help create a smarter, connected future for all. As a Qualcomm Software Engineer, you will design, develop, create, modify, and validate embedded and cloud edge software, applications, and/or specialized utility programs that launch cutting-edge, world-class products that meet and exceed customer needs. Qualcomm Software Engineers collaborate with systems, hardware, architecture, test engineers, and other teams to design system-level software solutions and obtain information on performance requirements and interfaces.

Minimum Qualifications

Bachelor's degree in Engineering, Information Systems, Computer Science, or related field and 4+ years of Software Engineering or related work experience.
OR Master's degree in Engineering, Information Systems, Computer Science, or related field and 3+ years of Software Engineering or related work experience.
OR PhD in Engineering, Information Systems, Computer Science, or related field and 2+ years of Software Engineering or related work experience.
2+ years of work experience with a programming language such as C, C++, Java, Python, etc.

Job Title: MLOps Engineer - ML Platform
Hiring Title: Flexible based on candidate experience; Staff Engineer level preferred.

Job Description

We are seeking a highly skilled and experienced MLOps Engineer to join our team and contribute to the development and maintenance of our ML platform, both on premises and on AWS Cloud. As an MLOps Engineer, you will be responsible for architecting, deploying, and optimizing the ML & Data platform that supports training of machine learning models using NVIDIA DGX clusters and the Kubernetes platform, including technologies like Helm, ArgoCD, Argo Workflows, Prometheus, and Grafana. Your expertise in AWS services such as EKS, EC2, VPC, IAM, S3, and EFS will be crucial in ensuring the smooth operation and scalability of our ML infrastructure. You will work closely with cross-functional teams, including data scientists, software engineers, and infrastructure specialists. Your expertise in MLOps, DevOps, and knowledge of GPU clusters will be vital in enabling efficient training and deployment of ML models.

Responsibilities Will Include

Architect, develop, and maintain the ML platform to support training and inference of ML models.
Design and implement scalable and reliable infrastructure solutions for NVIDIA clusters, both on premises and on AWS Cloud.
Collaborate with data scientists and software engineers to define requirements and ensure seamless integration of ML and data workflows into the platform.
Optimize the platform’s performance and scalability, considering factors such as GPU resource utilization, data ingestion, model training, and deployment.
Monitor and troubleshoot system performance, identifying and resolving issues to ensure the availability and reliability of the ML platform.
Implement and maintain CI/CD pipelines for automated model training, evaluation, and deployment using technologies like ArgoCD and Argo Workflows.
Implement and maintain a monitoring stack using Prometheus and Grafana to ensure the health and performance of the platform (a minimal metrics sketch follows this listing).
Manage AWS services including EKS, EC2, VPC, IAM, S3, and EFS to support the platform.
Implement logging and monitoring solutions using AWS CloudWatch and other relevant tools.
Stay updated with the latest advancements in MLOps, distributed computing, and GPU acceleration technologies, and proactively propose improvements to enhance the ML platform.

What Are We Looking For

Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.
Proven experience as an MLOps Engineer or in a similar role, with a focus on large-scale ML and/or data infrastructure and GPU clusters.
Strong expertise in configuring and optimizing NVIDIA DGX clusters for deep learning workloads.
Proficiency with the Kubernetes platform, including technologies like Helm, ArgoCD, Argo Workflows, Prometheus, and Grafana.
Solid programming skills in languages like Python and Go, and experience with relevant ML frameworks (e.g., TensorFlow, PyTorch).
In-depth understanding of distributed computing, parallel computing, and GPU acceleration techniques.
Familiarity with containerization technologies such as Docker and orchestration tools.
Experience with CI/CD pipelines and automation tools for ML workflows (e.g., Jenkins, GitHub, ArgoCD).
Experience with AWS services such as EKS, EC2, VPC, IAM, S3, and EFS.
Experience with AWS logging and monitoring tools.
Strong problem-solving skills and the ability to troubleshoot complex technical issues.
Excellent communication and collaboration skills to work effectively within a cross-functional team.

We Would Love To See

Experience with training and deploying models.
Knowledge of ML model optimization techniques and memory management on GPUs.
Familiarity with ML-specific data storage and retrieval systems.
Understanding of security and compliance requirements in ML infrastructure.

Applicants: Qualcomm is an equal opportunity employer. If you are an individual with a disability and need an accommodation during the application/hiring process, rest assured that Qualcomm is committed to providing an accessible process. You may e-mail disability-accomodations@qualcomm.com or call Qualcomm's toll-free number found here. Upon request, Qualcomm will provide reasonable accommodations to support individuals with disabilities to be able to participate in the hiring process. Qualcomm is also committed to making our workplace accessible for individuals with disabilities. (Keep in mind that this email address is used to provide reasonable accommodations for individuals with disabilities. We will not respond here to requests for updates on applications or resume inquiries).

Qualcomm expects its employees to abide by all applicable policies and procedures, including but not limited to security and other requirements regarding protection of Company confidential information and other confidential and/or proprietary information, to the extent those requirements are permissible under applicable law.

To all Staffing and Recruiting Agencies: Our Careers Site is only for individuals seeking a job at Qualcomm. Staffing and recruiting agencies and individuals being represented by an agency are not authorized to use this site or to submit profiles, applications or resumes, and any such submissions will be considered unsolicited. Qualcomm does not accept unsolicited resumes or applications from agencies. Please do not forward resumes to our jobs alias, Qualcomm employees or any other company location. Qualcomm is not responsible for any fees related to unsolicited resumes/applications. If you would like more information about this role, please contact Qualcomm Careers.

3076100
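As a small illustration of the Prometheus-based monitoring mentioned in the responsibilities (the metric names, labels and scrape port are assumptions, not Qualcomm's actual setup), instrumenting a Python training or inference service with prometheus_client could look like this; Grafana dashboards and alerts would then be built on the scraped metrics:

```python
import random
import time

from prometheus_client import Counter, Gauge, Histogram, start_http_server

# Assumed metric names for illustration.
REQUESTS = Counter("inference_requests_total", "Inference requests served", ["model"])
LATENCY = Histogram("inference_latency_seconds", "Inference latency", ["model"])
GPU_UTIL = Gauge("gpu_utilization_ratio", "Fraction of GPU in use", ["gpu"])

def handle_request(model: str) -> None:
    REQUESTS.labels(model=model).inc()
    with LATENCY.labels(model=model).time():       # records duration on exit
        time.sleep(random.uniform(0.01, 0.05))     # stand-in for real inference work

if __name__ == "__main__":
    start_http_server(9100)                        # Prometheus scrapes http://host:9100/metrics
    while True:
        handle_request("demo-model")               # hypothetical model name
        GPU_UTIL.labels(gpu="0").set(random.uniform(0.4, 0.9))  # would come from NVML in practice
        time.sleep(1)
```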
Posted 3 weeks ago
3.0 - 6.0 years
0 Lacs
Faridabad, Haryana, India
On-site
Job Title – AI Full Stack Developer
Location – Faridabad
Experience – 3-6 Years
We are looking for a Full Stack Developer who can work closely with the AI team.
Key Responsibility Areas
Backend and API Development: Design and develop robust RESTful APIs for AI services. Integrate LLMs and VLMs through inference APIs, transformers, or other frameworks. Manage database interactions using SQL (e.g., PostgreSQL), NoSQL (e.g., MongoDB, Redis), and vector databases (e.g., Chroma, Weaviate).
Frontend Development: Build responsive, scalable dashboards and UIs for AI pipelines and model interactions. Collaborate with AI researchers and designers to visualize data and model outputs. Use frameworks like React.js and Next.js.
AI Integration: Collaborate closely with the AI team to integrate LLMs, VLMs, and other AI models into production systems via APIs. Develop and maintain frontend components that interact with AI services (e.g., model outputs, chat interfaces, dashboards). Build robust API layers to serve AI model predictions and responses. Handle pre/post-processing logic on the API layer to format data for and from AI models (e.g., text inputs, embeddings, image results). Ensure seamless connectivity between AI backends and client-facing applications using modern API and web technologies.
(Good to Have) On-Premise Server Experience: Familiarity with managing GitLab servers and storage management. Knowledge of networking, firewalls, and user authentication is a bonus.
Technical Skills Required:
Must-Have: Strong proficiency in Python (FastAPI) and JavaScript/TypeScript (React/Next.js). Solid experience with SQL and NoSQL databases. Familiarity with vector DBs like FAISS, Weaviate, Chroma, or Pinecone. Strong understanding of REST APIs, backend logic, and authentication (OAuth, JWT). Git version control and collaboration on Git-based platforms (GitHub, GitLab).
Good to Have: Exposure to model serving libraries (e.g., Hugging Face Inference, vLLM, LangChain, LlamaIndex). Experience with CI/CD tools like Jenkins, GitLab CI, or GitHub Actions. Basic knowledge of Linux OS, terminal commands, and service management tools. Familiarity with any cloud platforms (AWS, GCP, Azure) or on-prem environments. Any AI framework such as LangChain, LlamaIndex, etc.
Why Join Us: Work on modern, interactive, and scalable web applications. Own end-to-end features and contribute to system architecture. Join a fast-moving, collaborative, and growth-oriented team.
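As a hedged illustration of the backend/API work this role describes, the sketch below wires a FastAPI route to a Chroma vector store before handing the retrieved context to an LLM; the collection name, endpoint path, and the stubbed LLM call are assumptions, not details from the posting.

# Minimal sketch of the backend/API layer described above: a FastAPI route
# that retrieves context from a Chroma vector store before handing the
# query to an LLM. Collection name, route, and payload shape are assumptions.
import chromadb
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
store = chromadb.Client()
docs = store.get_or_create_collection("product_docs")  # assumed collection name

class Query(BaseModel):
    question: str
    top_k: int = 3

@app.post("/ask")
def ask(query: Query) -> dict:
    # 1) Retrieve the most relevant chunks from the vector DB.
    hits = docs.query(query_texts=[query.question], n_results=query.top_k)
    context = "\n".join(hits["documents"][0]) if hits["documents"] else ""
    # 2) Post-process into a prompt; the actual LLM call (an inference API
    #    or a local transformers pipeline) would replace this stub.
    prompt = f"Answer using the context below.\n{context}\n\nQ: {query.question}"
    return {"prompt": prompt, "sources": hits.get("ids", [[]])[0]}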
Posted 3 weeks ago
5.0 years
0 Lacs
Kolkata metropolitan area, West Bengal, India
On-site
Talent Scout Management Solutions/PIVOTAL is a professional services recruitment firm dedicated to helping clients recruit world-class leadership talent. We are currently recruiting " Computer Vision Research Engineer " for our client, Videonetics Technology. Location: Kolkata (New Town) Mode of Employment: Full-time About Videonetics: Videonetics is a leading innovator in AI-powered video computing solutions, offering intelligent video management, analytics and security solutions across industries. Our mission is to make the world smarter and safer through cutting-edge technology. To know more about us – https://www.videonetics.com, https://www.linkedin.com/company/videonetics/ Who You'll Work With: As an AI Researcher focused on Computer Vision and Image Processing, you will collaborate with a multidisciplinary team of machine learning engineers, video systems experts, software developers and data engineers. You’ll work closely with domain experts to translate real-world visual problems into scalable AI models, contributing to both applied research and production-grade deployment. Your work will also intersect with platform teams to optimize models for inference on edge devices and video pipelines. What you’ll do: Design and develop high-performance software solutions for computer vision applications using C++ and Python. Design, train and evaluate deep learning models for computer vision tasks such as object detection, tracking and segmentation. Conduct research on advanced image processing algorithms and contribute to novel techniques for visual data understanding. Develop robust model training pipelines using PyTorch or TensorFlow, incorporating data augmentation, transfer learning and optimization strategies. Collaborate with video pipeline teams to ensure models are compatible with real-time inference requirements (e.g., for NVIDIA DeepStream or TensorRT deployment). Perform data analysis, visualization and curation to support training on high-quality and diverse datasets. Contribute to both academic publication efforts and applied R&D initiatives. Stay current with the latest research in computer vision and integrate state-of-the-art techniques where applicable. Key Responsibilities: Lead model development cycles, from problem definition through experimentation and final training. Implement and optimize data structures and algorithms to ensure efficiency and scalability. Design and implement scalable training and validation workflows for large-scale datasets. Design, develop and optimize software solutions in C++ and Python for high-performance Computer Vision Products. Tune model architectures and hyperparameters for optimal accuracy and inference speed and evaluate model performance Collaborate with engineering teams to optimize models for deployment in edge environments or real-time systems. Maintain detailed documentation of experiments, training results and model behavior. Contribute to internal tools, utilities and reusable components for efficient experimentation and deployment. Support knowledge sharing through papers, or patents. What we are looking for: B.E./M.E. in Computer Science and Engineering/Electronics/Electrical. 5+ years of experience in AI model development with a focus on computer vision and image processing. Strong expertise in deep learning frameworks such as PyTorch or TensorFlow. Proven track record in training models for tasks like detection (YOLO, Faster R-CNN), segmentation (UNet, DeepLab), or enhancement (SRGAN, ESRGAN). 
Solid understanding of classical image processing techniques (filtering, transformations, edge detection, etc.). Experience working with large-scale datasets and training on GPUs. Familiarity with model optimization for deployment (ONNX, TensorRT, pruning, quantization). Strong mathematical foundation in linear algebra, statistics and signal processing. Practical experience in using Docker for containerizing applications and managing software dependencies. Why Join Us? Be part of an innovative company at the forefront of AI-driven video computing. Opportunity to work with top-tier partners and industry leaders. Competitive salary, performance incentives and professional growth opportunities. Embark on an impactful journey with Videonetics, where you'll work on innovative products that enhance safety, efficiency and sustainability, ultimately making a positive difference in society. Global presence: Collaborate with international clients and teams, expanding your professional horizons. If you are passionate about building strong products and driving strategic decisions, we invite you to be part of our growth journey at Videonetics!
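For a concrete flavour of the classical image-processing skills listed above, here is a small sketch combining Gaussian denoising with Canny edge detection in OpenCV; the file names and thresholds are placeholders chosen for illustration.

# Small sketch of classical image processing: Gaussian denoising followed
# by Canny edge detection with OpenCV. File names and thresholds are
# placeholders, not values from the posting.
import cv2

def detect_edges(path: str, low: int = 50, high: int = 150):
    """Return an edge map for the image at `path`."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    if img is None:
        raise FileNotFoundError(path)
    blurred = cv2.GaussianBlur(img, (5, 5), sigmaX=1.4)  # suppress sensor noise
    return cv2.Canny(blurred, low, high)                 # hysteresis thresholds

if __name__ == "__main__":
    edges = detect_edges("frame_0001.png")  # hypothetical input frame
    cv2.imwrite("edges_0001.png", edges)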
Posted 3 weeks ago
1.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Company: Qualcomm India Private Limited
Job Area: Engineering Group, Engineering Group > Software Engineering
General Summary: As a leading technology innovator, Qualcomm pushes the boundaries of what's possible to enable next-generation experiences and drives digital transformation to help create a smarter, connected future for all. As a Qualcomm Software Engineer, you will design, develop, create, modify, and validate embedded and cloud edge software, applications, and/or specialized utility programs that launch cutting-edge, world-class products that meet and exceed customer needs. Qualcomm Software Engineers collaborate with systems, hardware, architecture, test engineers, and other teams to design system-level software solutions and obtain information on performance requirements and interfaces.
Minimum Qualifications: Bachelor's degree in Engineering, Information Systems, Computer Science, or related field. Python programming, Machine Learning concepts, and Automation Testing (Python framework, autoframework) are mandatory.
Job Overview
Join a new and growing team at Qualcomm focused on advancing the state of the art in Machine Learning. The team uses Qualcomm chips' extensive heterogeneous computing capabilities and engineers them to allow the running of trained neural networks on device without a need for connection to the cloud. Our inference engine is designed to help developers run neural network models trained in a variety of frameworks on Snapdragon platforms at blazing speeds while still sipping the smallest amount of power. See your work directly impact billions of mobile devices around the world, as well as the most advanced autonomous features for the automotive industry. In this position, you will be responsible for the development of test frameworks for Qualcomm Neural Network (QNN). You will work with neural network frameworks like TensorFlow and PyTorch and develop the validation framework to gauge functionality, performance, precision, and power of QNN. You will work with the latest and greatest DNNs emerging from the research community. You will also have to keep up with the fast-paced development happening in industry and academia to continuously enhance our benchmarking and validation infrastructure from both a software engineering and a machine learning standpoint.
Minimum Qualifications
Expertise in developing test cases, automating tests, test case execution, and troubleshooting/analyzing problems. Experience with a programming language such as C, C++, Python, etc. Strong proficiency in Python programming. Solid understanding of OOP, automation, and OS concepts. Hands-on with Jenkins for CI/CD. Familiarity with Docker. Knowledge of AI, ML, and GenAI is an added advantage. Knowledge of version control systems like Git. Strong problem-solving skills and the ability to work in a fast-paced environment. Live and breathe quality software development with excellent analytical and debugging skills. Excellent communication skills (verbal, presentation, written) and the ability to collaborate across a globally diverse team and multiple interests. Good time management skills; must be an effective team player and self-driven.
Preferred Qualifications
Strong exposure to software testing methodologies and reporting.
Experience with CI tools like Jenkins and Data visualization tools like power BI ,Tableau Development experience in Python & C++ Work Experience 1 to 6 years of relevant work experience in software dev/test development Educational Requirements Master’s/Bachelor's Computer Science, Computer Engineering, or Electrical Engineering Applicants : Qualcomm is an equal opportunity employer. If you are an individual with a disability and need an accommodation during the application/hiring process, rest assured that Qualcomm is committed to providing an accessible process. You may e-mail disability-accomodations@qualcomm.com or call Qualcomm's toll-free number found here. Upon request, Qualcomm will provide reasonable accommodations to support individuals with disabilities to be able participate in the hiring process. Qualcomm is also committed to making our workplace accessible for individuals with disabilities. (Keep in mind that this email address is used to provide reasonable accommodations for individuals with disabilities. We will not respond here to requests for updates on applications or resume inquiries). Qualcomm expects its employees to abide by all applicable policies and procedures, including but not limited to security and other requirements regarding protection of Company confidential information and other confidential and/or proprietary information, to the extent those requirements are permissible under applicable law. To all Staffing and Recruiting Agencies : Our Careers Site is only for individuals seeking a job at Qualcomm. Staffing and recruiting agencies and individuals being represented by an agency are not authorized to use this site or to submit profiles, applications or resumes, and any such submissions will be considered unsolicited. Qualcomm does not accept unsolicited resumes or applications from agencies. Please do not forward resumes to our jobs alias, Qualcomm employees or any other company location. Qualcomm is not responsible for any fees related to unsolicited resumes/applications. If you would like more information about this role, please contact Qualcomm Careers. 3071304
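To illustrate the kind of validation framework this posting describes, below is a hedged pytest sketch that checks an inference engine's output against a reference forward pass within a numeric tolerance; run_reference and run_engine_under_test are hypothetical stand-ins, not the actual QNN interfaces.

# Hedged sketch of framework validation: a pytest case that compares an
# accelerated runtime's output against a reference implementation within a
# tolerance. Both runner functions are hypothetical stand-ins.
import numpy as np
import pytest

def run_reference(x: np.ndarray) -> np.ndarray:
    """Stand-in for a TensorFlow/PyTorch reference forward pass (sigmoid)."""
    return 1.0 / (1.0 + np.exp(-x))

def run_engine_under_test(x: np.ndarray) -> np.ndarray:
    """Stand-in for the accelerated runtime being validated (reduced precision)."""
    return (1.0 / (1.0 + np.exp(-x.astype(np.float16)))).astype(np.float32)

@pytest.mark.parametrize("shape", [(1, 8), (4, 128)])
def test_output_parity(shape):
    rng = np.random.default_rng(0)
    x = rng.standard_normal(shape).astype(np.float32)
    ref = run_reference(x)
    out = run_engine_under_test(x)
    # Precision budget is an assumption; real tolerances depend on the target.
    np.testing.assert_allclose(out, ref, rtol=1e-2, atol=1e-3)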
Posted 3 weeks ago
4.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
About Us
Founded in 2007, Biz2Credit is rated as the Number 1 small business financing resource in the U.S. by Entrepreneur Magazine. To date, we've facilitated more than $2.5 billion in small business lending. Biz2Credit is an all-in-one financing solution for entrepreneurs to get a small business loan with a fast approval process. Explore the best small business financing options with us! Biz2Credit offers an innovative way for lenders and small business borrowers to connect through our online credit marketplace. Our robust network includes more than 1,200 lenders and tens of thousands of small businesses who connect to the network through partnerships with Paychex, Start-up America, CPAs, business brokers, and other referral partners. Biz2X is the technology platform that has helped Biz2Credit become the premier alternative lender for small businesses. Now, traditional banks are leveraging our expertise and experience to automate small business lending. Biz2X allows banks to implement AI-powered digital banking, giving bank clients a fully omnichannel experience with data and workflows that adapt seamlessly to the bank's core banking systems. Biz2X's world-class risk solutions are based on AI algorithms that enable auto decision-making and quick processing. Biz2X is one platform that does it all: it automates lending, optimizes risk management, and improves operational efficiency.
Learn More: www.biz2credit.com & www.biz2x.com
Read About Us
https://www.globenewswire.com/en/news-release/2023/04/25/2653660/0/en/Financial-Times-Names-Biz2Credit-and-Biz2X-to-its-Americas-Fastest-Growing-Companies-of-2023-List.html
https://inc42.com/buzz/biz2credit-announces-esops-worth-12-25-mn-for-500-indian-employees/
About The Role
This is a rare opportunity to join a fast-growing team, where you will play a major role in shaping Biz2Credit's future. We're looking for an exceptional candidate who is excited about the opportunity to build a next-generation financial services business. We believe that there is a tremendous opportunity to leverage cutting-edge data science to inform smarter, faster decision making. As a Biz2X Data Scientist, you will shape the company's data-centric culture, work closely with our engineering team to develop our analytics infrastructure, and collaborate closely with our Chief Risk Officer on developing, validating, and automating our customer conversion and underwriting models. While a background in financial services is not required, you must be passionate about tackling complex data challenges for the benefit of small and medium businesses everywhere.
Job Responsibility:
- Drive the ongoing advancement and refinement of Biz2Credit's credit decisioning & pricing model, optimizing risk and return while dramatically reducing decision cycle times.
- Continuously evaluate alternative data sources and structures to document and improve the efficacy of our customer conversion models and processes.
- Harness the power of Biz2Credit's technology to proactively identify emerging risks as well as opportunities with our customers.
- Play a key role in the design and implementation of ongoing operational and risk reporting and analytics.
Work on data projects and proposals involving Biz2Credit's financial services partners worldwide (banks, non-banks, debt investors, equity investors, and others) to analyze, classify, and visualize credit-related data.
Perform ad hoc analyses on customer, business, and portfolio trends to generate actionable insights for internal and external stakeholders.
Manage multiple projects and priorities while delivering accurate & timely results in a fast-paced environment.
Requirements
Degree in Statistics, Applied Mathematics, Engineering, Computer Science, or other quantitative fields from a leading university; advanced degree preferred.
4-6 years of experience in applied data science or machine learning roles.
Hands-on experience with LLMs and GenAI applications.
Expertise in Python and ML frameworks such as TensorFlow, PyTorch, and Scikit-learn.
Strong experience in deep learning, NLP, and generative models (e.g., VAEs, GANs, diffusion models).
Experience with prompt engineering, fine-tuning, RAG (Retrieval-Augmented Generation), or model distillation.
Proficiency in SQL, data wrangling, and working with large datasets.
Familiarity with MLOps tools (e.g., MLflow, Airflow, Kubeflow) and cloud platforms (AWS/GCP/Azure).
Preferred Qualifications
Published work in ML/AI journals or major conferences (NeurIPS, ICML, ACL, CVPR, etc.).
Experience building and deploying LLM-powered applications in production.
Background in reinforcement learning, time series forecasting, or causal inference.
Understanding of data privacy, model fairness, and ethical AI considerations.
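As a hedged sketch of the credit-decisioning modelling referenced above, the snippet below trains a gradient-boosted classifier on synthetic tabular data and reports ROC-AUC; the synthetic features stand in for real borrower data, which is not shown here.

# Illustrative credit-decisioning baseline: gradient boosting evaluated by
# ROC-AUC. The synthetic data is a stand-in for real borrower features.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for borrower features (revenue, utilisation, tenure, ...).
X, y = make_classification(n_samples=5000, n_features=20, weights=[0.9, 0.1], random_state=7)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, stratify=y, random_state=7)

model = GradientBoostingClassifier(random_state=7)
model.fit(X_tr, y_tr)

probs = model.predict_proba(X_te)[:, 1]  # estimated probability of the positive class
print(f"ROC-AUC: {roc_auc_score(y_te, probs):.3f}")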
Posted 3 weeks ago
0 years
0 Lacs
India
On-site
*Who you are* You’re the person whose fingertips know the difference between spinning up a GPU cluster and spinning down a stale inference node. You love the “infrastructure behind the magic” of LLMs. You've built CI/CD pipelines that automatically version models, log inference metrics, and alert on drift. You’ve containerized GenAI services in Docker, deployed them on Kubernetes clusters (AKS or EKS), and implemented terraform or ARM to manage infra-as-code. You monitor cloud costs like a hawk, optimize GPU workloads, and sometimes sacrifice cost for performance—but never vice versa. You’re fluent in Python and Bash, can script tests for REST endpoints, and build automated feedback loops for model retraining. You’re comfortable working in Azure — OpenAI, Azure ML, Azure DevOps Pipelines—but are cloud-agnostic enough to cover AWS or GCP if needed. You read MLOps/LLMOps blog posts or arXiv summaries on the weekend and implement improvements on Monday. You think of yourself as a self-driven engineer: no playbooks, no spoon-feeding—just solid automation, reliability, and a hunger to scale GenAI from prototype to production. --- *What you will actually do* You’ll architect and build deployment platforms for internal LLM services: start from containerizing models and building CI/CD pipelines for inference microservices. You’ll write IaC (Terraform or ARM) to spin up clusters, endpoints, GPUs, storage, and logging infrastructure. You’ll integrate Azure OpenAI and Azure ML endpoints, pushing models via pipelines, versioning them, and enabling automatic retraining triggers. You’ll build monitoring and observability around latency, cost, error rates, drift, and prompt health metrics. You’ll optimize deployments—autoscaling, use of spot/gpu nodes, invalidation policies—to balance cost and performance. You’ll set up automated QA pipelines that validate model outputs (e.g. semantic similarity, hallucination detection) before merging. You’ll collaborate with ML, backend, and frontend teams to package components into release-ready backend services. You’ll manage alerts, rollbacks on failure, and ensure 99% uptime. You'll create reusable tooling (CI templates, deployment scripts, infra modules) to make future projects plug-and-play. --- *Skills and knowledge* Strong scripting skills in Python and Bash for automation and pipelines Fluent in Docker, Kubernetes (especially AKS), containerizing LLM workloads Infrastructure-as-code expertise: Terraform (Azure provider) or ARM templates Experience with Azure DevOps or GitHub Actions for CI/CD of models and services Knowledge of Azure OpenAI, Azure ML, or equivalent cloud LLM endpoints Familiar with setting up monitoring: Azure Monitor, Prometheus/Grafana—track latency, errors, drift, costs Cost-optimization tactics: spot nodes, autoscaling, GPU utilization tracking Basic LLM understanding: inference latency/cost, deployment patterns, model versioning Ability to build lightweight QA checks or integrate with QA pipelines Cloud-agnostic awareness—experience with AWS or GCP backup systems Comfortable establishing production-grade Ops pipelines, automating deployments end-to-end Self-starter mentality: no playbooks required, ability to pick up new tools and drive infrastructure independently
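One of the QA ideas above (semantic-similarity validation of model outputs) can be sketched as follows, assuming the sentence-transformers package; the model name and threshold are illustrative assumptions rather than a prescribed setup.

# Minimal sketch of an automated QA gate: score a model answer against a
# reference by embedding cosine similarity and fail below a threshold.
# Model name and threshold are assumptions.
from sentence_transformers import SentenceTransformer, util

_model = SentenceTransformer("all-MiniLM-L6-v2")

def passes_semantic_check(answer: str, reference: str, threshold: float = 0.75) -> bool:
    """Return True when the answer is semantically close to the reference."""
    emb = _model.encode([answer, reference], convert_to_tensor=True)
    score = float(util.cos_sim(emb[0], emb[1]))
    return score >= threshold

if __name__ == "__main__":
    ok = passes_semantic_check(
        "Invoices are processed within two business days.",
        "We process invoices in under 48 hours.",
    )
    print("QA gate:", "pass" if ok else "fail")

A check like this would sit in the CI/CD pipeline before a model or prompt change is merged, alongside hallucination and drift checks.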
Posted 3 weeks ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Job Title: Model Risk Validation – Quantitative Modelling (Market/Credit/Derivatives) Location: EMEA Department: Enterprise Risk Management (ERM) Function: Model Risk Management (MRM) Role Overview The Enterprise Risk Management (ERM) function is responsible for supporting the EMEA Chief Risk Officer in maintaining a robust and effective risk governance framework. A key component of this framework is the Model Risk Management (MRM) team, which provides oversight, governance, and independent validation of quantitative models used across the region. MRM plays a critical role in ensuring the integrity and reliability of risk models that inform decision-making across asset classes and business lines, including environmental and social risk management. The team collaborates closely with Risk Analytics and Front Office quants to validate models used for risk measurement, pricing, and capital assessment. Key Responsibilities Lead and perform independent model validation (initial and periodic) for a wide range of quantitative models, including: Derivatives pricing models Market and counterparty credit risk models Capital models (economic and regulatory) AI/ML models and corporate credit risk models (IRB, PD/LGD/EAD) Develop and prototype challenger models to benchmark and stress-test existing models. Conduct rigorous quantitative reviews of model frameworks, underlying assumptions, input data, and performance results. Perform technical testing of numerical implementations and conduct thorough documentation reviews . Ensure regulatory compliance and adherence to internal model governance policies. Produce detailed validation reports highlighting findings and recommendations for model improvements . Monitor the remediation of validation findings , ensuring timely and effective resolution. Experience & Technical Competencies Essential: Prior experience in quantitative modelling , either in development or validation, with exposure to one or more of the following: Market risk models (e.g., VaR, stressed VaR, IRC) Counterparty credit risk (e.g., CVA, PFE, SA-CCR) Derivatives pricing and valuation Strong knowledge of mathematics, statistics, and probability theory as applied in finance. Proficient in Python or R ; hands-on experience with simulation, numerical methods, and statistical inference techniques. Solid understanding of financial instruments , valuation principles, and risk metrics. Preferred: Exposure to capital modelling (Basel regulations, ICAAP), credit risk models, or AI/ML model frameworks. Experience with C++, C#, or other compiled languages . Awareness of emerging trends in quantitative finance, regulatory changes, and advancements in risk modelling. Educational Qualifications A postgraduate degree (Master’s or Ph.D.) in a quantitative discipline such as: Mathematics Statistics Mathematical Finance Econometrics Physics or related fields.
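To make the challenger-model idea above concrete, here is a minimal sketch that benchmarks a Monte Carlo estimate of a European call against the Black-Scholes closed form; all parameters are illustrative and the example is not tied to any specific model in scope.

# Hedged sketch of challenger-style benchmarking for a pricing model:
# Monte Carlo European call vs. the Black-Scholes closed form.
import numpy as np
from scipy.stats import norm

def bs_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call."""
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

def mc_call(S, K, T, r, sigma, n_paths=200_000, seed=1):
    """Risk-neutral Monte Carlo estimate of the same payoff."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n_paths)
    s_t = S * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z)
    payoff = np.maximum(s_t - K, 0.0)
    return np.exp(-r * T) * payoff.mean()

if __name__ == "__main__":
    analytic = bs_call(100, 105, 1.0, 0.03, 0.2)
    simulated = mc_call(100, 105, 1.0, 0.03, 0.2)
    print(f"closed form {analytic:.4f} vs Monte Carlo {simulated:.4f}")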
Posted 3 weeks ago
6.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Licious is a fast-paced, innovative D2C brand revolutionizing the meat and seafood industry in India. We leverage cutting-edge technology, data science, and customer insights to deliver unmatched quality, convenience, and personalization. Join us to solve complex problems at scale and drive data-driven decision-making! Role Overview: We are seeking a Senior Data Scientist with 6+ years of experience to build and deploy advanced ML models (LLMs, Recommendation Systems, Demand Forecasting) and generate actionable insights. You will collaborate with cross-functional teams (Product, Supply Chain, Marketing) to optimize customer experience, demand prediction, and business growth. Key Responsibilities: 1. Machine Learning & AI Solutions: Develop and deploy Large Language Models (LLMs) for customer support automation, personalized content generation, and sentiment analysis. Enhance Recommendation Systems (collaborative filtering, NLP-based, reinforcement learning) to drive engagement and conversions. Build scalable Demand Forecasting models (time series, causal inference) to optimize inventory and supply chain. 2. Data-Driven Insights: Analyze customer behavior, transactional data, and market trends to uncover growth opportunities. Create dashboards and reports (using Tableau/Power BI) to communicate insights to stakeholders. 3. Cross-Functional Collaboration: Partner with Engineering to productionize models (MLOps, APIs, A/B testing). Work with Marketing to design hyper-personalized campaigns using CLV, churn prediction, and segmentation. 4. Innovation & Scalability: Stay updated with advancements in GenAI, causal ML, and optimization techniques. Improve model performance through feature engineering, ensemble methods, and experimentation. Qualifications: Education: BTech/MTech/MS/Ph.D. in Computer Science, Statistics, or related fields. Experience: 6+ years in Data Science, with hands-on expertise in: LLMs (GPT, BERT, fine-tuning, prompt engineering). Recommendation Systems (matrix factorization, neural CF, graph-based). Demand Forecasting (ARIMA, Prophet, LSTM, Bayesian methods). Python/R , SQL, PySpark, and ML frameworks (TensorFlow, PyTorch, scikit-learn). Cloud platforms (AWS/GCP) and MLOps tools (MLflow, Kubeflow).
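As a hedged example of the demand-forecasting work described above, the sketch below fits a Prophet model to a synthetic daily demand series and projects a two-week horizon; the series, seasonality settings, and horizon are assumptions.

# Illustrative demand-forecasting sketch: fit Prophet on a synthetic daily
# series and project 14 days ahead. The data is a placeholder for real orders.
import numpy as np
import pandas as pd
from prophet import Prophet

# Synthetic daily demand with weekly seasonality (placeholder for real data).
dates = pd.date_range("2024-01-01", periods=365, freq="D")
noise = np.random.default_rng(3).normal(0, 10, len(dates))
demand = 200 + 30 * np.sin(2 * np.pi * dates.dayofweek / 7) + noise
history = pd.DataFrame({"ds": dates, "y": demand})

model = Prophet(weekly_seasonality=True, yearly_seasonality=True)
model.fit(history)

future = model.make_future_dataframe(periods=14)  # 14-day horizon
forecast = model.predict(future)
print(forecast[["ds", "yhat", "yhat_lower", "yhat_upper"]].tail())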
Posted 3 weeks ago
2.0 years
0 Lacs
Banera, Rajasthan, India
On-site
Job Title: Senior AI/ML Engineer Location: Baner, Pune Company: Muks Robotics AI Pvt. Ltd. Experience: 2- 3+ years Employment Type: Full-time Position - 2 About Us At Muks Robotics AI Pvt. Ltd. , we are at the forefront of AI-driven robotics innovation. Our mission is to integrate Artificial Intelligence and Machine Learning into robotics solutions that transform industries and improve lives. We’re looking for a talented and passionate Senior AI/ML Engineer to join our growing team and help build intelligent systems that shape the future. Roles & Responsibilities: Design, develop, and implement cutting-edge machine learning and deep learning models for robotics and automation. Collaborate with cross-functional teams (Robotics, Embedded, Software) to integrate AI models into real-time robotic systems. Lead end-to-end AI/ML project lifecycles – from data collection, preprocessing, model training, evaluation, and deployment. Optimize models for performance and real-time inference on edge devices. Research and implement novel algorithms and techniques from recent AI/ML publications. Mentor junior engineers and contribute to team knowledge-sharing sessions. Monitor and maintain deployed models, ensuring robustness and accuracy over time. Required Skills: Proficiency in Python and ML frameworks (TensorFlow, PyTorch, Scikit-learn). Strong understanding of machine learning, deep learning (CNNs, RNNs, Transformers), and computer vision. Hands-on experience with AI model deployment (TensorRT, ONNX, Docker, etc.). Experience with data pipelines, data augmentation, and annotation tools. Good knowledge of algorithms, data structures, and system design. Familiarity with ROS (Robot Operating System), edge computing, or real-time inference is a strong plus. Qualifications: Bachelor’s or Master’s degree in Computer Science, AI/ML, Data Science, Robotics, or related fields. Minimum 2 years of relevant industry experience in AI/ML with successful project deliveries. Publications in top AI/ML conferences (optional but desirable). Why Join Us? Work on cutting-edge robotics AI applications. Collaborative and research-oriented culture. Fast-paced, innovation-driven startup environment. Opportunities to lead and grow with the company.
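To illustrate the edge-deployment step mentioned above, here is a hedged sketch that exports a toy PyTorch model to ONNX and sanity-checks it with ONNX Runtime before any TensorRT conversion; the network, tensor shapes, and opset are placeholders.

# Hedged sketch of edge deployment: export a toy PyTorch model to ONNX and
# verify parity with ONNX Runtime. The network is a placeholder.
import numpy as np
import onnxruntime as ort
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 4),
)
model.eval()

dummy = torch.randn(1, 3, 224, 224)
torch.onnx.export(model, dummy, "model.onnx",
                  input_names=["input"], output_names=["logits"], opset_version=17)

# Check the exported graph matches the PyTorch output within tolerance.
session = ort.InferenceSession("model.onnx")
onnx_out = session.run(None, {"input": dummy.numpy()})[0]
with torch.no_grad():
    torch_out = model(dummy).numpy()
print("max abs diff:", np.abs(onnx_out - torch_out).max())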
Posted 3 weeks ago
0.0 - 8.0 years
0 Lacs
Bengaluru, Karnataka
On-site
Job Profile – Lead Data Engineer
Does working with data on a day-to-day basis excite you? Are you interested in building robust data architecture to identify data patterns and optimise data consumption for our customers, who will forecast and predict what actions to undertake based on data? If this is what excites you, then you'll love working in our intelligent automation team. Schneider AI Hub is leading the AI transformation of Schneider Electric by building AI-powered solutions. We are looking for a savvy Data Engineer to join our growing team of AI and machine learning experts. You will be responsible for expanding and optimizing our data and data pipeline architecture, as well as optimizing data flow and collection for cross-functional teams. The ideal candidate is an experienced data pipeline builder and data wrangler who enjoys optimizing data systems and building them from the ground up. The Data Engineer will support our software engineers, data analysts and data scientists on data initiatives and will ensure optimal data delivery architecture is consistent throughout ongoing projects. They must be self-directed and comfortable supporting the data needs of multiple teams, systems and products.
Responsibilities
Create and maintain optimal data pipeline architecture; assemble large, complex data sets that meet functional/non-functional requirements. Design the right schema to support the functional requirements and consumption pattern. Design and build production data pipelines from ingestion to consumption. Create necessary preprocessing and postprocessing for various forms of data for training/retraining and inference ingestion as required. Create data visualization and business intelligence tools for stakeholders and data scientists for necessary business/solution insights. Identify, design, and implement internal process improvements: automating manual data processes, optimizing data delivery, etc. Ensure our data is separated and secure across national boundaries through multiple data centers.
Requirements and Skills
You should have a bachelor's or master's degree in Computer Science, Information Technology, or other quantitative fields. You should have at least 8 years working as a data engineer supporting large data transformation initiatives related to machine learning, with experience in building and optimizing pipelines and data sets. Strong analytic skills related to working with unstructured datasets. Experience with Azure cloud services: ADF, ADLS, HDInsight, Databricks, App Insights, etc. Experience in handling ETLs using Spark. Experience with object-oriented/object-function scripting languages: Python, PySpark, etc. Experience with big data tools: Hadoop, Spark, Kafka, etc. Experience with data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc. You should be a good team player, committed to the success of the team and the overall project.
About Us
Schneider Electric™ creates connected technologies that reshape industries, transform cities and enrich lives. Our 144,000 employees thrive in more than 100 countries. From the simplest of switches to complex operational systems, our technology, software and services improve the way our customers manage and automate their operations. Great people make Schneider Electric a great company. We seek out and reward people for putting the customer first, being disruptive to the status quo, embracing different perspectives, continuously learning, and acting like owners.
We want our employees to reflect the diversity of the communities in which we operate. We welcome people as they are, creating an inclusive culture where all forms of diversity are seen as a real value for the company. We’re looking for people with a passion for success — on the job and beyond. Primary Location : IN-Karnataka-Bangalore Schedule : Full-time Unposting Date : Ongoing
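As a hedged illustration of the ingestion-to-consumption pipelines described in the Lead Data Engineer listing above, the PySpark sketch below reads raw orders, cleans them, and writes a curated daily aggregate; the paths, column names, and aggregation are assumptions made for the example.

# Minimal PySpark sketch of an ingestion-to-consumption pipeline.
# Paths, column names, and the aggregation are illustrative assumptions.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("demand-ingest").getOrCreate()

raw = (
    spark.read.option("header", True).option("inferSchema", True)
    .csv("abfss://landing@datalake.dfs.core.windows.net/orders/")  # assumed ADLS path
)

curated = (
    raw.withColumn("order_ts", F.to_timestamp("order_ts"))
    .dropDuplicates(["order_id"])
    .filter(F.col("quantity") > 0)
    .groupBy(F.to_date("order_ts").alias("order_date"), "sku")
    .agg(F.sum("quantity").alias("units"), F.sum("amount").alias("revenue"))
)

curated.write.mode("overwrite").partitionBy("order_date").parquet(
    "abfss://curated@datalake.dfs.core.windows.net/daily_sales/"
)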
Posted 3 weeks ago
4.0 years
0 Lacs
Mumbai Metropolitan Region
On-site
Fynd is India’s largest omnichannel platform and a multi-platform tech company specializing in retail technology and products in AI, ML, big data, image editing, and the learning space. It provides a unified platform for businesses to seamlessly manage online and offline sales, store operations, inventory, and customer engagement. Serving over 2,300 brands, Fynd is at the forefront of retail technology, transforming customer experiences and business processes across various industries. As a Machine Learning Engineer, the purpose of this role is to work with our machine learning research team to develop, implement and deploy machine learning and deep learning models. Our core area of work is in the field of computer vision so you will require some familiarity with convolutional neural networks. Please find our research page : https://research.fynd.com/ Some of our products: https://www.erase.bg/, https://www.upscale.media/ What will you do at Fynd? Work on the deployment of machine learning and deep learning models. Help in accelerating model inference using various compression tools like onnx, Torch script, TensorRT, OpenVINO ....etc. Develop and maintain a data streaming pipeline (both batch and real-time) for data integration and large-scale machine learning. Deliver best practices recommendations and technical presentations around machine learning deployment including real-time modeling. Maintain and further enhance the internal model feature store and optimize the feature engineering script. Full life cycle implementation from requirements analysis, platform selection, technical architecture design, application design and development, testing, and deployment. Responsible for the end-to-end deployment of predictive models including scoping, testing, implementation, maintenance, tracking, and optimization of predictive models. Responsible for complete and accurate documentation of all development around machine learning engineering. Some Specific Requirements 4+ years of experience implementing and deploying machine learning and deep learning frameworks through distributions cluster and application programming in cloud platforms including AWS and GCP. Basic Knowledge of tools/libraries such as TensorFlow, PyTorch, Keras, NumPy, pandas etc. Strong understanding of data structures and algorithms and experience in Python. Some experience with computer vision(classification, segmentation, object detection) pipelines Deep experience across systems integration, information management, data management and architecture, and business analytics. Language: Python, Java, Scala, Unix bash script, REST API process. What do we offer? Growth Growth knows no bounds, as we foster an environment that encourages creativity, embraces challenges, and cultivates a culture of continuous expansion. We are looking at new product lines, international markets and brilliant people to grow even further. We teach, groom and nurture our people to become leaders. You get to grow with a company that is growing exponentially. Flex University: We help you upskill by organising in-house courses on important subjects Learning Wallet: You can also do an external course to upskill and grow, we reimburse it for you. Culture Community and Team building activities Host weekly, quarterly and annual events/parties. 
Wellness Mediclaim policy for you + parents + spouse + kids Experienced therapist for better mental health, improve productivity & work-life balance We work from the office 5 days a week to promote collaboration and teamwork. Join us to make an impact in an engaging, in-person environment!
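For the inference-acceleration work mentioned in the Fynd listing above, here is a minimal sketch that traces a PyTorch model with TorchScript and compares average latency against eager execution; the toy model, batch size, and run count are placeholders.

# Hedged sketch of inference acceleration: TorchScript tracing plus a simple
# latency comparison against eager mode. The model is a placeholder.
import time

import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 10)).eval()
example = torch.randn(32, 512)
traced = torch.jit.trace(model, example)  # compile the graph once

def avg_latency_ms(fn, runs: int = 200) -> float:
    with torch.no_grad():
        fn(example)  # warm-up pass, excluded from timing
        start = time.perf_counter()
        for _ in range(runs):
            fn(example)
    return (time.perf_counter() - start) / runs * 1e3

print(f"eager : {avg_latency_ms(model):.3f} ms/batch")
print(f"traced: {avg_latency_ms(traced):.3f} ms/batch")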
Posted 3 weeks ago
2.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Skills: Deep Learning, TensorFlow/PyTorch, Python, Machine Learning, Robotics Software Architecture, ROS. Job Title: Senior AI/ML Engineer Location: Baner, Pune Company: Muks Robotics AI Pvt. Ltd. Experience: 2-3+ years Employment Type: Full-time Position - 2 About Us At Muks Robotics AI Pvt. Ltd., we are at the forefront of AI-driven robotics innovation. Our mission is to integrate Artificial Intelligence and Machine Learning into robotics solutions that transform industries and improve lives. We're looking for a talented and passionate Senior AI/ML Engineer to join our growing team and help build intelligent systems that shape the future. Roles & Responsibilities Design, develop, and implement cutting-edge machine learning and deep learning models for robotics and automation. Collaborate with cross-functional teams (Robotics, Embedded, Software) to integrate AI models into real-time robotic systems. Lead end-to-end AI/ML project lifecycles – from data collection, preprocessing, model training, evaluation, and deployment. Optimize models for performance and real-time inference on edge devices. Research and implement novel algorithms and techniques from recent AI/ML publications. Mentor junior engineers and contribute to team knowledge-sharing sessions. Monitor and maintain deployed models, ensuring robustness and accuracy over time. Required Skills Proficiency in Python and ML frameworks (TensorFlow, PyTorch, Scikit-learn). Strong understanding of machine learning, deep learning (CNNs, RNNs, Transformers), and computer vision. Hands-on experience with AI model deployment (TensorRT, ONNX, Docker, etc.). Experience with data pipelines, data augmentation, and annotation tools. Good knowledge of algorithms, data structures, and system design. Familiarity with ROS (Robot Operating System), edge computing, or real-time inference is a strong plus. Qualifications Bachelor's or Master's degree in Computer Science, AI/ML, Data Science, Robotics, or related fields. Minimum 2 years of relevant industry experience in AI/ML with successful project deliveries. Publications in top AI/ML conferences (optional but desirable). Why Join Us? Work on cutting-edge robotics AI applications. Collaborative and research-oriented culture. Fast-paced, innovation-driven startup environment. Opportunities to lead and grow with the company.
Posted 3 weeks ago
0 years
0 Lacs
Bengaluru East, Karnataka, India
On-site
Excellent Python programming and debugging skills. (Refer to Python JD given below)
- Proficiency with SQL, relational databases, & non-relational databases
- Passion for API design and software architecture.
- Strong communication skills and the ability to naturally explain difficult technical topics to everyone from data scientists to engineers to business partners
- Experience with modern neural-network architectures and deep learning libraries (Keras, TensorFlow, PyTorch).
- Experience with unsupervised ML algorithms.
- Experience with time-series models and anomaly detection problems.
- Experience with modern large language models (ChatGPT/BERT) and their applications.
- Expertise with performance optimization.
- Experience or knowledge in public cloud AWS services - S3, Lambda.
- Familiarity with distributed databases, such as Snowflake, Oracle.
- Experience with containerization and orchestration technologies, such as Docker and Kubernetes.
- Manage large machine learning applications and design and implement new frameworks to build scalable and efficient data processing workflows and machine learning pipelines.
- Build the tightly integrated pipeline that optimizes and compiles models and then orchestrates their execution.
- Collaborate with CPU, GPU, and Neural Engine hardware backends to push inference performance and efficiency.
- Work closely with feature teams to facilitate and debug the integration of increasingly sophisticated models, including large language models.
- Automate data processing and extraction.
- Engage with the sales team to find opportunities, understand requirements, and translate those requirements into technical solutions.
- Develop reusable ML models and assets into production.
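To give the time-series anomaly-detection requirement above a concrete shape, the sketch below flags outliers in a synthetic metric stream with an Isolation Forest; the series, features, and contamination rate are illustrative assumptions.

# Small anomaly-detection sketch: flag unusual points in a metric stream
# with an Isolation Forest. Data and parameters are placeholders.
import numpy as np
import pandas as pd
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
ts = pd.Series(rng.normal(100, 5, 500))
ts.iloc[[120, 310, 460]] = [160, 35, 155]  # injected anomalies for the demo

# Simple feature set: the raw value plus a short rolling context window.
features = pd.DataFrame({
    "value": ts,
    "rolling_mean": ts.rolling(12, min_periods=1).mean(),
    "rolling_std": ts.rolling(12, min_periods=1).std().fillna(0.0),
})

detector = IsolationForest(contamination=0.01, random_state=0)
labels = detector.fit_predict(features)  # -1 marks an anomaly
print("anomalous indices:", list(np.where(labels == -1)[0]))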
Posted 3 weeks ago
6.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
We are building the next-generation real-time enforcement platform that protects users, advertisers, and the integrity of Microsoft’s Ads and content ecosystems. This infrastructure processes and evaluates hundreds of billions of signals each day, applying safety and policy decisions with millisecond latency and global reliability. As a Principal Software Engineer , you will define and drive the architecture of the core systems behind this platform—spanning real-time decision services, streaming pipelines, and ML inference integration. You will also help lay the foundation for emerging AI-enabled enforcement flows, including agentic workflows that reason, adapt, and take multi-step actions using large language models and learned policies. This is a hands-on IC leadership role for someone who thrives at the intersection of deep system design, web-scale performance, and long-term platform evolution—and who is both curious about how AI can augment infrastructure, and pragmatic about where and how it should. Microsoft’s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond. Responsibilities Design and evolve large-scale, low-latency distributed systems that evaluate ads, content, and signals in milliseconds across global workloads. Lead architectural efforts across stream processing pipelines, real-time scoring services, policy engines, and ML integration points. Define system-level strategies for scalability, performance optimization, observability, and failover resilience. Partner with ML engineers and applied scientists to integrate models into production with cost-efficiency, modularity, and runtime predictability. Guide technical direction for next-generation capabilities, including the early architecture of agentic/LLM-powered policy orchestration flows. Influence platform-wide standards, review designs across teams, and mentor senior engineers through deep technical leadership. Qualifications Required Qualifications: Bachelor's Degree in Computer Science or related technical field AND 6+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python OR equivalent experience. 6+ years of experience in backend or distributed systems engineering, with a proven record of system-level architectural leadership. Advanced proficiency in C++, C# or equivalent systems languages. Deep experience designing and scaling streaming or real-time systems (e.g., Kafka, Flink, Beam). Solid command of performance profiling, load testing, capacity planning, and operational rigor. Comfort designing systems for high QPS, low latency, and regulatory traceability. Familiarity with ML inference orchestration, model deployment workflows, or online feature pipelines. Other Requirements Ability to meet Microsoft, customer and/or government security screening requirements are required for this role. These requirements include but are not limited to the following specialized security screenings: Microsoft Cloud Background Check: This position will be required to pass the Microsoft Cloud background check upon hire/transfer and every two years thereafter. 
Preferred Qualifications Bachelor's Degree in Computer Science OR related technical field AND 10+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python OR Master's Degree in Computer Science or related technical field AND 8+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python OR equivalent experience. 10+ years of experience in backend or distributed systems engineering, with a proven record of system-level architectural leadership. Understanding of LLM integration, RAG, or agentic task flows, even at an architectural/infra layer. Experience building systems that support human-in-the-loop moderation, policy evolution, or adaptive enforcement logic. Experience building efficient, scalable ML inference platforms. #MicrosoftAI Microsoft is an equal opportunity employer. Consistent with applicable law, all qualified applicants will receive consideration for employment without regard to age, ancestry, citizenship, color, family or medical care leave, gender identity or expression, genetic information, immigration status, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran or military status, race, ethnicity, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable local laws, regulations and ordinances. If you need assistance and/or a reasonable accommodation due to a disability during the application process, read more about requesting accommodations.
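As a hedged, simplified sketch (not Microsoft's actual stack) of a real-time scoring loop like the one this role architects, the snippet below consumes events from Kafka, scores them with a stub, and tracks per-event latency against a millisecond budget; the topic, broker address, and budget are assumptions, and the kafka-python client is assumed to be available.

# Hedged sketch of a real-time scoring loop: consume events, score them with
# a stub, and track per-event latency. Topic, broker, and budget are assumed.
import json
import time

from kafka import KafkaConsumer

LATENCY_BUDGET_MS = 50.0  # assumed per-event budget

def score(event: dict) -> float:
    """Stand-in for the real policy/ML scoring call."""
    return 1.0 if "blocked_term" in event.get("text", "") else 0.0

consumer = KafkaConsumer(
    "ad-events",                          # assumed topic name
    bootstrap_servers="localhost:9092",   # assumed broker address
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
)

for message in consumer:
    start = time.perf_counter()
    verdict = score(message.value)
    elapsed_ms = (time.perf_counter() - start) * 1e3
    if elapsed_ms > LATENCY_BUDGET_MS:
        print(f"latency budget exceeded: {elapsed_ms:.1f} ms")
    # A production service would emit the verdict downstream and record metrics here.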
Posted 3 weeks ago
10.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Summary
Gen AI Technical Product Manager
Overview: The GenAI Technical Product Manager combines business-analyst expertise in use-case exploration with architectural ownership: the role develops and implements GenAI architecture strategies, best practices, and standards to make AI/ML model deployment and monitoring more efficient, and owns the architecture roadmap and strategy for GenAI platforms and tech stacks. This role will focus on the technical development, deployment, and optimization of GenAI solutions, ensuring alignment with business strategies and technological advancements.
Responsibilities:
Develop cutting-edge architectural strategies for GenAI components and platforms, leveraging advanced techniques such as chunking, Retrieval-Augmented Generation (RAG), AI agents, and embeddings. Balance build-versus-buy decisions, ensuring alignment with SaaS models and decision trees, particularly for the PepGenX platform. Emphasize low coupling and cohesive model development. Lead working sessions on architecture alignment, pattern library development, and GenAI tools/data architect alignment; tag new components for reuse (component reuse strategy) and catalogue use-case patterns so that components can be reused to save effort, time, and money. Lead the implementation of LLM operations, focusing on optimizing model performance, scalability, and efficiency. Design and implement LLM agentic processes to create autonomous AI systems capable of complex decision-making and task execution. Work closely with data scientists and AI professionals to identify and pilot innovative use cases that drive digital transformation. Assess the feasibility of these use cases, aligning them with business objectives and ROI and leveraging advanced AI techniques. Gather inputs from various stakeholders to align technical implementations with current and future requirements. Develop processes and products based on these inputs, incorporating state-of-the-art AI methodologies. Define AI architecture and select suitable technologies, with a focus on integrating RAG systems, embedding models, and advanced LLM frameworks. Decide on optimal deployment models, ensuring seamless integration with existing data management and analytics tools. Audit AI tools and practices, focusing on continuous improvement of LLM ops and agentic processes. Collaborate with security and risk leaders to mitigate risks such as data poisoning and model theft, ensuring ethical AI implementation. Stay updated on AI regulations and map them to best practices in AI architecture and pipeline planning. Develop expertise in ML and deep learning workflow architectures, with a focus on chunking strategies, embedding pipelines, and RAG system implementation. Apply advanced software engineering and DevOps principles, utilizing tools like Git, Kubernetes, and CI/CD for efficient LLM ops. Collaborate across teams to ensure AI platforms meet both business and technical requirements. Spearhead the exploration and application of cutting-edge Large Language Models (LLMs) and Generative AI, including multi-modal capabilities and agentic processes. Oversee MLOps, automating ML pipelines from training to deployment with a focus on RAG and embedding optimization. Engage in sophisticated model development from ideation to deployment, leveraging advanced chunking and RAG techniques. Effectively communicate complex analysis results to business partners and executives.
Proactively reduce biases in model predictions, focusing on fair and inclusive AI systems through advanced debiasing techniques in embeddings and LLM training. Design efficient data pipelines to support large language model training and inference, with a focus on optimizing chunking strategies and embedding generation for RAG systems. Proven track record in shipping products and developing state-of-the-art Gen AI product architecture. Experience: 10-15 years of experience with a strong balance of business acumen and technical expertise in AI. 5+ years in building and releasing NLP/AI software, with specific experience in RAG , Agents systems and embedding models. Demonstrated experience in delivering Gen AI products, including Multi-modal LLMs, Foundation models, and agentic AI systems. Deep familiarity with cloud technologies, especially Azure, and experience deploying models for large-scale inference using advanced LLM ops techniques. Proficiency in PyTorch, TensorFlow, Kubernetes, Docker, LlamaIndex, LangChain, LLM, SLM, LAM, and cloud platforms, with a focus on implementing RAG and embedding pipelines. Excellent communication and interpersonal skills, with a strong design capability and ability to articulate complex AI concepts to diverse audiences. Hands-on experience with chunking strategies, RAG implementation, and optimizing embedding models for various AI applications. Qualifications: - Bachelor’s or master’s degree in computer science, Data Science, or a related technical field. - Demonstrated ability to translate complex technical concepts into actionable business strategies. Experience in data-driven decision-making processes. - Strong communication skills, with the ability to collaborate effectively with both technical and non-technical stakeholders. - Proven track record in managing and delivering AI/ML projects, with a focus on GenAI solutions, in large-scale enterprise environments.
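To ground the chunking / embedding / RAG vocabulary used throughout this listing, here is a minimal retrieval sketch assuming the sentence-transformers package; the chunk size, overlap, model name, and example query are illustrative and not a prescribed PepGenX design.

# Illustrative chunking + embedding + retrieval (RAG) sketch. Chunk size,
# overlap, and model name are assumptions made for the example.
import numpy as np
from sentence_transformers import SentenceTransformer

def chunk(text: str, size: int = 400, overlap: int = 50) -> list[str]:
    """Split text into overlapping character windows."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

model = SentenceTransformer("all-MiniLM-L6-v2")

document = (
    "Model theft is mitigated through access controls and monitoring. "
    "Chunking splits long documents before they are embedded for retrieval."
)
chunks = chunk(document)
chunk_vecs = model.encode(chunks, normalize_embeddings=True)

query_vec = model.encode(["How is model theft mitigated?"], normalize_embeddings=True)[0]
scores = chunk_vecs @ query_vec               # cosine similarity (vectors are normalized)
top_k = np.argsort(scores)[::-1][:3]          # best chunks to place in the prompt
print([(int(i), float(scores[i])) for i in top_k])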
Posted 3 weeks ago