
187 ONNX Jobs - Page 6

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

15.0 years

0 Lacs

Greater Hyderabad Area

On-site

Source: LinkedIn

Compiler Lead – Hyderabad/Bangalore

A well-funded, US-based product startup founded by highly respected Silicon Valley veterans, with design centers in Santa Clara, California, and Hyderabad/Bangalore, is hiring highly talented engineers for the following role. We are looking for a highly experienced systems engineer with deep expertise in compilers, machine learning infrastructure, and system-level performance optimization. This role is hands-on and research-driven, ideal for someone who thrives on solving low-level performance challenges and building core infrastructure that powers next-generation AI workloads.

Key Responsibilities:
Compiler Design & Optimization: Develop and enhance compiler toolchains based on LLVM, MLIR, Open64, or Glow. Build and optimize intermediate representations, custom dialects, and code-generation flows for AI accelerators. Implement transformations and optimizations for latency, memory usage, and compute efficiency.
AI System Integration: Work closely with hardware teams to co-design compilers targeting custom silicon. Integrate compiler backends with ML frameworks such as PyTorch, TensorFlow, or ONNX. Build graph-level and kernel-level transformations for AI training and inference pipelines.
Performance Tuning & System Analysis: Conduct low-level profiling and performance tuning across compiler and runtime layers. Identify and eliminate bottlenecks across CPU/GPU/NPU workloads. Develop parallel programming solutions leveraging SIMD, multi-threading, and heterogeneous computing.
Tooling & Infrastructure: Develop tooling for performance analysis, debugging, and test automation. Contribute to internal SDKs and devkits used by AI researchers and systems engineers.

Required Skills & Experience:
Strong compiler development experience with LLVM, MLIR, Glow, or similar toolchains. Proficiency in C/C++, with a solid command of Python for tooling and automation. In-depth understanding of compiler internals, including IR design, lowering, codegen, and scheduling. Deep knowledge of hardware-software co-design, particularly for AI/ML workloads. Experience with runtime systems, memory models, and performance modeling. Solid grasp of parallel and heterogeneous computing paradigms.

Nice to Have:
Experience with custom AI hardware or edge inference platforms. Familiarity with quantization, scheduling for dataflow architectures, or compiler autotuning. Contributions to open-source compiler projects (e.g., LLVM, MLIR, TVM).

Qualifications:
Bachelor's or Master's degree in Computer Science, Electrical Engineering, or a related field. 8–15 years of relevant hands-on experience in compilers, systems programming, or AI infrastructure.

Contact: Uday Mulya Technologies, muday_bhaskar@yahoo.com ("Mining The Knowledge Community")
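For context on the graph-level work such a role touches, here is a minimal, hypothetical sketch (not part of the posting) of walking an ONNX model's graph with the standard onnx Python package; the file name model.onnx is a placeholder.

```python
import onnx

# Load a serialized ONNX model (placeholder path).
model = onnx.load("model.onnx")

# Walk the graph and count operator types, the kind of pass a
# graph-level transformation pipeline typically starts from.
op_counts = {}
for node in model.graph.node:
    op_counts[node.op_type] = op_counts.get(node.op_type, 0) + 1

for op_type, count in sorted(op_counts.items(), key=lambda kv: -kv[1]):
    print(f"{op_type}: {count}")
```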

Posted 3 weeks ago

Apply

3.0 years

1 - 2 Lacs

Hyderābād

On-site

Source: Glassdoor

Company: Qualcomm India Private Limited
Job Area: Engineering Group > Software Engineering

General Summary:
Join the exciting Generative AI team at Qualcomm focused on integrating cutting-edge GenAI models on Qualcomm chipsets. The team uses the extensive heterogeneous computing capabilities of Qualcomm chips to allow inference of GenAI models on-device without a need for connection to the cloud. Our inference engine is designed to help developers run neural network models trained in a variety of frameworks on Snapdragon platforms at blazing speeds while still sipping the smallest amount of power. Utilize this power-efficient hardware and software stack to run large language models (LLMs) and large vision models (LVMs) at near-GPU speeds!

Responsibilities:
In this role, you will spearhead the development and commercialization of the Qualcomm AI Runtime (QAIRT) SDK on Qualcomm SoCs. As an AI inferencing expert, you'll push the limits of performance from large models. Your mastery in deploying large C/C++ software stacks using best practices will be essential. You'll stay on the cutting edge of GenAI advancements, understanding LLMs/transformers and the nuances of edge-based GenAI deployment. Most importantly, your passion for the role of edge in AI's evolution will be your driving force.

Requirements:
Master's/Bachelor's degree in Computer Science or equivalent. 3+ years of relevant work experience in software development. Strong understanding of generative AI models (LLMs, LVMs) and their building blocks, including floating-point and fixed-point representations and quantization concepts. Experience optimizing algorithms for AI hardware accelerators (CPU/GPU/NPU). Strong development skills in C/C++. Excellent analytical and debugging skills. Good communication skills (verbal, presentation, written). Ability to collaborate across a globally diverse team and multiple interests.

Preferred Qualifications:
Strong understanding of SIMD processor architecture and system design. Proficiency in object-oriented software development. Familiarity with Linux and Windows environments. Strong background in kernel development for SIMD architectures. Familiarity with frameworks like llama.cpp, MLX, and MLC is a plus. Good knowledge of PyTorch, TFLite, and ONNX Runtime is preferred. Experience with parallel computing systems and assembly is a plus.

Minimum Qualifications:
Bachelor's degree in Engineering, Information Systems, Computer Science, or a related field.

Applicants: Qualcomm is an equal opportunity employer. If you are an individual with a disability and need an accommodation during the application/hiring process, rest assured that Qualcomm is committed to providing an accessible process. You may e-mail disability-accomodations@qualcomm.com or call Qualcomm's toll-free number found here. Upon request, Qualcomm will provide reasonable accommodations to support individuals with disabilities to be able to participate in the hiring process. Qualcomm is also committed to making our workplace accessible for individuals with disabilities. (Keep in mind that this email address is used to provide reasonable accommodations for individuals with disabilities. We will not respond here to requests for updates on applications or resume inquiries.)
Qualcomm expects its employees to abide by all applicable policies and procedures, including but not limited to security and other requirements regarding protection of Company confidential information and other confidential and/or proprietary information, to the extent those requirements are permissible under applicable law. To all Staffing and Recruiting Agencies : Our Careers Site is only for individuals seeking a job at Qualcomm. Staffing and recruiting agencies and individuals being represented by an agency are not authorized to use this site or to submit profiles, applications or resumes, and any such submissions will be considered unsolicited. Qualcomm does not accept unsolicited resumes or applications from agencies. Please do not forward resumes to our jobs alias, Qualcomm employees or any other company location. Qualcomm is not responsible for any fees related to unsolicited resumes/applications. If you would like more information about this role, please contact Qualcomm Careers.
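As a rough illustration of the quantization concepts the posting lists (not part of the posting), here is a minimal sketch of symmetric per-tensor int8 quantization and dequantization with NumPy; the tensor values are random placeholders.

```python
import numpy as np

# Symmetric per-tensor int8 quantization: map floats in [-max_abs, max_abs]
# onto the signed 8-bit range [-127, 127].
x = np.random.randn(4, 8).astype(np.float32)

scale = np.abs(x).max() / 127.0          # one scale for the whole tensor
q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)

# Dequantize and measure the error introduced by the 8-bit representation.
x_hat = q.astype(np.float32) * scale
print("max abs error:", np.abs(x - x_hat).max())
```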

Posted 3 weeks ago

Apply

5.0 - 7.0 years

0 Lacs

Bengaluru / Bangalore, Karnataka, India

On-site

Source: Foundit

Introduction
IBM Systems helps IT leaders think differently about their infrastructure. IBM servers and storage are no longer inanimate: they can understand, reason, and learn, so our clients can innovate while avoiding IT issues. Our systems power the world's most important industries, and our clients are the architects of the future. Join us to help build our leading-edge technology portfolio designed for cognitive business and optimized for cloud computing.

Your role and responsibilities
A hands-on engineering position responsible for designing, automating, and maintaining robust build systems and deployment pipelines for AI/ML components, with direct development responsibilities in C++ and Python. The role supports both model training infrastructure and high-performance inference systems.
Design and implement robust build automation systems that support large, distributed AI/C++/Python codebases. Develop tools and scripts that enable developers and researchers to rapidly iterate, test, and deploy across diverse environments. Integrate C++ components with Python-based AI workflows, ensuring compatibility, performance, and maintainability. Lead the creation of portable, reproducible development environments, ensuring parity between development and production. Maintain and extend CI/CD pipelines for Linux and z/OS, implementing best practices in automated testing, artifact management, and release validation. Collaborate with cross-functional teams, including AI researchers, system architects, and mainframe engineers, to align infrastructure with strategic goals. Proactively monitor and improve build performance, automation coverage, and system reliability. Contribute to internal documentation, process improvements, and knowledge sharing to scale your impact across teams.

Required education: Bachelor's Degree
Preferred education: Bachelor's Degree

Required technical and professional expertise
5+ years of strong programming skills in C++ and Python, with a deep understanding of both compiled and interpreted language paradigms. Hands-on experience building and maintaining complex automation pipelines (CI/CD) using tools like Jenkins or GitLab CI. In-depth experience with build tools and systems such as CMake, Make, Meson, or Ninja, including custom script development and cross-compilation. Experience with multi-platform development, specifically on Linux and IBM z/OS environments, including understanding of their respective toolchains and constraints. Experience integrating native C++ code with Python, leveraging pybind11, Cython, or similar tools for high-performance interoperability. Proven ability to troubleshoot and resolve build-time, runtime, and integration issues in large-scale, multi-component systems. Comfortable with shell scripting (Bash, Zsh, etc.) and system-level operations. Familiarity with containerization technologies like Docker for development and deployment environments.

Preferred technical and professional experience
Working knowledge of AI/ML frameworks such as PyTorch, TensorFlow, or ONNX, including understanding of how they integrate into production environments. Experience developing or maintaining software on IBM z/OS mainframe systems. Familiarity with z/OS build and packaging workflows. Understanding of system performance tuning, especially in high-throughput compute or I/O environments (e.g., large model training or inference). Knowledge of GPU computing and low-level profiling/debugging tools. Experience managing long-lifecycle enterprise systems and ensuring compatibility across releases and deployments. Background contributing to or maintaining open-source projects in the infrastructure, DevOps, or AI tooling space. Proficiency in distributed systems, microservice architecture, and REST APIs. Experience collaborating with cross-functional teams to integrate MLOps pipelines with CI/CD tools for continuous integration and deployment, ensuring seamless integration of AI/ML models into production workflows. Strong communication skills with the ability to communicate technical concepts effectively to non-technical stakeholders. Demonstrated excellence in interpersonal skills, fostering collaboration across diverse teams. Proven track record of ensuring compliance with industry best practices and standards in AI engineering. Maintained high standards of code quality, performance, and security in AI projects.
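As a loose illustration of the build automation the role describes (not taken from the posting; paths and the build directory name are placeholders), a CI script might wrap a CMake configure-and-build step like this:

```python
import subprocess
from pathlib import Path

def cmake_build(source_dir: str, build_dir: str = "build", config: str = "Release") -> None:
    """Configure and build a CMake project, raising on failure."""
    build_path = Path(source_dir) / build_dir
    build_path.mkdir(parents=True, exist_ok=True)

    # Configure step: generate native build files in build_path.
    subprocess.run(
        ["cmake", "-S", source_dir, "-B", str(build_path),
         f"-DCMAKE_BUILD_TYPE={config}"],
        check=True,
    )
    # Build step: compile with the generated toolchain, in parallel.
    subprocess.run(
        ["cmake", "--build", str(build_path), "--parallel"],
        check=True,
    )

if __name__ == "__main__":
    cmake_build(".")  # placeholder: build the project in the current directory
```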

Posted 4 weeks ago

Apply

2.0 - 5.0 years

4 - 8 Lacs

Bengaluru

Work from Office

Source: Naukri

A hands-on engineering position responsible for designing, automating, and maintaining robust build systems and deployment pipelines for AI/ML components, with direct development responsibilities in C++ and Python.
Design and implement robust build automation systems that support large, distributed AI/C++/Python codebases. Develop tools and scripts that enable developers and researchers to rapidly iterate, test, and deploy across diverse environments. Integrate C++ components with Python-based AI workflows, ensuring compatibility, performance, and maintainability. Lead the creation of portable, reproducible development environments, ensuring parity between development and production. Maintain and extend CI/CD pipelines for Linux and z/OS, implementing best practices in automated testing, artifact management, and release validation. Collaborate with cross-functional teams, including AI researchers, system architects, and mainframe engineers, to align infrastructure with strategic goals. Proactively monitor and improve build performance, automation coverage, and system reliability. Contribute to internal documentation, process improvements, and knowledge sharing to scale your impact across teams.

Required education: Bachelor's Degree
Preferred education: Bachelor's Degree

Required technical and professional expertise
Strong programming skills in C++ and Python, with a deep understanding of both compiled and interpreted language paradigms. Hands-on experience building and maintaining complex automation pipelines (CI/CD) using tools like Jenkins or GitLab CI. In-depth experience with build tools and systems such as CMake, Make, Meson, or Ninja, including custom script development and cross-compilation. Experience with multi-platform development, specifically on Linux and IBM z/OS environments, including understanding of their respective toolchains and constraints. Experience integrating native C++ code with Python, leveraging pybind11, Cython, or similar tools for high-performance interoperability. Proven ability to troubleshoot and resolve build-time, runtime, and integration issues in large-scale, multi-component systems. Comfortable with shell scripting (Bash, Zsh, etc.) and system-level operations. Familiarity with containerization technologies like Docker for development and deployment environments.

Preferred technical and professional experience
Working knowledge of AI/ML frameworks such as PyTorch, TensorFlow, or ONNX, including understanding of how they integrate into production environments. Experience developing or maintaining software on IBM z/OS mainframe systems. Familiarity with z/OS build and packaging workflows. Understanding of system performance tuning, especially in high-throughput compute or I/O environments (e.g., large model training or inference). Knowledge of GPU computing and low-level profiling/debugging tools. Experience managing long-lifecycle enterprise systems and ensuring compatibility across releases and deployments. Background contributing to or maintaining open-source projects in the infrastructure, DevOps, or AI tooling space. Proficiency in distributed systems, microservice architecture, and REST APIs. Experience collaborating with cross-functional teams to integrate MLOps pipelines with CI/CD tools for continuous integration and deployment, ensuring seamless integration of AI/ML models into production workflows. Strong communication skills with the ability to communicate technical concepts effectively to non-technical stakeholders. Demonstrated excellence in interpersonal skills, fostering collaboration across diverse teams. Proven track record of ensuring compliance with industry best practices and standards in AI engineering. Maintained high standards of code quality, performance, and security in AI projects.
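To make the framework-integration bullet concrete, here is a small, hypothetical sketch (not from the posting) of exporting a PyTorch module to ONNX so it can be consumed by a separate C++ or Python inference stack; the model, shapes, and file name are placeholders.

```python
import torch
import torch.nn as nn

# Placeholder model standing in for a real production network.
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))
model.eval()

dummy_input = torch.randn(1, 16)

# Export to ONNX so the model can be loaded by ONNX Runtime or a C++ stack.
torch.onnx.export(
    model,
    dummy_input,
    "model.onnx",
    input_names=["input"],
    output_names=["output"],
    dynamic_axes={"input": {0: "batch"}, "output": {0: "batch"}},
    opset_version=17,
)
print("exported model.onnx")
```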

Posted 4 weeks ago

Apply

4.0 years

0 Lacs

Bengaluru, Karnataka, India

Remote

Source: LinkedIn

Company Description
O-Health is a digital healthcare company dedicated to AI-driven digital solutions. Our platform connects patients in remote areas to doctors and utilizes NLP and AI for diagnostics.

Role Description
This is a full-time on-site role for an NLP + ML Engineer at O-Health, located in Bengaluru. The NLP + ML Engineer will be responsible for pattern recognition in text, working with neural networks, implementing algorithms, and analyzing statistics on a daily basis in a healthtech ecosystem.

Qualifications
Experience in neural networks, data science, and pattern recognition. Strong background in computer science and statistics. Proficiency in machine learning frameworks and tools. Excellent problem-solving and analytical skills. Ability to work collaboratively in a team environment. Master's/Bachelor's in Computer Science, Engineering, Mathematics, or a related field with at least 4 years of experience. Experience in development of multilingual ASR systems.

Responsibilities:
Design and develop robust backend systems to handle real-time patient data and ML outputs. Develop and integrate machine learning models with APIs to the main O-Health application. Optimize model serving pipelines (e.g., using TorchServe, FastAPI, or ONNX). Manage data pipelines for de-identified OPD datasets used in model training and inference. Implement data encryption, anonymization, and consent-based data access. Develop multilingual voice and text processing. Support versioning and A/B testing of health algorithms.

Required Skills:
Backend Engineering: Strong in Python with frameworks like FastAPI, with experience in DBMS. Experience with RESTful APIs, WebSockets, and asynchronous data flows. Familiar with PostgreSQL databases. Working knowledge of Docker, Git, and CI/CD pipelines.
Machine Learning Ops: Hands-on with PyTorch, Scikit-learn, or TensorFlow for inference integration. Comfortable with model optimization, quantization, and edge deployment formats (e.g., ONNX, TFLite). Familiarity with large language models (LLMs) and multilingual NLP. Knowledge of data preprocessing, tokenization, and feature engineering for clinical/NLP tasks.
Other Required Skills: Understanding of HIPAA/GDPR compliance. Experience working on healthcare, social impact, or AI-for-good projects.

What You'll Impact:
You'll play a pivotal role in connecting machine learning research with field-ready healthcare tools. Your work will help scale diagnosis support systems to thousands of underserved patients and power multilingual health consultations in real time.
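As a hedged sketch of the model-serving pipeline mentioned above (not part of the posting; the model path, input name, and endpoint are placeholders), a FastAPI service can wrap an ONNX Runtime session like this:

```python
import numpy as np
import onnxruntime as ort
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

# Load the exported model once at startup (placeholder path).
session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
INPUT_NAME = session.get_inputs()[0].name

class Features(BaseModel):
    values: list[float]

@app.post("/predict")
def predict(features: Features):
    # Shape the request payload into the (1, N) batch the model expects.
    x = np.asarray([features.values], dtype=np.float32)
    outputs = session.run(None, {INPUT_NAME: x})
    return {"prediction": outputs[0].tolist()}
```

Assuming the file is saved as main.py, it could be served with `uvicorn main:app`.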

Posted 4 weeks ago

Apply

0 years

0 Lacs

New Delhi, Delhi, India

On-site

Source: LinkedIn

About Us
At WaysAhead, we blend the latest AI techniques with deep industry and analytics expertise to unlock business value from data. Our mission? To transform complex data into actionable insights and help clients make smarter, faster decisions. If you're passionate about innovation and love solving real-world problems through data, let's talk.

Role Overview
We're looking for a Data Scientist to join our growing team in New Delhi. You'll work on cutting-edge problems at the intersection of retail intelligence and AI: building LLMs, vision-based systems, and scalable ML models. Expect to collaborate closely with product teams to turn ideas into impactful AI-powered features.

What You'll Do
Build ML models for NLP, vision, recommendation, and classification tasks. Fine-tune LLMs and implement RAG-based systems. Develop REST APIs and deploy models via Docker, FastAPI, or Flask. Write modular, clean Python code for ML pipelines. Collaborate with cross-functional teams to deliver AI features. Stay updated with the latest in AI, LLMs, and deployment best practices.

Must-Have Skills
Python (advanced), scikit-learn, LLMs/Transformers. REST API development; model deployment (Docker, FastAPI, Flask). RAG pipelines, Hugging Face/OpenAI APIs. MSSQL, algorithm development.

Good-to-Have Skills
Git, CI/CD, LangChain, prompt engineering, Streamlit.

Nice-to-Have Skills
AWS/Azure pipelines, OpenCV, YOLO, ONNX, PyTorch.

Qualifications
Master's in Data Science, Statistics, Computer Science, or a related field. Strong analytical mindset, teamwork, and communication skills. Experience with data visualization tools (Tableau, Power BI) is a plus.
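To illustrate the retrieval step behind the RAG pipelines listed above (a minimal sketch, not from the posting; the passages and query are invented), candidate passages can be ranked against a query by cosine similarity over TF-IDF vectors; a production system would use learned embeddings and a vector database.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

passages = [
    "Return policy: items can be returned within 30 days of delivery.",
    "Shipping is free for orders above 999 rupees.",
    "Gift cards are non-refundable and expire after one year.",
]
query = "Can I return a product after two weeks?"

# Vectorize passages and the query in the same TF-IDF space.
vectorizer = TfidfVectorizer()
passage_vecs = vectorizer.fit_transform(passages)
query_vec = vectorizer.transform([query])

# Rank passages by similarity; the top hit would be fed to the LLM as context.
scores = cosine_similarity(query_vec, passage_vecs).ravel()
best = int(np.argmax(scores))
print(f"best passage ({scores[best]:.2f}): {passages[best]}")
```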

Posted 4 weeks ago

Apply

3.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Source: LinkedIn

Xylem is a Fortune 500 global water solutions company dedicated to advancing sustainable impact and empowering the people who make water work every day. As a leading water technology company with 23,000 employees operating in over 150 countries, Xylem is at the forefront of addressing the world's most critical water challenges. We invite passionate individuals to join our team, dedicated to exceeding customer expectations through innovative and sustainable solutions.

Key Responsibilities
Assist in the design and development of software applications using C++ and AI. Support the integration of AI models into computer vision systems. Work with senior engineers on 2D/3D image processing, object detection, and pattern recognition tasks. Debug and optimize code for performance and scalability. Write clear, maintainable code and contribute to documentation. Participate in code reviews and team meetings.

Qualifications
Minimum 3-5 years of experience with a Bachelor's degree in Computer Science, Engineering, a related field, or a hydrographic area (or equivalent experience). Experience developing software at a professional level. Experience working collaboratively within a team of engineers to meet aggressive goals and high quality standards. Proficiency in C++. Basic understanding of AI/ML concepts, especially in computer vision (e.g., OpenCV, TensorFlow, PyTorch, ONNX, PCL, CUDA). Familiarity with image processing libraries and techniques. Good problem-solving and communication skills. Passion for learning and innovation.

Preferred Skills
Good experience in C++ for image processing. Experience with OpenCV or similar libraries. Exposure to deep learning frameworks (e.g., TensorFlow, PyTorch), CUDA, ONNX, PCL. Knowledge of Git or other version control tools.

Join the global Xylem team to be a part of innovative technology solutions transforming water usage, conservation, and re-use. Our products impact public utilities, industrial sectors, residential areas, and commercial buildings, with a commitment to providing smart metering, network technologies, and advanced analytics for water, electric, and gas utilities. Partner with us in creating a world where water challenges are met with ingenuity and dedication, and where we recognize the power of inclusion and belonging in driving innovation and allowing us to compete more effectively around the world.
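As a sketch of the kind of AI-model-into-vision-system integration the posting describes (illustrative only; the model and image file names are placeholders), OpenCV's dnn module can run an exported ONNX classifier directly:

```python
import cv2
import numpy as np

# Load an ONNX classification model into OpenCV's DNN module (placeholder path).
net = cv2.dnn.readNetFromONNX("classifier.onnx")

# Read an image and convert it to the NCHW blob the network expects.
image = cv2.imread("frame.jpg")
blob = cv2.dnn.blobFromImage(
    image, scalefactor=1.0 / 255.0, size=(224, 224), swapRB=True, crop=False
)

net.setInput(blob)
scores = net.forward()

# Report the top-scoring class index.
print("predicted class:", int(np.argmax(scores)))
```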

Posted 4 weeks ago

Apply

4.0 years

0 Lacs

Gurugram, Haryana, India

Remote

Source: LinkedIn

Company Name: Search.com LLC
Website: www.search.com
Headquarters: California, USA
Job Location: Remote from anywhere

About the Company:
Search.com is an AI-powered search and monetisation platform engineered for speed, intelligence, and reward. Search.com delivers accurate, context-aware results tailored to user intent, whether it's coding help, current events, creative writing, or shopping. Built on a scalable, modular architecture, the platform combines intelligent query routing, real-time ad bidding, and user-centric personalization to create a smarter search experience. Monetization is woven throughout via high-performing ad placements, user subscriptions, publisher partnerships, and cashback rewards. Users can search freely, earn cashback through advertiser links, or unlock unlimited access and exclusive features by subscribing. Publishers benefit from embeddable AI search widgets with integrated ad revenue sharing, while advertisers tap into performance-driven placements fueled by live intent signals and AI-optimized delivery. From cross-platform apps to multilingual search, Search.com is rapidly evolving into a full-featured ecosystem, built not only to answer questions but to elevate how we search, learn, shop, and interact online.

Job Description:
Search.com is seeking a highly skilled AI Systems Engineer to join our growing engineering team. In this role, you'll be responsible for architecting, building, and optimizing the infrastructure and backend systems that power our AI models and agent products. You will work closely with researchers, ML engineers, and product teams to deploy cutting-edge models at scale and ensure reliable, fast, and secure user experiences. This is a critical role at the intersection of AI innovation and high-performance systems engineering.

Key Responsibilities:
Design and implement robust, scalable systems to serve AI/ML models in production (LLMs, vector databases, RAG pipelines, etc.). Build and maintain distributed, low-latency inference services across multiple cloud environments or edge deployments. Build and optimize APIs for internal and external AI inference. Collaborate with research and data teams to productionize models, ensuring optimal performance and cost-efficiency. Develop infrastructure for monitoring, logging, model versioning, experimentation, and rollback. Optimize GPU/TPU utilization and manage compute orchestration (e.g., Kubernetes, Ray, Slurm). Ensure system security, fault tolerance, and compliance with data governance standards. Investigate and resolve system-level issues, conducting root-cause analysis and performance tuning.

Required Skills:
Bachelor's or Master's in Computer Science, Engineering, or a related technical field. 4+ years of experience in systems engineering, backend development, or MLOps. Strong proficiency in Python, with experience in systems programming (Go, Rust, or C++ is a plus). Deep understanding of distributed systems, API development, and containerization (Docker, Kubernetes). Experience with model deployment tools like TensorFlow Serving, Triton Inference Server, or ONNX Runtime. Familiarity with AI/ML workflows: model training, fine-tuning, deployment, and observability. Experience working with cloud providers (AWS, GCP, Azure) and tools like Terraform or Helm. Strong debugging, performance optimization, and troubleshooting skills. Comfortable working in fast-paced, remote-first environments with a high degree of autonomy. Excellent communication and organizational skills. Availability for 24/7 on-call rotation for emergency response and disaster recovery.

Preferred Qualifications:
Experience deploying LLMs or agent systems in production (e.g., OpenAI, multi-head attention, grouped-query attention, Hugging Face Transformers, LangChain, RAG pipelines). Knowledge of streaming data systems (Kafka, Apache Flink, etc.). Familiarity with search infrastructure (Elasticsearch, Pinecone, Weaviate, FAISS). Experience in security, privacy, and compliance for AI/ML applications.

What we offer:
Work with a mission-driven team shaping the future of AI search. 100% remote flexibility with a globally distributed team. Competitive compensation. Access to powerful computing resources and modern dev infrastructure. A fast-paced, high-impact environment where your ideas matter.
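To illustrate one small piece of the serving stack named above, here is a hedged sketch (not from the posting; the model path is a placeholder) of creating an ONNX Runtime session that prefers a GPU execution provider and falls back to CPU when CUDA is unavailable:

```python
import numpy as np
import onnxruntime as ort

# Prefer the CUDA execution provider when the onnxruntime-gpu build is
# installed and a GPU is visible; otherwise fall back to CPU.
available = ort.get_available_providers()
providers = (["CUDAExecutionProvider", "CPUExecutionProvider"]
             if "CUDAExecutionProvider" in available
             else ["CPUExecutionProvider"])

session = ort.InferenceSession("model.onnx", providers=providers)
print("running on:", session.get_providers())

# Single dummy inference to confirm the session is serviceable.
input_meta = session.get_inputs()[0]
shape = [d if isinstance(d, int) else 1 for d in input_meta.shape]
dummy = np.zeros(shape, dtype=np.float32)
outputs = session.run(None, {input_meta.name: dummy})
print("output shape:", outputs[0].shape)
```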

Posted 4 weeks ago

Apply

3.0 - 8.0 years

5 - 10 Lacs

Noida

Work from Office

Source: Naukri

About The Role
We're building an agentic AI platform that turns one line of text and a video feed into end-to-end, real-time computer-vision solutions: think semantic video search, object/action recognition, and task-oriented visual agents deployable with a single click. As a GenAI ML Engineer, you'll architect the core vision and multimodal-reasoning stack and pave the road from prototype to production.

Roles And Responsibilities
Semantic video search: Ship a pipeline that lets users type "show every forklift near aisle 5 in the last 30 minutes" and get keyed-off clips back. Wire embeddings to a hybrid FAISS/HNSW index; surface results through a simple REST & React playground.
Create agentic pipelines: Chain vision-language models and zero/few-shot vision models with LLM planners (Gemini, GPT-4o, AutoGen, etc.) so a single prompt becomes a multi-step perception workflow. Profile and accelerate inference (TensorRT, ONNX, quantization, batching) to meet latency/throughput targets on GPU and CPU fleets.
Rapid prototyping loops: Run weekly paper-to-prototype spikes: reproduce a fresh arXiv idea, benchmark, and make the go/no-go call. Hand successful Python scripts and checkpoints to MLOps for productionization, with no plumbing marathons.
Data & Evaluation: Spin up scalable pipelines for video ingestion, labeling (active learning, weak supervision), experiment tracking, and continuous evaluation.
Collaborate & Lead: Partner with product and MLOps engineers; set research direction, mentor future hires, and establish best practices.

Must-have Skill Set
1-3 years of deep-learning research experience (internships and grad work count). Fluency in Python and PyTorch; comfortable hacking large vision/LLM repos. Proof that you ship ideas: a first-author paper, OSS repo, Kaggle medal, or faithful reproduction of a cutting-edge model. Hands-on experience with LLM prompting/fine-tuning and at least one agent framework. Able to turn fuzzy product asks into measurable experiments and explain results clearly.

Bonus Cred
Large-scale video retrieval or temporal grounding experience. Prior work building agentic-AI pipelines that combine perception models with LLM reasoning. Open-source contributions to GenAI/vision libs (OpenCLIP, Vid2Seq, ViperGPT, etc.).

What can you expect?
The ability to shape the future of manufacturing by leveraging best-in-class AI and software; we are a unique organization with a niche skill set that you will also develop while working with us. World-class work culture, coaching, and development. Mentoring from highly experienced leadership from world-class companies (refer to the Ripik.AI website for details). International exposure.

Work Location: NOIDA (Work from Office)
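To illustrate the embedding-index step mentioned above (a minimal sketch, not from the posting; the vectors are random placeholders standing in for clip embeddings), FAISS can serve the nearest-neighbour lookups behind semantic video search:

```python
import faiss
import numpy as np

d = 128                      # embedding dimension (placeholder)
rng = np.random.default_rng(0)

# Placeholder clip embeddings; a real pipeline would use a video/text encoder.
clip_embeddings = rng.standard_normal((10_000, d)).astype(np.float32)
faiss.normalize_L2(clip_embeddings)

# Inner product on L2-normalized vectors is equivalent to cosine similarity.
index = faiss.IndexFlatIP(d)
index.add(clip_embeddings)

# Query with a (normalized) text-query embedding and fetch the top 5 clips.
query = rng.standard_normal((1, d)).astype(np.float32)
faiss.normalize_L2(query)
scores, ids = index.search(query, 5)
print("top clip ids:", ids[0], "scores:", scores[0])
```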

Posted 1 month ago

Apply

0.0 - 10.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

Source: Indeed

Job Information
Company: Yubi
Date Opened: 05/28/2025
Job Type: Full time
Work Experience: 6-10 years
Industry: Technology
City: Chennai
State/Province: Tamil Nadu
Country: India
Zip/Postal Code: 600001

About Us
Yubi stands for ubiquitous. But Yubi will also stand for transparency, collaboration, and the power of possibility. From being a disruptor in India's debt market to marching towards global corporate markets, from one product to one holistic product suite with seven products, Yubi is the place to unleash potential. Freedom, not fear. Avenues, not roadblocks. Opportunity, not obstacles.

Job Description
About Yubi
Yubi, formerly known as CredAvenue, is re-defining global debt markets by freeing the flow of finance between borrowers, lenders, and investors. We are the world's possibility platform for the discovery, investment, fulfillment, and collection of any debt solution. At Yubi, opportunities are plenty and we equip you with tools to seize them. In March 2022, we became India's fastest fintech and most impactful startup to join the unicorn club with a Series B fundraising round of $137 million. In 2020, we began our journey with a vision of transforming and deepening the global institutional debt market through technology. Our two-sided debt marketplace helps institutional and HNI investors find the widest network of corporate borrowers and debt products on one side, and helps corporates discover investors and access debt capital efficiently on the other. Switching between platforms is easy, which means investors can lend, invest, and trade bonds, all in one place. All of our platforms shake up the traditional debt ecosystem and offer new ways of digital finance.
Yubi Credit Marketplace: With the largest selection of lenders on one platform, our credit marketplace helps enterprises partner with lenders of their choice for any and all capital requirements.
Yubi Invest: Fixed-income securities platform for wealth managers and financial advisors to channel client investments in fixed income.
Financial Services Platform: Designed for financial institutions to manage co-lending partnerships and asset-based securitization.
Spocto: Debt recovery and risk mitigation platform.
Corpository: Dedicated SaaS solutions platform powered by decision-grade data, analytics, pattern identification, early warning signals, and predictions for lenders, investors, and business enterprises.
So far, we have onboarded over 17,000+ enterprises and 6,200+ investors and lenders, and have facilitated debt volumes of over INR 1,40,000 crore. Backed by marquee investors like Insight Partners, B Capital Group, Dragoneer, Sequoia Capital, LightSpeed, and Lightrock, we are the only-of-its-kind debt platform globally, revolutionizing the segment. At Yubi, people are at the core of the business and our most valuable assets. Yubi is constantly growing, with 1,000+ like-minded individuals today who are changing the way people perceive debt. We are a fun bunch who are highly motivated and driven to create a purposeful impact. Come, join the club to be a part of our epic growth story.

About the Role
We're looking for a highly skilled, results-driven AI Developer who thrives in fast-paced, high-impact environments. If you are passionate about pushing the boundaries of computer vision, OCR, NLP, and Large Language Models (LLMs) and have a strong foundation in building and deploying AI solutions, this role is for you. As a Lead Data Scientist, you will take ownership of designing and implementing state-of-the-art AI products. This role demands deep technical expertise, the ability to work autonomously, and a mindset that embraces complex challenges head-on. Here, you won't just fine-tune pre-trained models; you'll be architecting, optimizing, and scaling AI solutions that power real-world applications.

Key Responsibilities
Architect, develop, and deploy high-performance AI solutions for real-world applications. Implement and optimize state-of-the-art LLM and OCR models and frameworks. Fine-tune and integrate LLMs (GPT, LLaMA, Mistral, etc.) to enhance text understanding and automation. Build and optimize end-to-end AI pipelines, ensuring efficient data processing and model deployment. Work closely with engineers to operationalize AI models in production (Docker, FastAPI, TensorRT, ONNX). Enhance GPU performance and model inference efficiency, applying techniques such as quantization and pruning. Stay ahead of industry advancements, continuously experimenting with new AI architectures and training techniques. Work in a highly dynamic, startup-like environment, balancing rapid experimentation with production-grade robustness.

Requirements
Required Skills & Qualifications:
Proven technical expertise: strong programming skills in Python, PyTorch, and TensorFlow, with deep experience in NLP and LLMs. Hands-on experience developing, training, and deploying LLMs and agentic workflows. Strong background in vector databases, RAG pipelines, and fine-tuning LLMs for document intelligence. Deep understanding of Transformer-based architectures for vision and text processing. Experience working with Hugging Face, OpenCV, TensorRT, and NVIDIA GPUs for model acceleration. Autonomous problem solver: you take initiative, work independently, and drive projects from research to production. Strong experience in scaling AI solutions, including model optimization and deployment on cloud platforms (AWS/GCP/Azure). Thrives in fast-paced environments: you embrace challenges, pivot quickly, and execute effectively. Familiarity with MLOps tools (Docker, FastAPI, Kubernetes) for seamless model deployment. Experience with multi-modal models (vision + text).

Good to Have
A financial background and an understanding of corporate finance. Contributions to open-source AI projects.

Posted 1 month ago

Apply

1.0 years

0 Lacs

Hyderabad, Telangana

On-site

Source: Indeed

Job Description
We are looking for a passionate and skilled Robotics AI/ML Engineer to join our team in developing intelligent and autonomous drone systems. You will lead the development of drone software stacks, integrating onboard intelligence (AI/ML) with robotic middleware (ROS/ROS2) and backend systems. The ideal candidate has at least 1 year of hands-on experience building real-world robotics or drone software, with strong command of ROS/ROS2, computer vision, and machine learning applied to autonomous navigation, perception, or decision-making.

Key Responsibilities
Drone Software Development: Build and maintain core ROS/ROS2-based software for autonomous flight, navigation, and perception. Develop real-time systems to handle sensor fusion, path planning, obstacle avoidance, and mission execution. Implement algorithms for drone localization (GPS, SLAM, visual odometry) and mapping.
AI/ML Integration: Develop and train AI/ML models for perception (e.g., object detection, tracking, segmentation). Deploy and optimize AI models on edge hardware (Jetson, Raspberry Pi, Odroid, etc.). Work on multi-camera vision, lidar fusion, and real-time inference pipelines.
System Integration & Backend Communication: Integrate drone software with backend/cloud systems using ROSBridge, WebSockets, MQTT, or custom APIs. Build data pipelines for telemetry, health monitoring, and AI inference output. Work with DevOps/backend teams to ensure a smooth interface with mission control and analytics dashboards.
Testing & Simulation: Set up and manage simulated environments (e.g., Gazebo, Ignition) for testing flight logic and AI behavior. Conduct real-world test flights with live data and iterative tuning of software models.

Required Qualifications
Bachelor's or Master's degree in Robotics, Computer Science, Electrical Engineering, or a related field. Minimum 1 year of experience building autonomous systems using ROS/ROS2. Proficient in Python and C++ with experience writing ROS nodes and launch files. Experience deploying AI/ML models for perception or control (e.g., YOLO, DeepSORT, CNNs, LSTMs). Hands-on experience with drones or mobile robotics platforms (simulation or real-world). Comfortable with version control (Git), Linux environments, and debugging complex robotic systems.

Preferred Skills
Experience with drone-specific stacks (PX4, ArduPilot, MAVROS). Experience with edge AI deployment tools (TensorRT, ONNX, OpenVINO). Familiarity with CV frameworks like OpenCV, TensorFlow, PyTorch. Experience with cloud platforms for robotics (AWS RoboMaker, Azure, etc.). Understanding of control systems (PID, MPC), SLAM, or multi-agent systems. Knowledge of cybersecurity best practices in robotics and IoT.

Job Types: Full-time, Permanent
Pay: ₹180,000.00 - ₹240,000.00 per year
Schedule: Day shift, fixed shift, Monday to Friday
Ability to commute/relocate: Hyderabad, Telangana: reliably commute or plan to relocate before starting work (required)
Application Question(s): Have you ever worked on drones or built a drone?
Experience: Robotics AI/ML: 1 year (required)
License/Certification: AI/ML certification (preferred)
Location: Hyderabad, Telangana (required)
Work Location: In person
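For a flavour of the ROS node work the posting references, here is a minimal ROS 1 (rospy) sketch, offered only as an illustration under the assumption of a working ROS environment; the topic name and payload are placeholders.

```python
#!/usr/bin/env python3
import rospy
from std_msgs.msg import String

def telemetry_publisher():
    # Register this process as a ROS node and advertise a topic.
    rospy.init_node("telemetry_publisher", anonymous=True)
    pub = rospy.Publisher("drone/telemetry", String, queue_size=10)
    rate = rospy.Rate(1)  # publish at 1 Hz

    while not rospy.is_shutdown():
        msg = String(data="battery=87% alt=12.4m mode=LOITER")  # placeholder payload
        pub.publish(msg)
        rate.sleep()

if __name__ == "__main__":
    try:
        telemetry_publisher()
    except rospy.ROSInterruptException:
        pass
```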

Posted 1 month ago

Apply

0 years

0 Lacs

India

On-site

Source: LinkedIn

About the Role:
We are seeking an experienced MLOps Engineer with a strong background in NVIDIA GPU-based containerization and scalable ML infrastructure (contractual, assignment basis). You will work closely with data scientists, ML engineers, and DevOps teams to build, deploy, and maintain robust, high-performance machine learning pipelines using NVIDIA NGC containers, Docker, Kubernetes, and modern MLOps practices.

Key Responsibilities:
Design, develop, and maintain end-to-end MLOps pipelines for training, validation, deployment, and monitoring of ML models. Implement GPU-accelerated workflows using NVIDIA NGC containers, CUDA, and RAPIDS. Containerize ML workloads using Docker and deploy on Kubernetes (preferably with GPU support such as the NVIDIA device plugin for K8s). Integrate model versioning, reproducibility, CI/CD, and automated model retraining using tools like MLflow, DVC, Kubeflow, or similar. Optimize model deployment for inference on NVIDIA hardware using TensorRT, Triton Inference Server, or ONNX Runtime-GPU. Manage cloud/on-prem GPU infrastructure and monitor resource utilization and model performance in production. Collaborate with data scientists to transition models from research to production-ready pipelines.

Required Skills:
Proficiency in Python and ML libraries (e.g., TensorFlow, PyTorch, Scikit-learn). Strong experience with Docker, Kubernetes, and NVIDIA GPU containerization (NGC, nvidia-docker). Familiarity with NVIDIA Triton Inference Server, TensorRT, and CUDA. Experience with CI/CD for ML (GitHub Actions, GitLab CI, Jenkins, etc.). Deep understanding of ML lifecycle management, monitoring, and retraining. Experience working with cloud platforms (AWS/GCP/Azure) or on-prem GPU clusters.

Preferred Qualifications:
Experience with Kubeflow, Seldon Core, or similar orchestration tools. Exposure to Airflow, MLflow, Weights & Biases, or DVC. Knowledge of NVIDIA RAPIDS and distributed GPU workloads. MLOps certifications or NVIDIA Deep Learning Institute training (preferred but not mandatory).
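As a hedged illustration of the experiment-tracking piece listed above (a minimal sketch using the public MLflow API; the experiment name, parameters, and metrics are placeholders, not from the posting):

```python
import mlflow

# Group runs under a named experiment (created if it does not exist).
mlflow.set_experiment("gpu-training-demo")

with mlflow.start_run(run_name="baseline"):
    # Log hyperparameters up front so the run is reproducible.
    mlflow.log_param("learning_rate", 3e-4)
    mlflow.log_param("batch_size", 64)

    # Placeholder training loop: log one metric value per epoch.
    for epoch, loss in enumerate([0.92, 0.61, 0.47, 0.40]):
        mlflow.log_metric("train_loss", loss, step=epoch)

    # Attach an artifact (e.g., a config file or checkpoint path).
    with open("config.txt", "w") as f:
        f.write("optimizer=adamw\n")
    mlflow.log_artifact("config.txt")
```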

Posted 1 month ago

Apply

3.0 - 5.0 years

3 - 5 Lacs

Bengaluru / Bangalore, Karnataka, India

On-site

Source: Foundit

Responsibilities
Lead the design, development, and implementation of AI/ML solutions across multiple domains. Collaborate with cross-functional teams for seamless integration of AI/ML components. Mentor and coach junior engineers, offering development opportunities and guidance. Resolve issues related to AI model optimization for high performance and accuracy. Conduct research on AI/ML trends and innovations to adopt best practices. Develop and optimize quantization techniques for efficient AI/ML model execution on Qualcomm hardware. Manage project timelines, objectives, and resource allocation across functions.

Minimum Qualifications:
Bachelor's degree in Engineering, Computer Science, or a related field and 4+ years of software engineering or related experience, OR Master's degree in Engineering, Computer Science, or a related field and 3+ years of software engineering or related experience. Experience in software architecture and programming languages. Proficiency with tools and frameworks such as PyTorch, TensorFlow, ONNX, etc.

Preferred Qualifications:
Excellent development skills in C++/Python. Strong knowledge of data structures and algorithms. Hands-on expertise in deep learning frameworks like ONNX and PyTorch. In-depth understanding of CV, NLP, LLM, GenAI, classification, and object detection models. Proficiency in quantization (8-bit, 4-bit) and calibration algorithms. Understanding of ML compiler techniques and graph optimizations. Familiarity with software design patterns and SOLID principles. Strong analytical, debugging, and development skills. Knowledge of ML compilers (e.g., TVM, Glow) and runtimes (ONNX Runtime, TensorFlow Runtime) is a plus.
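To ground the quantization bullet, here is a hedged sketch (not from the posting) using ONNX Runtime's post-training dynamic quantization helper; the file names are placeholders and the exact keyword arguments may vary slightly across onnxruntime versions.

```python
from onnxruntime.quantization import quantize_dynamic, QuantType

# Post-training dynamic quantization: weights are stored as int8 and
# activations are quantized on the fly at inference time.
quantize_dynamic(
    model_input="model_fp32.onnx",    # placeholder: full-precision export
    model_output="model_int8.onnx",   # placeholder: quantized artifact
    weight_type=QuantType.QInt8,
)
print("wrote model_int8.onnx")
```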

Posted 1 month ago

Apply

3.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Source: LinkedIn

WHAT YOU DO AT AMD CHANGES EVERYTHING
We care deeply about transforming lives with AMD technology to enrich our industry, our communities, and the world. Our mission is to build great products that accelerate next-generation computing experiences, the building blocks for the data center, artificial intelligence, PCs, gaming, and embedded. Underpinning our mission is the AMD culture. We push the limits of innovation to solve the world's most important challenges. We strive for execution excellence while being direct, humble, collaborative, and inclusive of diverse perspectives. AMD together we advance_

SENIOR SOFTWARE DEVELOPMENT ENGINEER

The Role
AMD is looking for an influential software engineer to enable AI acceleration at scale. You will be a member of the core team, developing ML tools and methodologies to optimize and realize full system performance for AI workloads on the Ryzen AI SoC. You will work on the latest AI models addressing vision, language, and generative workloads, alongside leading engineers in AMD's CPU, GPU, and Adaptable Compute teams.

Key Responsibilities
Design and develop efficient code-generation and optimization techniques for AMD's machine learning products using MLIR/LLVM. Consistently research and implement methods to improve the performance of our solutions. Stay informed of software and hardware trends and innovations, especially pertaining to ML algorithms and architectures. Optimize the current system and research alternative, more efficient ways to reach the same goals. Develop technical relationships with peers and partners. Work with AMD's architecture specialists to improve future products. Develop and optimize code for VLIW processors.

Preferred Experience
Strong object-oriented programming background in C/C++ and/or Python with 3+ years of industry experience. Prior experience in graph compilers and optimizations. Deep understanding of the performance implications, for AI acceleration, of different compute, memory, and communication configurations in hardware and software. Good knowledge of AI frameworks like ONNX, PyTorch, TVM, and TensorFlow. Familiarity with compiler technologies like MLIR/LLVM. Ability to write high-quality code with keen attention to detail. Familiarity with the implementation of key ML operations like GEMM and CONV. Experience with software development processes and tools such as debuggers, source code control systems (GitHub), and profilers. Effective communication and problem-solving skills.

Academic Credentials
Bachelor's or Master's degree in Computer Science, Computer Engineering, or Electrical Engineering.

Benefits offered are described at AMD benefits at a glance. AMD does not accept unsolicited resumes from headhunters, recruitment agencies, or fee-based recruitment services. AMD and its subsidiaries are equal opportunity, inclusive employers and will consider all applicants without regard to age, ancestry, color, marital status, medical condition, mental or physical disability, national origin, race, religion, political and/or third-party affiliation, sex, pregnancy, sexual orientation, gender identity, military or veteran status, or any other characteristic protected by law. We encourage applications from all qualified candidates and will accommodate applicants' needs under the respective laws throughout all stages of the recruitment and selection process.
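Since the posting calls out familiarity with key ML operations such as GEMM, here is a small illustrative NumPy sketch (not from the posting) of a blocked matrix multiply checked against the reference result, the kind of operation a code generator tiles for an accelerator:

```python
import numpy as np

def blocked_gemm(a: np.ndarray, b: np.ndarray, tile: int = 32) -> np.ndarray:
    """Naive tiled GEMM: C = A @ B computed tile by tile."""
    m, k = a.shape
    k2, n = b.shape
    assert k == k2
    c = np.zeros((m, n), dtype=a.dtype)
    for i in range(0, m, tile):
        for j in range(0, n, tile):
            for p in range(0, k, tile):
                # Accumulate the contribution of one (i, j, p) tile.
                c[i:i + tile, j:j + tile] += (
                    a[i:i + tile, p:p + tile] @ b[p:p + tile, j:j + tile]
                )
    return c

a = np.random.rand(96, 64).astype(np.float32)
b = np.random.rand(64, 128).astype(np.float32)
print("matches reference:", np.allclose(blocked_gemm(a, b), a @ b, atol=1e-4))
```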

Posted 1 month ago

Apply

8.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Source: LinkedIn

We're on the lookout for a Data Science Manager with deep expertise in Speech-to-Text (STT), Natural Language Processing (NLP), and Generative AI to lead a high-impact Conversational AI initiative for one of our premier EMEA-based clients. You'll not only guide a team of data scientists and ML engineers but also work hands-on to build cutting-edge systems for real-time transcription, sentiment analysis, summarization, and intelligent decision-making. Your solutions will enable smarter engagement strategies, unlock valuable insights, and directly impact client success.

What You'll Do:
Strategic Leadership & Delivery: Lead the end-to-end delivery of AI solutions for transcription and conversation analytics. Collaborate with client stakeholders to understand business problems and translate them into AI strategies. Provide mentorship to team members, foster best practices, and ensure high-quality technical delivery.
Conversational AI Development: Oversee development and tuning of ASR models using tools like Whisper, DeepSpeech, Kaldi, and AWS/GCP STT. Guide implementation of speaker diarization for multi-speaker conversations. Ensure solutions are domain-tuned and accurate in real-world conditions.
Generative AI & NLP Applications: Architect LLM-based pipelines for summarization, topic extraction, and conversation analytics. Design and implement custom RAG pipelines to enrich conversational insights using external knowledge bases. Apply prompt engineering and NER techniques for context-aware interactions.
Decision Intelligence & Sentiment Analysis: Drive the development of models for sentiment detection, intent classification, and predictive recommendations. Enable intelligent workflows that suggest next-best actions and enhance customer experiences.
AI at Scale: Oversee deployment pipelines using Docker, Kubernetes, FastAPI, and cloud-native tools (AWS/GCP/Azure AI). Champion cost-effective model serving using ONNX, TensorRT, or Triton. Implement and monitor MLOps workflows to support continuous learning and model evolution.

What You'll Bring to the Table:
Technical Excellence: 8+ years of proven experience leading teams in the Speech-to-Text, NLP, LLM, and Conversational AI domains. Strong Python skills and experience with PyTorch, TensorFlow, Hugging Face, and LangChain. Deep understanding of RAG architectures, vector DBs (FAISS, Pinecone, Weaviate), and cloud deployment practices. Hands-on experience with real-time applications and inference optimization.
Leadership & Communication: Ability to balance strategic thinking with hands-on execution. Strong mentorship and team management skills. Exceptional communication and stakeholder engagement capabilities. A passion for transforming business needs into scalable AI systems.

Bonus Points For:
Experience in healthcare, pharma, or life sciences conversational use cases. Exposure to knowledge graphs, RLHF, or multimodal AI. Demonstrated impact through cross-functional leadership and client-facing solutioning.

What do you get in return?
Competitive Salary: Your skills and contributions are highly valued here, and we make sure your salary reflects that, rewarding you fairly for the knowledge and experience you bring to the table.
Dynamic Career Growth: Our vibrant environment offers you the opportunity to grow rapidly, providing the right tools, mentorship, and experiences to fast-track your career.
Idea Tanks: Innovation lives here. Our "Idea Tanks" are your playground to pitch, experiment, and collaborate on ideas that can shape the future.
Growth Chats: Dive into our casual "Growth Chats" where you can learn from the best, whether it's over lunch or during a laid-back session with peers; it's the perfect space to grow your skills.
Snack Zone: Stay fueled and inspired! In our Snack Zone, you'll find a variety of snacks to keep your energy high and ideas flowing.
Recognition & Rewards: We believe great work deserves to be recognized. Expect regular Hive-Fives, shoutouts, and the chance to see your ideas come to life as part of our reward program.
Fuel Your Growth Journey with Certifications: We're all about your growth groove! Level up your skills with our support as we cover the cost of your certifications.
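For a concrete sense of the ASR tooling named above, here is a minimal, hedged sketch using the open-source openai-whisper package (the audio path and model size are placeholders; a production pipeline would add diarization and domain tuning):

```python
import whisper

# Load a small pretrained checkpoint; larger models trade speed for accuracy.
model = whisper.load_model("base")

# Transcribe a call recording (placeholder path); language is auto-detected.
result = model.transcribe("call_recording.wav")

print(result["text"])
for segment in result["segments"]:
    print(f'[{segment["start"]:6.1f}s - {segment["end"]:6.1f}s] {segment["text"]}')
```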

Posted 1 month ago

Apply

5.0 years

0 Lacs

Bengaluru East, Karnataka, India

On-site

Source: LinkedIn

Description
Millions of people experience Synaptics every day. Our technology impacts how people see, hear, touch, and engage with a wide range of IoT applications, at home, at work, in the car, or on the go. We solve complex challenges alongside the most influential companies in the industry, using the most advanced algorithms in areas such as machine learning, biometrics, and video processing, combined with world-class software and silicon development.

Overview
Synaptics is looking for a talented Sr. Software Engineer to join our dynamic and growing organization. You will be responsible for customer design-in activities from the design review phase through to mass production for the Synaptics Astra® SL Series of embedded processors. The Astra® SL Series is a family of highly integrated, AI-native Linux SoCs optimized for multi-modal consumer and industrial IoT workloads, with high-performance hardware accelerators for edge-based inferencing, security, graphics, vision, and audio. These processors incorporate multiple high-performance compute engines, including a quad-core Arm64 CPU subsystem, a multi-TOPS NPU, and a GPU for AI acceleration and 3D graphics, along with multimedia accelerators for image signal processing and 4K video encode and decode, backed by industry-grade security certifications. This position reports to the Sr. Manager, Software Engineering.

Job Duties
Design and develop NPU compilers optimized for machine learning applications. Develop optimized computation kernels for deep learning operators. Create and implement algorithms and data structures to enhance NPU compiler performance. Build software tools for visualization, analysis, debugging, and testing of compiler development. Work with open-source compiler frameworks like MLIR to improve compiler functionality. Contribute to the deep learning network front end of the compiler. Participate in all stages of the software development lifecycle, including requirements analysis, design, implementation, qualification, and production release. Actively collaborate with a global team working on cutting-edge technology to create revolutionary products.

Competencies
Strong understanding of system software and SoC architecture. Proficiency in C/C++ and Python with excellent coding skills. Proactive self-starter, able to work independently in a fast-paced environment. Well organized with strong attention to detail; proactively ensures work is accurate. Positive attitude and work ethic; unafraid to ask questions and explore new ideas. Good design, programming, and problem-solving skills; able to solve problems through practical use of technology and a solid understanding of product architecture. Good verbal and written communication skills in English. Strong team player with the ability to work collaboratively within a diverse cross-functional team.

Qualifications (Requirements)
Bachelor's (or Master's) degree in Electrical Engineering, Software Engineering, Computer Science, or a related field, or equivalent. 5+ years' experience in embedded software development. Expertise in analyzing, profiling, and debugging C/C++ and Python code. Familiarity with compiler frameworks like LLVM, MLIR, or similar is a strong plus. Hands-on experience in one or more of the following areas: deep learning frameworks such as PyTorch, ONNX, or TensorFlow; Python modules including NumPy and Pandas; Transformer-based architectures. No travel required.

Belief in Diversity
Synaptics is an Equal Opportunity Employer committed to workforce diversity. Qualified applicants will receive consideration without regard to race, sex, sexual orientation, gender identity, national origin, color, age, religion, protected veteran or disability status, or genetic information.

Posted 1 month ago

Apply

7.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Source: LinkedIn

Applied Machine Learning Scientist – Voice AI, NLP & GenAI Applications
Location: Sector 63, Gurugram, Haryana – 100% in-office
Working Days: Monday to Friday, with 2nd and 4th Saturdays off
Working Hours: 10:30 AM – 8:00 PM
Experience: 3–7 years in applied ML, with at least 2 years focused on voice, NLP, or GenAI deployments
Function: AI/ML Research & Engineering | Conversational Intelligence | Real-time Model Deployment
Apply: careers@darwix.ai
Subject Line: “Application – Applied ML Scientist – [Your Name]”

About Darwix AI
Darwix AI is a GenAI-powered platform transforming how enterprise sales, support, and credit teams engage with customers. Our proprietary AI stack ingests data across calls, chat, email, and CCTV streams to generate:
Real-time nudges for agents and reps
Conversational analytics and scoring to drive performance
CCTV-based behavior insights to boost in-store conversion
We’re live across leading enterprises in India and MENA, including IndiaMart, Wakefit, Emaar, GIVA, Bank Dofar, and others. We’re backed by top-tier operators and venture investors and are scaling rapidly across multiple verticals and geographies.

Role Overview
We are looking for a hands-on, impact-driven Applied Machine Learning Scientist to build, optimize, and productionize AI models across ASR, NLP, and LLM-driven intelligence layers. This is a core role in our AI/ML team where you’ll be responsible for building the foundational ML capabilities that drive our real-time sales intelligence platform. You will work on large-scale multilingual voice-to-text pipelines, transformer-based intent detection, and retrieval-augmented generation systems used in live enterprise deployments.

Key Responsibilities

Voice-to-Text (ASR) Engineering
Deploy and fine-tune ASR models such as WhisperX, wav2vec 2.0, or DeepSpeech for Indian and GCC languages (see the transcription sketch after this listing)
Integrate diarization and punctuation recovery pipelines
Benchmark and improve transcription accuracy across noisy call environments
Optimize ASR latency for real-time and batch processing modes

NLP & Conversational Intelligence
Train and deploy NLP models for sentence classification, intent tagging, sentiment, emotion, and behavioral scoring
Build call scoring logic aligned to domain-specific taxonomies (sales pitch, empathy, CTA, etc.)
Fine-tune transformers (BERT, RoBERTa, etc.) for multilingual performance
Contribute to real-time inference APIs for NLP outputs in live dashboards

GenAI & LLM Systems
Design and test GenAI prompts for summarization, coaching, and feedback generation
Integrate retrieval-augmented generation (RAG) using OpenAI, HuggingFace, or open-source LLMs
Collaborate with product and engineering teams to deliver LLM-based features with measurable accuracy and latency metrics
Implement prompt tuning, caching, and fallback strategies to ensure system reliability

Experimentation & Deployment
Own the model lifecycle: data preparation, training, evaluation, deployment, monitoring
Build reproducible training pipelines using MLflow, DVC, or similar tools
Write efficient, well-structured, production-ready code for inference APIs
Document experiments and share insights with cross-functional teams

Required Qualifications
Bachelor’s or Master’s degree in Computer Science, AI, Data Science, or related fields
3–7 years of experience applying ML in production, including NLP and/or speech
Experience with transformer-based architectures for text or audio (e.g., BERT, Wav2Vec, Whisper)
Strong Python skills with experience in PyTorch or TensorFlow
Experience with REST APIs, model packaging (FastAPI, Flask, etc.), and containerization (Docker)
Familiarity with audio pre-processing, signal enhancement, or feature extraction (MFCC, spectrograms)
Knowledge of MLOps tools for experiment tracking, monitoring, and reproducibility
Ability to work collaboratively in a fast-paced startup environment

Preferred Skills
Prior experience working with multilingual datasets (Hindi, Arabic, Tamil, etc.)
Knowledge of diarization and speaker separation algorithms
Experience with LLM APIs (OpenAI, Cohere, Mistral, LLaMA) and RAG pipelines
Familiarity with inference optimization techniques (quantization, ONNX, TorchScript)
Contributions to open-source ASR or NLP projects
Working knowledge of AWS/GCP/Azure cloud platforms

What Success Looks Like
Transcription accuracy improvement ≥ 85% across core languages
NLP pipelines used in ≥ 80% of Darwix AI’s daily analyzed calls
3–5 LLM-driven product features delivered in the first year
Inference latency reduced by 30–50% through model and infra optimization
AI features embedded across all Tier 1 customer accounts within 12 months

Life at Darwix AI
You will be working in a high-velocity product organization where AI is core to our value proposition. You’ll collaborate directly with the founding team and cross-functional leads, have access to enterprise datasets, and work on ML systems that impact large-scale, real-time operations. We value rigor, ownership, and speed. Model ideas become experiments in days, and successful experiments become deployed product features in weeks.

Compensation & Perks
Competitive fixed salary based on experience
Quarterly/annual performance-linked bonuses
ESOP eligibility post 12 months
Compute credits and model experimentation environment
Health insurance, mental wellness stipend
Premium tools and GPU access for model development
Learning wallet for certifications, courses, and AI research access

Career Path
Year 1: Deliver production-grade ASR/NLP/LLM systems for high-usage product modules
Year 2: Transition into Senior Applied Scientist or Tech Lead for conversation intelligence
Year 3: Grow into Head of Applied AI or Architect-level roles across vertical product lines

How to Apply
Email the following to careers@darwix.ai:
Updated resume (PDF)
A short write-up (200 words max): “How would you design and optimize a multilingual voice-to-text and NLP pipeline for noisy call center data in Hindi and English?”
Optional: GitHub or portfolio links demonstrating your work
Subject Line: “Application – Applied Machine Learning Scientist – [Your Name]”
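The ASR responsibilities above name Whisper-family models for multilingual transcription. As a minimal, hedged sketch (not Darwix's production pipeline), the snippet below runs batch transcription through the Hugging Face transformers ASR pipeline; the checkpoint and file names are assumptions.

```python
# Hedged sketch: batch transcription of call recordings with a Whisper-family
# model via the Hugging Face transformers pipeline. The model checkpoint and
# audio paths are placeholders, not a specific employer's stack.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="openai/whisper-small",        # assumed checkpoint; swap per language needs
    chunk_length_s=30,                   # long-form audio is processed in 30 s chunks
)

calls = ["call_0001.wav", "call_0002.wav"]   # hypothetical recordings
for path in calls:
    result = asr(path, return_timestamps=True)
    print(path, "->", result["text"][:120])
```

In practice a diarization and punctuation-recovery stage would follow, and latency-sensitive deployments would trade the pipeline abstraction for a streaming setup.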

Posted 1 month ago

Apply

5.0 years

0 Lacs

Coimbatore, Tamil Nadu, India

On-site

Job Overview
We are looking for an experienced Computer Vision Engineer with expertise in image processing, machine learning, and deep learning. The ideal candidate should have hands-on experience developing and deploying computer vision algorithms and deep learning models using Python, OpenCV, and YOLO, along with proficiency in CUDA and NumPy. This role involves research, model optimization, and real-world deployment of cutting-edge computer vision solutions.

Responsibilities:
Develop and deploy computer vision algorithms and deep learning models for diverse applications.
Design and implement computer vision models using state-of-the-art techniques and frameworks.
Explore and analyze unstructured data like images through image processing techniques.
Analyze, evaluate, and optimize existing computer vision systems for improved performance and accuracy.
Test and validate computer vision code and models, ensuring robustness and reliability.
Research and implement new computer vision technologies to stay at the forefront of the field.
Collaborate with cross-functional teams to develop innovative solutions meeting project requirements.
Monitor the performance and accuracy of deployed computer vision models, making necessary adjustments.
Maintain and update computer vision systems to ensure their continued functionality and relevance.
Provide technical support and guidance to team members and customers using computer vision solutions.

Requirements:
5 years of experience as a Computer Vision Engineer.
Bachelor's degree in Computer Science or a related field.
Proven experience in developing and deploying computer vision systems.
Strong knowledge of computer vision algorithms, libraries, and tools such as OpenCV, TensorFlow, PyTorch, Keras, NumPy, scikit-image, Matplotlib, Seaborn, YOLO, etc.
Familiarity with GPU acceleration and optimization tools like CUDA, OpenCL, OpenGL.
Expertise in computer vision applications, including object detection, image classification, text detection & OCR, face detection, generative models, video analytics, object tracking, and model optimization.
Experience with runtime AI frameworks such as ONNX, TensorRT, OpenVINO (a deployment sketch follows this listing).
Hands-on experience with cloud platforms (AWS, Azure), Docker, Kubernetes, and GitHub.
Experience in training models using GPU computing or cloud-based environments.
Familiarity with machine learning and deep learning concepts and frameworks.
Strong problem-solving and analytical skills.
Ability to work independently in a fast-paced environment and collaborate effectively in a team.

Preferred Qualifications:
Experience in real-time video processing and streaming analytics.
Knowledge of Edge AI and deployment on embedded systems.
Exposure to 3D vision, SLAM, and depth estimation.
Contributions to open-source computer vision projects.
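The requirements above mention runtime AI frameworks such as ONNX. As a hedged illustration of that deployment path (the model and tensor shape are arbitrary choices, not from the posting), the sketch below exports a PyTorch classifier to ONNX and runs it with ONNX Runtime.

```python
# Hedged sketch: export a PyTorch classifier to ONNX and run it with ONNX
# Runtime on CPU. The model and input size are illustrative only; GPU hosts
# would typically swap in CUDA/TensorRT execution providers.
import numpy as np
import torch
import torchvision.models as models
import onnxruntime as ort

# Export a ResNet-18 (random weights here) to an ONNX file.
model = models.resnet18(weights=None).eval()
dummy = torch.randn(1, 3, 224, 224)
torch.onnx.export(model, dummy, "resnet18.onnx", input_names=["input"], output_names=["logits"])

# Run inference with ONNX Runtime.
session = ort.InferenceSession("resnet18.onnx", providers=["CPUExecutionProvider"])
logits = session.run(["logits"], {"input": np.random.rand(1, 3, 224, 224).astype(np.float32)})[0]
print("Predicted class:", int(logits.argmax()))
```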

Posted 1 month ago

Apply

4.0 years

0 Lacs

Jaipur, Rajasthan, India

On-site

Work From Office only – Jaipur, Rajasthan
Must-have experience: 4+ years
Should be strongly skilled in FastAPI, RAG, LLM, Generative AI

About the Role:
We are seeking a hands-on and experienced Data Scientist with deep expertise in Generative AI to join our AI/ML team. You will be instrumental in building and deploying machine learning solutions, especially GenAI-powered applications.

Key Responsibilities:
- Design, develop, and deploy scalable ML and GenAI solutions using LLMs, RAG pipelines, and advanced NLP techniques.
- Implement GenAI use cases involving embeddings, summarization, semantic search, and prompt engineering (see the retrieval sketch after this listing).
- Fine-tune and serve LLMs using frameworks like vLLM, LoRA, and QLoRA; deploy on cloud and on-premise environments.
- Build inference APIs using FastAPI and orchestrate them into robust services.
- Utilize tools and frameworks such as LangChain, LlamaIndex, ONNX, Hugging Face, and vector databases (Qdrant, FAISS).
- Collaborate closely with engineering and business teams to translate use cases into deployed solutions.
- Guide junior team members, provide architectural insights, and ensure best practices in MLOps and the model lifecycle.
- Stay updated on the latest research and developments in GenAI, LLMs, and NLP.

Required Skills and Experience:
- 4–8 years of hands-on experience in Data Science/Machine Learning, with a strong focus on NLP and Generative AI.
- Proven experience with LLMs (LLaMA 1/2/3, Mistral, FLAN-T5) and concepts like RAG, fine-tuning, embeddings, chunking, reranking, and prompt optimization.
- Experience with LLM APIs (OpenAI, Hugging Face) and open-source model deployment.
- Proficiency in LangChain, LlamaIndex, and FastAPI.
- Understanding of cloud platforms (AWS/GCP); certification in a cloud technology is preferred.
- Familiarity with MLOps tools and practices for CI/CD, monitoring, and retraining of ML models.
- Ability to read and interpret ML research papers and LLM architecture diagrams.
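The responsibilities above combine FastAPI with embeddings and semantic search, which is the retrieval half of a RAG pipeline. Below is a minimal, hedged sketch of that idea; the embedding model, toy corpus, and route name are assumptions, and a real system would use a vector database (Qdrant, FAISS) plus an LLM generation step.

```python
# Hedged sketch: a minimal semantic-search endpoint of the kind a RAG pipeline
# builds on — embed documents, embed the query, return the closest chunks.
import numpy as np
from fastapi import FastAPI
from sentence_transformers import SentenceTransformer

app = FastAPI()
encoder = SentenceTransformer("all-MiniLM-L6-v2")   # assumed embedding model

DOCS = [
    "Refund requests are processed within 5 business days.",
    "Premium support is available 24x7 for enterprise plans.",
    "Invoices can be downloaded from the billing dashboard.",
]
DOC_EMB = encoder.encode(DOCS, normalize_embeddings=True)  # (n_docs, dim), unit-norm

@app.get("/search")
def search(q: str, k: int = 2):
    q_emb = encoder.encode([q], normalize_embeddings=True)[0]
    scores = DOC_EMB @ q_emb                     # cosine similarity via dot product
    top = np.argsort(scores)[::-1][:k]
    # In a full RAG system these chunks would be stuffed into an LLM prompt.
    return {"query": q, "results": [{"text": DOCS[i], "score": float(scores[i])} for i in top]}
```

Saved as, say, search_api.py, this could be served locally with `uvicorn search_api:app --reload`.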

Posted 1 month ago

Apply

3.0 years

0 Lacs

India

Remote

About BeGig
BeGig is the leading tech freelancing marketplace. We empower innovative, early-stage, non-tech founders to bring their visions to life by connecting them with top-tier freelance talent. By joining BeGig, you're not just taking on one role; you’re signing up for a platform that will continuously match you with high-impact opportunities tailored to your expertise.

Your Opportunity
Join our network as a Computer Vision Engineer and help startups build AI systems that understand, analyze, and act on visual data. From object detection and facial recognition to medical imaging and video analytics, you'll work on real-world use cases that require state-of-the-art computer vision solutions.

Role Overview
As a Computer Vision Engineer, you will:
Design, train, and deploy computer vision models for specific business applications
Work with image, video, or 3D data to extract insights and automate workflows
Collaborate with teams to integrate CV models into scalable products

What You’ll Do
Build and fine-tune models for classification, detection, segmentation, tracking, or OCR (see the detection sketch after this listing)
Use libraries like OpenCV, PyTorch, TensorFlow, Detectron2, or YOLO
Preprocess and augment datasets to improve model robustness
Deploy models using APIs, edge devices, or cloud-based inference tools
Monitor performance and continuously optimize for accuracy and speed

Technical Requirements
3+ years of experience in computer vision or deep learning
Proficient in Python and frameworks like PyTorch, TensorFlow, or Keras
Experience with OpenCV, scikit-image, and image/video processing pipelines
Familiarity with model deployment using ONNX, TensorRT, or cloud services
Bonus: experience with real-time CV, synthetic data, or 3D vision

What We’re Looking For
A hands-on developer who can take vision-based problems from idea to production
A freelancer who enjoys working with data-rich products and diverse use cases
Someone who can collaborate with both technical and product teams to deliver real impact

Why Join Us
Work on challenging computer vision projects across industries
Fully remote and flexible freelance opportunities
Get matched with future roles in CV, AI, and edge deployment
Join a growing network solving real-world problems with intelligent vision systems

Ready to bring vision to life? Apply now to become a Computer Vision Engineer with BeGig.
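The role above lists YOLO among the detection toolkits. Purely as a hedged starting-point sketch (the weights file and image are placeholders, not a BeGig project), the snippet below runs a pretrained YOLOv8 detector via the ultralytics package.

```python
# Hedged sketch: object detection with an off-the-shelf YOLOv8 model via the
# ultralytics package. The checkpoint and image path are placeholders.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")                        # small pretrained COCO checkpoint
results = model("street_scene.jpg", conf=0.25)    # hypothetical input image

for r in results:
    for box in r.boxes:
        cls_name = model.names[int(box.cls)]
        print(f"{cls_name}: conf={float(box.conf):.2f}, xyxy={box.xyxy.tolist()}")
```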

Posted 1 month ago

Apply

3.0 - 7.0 years

5 - 9 Lacs

Bengaluru

Work from Office

About Us
Omni's team is passionate about Commerce and Digital Transformation. We've been successfully delivering Commerce solutions for clients across North America, Europe, Asia, and Australia. The team has experience executing and delivering projects in B2B and B2C solutions.

Job Description
We are seeking a high-impact AI/ML Engineer to lead the design, development, and deployment of machine learning and AI solutions across vision, audio, and language modalities. You'll be part of a fast-paced, outcome-oriented AI & Analytics team, working alongside data scientists, engineers, and product leaders to transform business use cases into real-time, scalable AI systems. This role demands strong technical leadership, a product mindset, and hands-on expertise in Computer Vision, Audio Intelligence, and Deep Learning.

Key Responsibilities
Architect, develop, and deploy ML models for multimodal problems, including vision (image/video), audio (speech/sound), and NLP tasks.
Own the complete ML lifecycle: data ingestion, model development, experimentation, evaluation, deployment, and monitoring.
Leverage transfer learning, foundation models, or self-supervised approaches where suitable.
Design and implement scalable training pipelines and inference APIs using frameworks like PyTorch or TensorFlow.
Collaborate with MLOps, data engineering, and DevOps to productionize models using Docker, Kubernetes, or serverless infrastructure.
Continuously monitor model performance and implement retraining workflows to ensure accuracy over time.
Stay ahead of the curve on cutting-edge AI research (e.g., generative AI, video understanding, audio embeddings) and incorporate innovations into production systems.
Write clean, well-documented, and reusable code to support agile experimentation and long-term platform sustainability.

Requirements
Bachelor's or Master's degree in Computer Science, Artificial Intelligence, Data Science, or a related field.
5–8+ years of experience in AI/ML engineering, with at least 3 years in applied deep learning.

Technical Skills
Languages: Expert in Python; good knowledge of R or Java is a plus.
ML/DL Frameworks: Proficient with PyTorch, TensorFlow, Scikit-learn, ONNX.
Computer Vision: Image classification, object detection, OCR, segmentation, tracking (YOLO, Detectron2, OpenCV, MediaPipe).
Audio AI: Speech recognition (ASR), sound classification, audio embedding models (Wav2Vec2, Whisper, etc.) — an embedding sketch follows this listing.
Data Engineering: Strong with Pandas, NumPy, SQL, and preprocessing pipelines for structured and unstructured data.
NLP/LLMs: Working knowledge of Transformers, BERT/LLaMA, and the Hugging Face ecosystem is preferred.
Cloud & MLOps: Experience with AWS/GCP/Azure, MLflow, SageMaker, Vertex AI, or Azure ML.
Deployment & Infrastructure: Experience with Docker, Kubernetes, REST APIs, serverless ML inference.
CI/CD & Version Control: Git, DVC, ML pipelines, Jenkins, Airflow, etc.

Soft Skills & Competencies
Strong analytical and systems thinking; able to break down business problems into ML components.
Excellent communication skills; able to explain models, results, and decisions to non-technical stakeholders.
Proven ability to work cross-functionally with designers, engineers, product managers, and analysts.
Demonstrated bias for action, rapid experimentation, and iterative delivery of impact.
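The skills list above names Wav2Vec2 as an audio embedding model. As a hedged illustration only (checkpoint and file path are assumptions), the sketch below turns a short clip into a fixed-size embedding with a pretrained Wav2Vec2 encoder, which could then feed a sound classifier or similarity search.

```python
# Hedged sketch: extract a clip-level embedding with a pretrained Wav2Vec2
# encoder. Production systems would add batching, VAD, and task-specific heads.
import torch
import torchaudio
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

CHECKPOINT = "facebook/wav2vec2-base"          # assumed public checkpoint
extractor = Wav2Vec2FeatureExtractor.from_pretrained(CHECKPOINT)
model = Wav2Vec2Model.from_pretrained(CHECKPOINT).eval()

waveform, sr = torchaudio.load("clip.wav")     # hypothetical audio file
waveform = waveform.mean(dim=0)                # collapse to mono
if sr != 16_000:                               # Wav2Vec2 expects 16 kHz input
    waveform = torchaudio.functional.resample(waveform, sr, 16_000)

inputs = extractor(waveform.numpy(), sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    frames = model(**inputs).last_hidden_state # (1, n_frames, hidden_dim)
clip_embedding = frames.mean(dim=1)            # simple mean-pool to one vector
print("Embedding shape:", tuple(clip_embedding.shape))
```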

Posted 1 month ago

Apply

0.0 - 2.0 years

0 Lacs

Gurugram, Haryana

On-site

Position: AI / ML Engineer
Job Type: Full-Time
Location: Gurgaon, Haryana, India
Experience: 2 Years
Industry: Information Technology
Domain: Demand Forecasting in Retail/Manufacturing

Job Summary
We are seeking a skilled Time Series Forecasting Engineer to enhance existing Python microservices into a modular, scalable forecasting engine. The ideal candidate will have a strong statistical background, expertise in handling multi-seasonal and intermittent data, and a passion for model interpretability and real-time insights.

Key Responsibilities
Develop and integrate advanced time-series models: MSTL, Croston, TSB, Box-Cox (see the Croston sketch after this listing).
Implement rolling-origin cross-validation and hyperparameter tuning.
Blend models such as ARIMA, Prophet, and XGBoost for improved accuracy.
Generate SHAP-based driver insights and deliver them to a React dashboard via GraphQL.
Monitor forecast performance with Prometheus and Grafana; trigger alerts based on degradation.

Core Technical Skills
Languages: Python (pandas, statsmodels, scikit-learn)
Time Series: ARIMA, MSTL, Croston, Prophet, TSB
Tools: Docker, REST API, GraphQL, Git-flow, Unit Testing
Database: PostgreSQL
Monitoring: Prometheus, Grafana
Nice-to-Have: MLflow, ONNX, TensorFlow Probability

Soft Skills
Strong communication and collaboration skills
Ability to explain statistical models in layman's terms
Proactive problem-solving attitude
Comfort working cross-functionally in iterative development environments

Job Type: Full-time
Pay: ₹400,000.00 – ₹800,000.00 per year

Application Questions
Do you have at least 2 years of hands-on experience in Python-based time series forecasting?
Have you worked in retail or manufacturing domains where demand forecasting was a core responsibility?
Are you currently authorized to work in India without sponsorship?
Have you implemented or used ARIMA, Prophet, or MSTL in any of your projects?
Have you used Croston or TSB models for forecasting intermittent demand?
Are you familiar with SHAP for model interpretability?
Have you containerized a forecasting pipeline using Docker and exposed it through a REST or GraphQL API?
Have you used Prometheus and Grafana to monitor model performance in production?

Work Location: In person
Application Deadline: 05/06/2025
Expected Start Date: 05/06/2025
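The responsibilities above call out Croston's method for intermittent demand. As a hedged illustration of the technique itself (real projects would more likely reach for statsforecast or similar; the smoothing constant and demand series here are arbitrary), the sketch below implements classic Croston in plain NumPy: exponentially smooth the nonzero demand sizes and the inter-demand intervals separately, then forecast their ratio.

```python
# Hedged sketch: classic Croston's method for intermittent demand, in NumPy.
import numpy as np

def croston_forecast(demand: np.ndarray, alpha: float = 0.1) -> float:
    """Return the one-step-ahead flat forecast (smoothed size / smoothed interval)."""
    nonzero = np.flatnonzero(demand)
    if nonzero.size == 0:
        return 0.0                      # no demand observed yet

    first = nonzero[0]
    z_hat = float(demand[first])        # smoothed nonzero demand size
    p_hat = float(first + 1)            # smoothed inter-demand interval (simple init)
    q = 1                               # periods since the last nonzero demand

    for y in demand[first + 1:]:
        if y > 0:
            z_hat = alpha * y + (1 - alpha) * z_hat
            p_hat = alpha * q + (1 - alpha) * p_hat
            q = 1
        else:
            q += 1
    return z_hat / p_hat

history = np.array([0, 0, 5, 0, 0, 0, 7, 0, 3, 0, 0, 4])
print(f"Croston forecast per period: {croston_forecast(history):.2f}")
```

The TSB variant mentioned alongside it replaces the interval estimate with a demand-probability estimate updated every period, which handles obsolescence better.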

Posted 1 month ago

Apply

5 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Company: Qualcomm India Private Limited
Job Area: Engineering Group, Engineering Group > Systems Engineering

General Summary
As a leading technology innovator, Qualcomm pushes the boundaries of what's possible to enable next-generation experiences and drives digital transformation to help create a smarter, connected future for all. As a Qualcomm Systems Engineer, you will research, design, develop, simulate, and/or validate systems-level software, hardware, architecture, algorithms, and solutions that enable the development of cutting-edge technology. Qualcomm Systems Engineers collaborate across functional teams to meet and exceed system-level requirements and standards.

Minimum Qualifications
Bachelor's degree in Engineering, Information Systems, Computer Science, or related field and 8+ years of Systems Engineering or related work experience.
OR Master's degree in Engineering, Information Systems, Computer Science, or related field and 7+ years of Systems Engineering or related work experience.
OR PhD in Engineering, Information Systems, Computer Science, or related field and 6+ years of Systems Engineering or related work experience.

Principal Engineer – Machine Learning
We are looking for a Principal AI/ML Engineer with expertise in model inference, optimization, debugging, and hardware acceleration. This role will focus on building efficient AI inference systems, debugging deep learning models, optimizing AI workloads for low latency, and accelerating deployment across diverse hardware platforms. In addition to hands-on engineering, this role involves cutting-edge research in efficient deep learning, model compression, quantization, and AI hardware-aware optimization techniques. You will explore and implement state-of-the-art AI acceleration methods while collaborating with researchers, industry experts, and open-source communities to push the boundaries of AI performance. This is an exciting opportunity for someone passionate about both applied AI development and AI research, with a strong focus on real-world deployment, model interpretability, and high-performance inference.

Education & Experience
20+ years of experience in AI/ML development, with at least 5 years in model inference, optimization, debugging, and Python-based AI deployment.
Master’s or Ph.D. in Computer Science, Machine Learning, or AI.

Leadership & Collaboration
Lead a team of AI engineers in Python-based AI inference development.
Collaborate with ML researchers, software engineers, and DevOps teams to deploy optimized AI solutions.
Define and enforce best practices for debugging and optimizing AI models.

Key Responsibilities

Model Optimization & Quantization
Optimize deep learning models using quantization (INT8, INT4, mixed precision, etc.), pruning, and knowledge distillation (a quantization sketch follows this listing).
Implement Post-Training Quantization (PTQ) and Quantization-Aware Training (QAT) for deployment.
Familiarity with TensorRT, ONNX Runtime, OpenVINO, TVM.

AI Hardware Acceleration & Deployment
Optimize AI workloads for Qualcomm Hexagon DSP, GPUs (CUDA, Tensor Cores), TPUs, NPUs, FPGAs, Habana Gaudi, and Apple Neural Engine.
Leverage Python APIs for hardware-specific acceleration, including cuDNN, XLA, and MLIR.
Benchmark models on AI hardware architectures and debug performance issues.

AI Research & Innovation
Conduct state-of-the-art research on AI inference efficiency, model compression, low-bit precision, sparse computing, and algorithmic acceleration.
Explore new deep learning architectures (Sparse Transformers, Mixture of Experts, Flash Attention) for better inference performance.
Contribute to open-source AI projects and publish findings in top-tier ML conferences (NeurIPS, ICML, CVPR).
Collaborate with hardware vendors and AI research teams to optimize deep learning models for next-gen AI accelerators.

Details of Expertise
Experience optimizing LLMs, LVMs, and LMMs for inference.
Experience with deep learning frameworks: TensorFlow, PyTorch, JAX, ONNX.
Advanced skills in model quantization, pruning, and compression.
Proficiency in CUDA programming and Python GPU acceleration using cuPy, Numba, and TensorRT.
Hands-on experience with ML inference runtimes (TensorRT, TVM, ONNX Runtime, OpenVINO).
Experience working with runtime delegates (TFLite, ONNX, Qualcomm).
Strong expertise in Python programming, writing optimized and scalable AI code.
Experience with debugging AI models, including examining computation graphs using Netron Viewer, TensorBoard, and ONNX Runtime Debugger.
Strong debugging skills using profiling tools (PyTorch Profiler, TensorFlow Profiler, cProfile, Nsight Systems, perf, py-spy).
Expertise in cloud-based AI inference (AWS Inferentia, Azure ML, GCP AI Platform, Habana Gaudi).
Knowledge of hardware-aware optimizations (oneDNN, XLA, cuDNN, ROCm, MLIR, SparseML).
Contributions to the open-source community.
Publications in international forums, conferences, or journals.

Applicants: Qualcomm is an equal opportunity employer. If you are an individual with a disability and need an accommodation during the application/hiring process, rest assured that Qualcomm is committed to providing an accessible process. You may e-mail disability-accomodations@qualcomm.com or call Qualcomm's toll-free number found here. Upon request, Qualcomm will provide reasonable accommodations to support individuals with disabilities to be able to participate in the hiring process. Qualcomm is also committed to making our workplace accessible for individuals with disabilities. (Keep in mind that this email address is used to provide reasonable accommodations for individuals with disabilities. We will not respond here to requests for updates on applications or resume inquiries.)

Qualcomm expects its employees to abide by all applicable policies and procedures, including but not limited to security and other requirements regarding protection of Company confidential information and other confidential and/or proprietary information, to the extent those requirements are permissible under applicable law.

To all Staffing and Recruiting Agencies: Our Careers Site is only for individuals seeking a job at Qualcomm. Staffing and recruiting agencies and individuals being represented by an agency are not authorized to use this site or to submit profiles, applications or resumes, and any such submissions will be considered unsolicited. Qualcomm does not accept unsolicited resumes or applications from agencies. Please do not forward resumes to our jobs alias, Qualcomm employees or any other company location. Qualcomm is not responsible for any fees related to unsolicited resumes/applications. If you would like more information about this role, please contact Qualcomm Careers.

3072372
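The responsibilities above include post-training quantization to INT8. As a small, hedged illustration of the idea (the toy model and size comparison are arbitrary; production flows would target runtimes such as TensorRT, ONNX Runtime, or Hexagon rather than eager-mode PyTorch), the sketch below applies dynamic quantization to a couple of Linear layers and compares outputs and serialized size.

```python
# Hedged sketch: post-training dynamic quantization of a small PyTorch model to INT8.
import io
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 10)).eval()

# Replace Linear weights with INT8 tensors; activations are quantized on the fly.
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

def serialized_size(m: nn.Module) -> int:
    buf = io.BytesIO()
    torch.save(m.state_dict(), buf)
    return buf.getbuffer().nbytes

x = torch.randn(1, 512)
print("max abs diff:", (model(x) - quantized(x)).abs().max().item())
print(f"fp32 ~{serialized_size(model)/1e6:.2f} MB vs int8 ~{serialized_size(quantized)/1e6:.2f} MB")
```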

Posted 1 month ago

Apply

6 - 9 years

0 Lacs

Bengaluru, Karnataka, India

On-site

WHAT YOU DO AT AMD CHANGES EVERYTHING
We care deeply about transforming lives with AMD technology to enrich our industry, our communities, and the world. Our mission is to build great products that accelerate next-generation computing experiences - the building blocks for the data center, artificial intelligence, PCs, gaming, and embedded. Underpinning our mission is the AMD culture. We push the limits of innovation to solve the world’s most important challenges. We strive for execution excellence while being direct, humble, collaborative, and inclusive of diverse perspectives. AMD together we advance_

MTS SOFTWARE DEVELOPMENT ENGINEER

The Role
The AMD S3 Software team works with the world's first-class companies on their customized products. Our responsibility is to co-work with the customer to develop platform drivers, develop best-in-class, feature-rich drivers, debug the corresponding internal/external issues, and deliver the reference/production drivers to the customer. The working domain includes but is not limited to Windows/Linux/Android, virtualization, cloud gaming, machine learning, etc. You will be working with the global pre-silicon and post-silicon teams on leading projects which will have a profound impact on the world.

The Person
The ideal candidate should be passionate about software engineering and possess leadership skills to drive sophisticated design challenges and issues to good-quality resolution. He/she should be able to communicate effectively and work optimally with different teams across AMD sites.

Key Responsibilities
Work with AMD’s architecture specialists to improve future products.
Design, develop, and deliver customer-specific SW/FW requirements and enhancements related to the neural/inferencing engine.
Own the SW/FW deliverables for the IPU/NPU stack.
Work closely with key stakeholders for efficient feature implementation and issue resolution, and be responsible for the commitments.
Apply a data-driven approach to resolve IPU/NPU driver/FW issues and delight the customers.
Scope and perform quick feasibility studies of new asks from customers.
Participate in new ASIC and hardware bring-ups.
Develop technical relationships with peers and partners.

Preferred Experience
Strong object-oriented programming background, C/C++ preferred.
Ability to write high-quality code with a keen attention to detail.
Experience working on driver and firmware (FW) components for neural accelerators across neural network frameworks like ONNX, TensorFlow/TensorFlow Lite, and/or PyTorch (an ONNX Runtime sketch follows this listing).
Experience with Windows, Linux, and/or Android operating system development.
Experience with Windows driver development, either platform or Multimedia/GFX driver development, is preferable, with 6 to 9 years of experience.
Experience with software development processes and tools such as debuggers, source code control systems (GitHub), and profilers.
Effective communication and problem-solving skills.
Knowledge of WindowsML, DirectML, CUDA, ROCm, D3D WDDM, and the graphics rendering pipeline is a plus.

Academic Credentials
Bachelor’s or Master's degree in Computer Science, Computer Engineering, Electrical Engineering, or equivalent.

Benefits offered are described: AMD benefits at a glance.
AMD does not accept unsolicited resumes from headhunters, recruitment agencies, or fee-based recruitment services. AMD and its subsidiaries are equal opportunity, inclusive employers and will consider all applicants without regard to age, ancestry, color, marital status, medical condition, mental or physical disability, national origin, race, religion, political and/or third-party affiliation, sex, pregnancy, sexual orientation, gender identity, military or veteran status, or any other characteristic protected by law. We encourage applications from all qualified candidates and will accommodate applicants’ needs under the respective laws throughout all stages of the recruitment and selection process.
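Framework integrations like the ones this role describes typically dispatch models through runtime delegates or execution providers. As a hedged sketch of that pattern (the preferred-provider list and the model file are assumptions for illustration, not AMD's stack), the snippet below probes the ONNX Runtime execution providers available on a machine and falls back to CPU when an accelerator provider is absent.

```python
# Hedged sketch: select an ONNX Runtime execution provider with a CPU fallback.
import onnxruntime as ort

preferred = ["DmlExecutionProvider", "CUDAExecutionProvider", "CPUExecutionProvider"]
available = ort.get_available_providers()
providers = [p for p in preferred if p in available] or ["CPUExecutionProvider"]
print("Available:", available, "-> using:", providers)

session = ort.InferenceSession("model.onnx", providers=providers)  # placeholder model
for inp in session.get_inputs():
    print(f"input {inp.name}: shape={inp.shape}, dtype={inp.type}")
```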

Posted 1 month ago

Apply

0 years

0 Lacs

Vapi, Gujarat, India

On-site

Job Title: AI Lead Engineer
Location: Vapi, Gujarat
Experience Required: 5+ Years
Working Days: 6 Days a Week (Monday–Saturday)
Industry Exposure: Manufacturing, Retail, Finance, Healthcare, Life Sciences, or related fields

Job Description
We are seeking a highly skilled and hands-on AI Lead to join our team in Vapi. The ideal candidate will have a proven track record of developing and deploying machine learning systems in real-world environments, along with the ability to lead AI projects from concept to production. You will work closely with business and technical stakeholders to drive innovation, optimize operations, and implement intelligent automation solutions.

Key Responsibilities
Lead the design, development, and deployment of AI/ML models for business-critical applications.
Build and implement computer vision systems (e.g., defect detection, image recognition) using frameworks like OpenCV and YOLO.
Develop predictive analytics models (e.g., predictive maintenance, forecasting) using time series and machine learning algorithms such as XGBoost.
Build and deploy recommendation engines and optimization models to improve operational efficiency.
Establish and maintain robust MLOps pipelines using tools such as MLflow, Docker, and Jenkins (see the experiment-tracking sketch after this listing).
Collaborate with stakeholders across business and IT to define KPIs and deliver AI solutions aligned with organizational objectives.
Integrate AI models into existing ERP or production systems using REST APIs and microservices.
Mentor and guide a team of junior ML engineers and data scientists.

Required Skills & Technologies
Programming Languages: Python (advanced), SQL, Bash, Java (basic)
ML Frameworks: Scikit-learn, TensorFlow, PyTorch, XGBoost
DevOps & MLOps Tools: Docker, FastAPI, MLflow, Jenkins, Git
Data Engineering & Visualization: Pandas, Spark, Airflow, Tableau
Cloud Platforms: AWS (S3, EC2, SageMaker – basic)
Specializations: Computer Vision (YOLOv8, OpenCV), NLP (spaCy, Transformers), Time Series Analysis
Deployment: ONNX, REST APIs, ERP System Integration

Qualifications
B.Tech / M.Tech / M.Sc in Computer Science, Data Science, or a related field.
6+ years of experience in AI/ML with a strong focus on production-ready deployments.
Demonstrated experience leading AI/ML teams or projects.
Strong problem-solving skills and the ability to communicate effectively with cross-functional teams.
Domain experience in manufacturing, retail, or healthcare preferred.

What We Offer
A leadership role in an innovation-driven team
Exposure to end-to-end AI product development in a dynamic industry environment
Opportunities to lead, innovate, and mentor
Competitive salary and benefits package
A 6-day work culture supporting growth and accountability

This is a startup environment backed by a reputable company. We are looking for someone who can work Monday to Saturday, lead a team, generate new solutions and ideas, and manage projects effectively. Please fill in the form below before applying: https://forms.gle/8b3gdxzvc2JwnYfZ6
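The MLOps responsibilities above name MLflow for pipeline hygiene. As a hedged, minimal sketch of experiment tracking (the experiment name, parameters, dataset, and local tracking URI are illustrative assumptions), the snippet below logs the parameters, a metric, and the fitted model for one training run.

```python
# Hedged sketch: log a small training run with MLflow to a local file store.
import mlflow
import mlflow.sklearn
from sklearn.datasets import load_diabetes
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

mlflow.set_tracking_uri("file:./mlruns")          # local file store for the demo
mlflow.set_experiment("demand-forecasting-poc")   # hypothetical experiment name

X, y = load_diabetes(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)

with mlflow.start_run():
    params = {"n_estimators": 200, "learning_rate": 0.05, "max_depth": 3}
    model = GradientBoostingRegressor(**params).fit(X_tr, y_tr)
    mae = mean_absolute_error(y_te, model.predict(X_te))

    mlflow.log_params(params)
    mlflow.log_metric("mae", mae)
    mlflow.sklearn.log_model(model, "model")      # store the fitted model artifact
    print("Logged run with MAE:", round(mae, 2))
```

Runs logged this way can be compared in the MLflow UI, which is typically what Jenkins-driven retraining jobs report into.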

Posted 1 month ago

Apply