Senior Staff Engineer/Principal Engineer/Manager

8 - 13 years

14 - 19 Lacs

Posted: 2 days ago | Platform: Naukri

Work Mode

Work from Office

Job Type

Full Time

Job Description

Senior Staff Engineer/Principal Engineer/Manager - AIML/Hardware Accelerators

Job Area:
Engineering Group, Engineering Group > Systems Engineering

General Summary:

As a leading technology innovator, Qualcomm pushes the boundaries of what's possible to enable next-generation experiences and drives digital transformation to help create a smarter, connected future for all. As a Qualcomm Systems Engineer, you will research, design, develop, simulate, and/or validate systems-level software, hardware, architecture, algorithms, and solutions that enable the development of cutting-edge technology. Qualcomm Systems Engineers collaborate across functional teams to meet and exceed system-level requirements and standards.

Minimum Qualifications:

Bachelor's degree in Engineering, Information Systems, Computer Science, or related field and 8+ years of Systems Engineering or related work experience.
OR Master's degree in Engineering, Information Systems, Computer Science, or related field and 7+ years of Systems Engineering or related work experience. OR PhD in Engineering, Information Systems, Computer Science, or related field and 6+ years of Systems Engineering or related work experience.

Senior Staff/Principal Engineer/Manager Machine Learning
We are looking for a Senior Staff/Principal AI/ML Engineer/Manager with expertise in model inference, optimization, debugging, and hardware acceleration. This role will focus on building efficient AI inference systems, debugging deep learning models, optimizing AI workloads for low latency, and accelerating deployment across diverse hardware platforms.
In addition to hands-on engineering, this role involves cutting-edge research in efficient deep learning, model compression, quantization, and AI hardware-aware optimization techniques. You will explore and implement state-of-the-art AI acceleration methods while collaborating with researchers, industry experts, and open-source communities to push the boundaries of AI performance.
This is an exciting opportunity for someone passionate about both applied AI development and AI research, with a strong focus on real-world deployment, model interpretability, and high-performance inference.
Education & Experience:
  • 17+ years of experience in AI/ML development, with at least 5 years in model inference, optimization, debugging, and Python-based AI deployment.
  • Master's or Ph.D. in Computer Science, Machine Learning, or AI.
Leadership & Collaboration
  • Lead a team of AI engineers in Python-based AI inference development.
  • Collaborate with ML researchers, software engineers, and DevOps teams to deploy optimized AI solutions.
  • Define and enforce best practices for debugging and optimizing AI models.
Key Responsibilities
Model Optimization & Quantization
  • Optimize deep learning models using quantization (INT8, INT4, mixed precision, etc.), pruning, and knowledge distillation.
  • Implement Post-Training Quantization (PTQ) and Quantization-Aware Training (QAT) for deployment.
  • Familiarity with TensorRT, ONNX Runtime, OpenVINO, and TVM.
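
At its core, the PTQ work described above means mapping float tensors onto an integer grid via a scale and zero point. A minimal sketch of that affine INT8 quantize/dequantize math in plain Python (illustrative only; production toolchains such as TensorRT and ONNX Runtime calibrate scales per tensor or per channel from representative data):

```python
# Minimal sketch of post-training affine (asymmetric) INT8 quantization.
# Shows only the core quantize/dequantize math, not calibration.

def quantize_int8(values):
    """Map floats to INT8 codes with a per-tensor scale and zero point."""
    lo, hi = min(values), max(values)
    scale = (hi - lo) / 255.0 or 1.0          # guard against a constant tensor
    zero_point = round(-128 - lo / scale)     # align lo with the INT8 minimum
    q = [max(-128, min(127, round(v / scale) + zero_point)) for v in values]
    return q, scale, zero_point

def dequantize_int8(q, scale, zero_point):
    """Recover approximate floats from INT8 codes."""
    return [(code - zero_point) * scale for code in q]

weights = [-1.0, -0.25, 0.0, 0.5, 1.0]
q, scale, zp = quantize_int8(weights)
recovered = dequantize_int8(q, scale, zp)
max_err = max(abs(w - r) for w, r in zip(weights, recovered))
```

The round trip loses at most about one quantization step of precision, which is exactly the error that QAT then teaches the model to tolerate during training.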
AI Hardware Acceleration & Deployment
  • Optimize AI workloads for Qualcomm Hexagon DSP, GPUs (CUDA, Tensor Cores), TPUs, NPUs, FPGAs, Habana Gaudi, and Apple Neural Engine.
  • Leverage Python APIs for hardware-specific acceleration, including cuDNN, XLA, and MLIR.
  • Benchmark models on AI hardware architectures and debug performance issues.
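
The benchmarking bullet above reduces to a standard harness shape: warm up, time repeated runs, and report a robust statistic rather than a single measurement. A stdlib-only sketch (`toy_workload` is a hypothetical stand-in; in practice `run` would wrap a TensorRT engine execution or an ONNX Runtime session):

```python
# Minimal latency-benchmark sketch: warm up, time repeated runs,
# report the median. The harness shape is the same whether run()
# wraps an accelerator runtime call or plain Python, as here.
import statistics
import time

def benchmark(run, warmup=3, iters=20):
    """Return median latency of run() in milliseconds."""
    for _ in range(warmup):          # warm caches, JIT, allocators
        run()
    samples = []
    for _ in range(iters):
        t0 = time.perf_counter()
        run()
        samples.append((time.perf_counter() - t0) * 1e3)
    return statistics.median(samples)

def toy_workload():
    # hypothetical stand-in for a model forward pass
    sum(i * i for i in range(10_000))

p50_ms = benchmark(toy_workload)
```

Reporting the median (and, in real harnesses, tail percentiles such as p99) matters on accelerators because first-run compilation and memory-transfer effects can dominate a naive single timing.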
AI Research & Innovation
  • Conduct state-of-the-art research on AI inference efficiency, model compression, low-bit precision, sparse computing, and algorithmic acceleration.
  • Explore new deep learning architectures (Sparse Transformers, Mixture of Experts, Flash Attention) for better inference performance.
  • Contribute to open-source AI projects and publish findings in top-tier ML conferences (NeurIPS, ICML, CVPR).
  • Collaborate with hardware vendors and AI research teams to optimize deep learning models for next-gen AI accelerators.
Details of Expertise:
  • Experience optimizing LLMs, LVMs, and LMMs for inference.
  • Experience with deep learning frameworks: TensorFlow, PyTorch, JAX, ONNX.
  • Advanced skills in model quantization, pruning, and compression.
  • Proficiency in CUDA programming and Python GPU acceleration using CuPy, Numba, and TensorRT.
  • Hands-on experience with ML inference runtimes (TensorRT, TVM, ONNX Runtime, OpenVINO).
  • Experience working with runtime delegates (TFLite, ONNX, Qualcomm).
  • Strong expertise in Python programming, writing optimized and scalable AI code.
  • Experience with debugging AI models, including examining computation graphs using Netron Viewer, TensorBoard, and ONNX Runtime Debugger.
  • Strong debugging skills using profiling tools (PyTorch Profiler, TensorFlow Profiler, cProfile, Nsight Systems, perf, py-spy).
  • Expertise in cloud-based AI inference (AWS Inferentia, Azure ML, GCP AI Platform, Habana Gaudi).
  • Knowledge of hardware-aware optimizations (oneDNN, XLA, cuDNN, ROCm, MLIR, SparseML).
  • Contributions to the open-source community.
  • Publications in international forums, conferences, and journals.
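
Of the profiling tools listed, cProfile ships with Python itself. A minimal sketch of profiling a hot function and capturing the report programmatically (`hot_path` is a hypothetical stand-in for, say, a model's pre-processing code):

```python
# Minimal cProfile sketch: profile a function and capture the stats
# report as a string instead of printing it to stdout.
import cProfile
import io
import pstats

def hot_path(n=200_000):
    # hypothetical CPU-bound stand-in for real pre/post-processing
    return sum(i % 7 for i in range(n))

profiler = cProfile.Profile()
profiler.enable()
hot_path()
profiler.disable()

stream = io.StringIO()
stats = pstats.Stats(profiler, stream=stream)
stats.sort_stats("cumulative").print_stats(5)   # top 5 entries by cumulative time
report = stream.getvalue()
```

Sorting by cumulative time surfaces the call-tree roots that dominate latency, which is usually the first question when triaging a slow inference pipeline.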

Qualcomm

Technology

San Diego
