
1 Accelerator-based Architectures Jobs


Experience: 8.0 - 12.0 years

Salary: 0 Lacs

Location: Karnataka

Work mode: On-site

You should have a Bachelor's degree in Computer Science, Electrical Engineering, or equivalent practical experience, along with 8 years of experience with compilers (e.g., optimization, parallelization) and familiarity with Multi-Level Intermediate Representation (MLIR) or LLVM. A Master's degree or PhD in Computer Science or a related field is preferred. Experience compiling for architectures across hardware intellectual property (IP) blocks such as CPUs, GPUs, and Neural Processing Units (NPUs) is advantageous, as is experience executing programs or several projects. Experience with compiler development for accelerator-based architectures is also desired.

As a software engineer at Google, you will work on cutting-edge technologies that impact billions of users worldwide. The projects you work on involve handling massive amounts of information beyond web search and require expertise in information retrieval, distributed computing, system design, networking, security, artificial intelligence, and more. Versatility, leadership qualities, and a passion for tackling new challenges are essential for this role.

The compiler team at Google is responsible for analyzing, optimizing, and compiling machine learning models in support of Google's mission of organizing information and making it universally accessible and useful. Combining AI, software, and hardware expertise, the team aims to create innovative technologies that make computing faster, more seamless, and more powerful to improve people's lives.

As part of the Edge Tensor Processing Unit (TPU) compiler team, your responsibilities will include analyzing and improving compiler quality and performance; developing algorithms for optimization, parallelization, and scheduling to reduce compute and data-movement costs for machine learning (ML) workloads on the Edge TPU; collaborating with Edge TPU architects on the design of future accelerators and the hardware/software interface; mapping AI models and other workloads into Edge TPU instructions through the compiler; and managing a team of compiler engineers.

Posted 1 week ago

