Posted: 12 March

Senior Software Engineer, AI and DL Kernel Libraries

Company

NVIDIA

NVIDIA Corporation is a Santa Clara-based technology company specializing in designing GPUs and AI solutions for gaming, professional visualization, and cloud services, operating in both B2B and B2C markets globally.

Remote Hiring Policy

NVIDIA supports flexible remote work arrangements and hires from various regions globally, including the Americas, Europe, Asia, and the Middle East, with roles that may require collaboration across time zones.

Job Type

Full-time

Allowed Applicant Locations

United States

Salary

$184,000 to $287,500 per year

Job Description

We're looking for outstanding AI systems engineers to develop groundbreaking technologies in the inference systems software stack! We build innovative AI systems software to accelerate AI inference. As a member of the team, you'll develop libraries, code generators, and GPU kernel technologies for NVIDIA's hardware architecture. This means designing and building things like new abstractions, efficient attention kernel implementations, new LLM inference runtime components, and kernel code generators to accelerate large language models, agents, and other high-impact AI workloads.

What you'll be doing:

- Innovating and developing new AI systems technologies for efficient inference
- Designing, implementing, and optimizing kernels for high-impact AI workloads
- Designing and implementing extensible abstractions for LLM serving engines
- Building efficient just-in-time domain-specific compilers and runtimes
- Collaborating closely with other engineers at NVIDIA across deep learning framework, library, kernel, and GPU architecture teams
- Contributing to open-source communities such as FlashInfer, vLLM, and SGLang

What we need to see:

- Master's degree in Computer Science, Electrical Engineering, or a related field (or equivalent experience); PhD preferred
- 6+ years of academic or industry experience with ML/DL systems development preferred
- Strong experience developing or using deep learning frameworks (e.g. PyTorch, JAX, TensorFlow, ONNX), and ideally inference engines and runtimes such as vLLM, SGLang, and MLC
- Strong Python and C/C++ programming skills
- Strong experience in GPU kernel development and performance optimization (especially using CUDA C/C++, cuTile, Triton, or similar)

Ways to stand out from the crowd:

- Background in domain-specific compiler and library solutions for LLM inference and training (e.g. FlashInfer, Flash Attention)
- Expertise in inference engines such as vLLM and SGLang
- Expertise in machine learning compilers (e.g. Apache TVM, MLIR)
- Open-source project ownership or contributions

Your base salary will be determined based on your location, experience, and the pay of employees in similar positions. The base salary range is 184,000 USD - 287,500 USD. You will also be eligible for equity and benefits.

Applications for this job will be accepted at least until March 15, 2026. This posting is for an existing vacancy. NVIDIA uses AI tools in its recruiting processes.

NVIDIA is committed to fostering a diverse work environment and is proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status, or any other characteristic protected by law.