Posted at: 2 May

Senior Machine Learning Engineer, Quantized Inference

Company

NVIDIA

NVIDIA Corporation is a Santa Clara-based technology company specializing in designing GPUs and AI solutions for gaming, professional visualization, and cloud services, operating in both B2B and B2C markets globally.

Remote Hiring Policy

NVIDIA supports flexible remote work arrangements and hires from various regions globally, including the Americas, Europe, Asia, and the Middle East, with roles that may require collaboration across time zones.

Job Type

Full-time

Allowed Applicant Locations

United States

Salary

$152,000 to $287,500 per year

Job Description

We are now looking for a Senior Machine Learning Engineer for Quantized Inference! NVIDIA is seeking machine learning engineers to accelerate the discovery and deployment of efficient inference recipes for LLMs. A recipe defines which operators are transformed into low-precision or sparsified variants, unlocking throughput and latency gains without regressing accuracy or verbosity. Recipes may incorporate techniques such as rotations, block scaling to attenuate outlier impact, or improved calibration data drawn from SFT/RL pipelines.

Pushing the frontier of inference efficiency requires a holistic view of the workload. The candidate will navigate the full design space: identifying which layers are sensitive to quantization relative to their inference cost, diagnosing why specific recipes fail, and adapting training techniques such as quantization-aware distillation or targeted fine-tuning to recover accuracy where needed. Our team develops quantized and sparse recipes that ship and run at scale across NVIDIA's LLM product portfolio. Our recipes directly determine the cost and latency of serving models to millions of users.
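To illustrate one of the techniques named above, here is a minimal sketch of per-block absmax quantization, where each block of values shares one scale so that a single outlier only degrades resolution within its own block. Function names and parameters are hypothetical illustrations, not NVIDIA's actual recipe code:

```python
import numpy as np

def quantize_blockwise(w, block_size=32, n_bits=8):
    """Per-block absmax quantization: each block of `block_size`
    contiguous values shares one scale, attenuating the impact of
    outliers compared to a single per-tensor scale."""
    qmax = 2 ** (n_bits - 1) - 1               # e.g. 127 for int8
    flat = w.reshape(-1, block_size)
    scales = np.abs(flat).max(axis=1, keepdims=True) / qmax
    scales = np.where(scales == 0, 1.0, scales)  # avoid divide-by-zero
    q = np.clip(np.round(flat / scales), -qmax - 1, qmax).astype(np.int8)
    return q, scales

def dequantize_blockwise(q, scales, shape):
    """Reconstruct an approximate float tensor from ints and scales."""
    return (q.astype(np.float32) * scales).reshape(shape)

# A synthetic weight vector with one injected outlier in the first block.
rng = np.random.default_rng(0)
w = rng.standard_normal(64).astype(np.float32)
w[3] = 50.0                                    # outlier
q, s = quantize_blockwise(w, block_size=32)
w_hat = dequantize_blockwise(q, s, w.shape)
err = np.abs(w - w_hat)
# The outlier's block (first 32 values) gets a coarser scale than the
# clean second block, so its reconstruction error is larger.
```

With a single per-tensor scale, the outlier would coarsen quantization for every value; block scaling confines that damage to 32 elements.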
We collaborate with inference framework teams (vLLM, TRT-LLM) to ensure recipes translate into real throughput gains, and with post-training teams to source calibration data and co-design quantization-aware training curricula.

What you'll be doing:

- Prototype state-of-the-art quantization and sparsity recipes applied to LLM workloads
- Design and execute post-training quantization or quantization-aware distillation experiments: prepare SFT/RL calibration datasets, manage checkpoint-level eval sweeps, and iterate on recipes based on results
- Run accuracy and verbosity evaluations of quantized/sparsified LLM workloads at cluster scale
- Develop data analysis tooling and visualizations for numerics debugging
- Participate in code reviews and incorporate feedback
- Contribute improvements upstream to open-source inference and optimization libraries; publish findings at ML conferences where appropriate

What we need to see:

- Proficiency in Python and PyTorch
- Experience with quantization, sparsity, or other model compression techniques
- Ability to design and run rigorous experiments: controlled ablations, statistical significance, reproducibility
- Familiarity with LLM evaluation methodology (benchmarks, human-preference proxies, verbosity metrics)
- MS/PhD in Computer Science, Computer Engineering, Machine Learning, or equivalent experience; 3+ years of experience in an applied ML role
- Demonstrated ability to move fast with ambiguous requirements, with strong written and verbal communication

Ways to stand out from the crowd:

- Published work or production experience in post-training quantization or quantization-aware training
- Experience with SFT, RLHF/DPO, or distillation pipelines
- Familiarity with inference serving frameworks (vLLM, TRT-LLM, SGLang)
- Track record of debugging numerical issues in mixed-precision training or inference

Your base salary will be determined based on your location, experience, and the pay of employees in similar positions.
The base salary range is 152,000 USD - 241,500 USD for Level 3, and 184,000 USD - 287,500 USD for Level 4. You will also be eligible for equity and benefits.

Applications for this job will be accepted at least until May 3, 2026. This posting is for an existing vacancy. NVIDIA uses AI tools in its recruiting processes.

NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.