AI Insights
NVIDIA

Deep Learning Performance Architect - New College Graduate 2026

NVIDIA · Santa Clara, California, US
Full-time · Junior (0–3 yrs) · Posted 26d ago
Hardware/ML Architecture Engineering · IC2 · IC · On-site · Visa Sponsored
Stack: Python · C · C++ · GPU Architecture · Deep Learning · Machine Learning · Performance Modeling · Architecture Simulation · Profiling · PyTorch · JAX · TensorRT · cuDNN · cuBLAS · CUTLASS · MLIR · Triton · CUDA · OpenCL · ASIC Architecture

Summary

NVIDIA is hiring new grad (2026) Deep Learning Performance Architects to analyze, model, and develop next-generation GPU architectures accelerating AI and HPC workloads. Roles span performance modeling, simulator development, and cross-functional collaboration with HW/SW/research teams.

About the role

We are now seeking a Deep Learning Performance Architect!

NVIDIA is looking for outstanding Performance Architects with a background in performance analysis, performance modeling, and AI/deep learning to help analyze and develop the next generation of architectures that accelerate AI and high-performance computing applications.

What you’ll be doing:

  • Develop innovative architectures to extend the state of the art in deep learning performance and efficiency

  • Analyze performance, cost and power trade-offs by developing analytical models, simulators and test suites

  • Understand and analyze the interplay of hardware and software architectures on future algorithms, programming models and applications

  • Develop, analyze, and harness groundbreaking Deep Learning frameworks, libraries, and compilers

  • Actively collaborate with software, product and research teams to guide the direction of deep learning HW and SW
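The responsibilities above center on analytical performance modeling of performance, cost, and power trade-offs. As an illustrative sketch of the simplest form such a model can take, the snippet below implements a textbook roofline estimate; it is not an NVIDIA tool, and every hardware number in it is a hypothetical placeholder, not a real device specification:

```python
# Minimal roofline model: attainable throughput is capped either by peak
# compute or by memory bandwidth times arithmetic intensity.
# All hardware parameters below are illustrative placeholders.

def roofline_tflops(flops, bytes_moved, peak_tflops, bandwidth_tb_s):
    """Return attainable throughput (TFLOP/s) for a kernel.

    flops          : floating-point operations performed by the kernel
    bytes_moved    : bytes read from + written to DRAM
    peak_tflops    : peak compute throughput of the device (TFLOP/s)
    bandwidth_tb_s : DRAM bandwidth (TB/s)
    """
    intensity = flops / bytes_moved            # FLOP per byte
    return min(peak_tflops, bandwidth_tb_s * intensity)

# Hypothetical accelerator: 100 TFLOP/s peak, 2 TB/s bandwidth.
# A GEMM at 200 FLOP/B is compute-bound; an elementwise op at
# 0.25 FLOP/B is bandwidth-bound.
gemm = roofline_tflops(flops=2e12, bytes_moved=1e10,
                       peak_tflops=100, bandwidth_tb_s=2)
ewise = roofline_tflops(flops=1e9, bytes_moved=4e9,
                        peak_tflops=100, bandwidth_tb_s=2)
```

Even this toy version captures the core question behind the role's trade-off analyses: whether a kernel is limited by compute or by memory traffic.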

What we need to see:

  • MS or PhD in Computer Science, Computer Engineering, Electrical Engineering or equivalent experience

  • Strong background in GPU or Deep Learning ASIC architecture for training and/or inference

  • Experience with performance modeling, architecture simulation, profiling, and analysis

  • Solid foundation in machine learning and deep learning

  • Strong programming skills in Python, C, C++

Ways to stand out from the crowd:

  • Background in deep neural network training, inference, and optimization in leading frameworks (e.g., PyTorch, JAX, TensorRT)

  • Experience with relevant libraries, compilers, and languages: cuDNN, cuBLAS, CUTLASS, MLIR, Triton, CUDA, OpenCL

  • Experience with the architecture of or workload analysis on other DL accelerators

  • Demonstrated self-motivation, with a knack for critical and out-of-the-box thinking
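The workload-analysis experience mentioned above often starts from first-principles FLOP and memory-traffic counts. The sketch below applies the standard GEMM formulas; the function name and the FP16-storage assumption are illustrative choices for this example, not details from the posting:

```python
def gemm_flops_and_bytes(m, n, k, bytes_per_elem=2):
    """FLOPs and minimum DRAM traffic for C[m,n] = A[m,k] @ B[k,n].

    Standard counts: 2*m*n*k FLOPs (multiply + add per MAC), and each
    matrix touched once. bytes_per_elem=2 assumes FP16 storage.
    """
    flops = 2 * m * n * k
    bytes_moved = bytes_per_elem * (m * k + k * n + m * n)
    return flops, bytes_moved

# A large square GEMM has high arithmetic intensity, which is why
# matmul-heavy DL workloads can approach an accelerator's compute peak.
flops, traffic = gemm_flops_and_bytes(4096, 4096, 4096)
intensity = flops / traffic   # FLOP per byte
```

Comparing such intensity estimates against a device's compute-to-bandwidth ratio is one common way to predict which layers of a network will be compute-bound.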

Intelligent machines powered by artificial intelligence, computers that can learn, reason, and interact with people, are no longer science fiction. GPU deep learning has provided the foundation for machines to learn, perceive, reason, and solve problems. NVIDIA's GPUs run AI algorithms, simulating human intelligence, and act as the brains of computers, robots, and self-driving cars that perceive and understand the world. Increasingly known as "the AI computing company", NVIDIA wants you! Come join our Deep Learning Architecture team, where you can help build the real-time, efficient computing platforms driving our success in this exciting and rapidly growing field.

Your base salary will be determined based on your location, experience, and the pay of employees in similar positions. The base salary range is 124,000 USD - 195,500 USD for Level 2, and 152,000 USD - 241,500 USD for Level 3.

You will also be eligible for equity and benefits.

Applications for this job will be accepted at least until March 3, 2026.

This posting is for an existing vacancy. 

NVIDIA uses AI tools in its recruiting processes.

NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.


Requirements

MS or PhD in CS, CE, EE or equivalent with strong GPU or Deep Learning ASIC architecture background for training and/or inference
Hands-on experience with performance modeling, architecture simulation, profiling, and analysis
Solid foundation in machine learning and deep learning theory and practice
Strong programming skills in Python, C, and C++

Nice to have

PyTorch
JAX
TensorRT
cuDNN
cuBLAS
CUTLASS
MLIR
Triton
CUDA
OpenCL
DL accelerator architecture analysis

Role overview

Role family
Hardware/ML Architecture Engineering
Level
IC2 (ML/AI)
Experience
0–3 years
Type
Individual Contributor
Remote policy
On-site
Visa sponsorship
Available

Tech stack analysis

LANGUAGES
Python · C · C++ · CUDA · OpenCL
FRAMEWORKS
PyTorch · JAX · TensorRT · MLIR · Triton
INFRASTRUCTURE
CUDA · cuDNN · cuBLAS · CUTLASS
TOOLS
GPU profilers · architecture simulators · performance modeling tools

Green flags

Salary range is fully disclosed for both Level 2 ($124K–$195.5K) and Level 3 ($152K–$241.5K), plus equity: highly transparent for a new grad posting.



Hiring insights

JD quality
8/10
Urgency
medium
Autonomy
high
Team size
medium (5-15)


Red flags

Requires an MS or PhD for a "new college graduate" role, a high academic bar that may screen out strong BS candidates with relevant experience.


Interview insights

Rounds
5
Duration
4 wks
Difficulty
very hard
Take-home
Yes


Career path

Next roles
Senior Deep Learning Architect · Staff GPU Architect · ML Systems Research Scientist

About the company

NVIDIA is the world's leading designer of GPUs and AI computing platforms. Its chips power everything from gaming and data centers to autonomous vehicles and scientific research. With a market cap exceeding $2 trillion, NVIDIA's CUDA platform and AI accelerators have become the backbone of the global AI revolution.

HQ
Santa Clara, CA, USA
Interview difficulty
very hard
Build vs Maintain
build
Cross-functional
Yes