
Senior Site Reliability Engineer - Datacenter Automation

NVIDIA · Santa Clara, California, US
Full-time · Senior (5–12 yrs) · Posted 31d ago
DevOps / Site Reliability Engineering · IC3 · IC · On-site · Visa Sponsored · Relocation
Stack: Go, Python, Kubernetes, Slurm, Bright Cluster Manager, Site Reliability Engineering, Incident Management, Observability, Monitoring, Alerting, GPU Infrastructure, Distributed Systems, Cluster Management, Telemetry, CI/CD, Datacenter Automation, Infrastructure as Code, Data Structures and Algorithms

Summary

NVIDIA is seeking a Senior SRE to join its DGX Cloud team, responsible for scaling and operating large GPU clusters powering AI infrastructure. The role involves datacenter automation, observability, incident management, and lifecycle management of GPU assets across multiple cloud providers.

About the role

NVIDIA is hiring experienced SRE engineers to help scale its AI infrastructure. We expect you to have significant experience with site reliability principles and techniques, including reliability assessments, incident management processes, production system observability, monitoring and alerting, automated deployments, and toil elimination. We view SRE as a software engineering discipline and expect significant contributions to our codebase. We welcome out-of-the-box thinkers who can provide new ideas with a strong execution bias. Expect to be constantly challenged, improving, and evolving for the better. You will help advance NVIDIA's capacity to build and deploy leading infrastructure solutions for a broad range of AI-based applications. If you're creative, passionate about SRE, and love having fun, please apply today!

For two decades, we have pioneered visual computing, the art and science of computer graphics. With the invention of the GPU - the engine of modern visual computing - the field has expanded to encompass video games, movie production, product design, medical diagnosis and scientific research. Today, we stand at the beginning of the next era, the AI computing era, ignited by a new computing model, GPU deep learning.

What you will be doing:

  • You will be part of the DGX Cloud team responsible for production systems that enable large, scalable GPU clusters to be used for a variety of AI workloads. This includes supporting the operation of custom software for GPU asset provisioning, configuration, and lifecycle management across many cloud providers.

  • Implementing monitoring and health management capabilities that enable industry leading reliability, availability, and scalability of GPU assets. You will be harnessing multiple data streams, ranging from GPU hardware diagnostics to cluster and network telemetry.

  • Working with teams across NVIDIA to ensure production AI clusters run reliably and consistently with maximum performance. Evaluating system failures and improving services based on a well-defined incident management process.

What we need to see:

  • Direct experience in a DevOps/SRE role within a highly technical organization with demonstrable impact from your work.

  • Highly motivated with strong communication skills, you can work successfully with multi-functional teams, principals, and architects and coordinate effectively across organizational boundaries and geographies.

  • 5+ years in a similar role and experience on large-scale production systems. Experience with the aforementioned DevOps/SRE principles, tools, and techniques.

  • You possess a BS in Computer Science, Engineering, Physics, Mathematics or a comparable Degree or equivalent experience.

  • Technical knowledge, including a systems programming language (e.g., Go, Python) and a solid understanding of data structures and algorithms.

Ways to stand out from the crowd:

  • Technical competency in managing and automating large-scale distributed systems independent of cloud providers. Advanced hands-on experience and deep understanding of cluster management systems (Kubernetes, Slurm, Bright Cluster Manager).

  • Proven operational excellence in maintaining reliable and performant AI infrastructure.

What you'll do

1. Operate and support production systems enabling large-scale GPU clusters for AI workloads on DGX Cloud
2. Develop and maintain custom software for GPU asset provisioning, configuration, and lifecycle management across cloud providers
3. Implement monitoring and health management capabilities to ensure reliability, availability, and scalability of GPU assets
4. Harness data streams from GPU hardware diagnostics, cluster telemetry, and network telemetry for observability
5. Collaborate across NVIDIA teams to ensure AI clusters run reliably and consistently with maximum performance
6. Evaluate system failures and improve services through a well-defined incident management process
7. Contribute significantly to the engineering codebase in an SRE-as-software-engineering capacity
8. Eliminate toil through automation and improve deployment pipelines

Requirements

5+ years of hands-on DevOps/SRE experience on large-scale production systems with demonstrable impact
Proficiency in systems programming languages such as Go or Python with solid understanding of data structures and algorithms
Experience with SRE principles including reliability assessments, incident management, monitoring/alerting, and toil elimination
Strong cross-functional communication skills to coordinate across multi-disciplinary teams, architects, and global organizational boundaries
BS in Computer Science, Engineering, Physics, Mathematics or equivalent practical experience

Nice to have

Kubernetes
Slurm
Bright Cluster Manager
GPU infrastructure management
Cloud-agnostic distributed systems automation
AI infrastructure operations

Role overview

Role family
DevOps / Site Reliability Engineering
Level
IC3 — devops_sre
Experience
5–12 years
Type
Individual Contributor
Remote policy
On-site
Visa sponsorship
Available

Tech stack analysis

LANGUAGES
Go, Python
INFRASTRUCTURE
Kubernetes, Slurm, Bright Cluster Manager, Multi-cloud (AWS, GCP, Azure implied), GPU clusters (DGX), CI/CD pipelines
TOOLS
GPU hardware diagnostics (DCGM/NVML implied), Cluster telemetry tools, Network telemetry tools, Monitoring and alerting platforms

Salary estimate

$175K – $240K
AI-estimated salary range
Confidence: 82%
Reasoning

NVIDIA is a top-tier semiconductor/AI company headquartered in Santa Clara, CA. Senior SRE roles at NVIDIA with 5+ years of experience and specialized GPU/AI infrastructure expertise typically command $175K–$240K base salary, consistent with Levels.fyi and Glassdoor data for NVIDIA L5/L6-equivalent SRE positions. Total compensation including RSUs and bonus is likely $300K–$450K+.


Green flags

5 items
Role sits at the cutting edge of AI infrastructure, offering rare experience with GPU cluster management at massive scale on DGX Cloud. (growth)


Benefits breakdown

HEALTH & WELLNESS
Medical insurance
Dental insurance
Vision insurance


Hiring insights

JD quality
7/10
Urgency
high
Autonomy
high
Team size
medium (5-15)


Red flags

4 items
No explicit mention of remote or hybrid flexibility — role appears to be on-site at Santa Clara HQ, which may limit flexibility. (work-life balance)


Interview insights

Rounds
5
Duration
4 wks
Difficulty
hard
Take-home
No


Career path

Next roles
Staff SRE · Principal SRE · Engineering Manager – Infrastructure

About the company

NVIDIA is the world's leading designer of GPUs and AI computing platforms. Its chips power everything from gaming and data centers to autonomous vehicles and scientific research. With a market cap exceeding $2 trillion, NVIDIA's CUDA platform and AI accelerators have become the backbone of the global AI revolution.

HQ: Santa Clara, CA, USA
Interview difficulty: hard
Build vs Maintain: both
Cross-functional: Yes