NVIDIA's Hidden $31B Second Pillar: How Jensen Huang's Mellanox Acquisition Became the Backbone of AI Factories
The Networking Business Nobody Talks About
While the market's attention remains fixed on NVIDIA's AI chips, Jensen Huang has quietly built a second pillar now generating more than $31 billion a year — and it has nothing to do with GPUs themselves.
NVIDIA's data center networking business, born from the 2020 acquisition of Mellanox Technologies for $7 billion, has become the company's second-largest revenue driver and its fastest-growing division.
The Numbers
| Metric | Value |
|---|---|
| Last quarter networking revenue | $11 billion |
| Year-over-year growth | 267% |
| Full-year networking revenue | $31+ billion |
| Cisco's estimated full-year networking revenue | Exceeded by a single NVIDIA quarter |
| Original acquisition cost (Mellanox) | $7 billion (2020) |
NVIDIA's single-quarter networking revenue now exceeds what legacy networking giant Cisco is estimated to generate in an entire year.
The "AI Factory" Stack
The networking division provides the complete infrastructure needed to build what NVIDIA calls "AI factories" — dedicated data centers for training AI models:
- NVLink — Connects GPUs within a server rack
- InfiniBand Switches — In-network computing platform for inter-rack communication
- Spectrum-X — AI-optimized Ethernet platform
- Co-Packaged Optics (CPO) Switches — Next-generation optical interconnects
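The tiered roles above boil down to a bandwidth hierarchy: links inside a rack are far faster than links between racks. A toy sketch makes the gap concrete — the bandwidth figures below are illustrative assumptions for each class of link, not NVIDIA specifications:

```python
# Toy model of the "AI factory" interconnect hierarchy.
# Usable bandwidth per GPU in GB/s -- illustrative assumptions only.
INTERCONNECT_GBPS = {
    "nvlink_intra_rack": 900,      # NVLink-class, GPUs within a rack
    "infiniband_inter_rack": 50,   # InfiniBand-class, across racks
    "spectrum_x_ethernet": 50,     # AI-tuned Ethernet alternative
}

def transfer_seconds(tensor_gb: float, link: str) -> float:
    """Time to move a tensor of `tensor_gb` gigabytes over one link."""
    return tensor_gb / INTERCONNECT_GBPS[link]

# Moving a 10 GB activation tensor over each tier:
for link in INTERCONNECT_GBPS:
    print(f"{link}: {transfer_seconds(10, link) * 1000:.1f} ms")
```

With these assumed numbers, the same tensor crosses a rack boundary roughly 18x slower than it moves between neighboring GPUs — which is why the stack treats intra-rack and inter-rack traffic as separate products.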
As NVIDIA networking SVP Kevin Deierling explains:
"People typically think of networking as 'I have a printer, I need to connect it.' But Jensen said on the first day he acquired us: the data center is the new computing unit. Networking isn't just moving small amounts of data between compute nodes — it is the foundation."
The Mellanox Acquisition: $7B That Changed Everything
When NVIDIA acquired Mellanox in 2020, few observers fully grasped the vision behind the deal:
"When Jensen acquired Mellanox in 2020, he saw it as the missing puzzle piece to make GPUs a complete solution," said analyst Cook.
The strategic insight was that GPUs alone are useless without the network that connects them. Training large language models requires thousands of GPUs communicating in parallel — and the network determines how efficiently they can share data. Without NVLink and InfiniBand, GPU clusters would be bottlenecked by interconnect bandwidth rather than compute.
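A back-of-envelope calculation shows why the interconnect, not the GPU, can become the bottleneck. Under a standard ring all-reduce, each GPU sends and receives roughly 2·(N−1)/N times the gradient size per synchronization step. The scenario below is an assumed, illustrative one (model size, precision, and link bandwidth are not figures from the article):

```python
# Back-of-envelope: how long does one gradient synchronization take?
# All scenario numbers are illustrative assumptions.

def allreduce_seconds(grad_bytes: float, n_gpus: int, link_gbps: float) -> float:
    """Ring all-reduce: each GPU moves ~2*(N-1)/N of the gradient volume."""
    traffic_bytes = 2 * (n_gpus - 1) / n_gpus * grad_bytes
    return traffic_bytes / (link_gbps * 1e9)

# Assumed scenario: 70B-parameter model, fp16 gradients (2 bytes/param),
# 1024 GPUs, 50 GB/s usable inter-rack bandwidth per GPU.
grad_bytes = 70e9 * 2
t_comm = allreduce_seconds(grad_bytes, n_gpus=1024, link_gbps=50)
print(f"gradient sync per step: {t_comm:.2f} s")  # ~5.6 s with these assumptions
```

If each training step's compute finishes faster than that sync time, the GPUs sit idle waiting on the network — exactly the failure mode NVLink and InfiniBand are designed to push back.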
Full-Stack Only: No Components Sold Separately
A key element of NVIDIA's networking strategy is that these technologies are only sold as part of full-stack solutions, not as individual components. This mirrors the GPU strategy: NVIDIA doesn't sell bare chips — it sells complete systems (DGX, HGX).
This approach:
- Increases average selling prices
- Strengthens customer lock-in
- Makes it harder for competitors to match the integrated experience
- Justifies premium pricing through demonstrated performance
GTC 2026 Reinforcements
At NVIDIA's GTC conference on March 16, the company doubled down on networking with:
- Rubin platform — featuring six new chips
- Inference Context Memory Storage — new memory architecture
- Spectrum-X Ethernet Photonics switches — higher-efficiency optical interconnects
Why This Matters
The networking business is strategically critical for several reasons:
- Moat deepening: As AI models grow larger, networking becomes more important relative to compute. The bottleneck is shifting from FLOPs to bandwidth.
- Revenue diversification: NVIDIA isn't just a chip company — it's a full-stack data center infrastructure provider.
- Competitive insulation: Even if rivals match NVIDIA's GPU performance, they still need the networking stack to deploy at scale.
- Pricing power: Full-stack solutions command higher margins than commodity components.
"The network is no longer a peripheral device connecting printers or slow I/O. It is the computer's foundation. In the past, computers had backplanes. Today, the network is the AI factory's backplane. It is absolutely critical." — Kevin Deierling, NVIDIA SVP
Source: WallstreetCN