This is what $1 billion worth of AI GPUs looks like: Elon Musk publishes video tour of Cortex, Tesla's AI training supercluster powered by Nvidia's soon-to-be-superseded H100

Tesla's Cortex AI training supercluster
(Image credit: Elon Musk via X)

We love a good look inside a supercomputer, with one of our recent favorites being the glimpse Nvidia gave us of Eos, the ninth fastest supercomputer on the planet.

Now, Elon Musk has provided a peek at the massive AI supercluster, newly dubbed Cortex, that Tesla is building.

The supercluster, currently under construction at Tesla’s Giga Texas plant, is set to house 70,000 AI servers, with an initial power and cooling requirement of 130 megawatts, scaling up to 500 megawatts by 2026.
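
To put those figures in perspective, a quick back-of-the-envelope calculation (assuming the load were spread evenly across all 70,000 servers, which is our simplification rather than anything Tesla has published) works out to roughly 1.9 kW per server at the initial 130 megawatts and about 7 kW per server at the full 500 megawatts:

```python
# Back-of-envelope estimate: average power per server at Cortex, based on
# the figures reported above (70,000 servers, 130 MW initially, 500 MW by
# 2026). The even split across servers is an assumption, not a Tesla spec.

SERVERS = 70_000
INITIAL_MW = 130
FULL_MW = 500

def kw_per_server(total_mw: float, servers: int) -> float:
    """Convert a facility-level megawatt figure into kW per server."""
    return total_mw * 1_000 / servers

print(f"Initial 130 MW build: {kw_per_server(INITIAL_MW, SERVERS):.1f} kW per server")
print(f"Full 500 MW build:    {kw_per_server(FULL_MW, SERVERS):.1f} kW per server")
```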

Tesla's AI strategy

In the video, Musk shows rows upon rows of server racks, potentially holding up to 2,000 GPU servers, just a fraction of the hardware expected to eventually populate Cortex: 50,000 Nvidia H100 GPUs plus 20,000 of Tesla's own AI hardware units. The clip, although brief, offers a rare inside look at the infrastructure that will soon drive Tesla's most ambitious AI projects.
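
For a sense of how the headline's $1 billion figure stacks up, here is a minimal sketch using commonly reported H100 street prices (the $25,000 to $30,000 per-card range is our assumption, not a disclosed figure), which suggests the Nvidia hardware alone would land well above that mark:

```python
# Rough cost check on the reported 50,000 Nvidia H100 GPUs. The per-unit
# price range is an assumption based on widely reported street prices,
# not a figure disclosed by Tesla or Nvidia.

H100_COUNT = 50_000
PRICE_LOW_USD, PRICE_HIGH_USD = 25_000, 30_000  # assumed price per GPU

low = H100_COUNT * PRICE_LOW_USD
high = H100_COUNT * PRICE_HIGH_USD

print(f"Estimated H100 spend: ${low / 1e9:.2f}bn to ${high / 1e9:.2f}bn")
```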

Cortex is being developed to advance Tesla’s AI capabilities, particularly for training the Full Self-Driving (FSD) autopilot system used in its cars and the Optimus robot, an autonomous humanoid set for limited production in 2025. The supercluster's cooling system, featuring massive fans and Supermicro-provided liquid cooling, is designed to handle the extensive power demands, which, as Tom's Hardware points out, are comparable to those of a large coal-fired power plant.

Cortex is part of Musk's broader strategy to deploy several supercomputers, including the operational Memphis Supercluster, which is powered by 100,000 Nvidia H100 GPUs, and the upcoming $500 million Dojo supercomputer in Buffalo, New York.

Despite delays in moving to Nvidia's next-generation Blackwell GPUs, Musk's aggressive acquisition of AI hardware shows how keen Tesla is to stay at the forefront of AI development.

The divisive billionaire said earlier this year that the company was planning to spend "over a billion dollars" on Nvidia and AMD hardware over the course of the year, just to stay competitive in the AI space.

Wayne Williams
Editor

Wayne Williams is a freelancer writing news for TechRadar Pro. He has been writing about computers, technology, and the web for 30 years. In that time he wrote for most of the UK’s PC magazines, and launched, edited and published a number of them too.