"Predatory pre-announcement" - The brain behind the largest CPU ever calls out Nvidia for spreading ‘FUD’ amidst surprise updated GPU roadmap announcement
Nvidia is promising annual GPU releases until 2026, but not everyone thinks they can follow through on this
Nvidia is using deceptive practices and abusing its market dominance to quash the competition, according to Cerebras Systems CEO Andrew Feldman, after the firm unexpectedly announced its latest GPU product roadmap in October 2023.
Nvidia outlined new graphics cards set for annual release between 2024 and 2026, adding to the industry-leading A100 and H100 GPUs currently in such high demand, with organizations across the industry snapping them up for generative AI workloads.
But Feldman labelled this news a “predatory pre-announcement” in comments to HPCWire, highlighting that the firm has no obligation to follow through on releasing any of the components it’s teased. By doing this, he speculated, Nvidia has only confused the market, especially in light of the fact the firm was roughly a year late with the H100 GPU. And he doubts Nvidia can see this strategy through, nor that it would even want to.
Nvidia is just ‘throwing sand up in the air’
Nvidia teased yearly leaps on a single architecture in its announcement, with the Hopper Next following the Hopper GPU in 2024, followed by the Ada Lovelace-Next GPU, a successor to the Ada Lovelace graphics card, set for release in 2025.
“Companies have been making chips for a long time, and nobody has ever been able to succeed on a one-year cadence because the fabs do not change at a one-year pace,” Feldman countered to HPCWire.
“In many ways, it has been a terrible block of time for Nvidia. Stability AI said they were going to go on Intel. Amazon said Anthropic was going to run on them. We announced a monstrous deal that would produce enough compute so it would be clear that you could build… large clusters with us.
“[Nvidia’s] response, not surprising to me, in the strategy realm, is not a better product. It’s… throw sand up in the air and move your hands a lot. And you know, Nvidia was a year late with the H100.”
Feldman has designed the world’s largest AI chip, the Cerebras Wafer-Scale Engine 2 CPU – which measures 46,226 square millimeters and contains 2.6 trillion transistors across 850,000 cores.
He told the New Yorker that massive chips are better than smaller ones because cores communicate faster when they’re on the same chip rather than being scattered across a server room.
Keumars Afifi-Sabet is the Technology Editor for Live Science. He has written for a variety of publications including ITPro, The Week Digital and ComputerActive. He has worked as a technology journalist for more than five years, having previously held the role of features editor with ITPro. In that role, he oversaw the commissioning and publishing of long-form content in areas including AI, cyber security, cloud computing and digital transformation.