Nvidia, beware! IBM has a new analog AI chip that could give the H100 a run for its money
The 14nm analog chip mimics the human brain and is up to 14 times more efficient than leading GPUs
There’s every chance IBM has just unveiled the blueprint for the future of AI development with an analog AI chip that’s said to be up to 14 times more energy efficient than current industry-leading components.
One of the major problems with generative AI is how power-hungry the technology already is – and how much more demanding it may yet become.
The costs involved in training models and running the infrastructure will only skyrocket as the space matures. ChatGPT, for example, costs more than $700,000 per day to run, according to Insider.
IBM progress
IBM’s prototype chip, which the firm revealed in Nature, aims to ease the pressure on enterprises that build and operate generative AI platforms like Midjourney or GPT-4 by slashing energy consumption.
This is down to the way the analog chip is built. Unlike digital chips, which work only with discrete binary signals, analog components can manipulate continuous signals and represent gradations between 0 and 1. The two also differ in how they process signals and in the applications they suit.
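To make that distinction concrete, here's a minimal Python sketch – a generic illustration, not IBM's design – of the difference between storing a value in a discrete digital cell and in a near-continuous analog device such as a phase-change memory cell. The 2% programming-noise figure is an assumption chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

weight = 0.6123  # the value we want to store

# Digital cell: only discrete states are available (1 bit -> 0 or 1),
# so the value must be rounded to the nearest level.
digital = round(weight)  # stored as 1

# Analog cell: the stored level approximates the value directly, subject
# to device noise (the 2% standard deviation here is an assumption).
analog = weight + rng.normal(scale=0.02)

print(f"digital: {digital}, analog: {analog:.3f}")
```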
Nvidia's chips – chiefly the H100 Tensor Core GPU and the A100 Tensor Core GPU – power many of today's generative AI platforms. Should IBM iterate on the prototype and prime it for the mass market, however, its technology may one day displace Nvidia's as the industry mainstay.
IBM claims its 14nm analog AI chip, which packs 35 million phase-change memory devices per component, can encode up to 17 million model parameters. The firm also says the chip mimics the way a human brain operates, performing computations directly within memory rather than shuttling data to and from a separate processor.
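For a sense of what computing "directly within memory" means, here's a hedged NumPy sketch of the general idea behind analog crossbar arrays – an analogy, not IBM's actual circuit. Weights sit in place as conductances, inputs arrive as voltages, and by Ohm's and Kirchhoff's laws the currents summed on each output line physically form a matrix-vector product. The array size and noise level below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Weights stored as conductances G in a crossbar; inputs applied as
# voltages V. The bit-line currents are I = G @ V, so the matrix-vector
# product is computed where the weights live, with no data movement.
G = rng.uniform(0.0, 1.0, size=(4, 8))   # 4 outputs x 8 inputs (assumed)
V = rng.uniform(0.0, 1.0, size=8)        # input activations as voltages

I_ideal = G @ V  # ideal analog result

# Real devices add noise and drift; a simple additive-noise assumption:
I_real = I_ideal + rng.normal(scale=0.01, size=I_ideal.shape)

print(np.round(I_ideal, 3))
print(np.round(I_real, 3))
```

Because the multiply-accumulate happens where the weights are stored, no energy is spent moving millions of parameters between memory and processor on every operation, which is broadly where analog in-memory designs claim their efficiency gains.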
IBM demonstrated the merits of the chip in several experiments, including one in which the system transcribed audio of people speaking with accuracy very close to that of digital hardware setups.
The IBM prototype was roughly 14 times more efficient per watt, although simulations have previously shown such hardware might be between 40 and 140 times as energy-efficient as today’s leading GPUs.