AMD takes the AI networking battle to Nvidia with new DPU launch

AMD Pensando Salina DPU
(Image credit: AMD)

AMD has revealed an upgraded data processing unit (DPU) as it looks to stake its claim to power the next generation of AI.

The new Pensando Salina DPU is the company's third-generation release, promising 2x the performance, bandwidth and scale of the previous generation.

AMD says it can support 400G throughput, meaning faster data transfer rates than ever before, a major advantage as companies around the world look for quicker and more efficient infrastructure to keep up with AI demands.

Pensando Salina DPU

As with previous generations, AMD's latest DPU setup is split into two parts: the front end, which delivers data and information to an AI cluster, and the back end, which manages data transfer between accelerators and clusters.

Alongside the Pensando Salina DPU (which governs the front end), the company has also announced the AMD Pensando Pollara 400 to manage the back end.

The industry's first Ultra Ethernet Consortium (UEC) ready AI NIC, the Pensando Pollara 400 supports next-gen RDMA software and is backed by an open networking ecosystem, offering customers the flexibility needed to embrace the new AI age.

The AMD Pensando Salina DPU and AMD Pensando Pollara 400 are sampling with customers now, with a public release scheduled for the first half of 2025.


Mike Moore
Deputy Editor, TechRadar Pro

Mike Moore is Deputy Editor at TechRadar Pro. He has worked as a B2B and B2C tech journalist for nearly a decade, including at one of the UK's leading national newspapers and fellow Future title ITProPortal, and when he's not keeping track of all the latest enterprise and workplace trends, can most likely be found watching, following or taking part in some kind of sport.
