Obscure startup wins prestigious CES 2024 award — you've probably never heard of it, but Panmnesia is the company that could make ChatGPT 6 (or 7) times faster

Panmnesia's CXL 3.0 technology winning a coveted award
(Image credit: Panmnesia)

The highly coveted Innovation Award at the forthcoming Consumer Electronics Show (CES) 2024 event in January has been snapped up by a Korean startup for its AI accelerator. 

Panmnesia has built its AI accelerator device on Compute Express Link (CXL) 3.0 technology, which allows an external memory pool to be shared between host computers and components such as the CPU, translating to near-limitless memory capacity. This is made possible by a CXL 3.0 controller incorporated into the accelerator chip.

CXL is used to connect system devices – including accelerators, memory expanders, processors, and switches. By linking multiple accelerators and memory expanders through CXL switches, the technology can supply enough memory for even the most memory-intensive AI applications.
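To make that topology concrete, here is a purely illustrative Python sketch of how memory expanders hanging off a switch fabric could be aggregated into one pool visible to attached hosts. This is not Panmnesia's implementation or the CXL specification; every class and function name is invented for the example.

```python
from dataclasses import dataclass, field

@dataclass
class MemoryExpander:
    """Hypothetical CXL memory expander contributing capacity to the pool."""
    capacity_gb: int

@dataclass
class CXLSwitch:
    """Toy model of a CXL switch fanning out to several memory expanders."""
    expanders: list = field(default_factory=list)

    def pooled_capacity_gb(self) -> int:
        # Total memory reachable by any host attached to this switch.
        return sum(e.capacity_gb for e in self.expanders)

# Illustrative fabric: one switch with four 512 GB expanders behind it.
switch = CXLSwitch(expanders=[MemoryExpander(512) for _ in range(4)])
print(f"Memory visible to attached hosts: {switch.pooled_capacity_gb()} GB")  # 2048 GB
```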

What CXL 3.0 means for LLMs

Using CXL 2.0 in a device like this would give each host access only to its dedicated portion of the pooled external memory, whereas the latest generation lets any host access the entire pool as and when needed.
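The difference between the two access models can be sketched in a few lines of toy Python. Again, this only illustrates the article's description rather than real CXL semantics, and all names and figures are hypothetical.

```python
# Toy contrast of the two access models described above (hypothetical values).

POOL_GB = 2048          # total externally pooled memory
HOSTS = ["host_a", "host_b", "host_c", "host_d"]

# CXL 2.0-style: each host is limited to its pre-assigned slice of the pool.
dedicated = {h: POOL_GB // len(HOSTS) for h in HOSTS}   # 512 GB per host

def can_allocate_cxl2(host: str, request_gb: int) -> bool:
    return request_gb <= dedicated[host]

# CXL 3.0-style: any host may draw on whatever remains of the whole pool.
remaining = POOL_GB

def can_allocate_cxl3(request_gb: int) -> bool:
    return request_gb <= remaining

print(can_allocate_cxl2("host_a", 800))  # False: exceeds host_a's 512 GB slice
print(can_allocate_cxl3(800))            # True: the shared pool still has room
```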

“We believe that our CXL technology will be a cornerstone for next-generation AI acceleration system,” said Panmnesia founder and CEO Myoungsoo Jung in a statement.

"We remain committed to our endeavor revolutionizing not only for AI acceleration system, but also other general-purpose environments such as data centers, cloud computing, and high-performance computing.”

Panmnesia's technology works much like clusters of servers sharing external SSDs to store data, and it would be particularly useful for servers because they often need to access more data than they can hold in their built-in memory.

This device is built specifically for large-scale AI applications – and its creators claim it is 101 times faster at performing AI-based search functions than conventional setups, which store data on SSDs linked via networks. The architecture also minimizes energy costs and operational expenditure.

If used in the configuration of servers that the likes of OpenAI use to host large language models (LLMs) such as ChatGPT, alongside hardware from other suppliers, it could drastically improve the performance of these models.

Keumars Afifi-Sabet
Channel Editor (Technology), Live Science

Keumars Afifi-Sabet is the Technology Editor for Live Science. He has written for a variety of publications including ITPro, The Week Digital and ComputerActive. He has worked as a technology journalist for more than five years, having previously held the role of features editor with ITPro. In his previous role, he oversaw the commissioning and publishing of long-form content in areas including AI, cyber security, cloud computing and digital transformation.
