This startup wants to take on Nvidia with a server-on-a-chip, arguing that a faster GPU, CPU, LPU, TPU or NIC alone won't fix what it calls an already flawed system or deliver the leap many firms are aiming for
NeuReality claims to have built the ideal AI Inference system
According to Israeli startup NeuReality, many AI possibilities aren't fully realized due to the cost and complexity of building and scaling AI systems.
Current solutions are not optimized for inference and rely on general-purpose CPUs, which were not designed for AI. Moreover, CPU-centric architectures necessitate multiple hardware components, resulting in underutilized Deep Learning Accelerators (DLAs) due to CPU bottlenecks.
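The bottleneck argument is easy to see with a back-of-the-envelope model. The sketch below uses made-up timings (illustrative assumptions, not NeuReality figures) to show how keeping the CPU in the serving loop caps accelerator utilization:

```python
# Back-of-the-envelope model of a CPU-centric inference pipeline.
# All timings below are hypothetical, for illustration only.

cpu_ms = 4.0  # per-request CPU work: network I/O, pre- and post-processing
dla_ms = 1.0  # per-request compute time on the Deep Learning Accelerator (DLA)

# Serial CPU-centric pipeline: the DLA idles while the CPU works.
serial_util = dla_ms / (cpu_ms + dla_ms)
print(f"DLA utilization, CPU in the loop: {serial_util:.0%}")  # 20%

# If the surrounding pipeline work is offloaded so requests reach the
# DLA back to back, the accelerator approaches full utilization.
print(f"DLA utilization, pipeline offloaded: {1.0:.0%}")       # 100%
```

In this toy model the accelerator sits idle 80% of the time; offloading the rest of the pipeline, which is the kind of gain NeuReality is pitching, would let the same DLA serve roughly five times the throughput.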
NeuReality's answer to this problem is the NR1 AI Inference Solution, a combination of purpose-built software and a unique network-addressable inference server-on-a-chip. NeuReality says this will deliver improved performance and scalability at a lower cost, alongside reduced power consumption.
An express lane for large AI pipelines
“Our disruptive AI Inference technology is unbound by conventional CPUs, GPUs, and NICs,” said NeuReality’s CEO Moshe Tanach. “We didn’t try to just improve an already flawed system. Instead, we unpacked and redefined the ideal AI Inference system from top to bottom and end to end, to deliver breakthrough performance, cost savings, and energy efficiency.”
The key to NeuReality's solution is the Network Addressable Processing Unit (NAPU), a new architecture designed to make full use of DLAs. The NeuReality NR1, a network-addressable inference server-on-a-chip, pairs an embedded neural network engine with a NAPU.
This architecture enables inference in hardware through AI-over-Fabric, an AI hypervisor, and AI-pipeline offload.
The company has two products that utilize its server-on-a-chip: the NR1-M AI Inference Module and the NR1-S AI Inference Appliance. The former is a full-height, double-wide PCIe card containing one NR1 NAPU system-on-a-chip and a network-addressable inference server that can connect to an external DLA. The latter is an AI-centric inference server containing NR1-M modules with the NR1 NAPU. NeuReality claims the server “lowers cost and power performance by up to 50X but doesn’t require IT to implement for end users.”
“Investing in more and more DLAs, GPUs, LPUs, TPUs… won’t address your core issue of system inefficiency,” said Tanach. “It's akin to installing a faster engine in your car to navigate through traffic congestion and dead ends - it simply won't get you to your destination any faster. NeuReality, on the other hand, provides an express lane for large AI pipelines, seamlessly routing tasks to purpose-built AI devices and swiftly delivering responses to your customers, while conserving both resources and capital.”
NeuReality recently secured $20 million in funding from the European Innovation Council (EIC) Fund, Varana Capital, Cleveland Avenue, XT Hi-Tech and OurCrowd.