Yet another tech startup wants to topple Nvidia with 'orders of magnitude' better energy efficiency; Sagence AI bets on analog in-memory compute to deliver 666K tokens/s on Llama2-70B
Simplifies AI inference by eliminating dynamic scheduling complexities
- Sagence brings analog in-memory compute to redefine AI inference
- 10 times lower power and 20 times lower cost
- Also offers integration with PyTorch and TensorFlow
Sagence AI has introduced an advanced analog in-memory compute architecture designed to address issues of power, cost, and scalability in AI inference.
Using an analog-based approach, the architecture offers improvements in energy efficiency and cost-effectiveness while delivering performance comparable to existing high-end GPU and CPU systems.
This bold step positions Sagence AI as a potential disruptor in a market dominated by Nvidia.
Efficiency and performance
The Sagence architecture offers benefits when processing large language models like Llama2-70B. At a normalized throughput of 666,000 tokens per second, Sagence says its technology consumes 10 times less power, costs 20 times less, and occupies 20 times less rack space than leading GPU-based solutions.
This design prioritizes the demands of inference over training, reflecting the shift in AI compute focus within data centers. With its efficiency and affordability, Sagence offers a solution to the growing challenge of ensuring return on investment (ROI) as AI applications expand to large-scale deployment.
At the heart of Sagence’s innovation is its analog in-memory computing technology, which merges storage and computation within memory cells. By eliminating the need for separate storage and scheduled multiply-accumulate circuits, this approach simplifies chip designs, reduces costs, and improves power efficiency.
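To illustrate the idea in general terms (this is a generic analog-crossbar sketch, not Sagence's actual design or any published API of theirs): in analog in-memory compute, each memory cell stores a weight as a conductance, input activations arrive as voltages on the rows, and the currents summing on each column physically perform the multiply-accumulate, so no separate MAC circuit or weight fetch is needed. The simulation below models that column-current summation digitally.

```python
# Conceptual sketch of an analog in-memory crossbar (illustrative only,
# not Sagence's implementation). Weights live in the memory cells as
# conductances G; applying input voltages V produces column currents
# I[c] = sum_r(G[r][c] * V[r]), i.e. a matrix-vector multiply-accumulate
# performed by the memory array itself, with no data movement to a
# separate compute unit.

def crossbar_mac(conductances, voltages):
    """Simulate one crossbar read.

    conductances: rows x cols matrix of stored weights (the memory cells)
    voltages:     input vector applied across the rows
    returns:      per-column accumulated currents (the MAC results)
    """
    cols = len(conductances[0])
    return [
        sum(conductances[r][c] * voltages[r] for r in range(len(voltages)))
        for c in range(cols)
    ]

# A 3x2 weight array "stored" in cells, and one inference input vector:
weights = [[0.2, 0.5],
           [0.1, 0.4],
           [0.7, 0.3]]
inputs = [1.0, 0.5, 2.0]
print(crossbar_mac(weights, inputs))  # two accumulated column currents
```

In real analog hardware the same result emerges from Ohm's and Kirchhoff's laws rather than arithmetic, which is where the power and area savings come from.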
Sagence also employs deep subthreshold computing in multi-level memory cells - an industry-first innovation - to achieve the efficiency gains required for scalable AI inference.
Traditional CPU- and GPU-based systems rely on complex dynamic scheduling, which adds hardware complexity, inefficiency, and power consumption. Sagence's statically scheduled architecture eliminates this overhead, an approach the company says mirrors biological neural networks.
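The contrast can be sketched in a few lines (a hypothetical illustration of static scheduling in general, not Sagence's compiler): the execution order of the network's operations is fixed once, before any inference runs, so the runtime is a plain loop with no dispatcher, queues, or arbitration.

```python
# Illustrative sketch of static scheduling (not Sagence's toolchain).
# "Compile" step: topologically sort a {node: [dependencies]} graph into
# a fixed execution order, decided once, offline.

def compile_schedule(graph):
    order, visited = [], set()
    def visit(node):
        if node in visited:
            return
        visited.add(node)
        for dep in graph[node]:
            visit(dep)
        order.append(node)
    for node in graph:
        visit(node)
    return order  # fixed order; no runtime scheduling decisions remain

def run(schedule, ops, activations):
    # Runtime: no dynamic scheduler -- just execute the fixed order.
    for node in schedule:
        activations[node] = ops[node](activations)
    return activations

# A toy four-node network and its (hypothetical) per-node operations:
graph = {"input": [], "layer1": ["input"], "layer2": ["layer1"], "output": ["layer2"]}
ops = {
    "input":  lambda a: a["x"],
    "layer1": lambda a: a["input"] * 2,
    "layer2": lambda a: a["layer1"] + 1,
    "output": lambda a: a["layer2"],
}
print(run(compile_schedule(graph), ops, {"x": 3})["output"])  # prints 7
```

Because inference workloads execute the same graph repeatedly, paying the scheduling cost once at compile time rather than on every run is what removes the hardware a dynamic scheduler would otherwise require.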
The system is also designed to integrate with existing AI development frameworks like PyTorch, ONNX, and TensorFlow. Once a trained neural network is imported, Sagence's architecture removes the need for further GPU-based processing, simplifying deployment and reducing costs.
“A fundamental advancement in AI inference hardware is vital to the future of AI. Use of large language models (LLMs) and Generative AI drives demand for rapid and massive change at the nucleus of computing, requiring an unprecedented combination of highest performance at lowest power and economics that match costs to the value created,” said Vishal Sarin, CEO & Founder, Sagence AI.
“The legacy computing devices today that are capable of extreme high-performance AI inferencing cost too much to be economically viable and consume too much energy to be environmentally sustainable. Our mission is to break those performance and economic limitations in an environmentally responsible way,” Sarin added.
Via IEEE Spectrum
Efosa has been writing about technology for over 7 years, initially driven by curiosity but now fueled by a strong passion for the field. He holds both a Master's and a PhD in sciences, which provided him with a solid foundation in analytical thinking. Efosa developed a keen interest in technology policy, specifically exploring the intersection of privacy, security, and politics. His research delves into how technological advancements influence regulatory frameworks and societal norms, particularly concerning data protection and cybersecurity. Upon joining TechRadar Pro, in addition to privacy and technology policy, he is also focused on B2B security products. Efosa can be contacted at this email: udinmwenefosa@gmail.com