Slim-Llama is an LLM ASIC processor that can tackle 3-billion-parameter models while sipping only 4.69mW - and we'll find out more about this potential AI game changer very soon

Slim-Llama
(Image credit: KAIST)

  • Slim-Llama reduces power needs using binary/ternary quantization
  • Achieves 4.59x efficiency boost, consuming 4.69–82.07mW at scale
  • Supports 3B-parameter models, hitting 489ms latency for Llama 1bit

Traditional large language models (LLMs) often suffer from excessive power demands due to frequent external memory access - however, researchers at the Korea Advanced Institute of Science and Technology (KAIST) have now developed Slim-Llama, an ASIC designed to address this issue through clever quantization and data management.

Slim-Llama employs binary/ternary quantization, which reduces the precision of model weights to just 1 or 2 bits, significantly lowering the computational and memory requirements.
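As a rough illustration of what ternary quantization means in practice, the sketch below maps floating-point weights to {-1, 0, +1} with a single scale factor. The 0.7 x mean|w| threshold is a common heuristic from ternary-weight networks and an assumption on our part, not a detail KAIST has disclosed:

```python
import numpy as np

def ternary_quantize(w, threshold_factor=0.7):
    """Quantize a weight tensor to {-1, 0, +1} plus one scale factor.

    Illustrative only: the threshold heuristic and scale choice follow
    common ternary-weight schemes, not Slim-Llama's actual method.
    """
    # Weights smaller than the threshold are pruned to zero
    delta = threshold_factor * np.mean(np.abs(w))
    ternary = np.where(w > delta, 1, np.where(w < -delta, -1, 0))
    # Scale: mean magnitude of the surviving (nonzero) weights
    mask = ternary != 0
    scale = np.abs(w[mask]).mean() if mask.any() else 0.0
    return ternary.astype(np.int8), scale

w = np.array([0.9, -0.05, 0.4, -0.8, 0.02])
q, s = ternary_quantize(w)  # q holds only -1/0/+1 values
```

Each weight now needs at most 2 bits of storage instead of 16 or 32, which is where the memory and bandwidth savings come from.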

To further improve efficiency, it integrates a Sparsity-aware Look-up Table, improving sparse data handling and reducing unnecessary computations. The design also incorporates an output reuse scheme and index vector reordering, minimizing redundant operations and improving data flow efficiency.
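The benefit of sparsity-aware handling is easier to see with a toy example: once weights are ternary, a matrix-vector product reduces to signed sums of inputs, and zero weights can be skipped outright. This sketch is illustrative only and does not reflect Slim-Llama's actual hardware lookup tables:

```python
import numpy as np

def sparse_ternary_matvec(ternary_w, scale, x):
    """Multiply a ternary weight matrix by x, skipping zero weights.

    Toy model of sparsity-aware compute: with -1/0/+1 weights there
    are no multiplications, only additions and subtractions, and the
    zero entries cost nothing at all.
    """
    out = np.zeros(ternary_w.shape[0])
    for i, row in enumerate(ternary_w):
        pos = x[row == 1].sum()       # add inputs with +1 weights
        neg = x[row == -1].sum()      # subtract inputs with -1 weights
        out[i] = scale * (pos - neg)  # zero-weight inputs never touched
    return out

W = np.array([[1, 0, -1],
              [0, 1, 0]], dtype=np.int8)
x = np.array([2.0, 3.0, 4.0])
y = sparse_ternary_matvec(W, 0.5, x)
```

The more zeros the quantizer produces, the less work (and data movement) each output requires, which is the effect the sparsity-aware lookup table exploits.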

Reduced dependency on external memory

According to the team, the technology demonstrates a 4.59x improvement in benchmark energy efficiency compared to previous state-of-the-art solutions.

Slim-Llama achieves system power consumption as low as 4.69mW at 25MHz, scaling to 82.07mW at 200MHz while maintaining impressive energy efficiency even at higher frequencies. It delivers peak performance of up to 4.92 TOPS at 1.31 TOPS/W.

The chip features a total die area of 20.25mm², utilizing Samsung’s 28nm CMOS technology. With 500KB of on-chip SRAM, Slim-Llama reduces dependency on external memory, significantly cutting energy costs associated with data movement. The system supports external bandwidth of 1.6GB/s at 200MHz, promising smooth data handling.
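A quick back-of-envelope check (our assumptions, not KAIST's figures) shows why that bandwidth number matters: streaming the weights of a 3-billion-parameter 1-bit model once over a 1.6GB/s link takes roughly a quarter of a second, the same order of magnitude as the latency reported for the Llama 1bit model:

```python
# Back-of-envelope: time to stream a 3B-parameter 1-bit model's weights
# once over the reported 1.6GB/s external bandwidth. Purely
# illustrative; the real dataflow (500KB on-chip SRAM, output reuse)
# changes this picture considerably.
params = 3e9                  # 3 billion parameters
bits_per_weight = 1           # 1-bit (binary) quantization assumed
weight_bytes = params * bits_per_weight / 8   # ~375MB of weights
bandwidth = 1.6e9             # bytes per second at 200MHz
seconds_per_pass = weight_bytes / bandwidth   # ~0.23s per full pass
```

At 16-bit precision the same transfer would take sixteen times as long, which is why aggressive quantization and on-chip reuse are central to the design.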

Slim-Llama supports models like Llama 1bit and Llama 1.5bit, with up to 3 billion parameters, and KAIST says it delivers benchmark performance that meets the demands of modern AI applications. With a latency of 489ms for the Llama 1bit model, Slim-Llama demonstrates both efficiency and performance, making it the first ASIC to run billion-parameter models with such low power consumption.

Although it's early days, this breakthrough in energy-efficient computing could potentially pave the way for more sustainable and accessible AI hardware solutions, catering to the growing demand for efficient LLM deployment. The KAIST team is set to reveal more about Slim-Llama at the 2025 IEEE International Solid-State Circuits Conference in San Francisco on Wednesday, February 19.

Wayne Williams
Editor

Wayne Williams is a freelancer writing news for TechRadar Pro. He has been writing about computers, technology, and the web for 30 years. In that time he wrote for most of the UK’s PC magazines, and launched, edited and published a number of them too.
