100x less compute with GPT-level LLM performance: How a little-known open source project could help solve the GPU power conundrum — RWKV looks promising, but challenges remain
New model requires significantly fewer resources for running and training LLMs
Recurrent Neural Networks (RNNs) are a class of neural network widely used in deep learning. Unlike feedforward networks, RNNs maintain an internal state, a form of memory that captures information about what has been computed so far. In other words, they use their understanding of previous inputs to influence the output they produce.
RNNs are called "recurrent" because they perform the same operation for every element in a sequence, with each output depending on the previous computations, as the sketch below illustrates. RNNs still power technologies such as Apple's Siri and Google Translate.
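To make the idea concrete, here is a minimal sketch in Python (using NumPy, with arbitrary sizes and random weights invented for illustration) of the recurrence that gives RNNs their name: one step function applied to every element of a sequence, with a hidden state threading the results together.

```python
import numpy as np

# Minimal illustration of recurrence: the same step function is applied to
# every element of the sequence, and the hidden state h carries a "memory"
# of everything processed so far. All sizes and weights are made up.
rng = np.random.default_rng(0)
hidden_size, input_size = 8, 4
W_xh = 0.1 * rng.normal(size=(hidden_size, input_size))   # input-to-hidden weights
W_hh = 0.1 * rng.normal(size=(hidden_size, hidden_size))  # hidden-to-hidden weights

def rnn_step(h, x):
    # The new state depends on the current input AND the previous state.
    return np.tanh(W_xh @ x + W_hh @ h)

h = np.zeros(hidden_size)                     # empty memory before the sequence
sequence = rng.normal(size=(5, input_size))   # five toy input vectors
for x in sequence:
    h = rnn_step(h, x)                        # each step reuses the last output
print(h)                                      # final state summarizes the sequence
```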
However, with the advent of transformer-based models such as the GPT series behind ChatGPT, the landscape of natural language processing (NLP) has shifted. Transformers revolutionized NLP tasks, but their memory and computational cost scale quadratically with sequence length: doubling the length of the input roughly quadruples the work, demanding ever more resources.
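A toy sketch (with hypothetical sizes) shows where that quadratic term comes from: in self-attention, every token is scored against every other token, so the score matrix alone has sequence-length-squared entries.

```python
import numpy as np

# Toy illustration of quadratic attention cost: every token attends to every
# other token, so the score matrix has seq_len * seq_len entries.
# Doubling seq_len quadruples this work. Sizes are hypothetical.
rng = np.random.default_rng(0)
seq_len, d = 1024, 64
q = rng.normal(size=(seq_len, d))   # one query vector per token
k = rng.normal(size=(seq_len, d))   # one key vector per token

scores = (q @ k.T) / np.sqrt(d)     # shape: (seq_len, seq_len)
print(scores.shape)                 # (1024, 1024) -- over a million pairwise scores
```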
Enter RWKV
Now, a new open source project, RWKV, is offering promising solutions to the GPU power conundrum. The project, backed by the Linux Foundation, aims to drastically reduce the compute required for GPT-level large language models (LLMs), potentially by up to 100x.
RNNs' memory and computational requirements scale linearly with sequence length, but they have historically struggled to match the performance of transformers because their sequential nature limits parallelization and scalability during training. This is where RWKV comes into play.
RWKV, or Receptance Weighted Key Value, is a novel model architecture that combines the parallelizable training of transformers with the efficient inference of RNNs. The result? A model that requires significantly fewer resources (VRAM, CPU, GPU, and so on) to run and train, while maintaining high-quality performance. It also scales linearly to any context length, and its developers say it handles languages other than English better than many comparable models.
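To give a flavor of how RWKV achieves linear scaling, here is a heavily simplified, single-channel sketch of the "WKV" recurrence described in the RWKV paper. Real models apply this per channel across large vectors and add receptance gating, token shifting and numerical-stability tricks; the parameter values and inputs below are invented for illustration.

```python
import numpy as np

# Heavily simplified sketch of RWKV's WKV recurrence (recurrent formulation).
# w is a learned decay rate; u is a learned bonus for the current token.
# All values here are made up for illustration.
w, u = 0.5, 0.3

def wkv_step(state, k, v):
    a, b = state  # running weighted sums over the past (numerator, denominator)
    out = (a + np.exp(u + k) * v) / (b + np.exp(u + k))
    # Constant work per token: the entire past is summarized by (a, b),
    # which is why inference cost grows linearly with context length.
    a = np.exp(-w) * a + np.exp(k) * v
    b = np.exp(-w) * b + np.exp(k)
    return out, (a, b)

state = (0.0, 0.0)
for k, v in [(0.1, 1.0), (0.4, -0.5), (0.2, 2.0)]:  # toy (key, value) stream
    out, state = wkv_step(state, k, v)
    print(out)
```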
Despite these promising features, the RWKV model is not without its challenges. It is sensitive to prompt formatting and weaker at tasks that require looking back over earlier parts of the context. However, these issues are being addressed, and the model's potential benefits far outweigh the current limitations.
The implications of the RWKV project are profound. Instead of needing 100 GPUs to train an LLM, an RWKV model could deliver similar results with fewer than 10. This not only makes the technology more accessible but also opens up possibilities for further advancements.