'No one knows yet': Donut design could create quadrillion-transistor compute monster — analysts discuss unusual interconnection as Cerebras CEO acknowledges that we don't know what happens when multiple WSEs are connected
Cerebras' Wafer-Scale Engine excelled at a timely scientific simulation
Tri-Labs, a collaboration between three major US research institutions - Lawrence Livermore National Laboratory (LLNL), Sandia National Laboratories (SNL), and Los Alamos National Laboratory (LANL) - has been working with AI firm Cerebras on a number of scientific problems, including breaking the molecular dynamics (MD) timescale barrier.
The researchers have published a paper explaining this particular challenge, which essentially refers to the problem of running molecular dynamics simulations over a longer timescale than would normally be possible.
The barriers here are twofold: computational power and communication latency between different nodes of an HPC system. Traditionally, to compensate for the lack of computational power, scientists assign more work to each node and scale up the simulation size with the node count. Unfortunately, slow inter-node communication further exacerbates the timescale problem.
Like a donut
MD simulations are crucial to several scientific fields as they bridge the gap between quantum electronic methods and continuum mechanics methods. However, these simulations encounter timescale limitations, as they have to account for atomic vibrations, which take place over very short timescales, and other phenomena that occur over much longer periods.
The authors of the paper sought to overcome the timescale barrier by employing a more efficient computational system, specifically Cerebras' Wafer-Scale Engine.
As The Next Platform explains, “The specific simulation was to beam radiation into three different crystal lattices made of tungsten, copper, and tantalum. In these particular simulations, which were for 801,792 atoms in each lattice, the idea is to bombard the lattices with radiation and see what happens.”
Running the simulations on Frontier, the world’s fastest supercomputer, based at the Oak Ridge National Laboratory in Tennessee, and on Quartz at LLNL, scientists were only able to witness nanoseconds of what was happening to the lattices as they were bombarded with radiation. Using the WSE, they were able to watch tens of milliseconds of activity.
For the tests, Tri-Labs used the Cerebras Wafer-Scale Engine 2 (WSE-2), rather than the newer, more powerful WSE-3 launched earlier this year, but as detailed above, the results were impressive. As the paper reports, “By dedicating a processor core for each simulated atom, we demonstrate a 179-fold improvement in timesteps per second versus the Frontier GPU-based Exascale platform, along with a large improvement in timesteps per unit energy. Reducing every year of runtime to two days unlocks currently inaccessible timescales of slow microstructure transformation processes that are critical for understanding material behavior and function.”
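The paper's headline figures are internally consistent, as a bit of quick arithmetic shows: a 179-fold improvement in timesteps per second compresses a year of wall-clock runtime to roughly two days. A minimal sanity check (the variable names here are illustrative, not from the paper):

```python
# Check that a 179x speedup in timesteps per second really does
# reduce a year-long simulation run to about two days of wall time.
speedup = 179
year_in_days = 365

accelerated_days = year_in_days / speedup
print(f"{accelerated_days:.2f} days")  # prints "2.04 days"
```

This matches the paper's claim of "reducing every year of runtime to two days".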
The Next Platform’s Timothy Prickett Morgan asked Cerebras CEO and co-founder, Andrew Feldman, what happens when you connect multiple wafer scale engines together and try to run the same simulation and was told “no one knows yet”.
Prickett Morgan went on to note, “The proprietary interconnect in the WSE-2 systems could scale to 192 devices, and with the WSE-3, that number was boosted by more than an order of magnitude to 2,048 devices,” but he “strongly suspects that the same scaling principles apply to WSEs as apply to GPUs and CPUs.”
He went on to suggest, however, that there could be some way to lash WSEs together physically and make a “stovepipe of squares of interconnected WSEs,” potentially creating a donut design with power running on the inside and cooling on the outside. As Prickett Morgan concludes, “This kind of configuration could not be worse than using InfiniBand or Ethernet to interlink CPUs or GPUs.”
Wayne Williams is a freelancer writing news for TechRadar Pro. He has been writing about computers, technology, and the web for 30 years. In that time he wrote for most of the UK’s PC magazines, and launched, edited and published a number of them too.