A public cloud service has ranked among the most powerful supercomputers for the first time

At last year's SC19 conference, Microsoft Azure unveiled its HBv2 virtual machine clusters with the bold claim that they “rival the most advanced supercomputers on the planet”.

Just a year later, at the virtual Supercomputing 2020 (SC20) event, the software giant revealed that its public cloud computing service has joined the ranks of the world's most powerful data-intensive supercomputers by placing 17th on the prestigious Graph500 list. According to Microsoft, this is the first time a public cloud has placed on the Graph500, and with its HBv2 VMs delivering 1,151 GTEPS (giga traversed edges per second), Azure's result ranks among the top six percent of all published submissions to date.

Microsoft also announced that it has achieved a new record for Message Passing Interface (MPI)-based HPC scaling on the public cloud. By running Nanoscale Molecular Dynamics (NAMD) across 86,400 CPU cores, Azure has demonstrated that researchers anywhere can have petascale computing at their fingertips.
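NAMD itself is a large simulation codebase, but the MPI model it scales with is straightforward: every core runs its own copy of the program (a "rank") and the ranks exchange data with one another as the simulation progresses. The sketch below is a minimal, generic MPI program, not NAMD code, purely to illustrate the rank-and-size model referenced above.

```c
#include <mpi.h>
#include <stdio.h>

/* Minimal MPI "rank and size" sketch (not NAMD): each process learns its
 * rank; rank 0 reports how many ranks the job was launched with.
 * Typical build/run: mpicc hello_mpi.c -o hello_mpi && mpirun -np 4 ./hello_mpi */
int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's ID within the job */
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of processes (cores) */

    if (rank == 0)
        printf("Job running across %d MPI ranks\n", size);

    MPI_Finalize();
    return 0;
}
```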

The company also participated in the COVID-19 HPC Consortium, where a team led by Azure's Dr. Jer-Ming Chia worked with researchers from the Beckman Institute for Advanced Science and Technology at the University of Illinois to evaluate HBv2 VMs for supporting future simulations of the SARS-CoV-2 virus. To the team's surprise, they found that HBv2 clusters were not only able to meet the researchers' requirements but that their performance and scalability on Azure rivaled, and in some cases even surpassed, the capabilities of the Frontera supercomputer.

Graph500 vs TOP500

To compile its list of the top 500 supercomputers twice a year, TOP500 uses Jack Dongarra's Linpack benchmark because it is widely used and performance numbers are available for almost all relevant systems. The Graph500 list, on the other hand, focuses on data-intensive workloads, which is why it uses its own benchmark.
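The contrast is easy to see in code: Linpack's core is dense floating-point arithmetic over contiguous matrices, while Graph500's core kernel is a breadth-first search that spends most of its time chasing edges through memory. Below is a minimal, generic BFS over a toy graph, purely illustrative of that access pattern, not the actual Graph500 reference code.

```c
#include <stdio.h>

#define N 6  /* number of vertices in this toy graph */

/* Toy adjacency matrix: graph[u][v] = 1 means an edge between u and v.
 * The real Graph500 kernel works on graphs with billions of edges held
 * in compressed adjacency structures, but the traversal idea is the same. */
static const int graph[N][N] = {
    {0,1,1,0,0,0},
    {1,0,0,1,0,0},
    {1,0,0,1,1,0},
    {0,1,1,0,0,1},
    {0,0,1,0,0,1},
    {0,0,0,1,1,0},
};

int main(void) {
    int parent[N], queue[N], head = 0, tail = 0;
    for (int v = 0; v < N; v++) parent[v] = -1;

    /* Start the search at vertex 0. */
    parent[0] = 0;
    queue[tail++] = 0;

    /* Classic BFS: the dominant cost is moving vertex/edge data, not math. */
    while (head < tail) {
        int u = queue[head++];
        for (int v = 0; v < N; v++) {
            if (graph[u][v] && parent[v] == -1) {
                parent[v] = u;        /* record the BFS tree edge */
                queue[tail++] = v;
            }
        }
    }

    for (int v = 0; v < N; v++)
        printf("vertex %d reached via parent %d\n", v, parent[v]);
    return 0;
}
```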

As government, enterprise and research organizations become increasingly data-centric, the Graph500 serves as a useful barometer for customers and partners trying to migrate challenging data problems to the cloud.

The breadth-first search (BFS) test is part of the Graph500 benchmark, and it stresses HPC and supercomputing environments in a number of ways while placing an emphasis on the ability to move data. The test uses the “popcount” CPU instruction, which is particularly useful for customer workloads in cryptography, molecular fingerprinting and extremely dense data storage.
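As a concrete illustration of why popcount matters for fingerprint-style workloads: molecular fingerprints are commonly stored as bit vectors, and comparing two molecules reduces to counting set bits in bitwise combinations of those vectors. The sketch below uses GCC/Clang's __builtin_popcountll builtin (which maps to the hardware popcount instruction on modern x86-64) to compute a Tanimoto-style similarity between two made-up 64-bit fingerprints; the values are invented for the example.

```c
#include <stdio.h>
#include <stdint.h>

/* Illustrative only: two invented 64-bit "molecular fingerprints".
 * Real fingerprints are usually 1024+ bits, but the idea is identical. */
int main(void) {
    uint64_t mol_a = 0xF0F0F0F0AAAAAAAAULL;
    uint64_t mol_b = 0xF0F0F0F055555555ULL;

    /* __builtin_popcountll compiles to the POPCNT instruction on modern
     * x86-64 under GCC/Clang; other compilers expose equivalent intrinsics. */
    int common = __builtin_popcountll(mol_a & mol_b);  /* bits set in both fingerprints   */
    int either = __builtin_popcountll(mol_a | mol_b);  /* bits set in at least one of them */

    printf("Tanimoto similarity: %.3f\n", (double)common / either);
    return 0;
}
```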

In a blog post, Dr. William Chappell, VP of Mission Systems at Microsoft, explained how organizations can now use the company's HBv2 clusters to solve challenging data problems rather than setting up their own systems, saying:

"When critical customers have a unique need, like a challenging sparse graph problem, they no longer have to set up their own system to have world-class performance. Since we are rivaling results of the top ten machines in the world, this demo shows that anyone with a unique mission, including critical government users, can tap into our already existing capabilities. Because this comes without the cost and burden of ownership, this changes how high-performance compute will be accessed by mission users. I see this as greatly democratizing the impact of HPC."

Anthony Spadafora

After working with the TechRadar Pro team for the last several years, Anthony is now the security and networking editor at Tom’s Guide where he covers everything from data breaches and ransomware gangs to the best way to cover your whole home or business with Wi-Fi. When not writing, you can find him tinkering with PCs and game consoles, managing cables and upgrading his smart home.