The most powerful supercomputers in the world – and what they do
Answering the big questions of science
In supercomputing, is bigger better?
"For some, supercomputer size remains the key factor in determining standing in the high-performance computing world – the larger the supercomputer, runs the argument, the more realistic the simulations, comprehensive the data analytics and innovative the scientific enquiries," says Andy Grant, director of High-Performance Computing and Big Data at Bull Information Systems, an architect of supercomputers.
"This made sense back at the dawn of the supercomputing era over half a century ago [when] the technology was used primarily for specialist applications ranging from weather forecasting to complex telephone switching systems and nuclear weapons testing."
With the arrival of cloud-based supercomputing – sometimes called HPC-on-demand – that's all changing. "The continuing emphasis on size merely acts as a distraction from what really matters," says Grant. "HPC-on-demand can drive 'time to insight', shortening the time taken between the presentation of the problem and reaching an understanding of how to solve it."
The result, thinks Grant, is faster innovation, more accuracy and efficiency, and a greater understanding of complex issues. "The focus needs to change from the size of the computer to the economic benefit that the whole HPC infrastructure can deliver."
What are the trends in supercomputing?
Power efficiency is the big one: supercomputer vendors are often presented with a 'power envelope' by their customers, and the challenge is to configure a system that stays within a given maximum power draw per node.
"That means achieving the maximum possible server and memory density, it means designing in components that are frugal in electricity consumption and efficiency terms, and it also means taking account of heat production and disposal at the stage of system design," says Jason Coari, Director of International Marketing, EMEA/APAC at SGI. Cooling and climate control are under particularly close scrutiny, and no wonder: for every 100W of power a server consumes to do its job, it needs roughly another 70W for cooling. The key figure for CPUs is therefore performance per watt.
"There are components or technologies – such as flash memories – that enable us to develop systems with a much smaller footprint and which consequently take up far less space for the same power," says Coari. "The virtualisation of servers, storage and desktops presents computing centre operators with another viable means of saving electricity."
New at number 11
Our favourite supercomputer of all was a new entry at number 11 in the Top 500. It's got to be NASA's delightfully named (aren't they all?) Pleiades – named after the most beautiful stargazing sight of all, the Seven Sisters. Upgraded in October with an SGI ICE X system powered by Intel Xeon E5-2680 v3 processors, this supercomputer at NASA's Ames Research Center in Mountain View, California, is now using its newfound 3.37 petaflops for nothing less than perfecting the future of human and robotic space exploration.
When it comes to 'grand questions', supercomputers have it sewn up.