The top supercomputer surprises on the horizon
How to predict the hot trends in the world's most powerful supercomputers
It's always amazing to see the speed of change at the top of the twice-yearly Top500 ranking of supercomputers. Japan's K computer, which is currently fourth and will no doubt have dropped further by the time the June list appears, was in first position as recently as June 2011.
In three years it has been overtaken by China's Tianhe-2, which is nearly five times more powerful. The performance gains at the top have followed a pretty straight line since the 1990s, though, so changes like this shouldn't be a huge surprise.
ARCHER
I'll also be keen to see where Edinburgh University's new ARCHER supercomputer, the largest in the UK and launched in March 2014, will feature. I also wonder how long it will be before the exascale performance barrier is broken.
In past years the purpose (and funder) of each machine has been split between academia, research, government and private industry. The top 100 is generally dominated by academia and research-based machines, i.e. those organisations with big budgets and even bigger aspirations. Outside the top 100, machines from private industry dominate the remaining 400 entries. It's a trend I think we'll see continue in June and in future lists.
Linux is king
From a technology perspective, the Windows operating system has come and gone remarkably quickly. Linux is the dominant OS, and this will certainly be reflected in the June list – if anything, its position will strengthen. New software is being developed to make it increasingly easy for users to access supercomputers without really coming into contact with Linux.
I've seen a Linux system which, in simplified terms, enables users to select and send a job to their supercomputer using a drop-down menu. I think it is advances like these that have largely inhibited any broader adoption of Windows in the supercomputing industry.
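As a rough illustration of what a front end like that might do behind the scenes, here is a minimal sketch: it turns the user's menu choices into a batch script and hands it to the cluster's scheduler. Slurm and its sbatch command are assumed here purely for illustration; the system I saw may well use a different scheduler, and the file and application names are hypothetical.

```c
/* A hedged sketch of what a menu-driven job-submission tool might do:
 * write a batch script from the user's selections, then pass it to the
 * scheduler. Slurm/sbatch is an assumption, not the article's product. */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* Create the batch script the GUI would generate for the user. */
    FILE *f = fopen("job.sh", "w");          /* hypothetical file name */
    if (!f) { perror("fopen"); return 1; }

    /* Values a drop-down menu would collect from the user. */
    fprintf(f, "#!/bin/bash\n");
    fprintf(f, "#SBATCH --job-name=demo\n");
    fprintf(f, "#SBATCH --nodes=2\n");
    fprintf(f, "#SBATCH --time=01:00:00\n");
    fprintf(f, "srun ./my_application\n");    /* hypothetical executable */
    fclose(f);

    /* Hand the generated script to the scheduler. */
    int rc = system("sbatch job.sh");
    return rc == 0 ? 0 : 1;
}
```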
Intel's Phi
I'm also excited by the arrival of Intel's Phi accelerator on the list. Working with Phi is relatively straightforward and can be much the same as programming the Core i7 (or similar) processor that most people in technical computing have on their desktops, so there is no need to learn any new API. Simply put, it is easier to program and the tools are already available – if a developer is coding now, it is the same code and the same expertise.
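To make that concrete, here is a minimal sketch in plain C with OpenMP – exactly the sort of code a developer would already run on a desktop Core i7. Nothing in the source is Phi-specific; in Phi's native mode the same file can simply be rebuilt with Intel's compiler (the icc -mmic invocation in the comment is illustrative of the toolchain of the day, not a tuning recommendation).

```c
/* A standard OpenMP dot product in plain C: the same source can be built
 * for a desktop CPU (e.g. gcc -fopenmp dot.c) or, in Phi's native mode,
 * with Intel's compiler (e.g. icc -mmic -fopenmp dot.c). No Phi-specific
 * API appears anywhere below. */
#include <stdio.h>
#include <stdlib.h>
#include <omp.h>

int main(void)
{
    const size_t n = 1 << 24;                 /* ~16 million elements */
    double *a = malloc(n * sizeof *a);
    double *b = malloc(n * sizeof *b);
    if (!a || !b) return 1;

    for (size_t i = 0; i < n; i++) {          /* simple test data */
        a[i] = 1.0;
        b[i] = 2.0;
    }

    double sum = 0.0;
    /* The same pragma parallelises the loop on an i7 or on Phi. */
    #pragma omp parallel for reduction(+:sum)
    for (size_t i = 0; i < n; i++)
        sum += a[i] * b[i];

    printf("dot product = %.1f using up to %d threads\n",
           sum, omp_get_max_threads());

    free(a);
    free(b);
    return 0;
}
```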
I would balance all this, though, by saying that whilst Phi is relatively easy to program, getting the best performance out of it is difficult – it will take 10 minutes to learn and 10,000 hours to master. Getting the very best from Phi is going to take a very long time.
Nonetheless, Phi is already making an impact. It appears in five machines on the current list, including the top spot, and I expect this to continue in June. When you combine this trend with Intel's growing impact on the chip market, I think it's fair to say Intel is moving towards total domination of supercomputing.
Standard clusters
My last thoughts are around the placement of "standard clusters" on the list – these are the sorts of machines we build at OCF for customers, largely using x86 server technology. The highest-placed such machine on the current list (and I'm not sure this will change much in June) is 10th; it uses IBM's iDataPlex server technology.
It's nice to see one of these machines make the top 10 because they are really the workhorses for many research-intensive organisations. They are easy to work with, easy to upgrade, and easy to re-compile software for so that it runs effectively on the system, and they are suitable for supporting lots of users and applications.
- Andrew Dean has been HPC business development manager at OCF since 2007.