The challenges of exascale
However, exascale computing is more than just a question of calculation and simulation speed. There are memory and storage requirements too: data has to be fed into the supercomputer, and its results stored, at comparably astonishing speeds. Supply the data too slowly and the power of your compute engine goes untapped.
This throws up all manner of challenges. The need to pass data between components quickly enough makes designing and building the necessary interconnects very difficult. In fact, suitably fast interconnects haven't even been invented yet.
At HP Labs, promising research into photonics – using fibre optics to move data at the speed of light between the components inside a computer and even inside the CPU itself – is in its early stages. Stan Williams, whose team built the first working memristor as a way to improve memory speeds, is in charge of developing the interconnects that will be used in future supercomputers.
Interconnects are important because of scale considerations. Scaling is a grand way of describing the process of bolting several Blue Gene/L supercomputers together. At the moment it's not possible to do this properly because current interconnects aren't up to the job – they're just not fast enough. Finding a solution is essential due to the type of work these supercomputers are doing.
Increasingly, the machines need to be capable of performing calculations that are now known in the industry as 'embarrassingly parallel'. An embarrassingly parallel workload is one where it's very easy to break the central problem down into separate elements, all of which can then be worked on in parallel.
This type of work is perfect fodder for cluster computing. In the SETI@home project, for example, each PC can beaver away on its slice of the supplied data in splendid isolation. When it comes to analysing the effects of ageing on a bomb, the different nuances and variables of the whole problem must also, by their very nature, be worked on in parallel.
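As a rough sketch of the idea (illustrative Python of our own, using the invented names analyse_chunk and chunk_count; it is not code from SETI@home or any laboratory), an embarrassingly parallel job amounts to mapping the same function over independent slices of data and collecting the answers:

    # A minimal sketch of an embarrassingly parallel workload: each chunk is
    # processed with no communication between workers, so adding more cores
    # (or more machines) scales the job almost perfectly.
    from multiprocessing import Pool

    def analyse_chunk(chunk_id):
        # Hypothetical stand-in for real work, such as scanning one slice of
        # radio-telescope data the way a SETI@home client does
        return sum(i * i for i in range(chunk_id * 1000, (chunk_id + 1) * 1000))

    if __name__ == "__main__":
        chunk_count = 64                  # independent slices of the problem
        with Pool() as pool:              # one worker process per CPU core by default
            results = pool.map(analyse_chunk, range(chunk_count))
        print("combined result:", sum(results))

The point is that no chunk ever needs to see another chunk's results – which is exactly the property the bomb-ageing simulations lack.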
But doing so is far from easy because there are lots of dependencies between processes. For example, scientists need to model the weather in order to explain the effects of wind and rain. This research can then generate variables essential to modelling the spread of rust. While all this is going on, they need to predict how the growing rust will affect the bomb mechanisms themselves. And all the while that the bomb is rusting, the weather will be changing too.
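A hypothetical sketch of that feedback loop (the step_weather, step_rust and step_mechanism functions below are invented for illustration; real stockpile-stewardship codes are vastly more elaborate) shows why the work cannot simply be split up and handed out: each model needs the others' latest output at every time step.

    # Illustrative only: three coupled models that must exchange state at every
    # step, unlike an embarrassingly parallel job whose chunks never interact.
    def step_weather(weather):
        # Crude stand-in: humidity drifts a little each simulated day
        return {"humidity": min(1.0, weather["humidity"] + 0.01)}

    def step_rust(rust, weather):
        # Corrosion growth depends on the current weather
        return rust + 0.1 * weather["humidity"]

    def step_mechanism(wear, rust):
        # Mechanical wear depends on the current amount of corrosion
        return wear + 0.05 * rust

    weather, rust, wear = {"humidity": 0.5}, 0.0, 0.0
    for day in range(100):
        weather = step_weather(weather)    # the weather keeps changing...
        rust = step_rust(rust, weather)    # ...driving the spread of rust...
        wear = step_mechanism(wear, rust)  # ...which degrades the mechanism
    print("rust:", round(rust, 2), "wear:", round(wear, 2))

Run the three models on separate machines and they would spend much of their time waiting on each other's messages – which is why the interconnect matters so much.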
The huge compute engines needed to handle all these factors also need communication channels that are incredibly fast. Scaling is a further problem because each individual supercomputer is built to handle the data bandwidth of its own particular design. The storage used for the Blue Gene/L architecture, for example, relies on standard magnetic disk drives – thousands of them – whereas an exascale supercomputer would require radically new forms of storage.
"You would need not just two or three but several thousand Blue Gene computers to achieve an exaflop," says Reza Rooholamini, the Director of Engineering at Dell. "Even if it were practical to bolt these computers together, there would be many other factors that limit scalability, such as the network bandwidth between these computers and I/O bandwidth to storage. The problem mainly lies in access to shared resources, such as network, storage and memory."
Memory and storage
When it comes to memory and storage requirements for an exascale computer, the story doesn't get any easier. Current technology is neither fast nor reliable enough to handle the trillions upon trillions of calculations required for such a machine.
While memory density advances at a broadly similar rate to processor speeds (both rely on silicon advancements), mechanical components like today's hard disks have not scaled nearly as quickly. Seager notes that, in current supercomputers, hard disk failure has been one of the most common and serious sources of problems.
At the exascale level, more components mean higher overall failure rates, and the faster data moves, the more error-prone it becomes. This would also necessitate a move to 128-bit computing to avoid latency issues. "To put it in perspective, in the time it takes for the mechanical head of a disk drive to find a piece of data (about five thousandths of a second), an exascale system could have executed 200,000,000,000,000 instructions," says David Flynn, the CTO of Fusion-io, a company that makes solid-state storage and high-performance I/O products.
"What this means in practical terms is that the system would sit idle for most of the time, having nothing to do but wait for storage. Indeed, even today's petaflop-scale systems sit idle about for about five to 30 per cent of the time waiting for access to storage. That means an utter waste of up to 30 per cent of the dollars spent on the system, because they aren't getting work done."
"On the hardware side, most people think about power requirements," adds Seager. "But there's also a memory wall. The density of DRAM has increased, but the speed has not had a relative increase. Today, we hide that latency problem with caches, but those techniques are running out of gas. We are now looking at innovative parallelism strategies along the bus so that memory is not getting hung up somewhere."