
The Future of High Performance Scientific Computing is Anything but Clear

By Bruce Hendrickson, Sivan Toledo

Cutting-edge scientific computing has relied for decades on what seems to be a never-ending parade of faster and faster computers. The continuously growing power of supercomputers has been well documented by the TOP500 List, a website that publishes a biannual ranking of the 500 most powerful computers in the world. However, recent signs indicate that the underlying dynamics driving this growth are slowing and will eventually stop, at least in their current form, in about a decade.

To help the high performance scientific computing community prepare for the future, three experts shared their views on the future of high performance computing with attendees at the SIAM Conference on Parallel Processing for Scientific Computing, which was held in Paris, France, this April.

Panelists Horst Simon, deputy director of Lawrence Berkeley National Laboratory (and one of the authors of the TOP500 List); Thomas Sterling, a professor and chief scientist of the Center for Research on Extreme-Scale Technologies at Indiana University; and Mike Heroux, a senior scientist at Sandia National Labs, agreed that significant improvements in supercomputing will require overcoming major challenges. But interestingly, they had completely different views on what challenges are most important.

Figure 1. The growth in the floating-point performance of supercomputers, based on the TOP500 list. The graph shows the 2008 slowdown in the growth-rate of the floating-point performance of the least-powerful computer among the world’s 500 most powerful computers (bottom data series) and the 2013 slowdown in the growth-rate of the cumulative performance of the top 500 supercomputers (top data series). The performance of the most powerful computer (middle data series) in the world is not a smooth function of time, which makes it a poor metric for assessing growth rates. Figure courtesy of [1] © 2015 IEEE.
Discussing the underlying technologies used to build processors, memories, and communication channels, Simon argued that it is hard to know what’s coming next. All we know for sure is that progress is slowing, and that the rate of progress will halve from its current level in a decade or less. He illustrated this slowdown using stable metrics of progress from the TOP500 List: the rate of improvement in the performance of the trailing computer on the list (number 500) dropped in 2008, while the rate of improvement in the cumulative performance of all 500 computers dropped in 2013 (see Figure 1). This is clearly related to the approaching end of progress in photolithography, the technology used to mass-produce computer chips. Photolithography has been advancing at a fairly constant exponential rate, known as Moore’s law, since the early 1970s, when Intel produced its first processor with a feature size of 10μm. Photolithography is still used to produce computer chips today, but the feature size has dropped to around 15nm, allowing semiconductor manufacturers to cram about a million times more transistors into a chip than 45 years ago. Experts expect this process to continue for a while, down to 10nm and then 7nm or even 5nm, but it is unlikely to go much further. At 5nm, transistors are only about 20 silicon atoms wide; below that, quantum effects overwhelm the electronic principles that currently make computers tick. Furthermore, as feature sizes shrink, the cost of developing and building chip-manufacturing plants skyrockets, making investments in them difficult to justify. At the current, slower rate of progress, we will reach the 7nm or 5nm limit in about a decade.
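
The scaling figures quoted above are easy to check with back-of-envelope arithmetic. The following minimal Python sketch uses rough values drawn from the paragraph (a 10μm feature size in 1971, roughly 15nm today, and a silicon atomic spacing of about 0.235nm); it is illustrative only, not a precise accounting of transistor counts.

```python
# Back-of-envelope check of the scaling figures above (illustrative only;
# real chips also grew in die area over this period, which adds to the gain).
feature_1971_nm = 10_000   # 10 micrometers, Intel's first processor
feature_today_nm = 15      # roughly the current feature size cited above

# Transistor density scales roughly with the inverse square of feature size.
density_gain = (feature_1971_nm / feature_today_nm) ** 2
print(f"approximate density gain: {density_gain:,.0f}x")  # ~444,000x; with larger dies, order of a million

# Width of a 5 nm feature measured in silicon atoms (Si-Si spacing ~0.235 nm).
si_spacing_nm = 0.235
print(f"atoms across 5 nm: {5 / si_spacing_nm:.0f}")      # ~21 atoms
```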

Simon and other experts believe that in the next few years, progress will rely on two emerging technologies that still lie within the domain of conventional photolithography, which produces logic gates known as complementary metal-oxide-semiconductor (CMOS) gates. One is the three-dimensional stacking of chips, which can increase the density of memory and logic gates per unit volume. The other is silicon photonics, which can reduce power consumption and improve data-transfer performance by tightly integrating digital electronics with optical communication links. Further in the future, Simon envisions three classes of technologies that may enable additional improvements in computing power. One is the so-called post-CMOS transistor, which aims to build memory cells and logic gates that exploit rather than suffer from quantum effects. There are many candidate technologies, but it is unclear which of them can be mass produced in a cost-effective manner. The second technology is quantum computing, though it is hard to tell how significant and general-purpose it may become. The third is an early-stage technology called neuromorphic computing, which aims to mimic the brain. However, it is fairly clear that no technology is poised to take over smoothly from von Neumann architectures and CMOS devices. We may well experience a period with very little progress in hardware capabilities.

Acknowledging the possible end of CMOS scaling, Thomas Sterling made a case for radically rethinking computer architectures and programming models. He reasoned that the architecture of supercomputers has remained stagnant, relying on interconnected von Neumann compute nodes. Sterling believes that new architectures will be necessary to increase the power of supercomputers beyond about 10¹⁸ floating-point operations per second. Even with current machines, the movement of data is the limiting factor in performance and consumes the bulk of the power. To circumvent this limitation, Sterling argued that future architectures will have to seamlessly blend computational and arithmetic units with memory in order to optimize data transfers, instead of optimizing the utilization of arithmetic units. He also believes that these future architectures will be based on data-flow models, rather than the von Neumann model. Programs will execute in a highly asynchronous and dynamic manner.
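
As a rough illustration of what dataflow-style, asynchronous execution means in practice, consider tasks that fire as soon as their input data are available rather than in a fixed sequential order. The toy Python sketch below conveys the idea only; it is not any specific system Sterling described, and the task names are made up for illustration.

```python
# Toy illustration of dataflow-style execution (not any specific system
# discussed on the panel): tasks run when their inputs are ready, not in
# a fixed program-counter order.
from concurrent.futures import ThreadPoolExecutor

def load(name):        # stand-in for fetching a block of data
    return sum(range(1000))

def combine(a, b):     # stand-in for a compute kernel on two inputs
    return a + b

with ThreadPoolExecutor(max_workers=4) as pool:
    fa = pool.submit(load, "block_a")   # the two loads can run concurrently
    fb = pool.submit(load, "block_b")
    # The dependent task is expressed in terms of its inputs; it waits for
    # the data to arrive, not for a particular point in a sequential program.
    fc = pool.submit(lambda: combine(fa.result(), fb.result()))
    print(fc.result())
```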

Mike Heroux asserted that any major change to computer architectures will have a dramatic impact on large simulation codes. Consequently, he feels that the biggest challenge will be developing software with the complexity and sophistication to actually benefit from future supercomputers, and that such software may not materialize without significant investments in software engineering. In the national labs, new simulation codes are being written not by single scientists, not even by teams of scientists, but by large collections of teams from different labs and universities. These teams consist of both application scientists and computer scientists. This scale of software development is necessary in order to build novel multiphysics, multiscale simulations. Such massive efforts require new software-engineering skills, training for the use of new tools, and new incentive structures from both publishers and funding agencies.

One clear message from the panel discussion was that a major change is coming to supercomputing. Many other sessions at the conference touched on the same topics, including minisymposium sessions on next-generation architectures, post-Moore era tuning, new types of accelerators, extreme-scale scientific software engineering, and more. There is a strong consensus that current approaches are running out of steam, but little clarity on what will come next. It is safe to say that the next decade in supercomputing will be marked by both uncertainty and opportunity.

References

[1] Strohmaier, E., Meuer, H., Dongarra, J., & Simon, H. (2015). The TOP500 List and Progress in High Performance Computing. IEEE Computer, 48(11), 42-49.

Bruce Hendrickson is the director of the Center for Computing Research at Sandia National Labs and an affiliated faculty member in the Department of Computer Science at the University of New Mexico.

Sivan Toledo is a professor of computer science in the Blavatnik School of Computer Science at Tel-Aviv University.
