Source: The Next Platform | Rob Johnson | January 22, 2019

For exascale hardware to be useful, systems software is going to have to be stacked up and optimized to bend that hardware to the will of applications. This, in many ways, is the hardest part of reaching exascale class systems.

According to TOP500.org’s November 2018 rankings, five of the top ten HPC systems in the world support advanced research at the United States Department of Energy (DOE). At the number one position, the aptly named “Summit” system, housed at Oak Ridge National Laboratory (ORNL), offers theoretical peak performance above 200,000 teraflops with the underlying support of 2,397,824 processor cores. That benchmark represents a stunning achievement.

However, by 2021, the team behind the Exascale Computing Project (ECP) foresees an exascale computing ecosystem capable of 50 times the application performance of today’s leading 20-petaflops systems, and five times the performance of Summit. With all that prowess on tap, exascale-level systems will be able to solve problems of greater complexity and support applications that deliver high-fidelity solutions more quickly. But applications need a software stack that lets them access that capability, and the new Extreme-Scale Scientific Software Stack (E4S) release represents a major step toward the ECP’s larger goal.
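As a back-of-the-envelope check, the two targets cited here are mutually consistent: 50 times a 20-petaflops system and 5 times Summit’s roughly 200-petaflops peak both land at one exaflop. A minimal sketch of that arithmetic (figures taken from the article; unit conversions are standard SI):

```python
# Unit definitions (floating-point operations per second)
teraflops = 1e12
petaflops = 1e15
exaflops = 1e18

# Figures quoted in the article
summit_peak = 200_000 * teraflops   # Summit: ~200,000 teraflops theoretical peak
baseline = 20 * petaflops           # "leading 20 petaflops systems"

# The two ECP performance targets
target_vs_baseline = 50 * baseline      # 50x the 20-PF baseline
target_vs_summit = 5 * summit_peak      # 5x Summit's peak

print(target_vs_baseline / exaflops)    # 1.0 (one exaflop)
print(target_vs_summit / exaflops)      # 1.0 (one exaflop)
```

Both routes arrive at the same figure, which is exactly what “exascale” names: on the order of 10^18 floating-point operations per second.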

The key stakeholders behind the ECP’s mission are the DOE’s National Nuclear Security Administration (NNSA) and Office of Science. These government agencies seek to create increasingly powerful compute tools to tackle the massive workloads associated with complex modeling, simulation, and machine learning challenges in fields like nuclear energy production, national security, and economic analysis. However, the ECP team – made up of contributors from academia, private industry, and government research – envisions a broader scope of use cases in the coming years.

EXASCALE PERFORMANCE EMPOWERING DISCOVERY

Robert Wisniewski, chief software architect for extreme scale computing at Intel, is among an elite group of industry HPC experts helping bring the ECP software stack to life. “While the DOE’s needs and advocacy served to initiate the ECP effort, our community envisions the ECP having an impact across a broader scope of scientific and engineering endeavors,” said Wisniewski. “Once deployed, researchers and engineers from the labs, academia, and industry have the opportunity to apply for time on these leadership facility supercomputers. This capability helps drive scientific and engineering innovation.”

While Moore’s Law predicts that compute capability in the coming years will surpass today’s by a significant margin, hardware represents only one piece of a much bigger HPC puzzle. Wisniewski’s role focuses on the development of the software stack, a key ingredient in the success of exascale-class machines. According to Wisniewski, “Intel is an advocate of the ECP’s work, and I am excited to be able to contribute to the Extreme-Scale Scientific Software Stack effort.”

Realizing an achievement of this magnitude requires industry collaboration. The ECP must empower the supporting architecture, system software, applications, and experts with the technical skill sets needed to derive the greatest level of performance from modern hardware.

CONVERGED WORKLOADS

As Wisniewski points out, measuring system performance is not just about benchmarks like Linpack, the linear equations software package. The charter for the exascale machines is to deliver high performance to real applications across an increasingly broad set of workloads.
