Exascale: Waiting, Not Precisely Patiently

Source: The Next Platform | Timothy Prickett Morgan | June 28, 2021

There was an outside chance that China might pull a surprise on the HPC community and launch the first true exascale system – meaning capable of more than 1 exaflops of peak theoretical 64-bit floating point performance if you want to be generous, and 1 exaflops sustained on the High Performance Linpack (HPL) benchmark if you don’t – but that didn’t happen. And so, we wait.

There was a reasonable amount of churn on the semi-annual Top500 ranking of supercomputers, the June list of which was divulged at the International Supercomputing (ISC) conference as is tradition. ISC21 was hosted online, however, rather than in Germany, and like many of you, we miss traveling to see colleagues and friends – and even competitors – and hope that this coronavirus pandemic will not have a resurgence in the fall with the Delta variant. We shall see, or more precisely, the supercomputers of the world will help us see. If there is any good that comes out of this, it is that the value of supercomputing has been shown to the world. So there is that, and in the long run, this is what is actually important. But we still miss that German breakfast. . . .

The new machine at the top of the Top500 list is the “Perlmutter” pre-exascale system at Lawrence Berkeley National Laboratory, which is a Cray EX system from Hewlett Packard Enterprise that we detailed last month. Perlmutter is interesting in that it mixes AMD CPUs with Nvidia GPU accelerators and an HPE Cray Slingshot interconnect between the nodes. This machine was installed in phases. In Phase 1, the 1,500 nodes in a dozen cabinets each had a single 64-core “Milan” Epyc 7763 processor running at 2.45 GHz with 256 GB of memory and four Nvidia “Ampere” A100 GPU accelerators with 40 GB of HBM2E memory, for a total of 6,000 GPUs. In Phase 2, another dozen cabinets were installed with 3,000 CPU-only nodes, each with a pair of the 64-core Milan Epyc 7763 processors.
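The Phase 1 arithmetic above can be sketched as a quick back-of-the-envelope calculation. The node and GPU counts come from the article; the per-A100 FP64 rates (9.7 teraflops in standard FP64 units, 19.5 teraflops via FP64 tensor cores) are Nvidia's published specs for the 40 GB A100, used here as an assumption for illustration:

```python
# Back-of-the-envelope peak FP64 for Perlmutter's Phase 1 GPU partition.
nodes = 1500                      # Phase 1 nodes, per the article
gpus_per_node = 4                 # A100s per node, per the article
total_gpus = nodes * gpus_per_node  # matches the article's 6,000-GPU total

# Assumed per-GPU peaks from Nvidia's A100 spec sheet (not from the article):
a100_fp64_tflops = 9.7            # standard FP64 units
a100_fp64_tensor_tflops = 19.5    # FP64 tensor cores

peak_pflops = total_gpus * a100_fp64_tflops / 1000
peak_tensor_pflops = total_gpus * a100_fp64_tensor_tflops / 1000

print(f"{total_gpus} GPUs")
print(f"~{peak_pflops:.1f} PF peak FP64, ~{peak_tensor_pflops:.1f} PF via tensor cores")
```

Either way the GPU partition alone lands in the tens-of-petaflops range of peak FP64, which is why Perlmutter is labeled pre-exascale rather than exascale.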
