Source: diginomica.com | Neil Raden | October 21, 2019

What can you do with 1,000,000,000,000,000,000 floating-point calculations per second (an exaflop)? Simulate nuclear weapons? Yes. Cure cancer? Maybe…

Two topics that are of continuing interest to me are supercomputing and cancer research. The former because I was involved in it in the past, and the latter because the time and expense it involves make me wonder if anyone is actually serious about it. For example, every scientific article I read is about treating cancer, especially developing new drugs, but doesn't address the more critical, foundational question: why is there cancer?

In that vein, an article crossed my desk from Frontiers in Oncology: AI Meets Exascale Computing: Advancing Cancer Research With Large-Scale High-Performance Computing.

That’s the message you hear about supercomputers, especially the forthcoming “exascale” computers, which the Department of Energy promotes as holding the promise of a cure for cancer.

Before examining what this means, let’s look at a brief history of supercomputing to put it into perspective. I was involved, in June 1997 at Sandia Labs, with what was then the fastest supercomputer ever built: Intel’s ASCI Red. It was the world’s first computer to achieve one teraFLOP and beyond; a teraFLOP is a trillion floating-point calculations per second.

It was a distributed-memory MIMD (Multiple Instruction, Multiple Data) message-passing machine. Every compute node had two 200 MHz Pentium Pro processors, each with a 16 KB level-1 cache and a 256 KB level-2 cache; these were later upgraded to two 333 MHz Pentium II OverDrive processors, each with a 32 KB level-1 cache and a 512 KB level-2 cache. According to Intel, ASCI Red was also the first large-scale supercomputer built entirely of commercially available components.
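
For readers who have never programmed a machine of this kind, the distributed-memory, message-passing model is easy to sketch: each process runs its own copy of the program against its own local memory, and data is combined only through explicit messages. Below is a minimal illustration in standard MPI; it is a sketch of the programming model described above, not code from ASCI Red itself.

/* A minimal sketch of the distributed-memory, message-passing (MIMD) model:
   each rank works on its own data in its own memory, and results are combined
   only by exchanging messages. Standard MPI, illustrative only. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* which process am I?        */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* how many processes in all? */

    /* Each process computes a partial result in its own local memory ... */
    double local = (double)rank;
    double total = 0.0;

    /* ... and the partial results meet only through message passing. */
    MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("Sum of ranks 0..%d = %.0f\n", size - 1, total);

    MPI_Finalize();
    return 0;
}

Compile with mpicc and launch with mpirun to see a handful of processes cooperating; the same model, scaled to thousands of nodes, is what machines like ASCI Red were built around.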

ASCI Red had the best reliability of any supercomputer ever built and was supercomputing’s high-water mark in longevity, price, and performance.

It should not come as a surprise that ASCI Red was developed at Sandia National Laboratories. If you’re not familiar with the National Labs under the Department of Energy, Sandia, Los Alamos, and Livermore all operate under the direction of the National Nuclear Security Administration. Los Alamos was always the “Theoretical Division”; Sandia designed and built nuclear weapons. ASCI Red was designed to simulate new nuclear weapons and to assess the efficacy of the existing stockpile, because nuclear testing had been banned.

Today, twenty-two years later, two IBM-built supercomputers, Summit and Sierra, installed at the Department of Energy’s Oak Ridge National Laboratory (ORNL) in Tennessee and Lawrence Livermore National Laboratory in California, respectively, hold the first two positions as the world’s fastest supercomputers: 148.6 petaflops for Summit and 94.6 petaflops for Sierra.
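
To get a feel for the jump, here is a back-of-the-envelope comparison using only the figures quoted in this article: ASCI Red’s one teraFLOP, Summit’s 148.6 petaflops, and the one-exaflop target.

/* Scale comparison from the figures in the text; simple arithmetic only. */
#include <stdio.h>

int main(void) {
    const double asci_red = 1e12;     /* ~1 teraFLOP, 1997               */
    const double summit   = 148.6e15; /* 148.6 petaflops, 2019           */
    const double exascale = 1e18;     /* 1 exaflop, the forthcoming goal */

    printf("Summit vs. ASCI Red:   about %.0fx\n", summit / asci_red);   /* ~148,600x  */
    printf("Exascale vs. ASCI Red: about %.0fx\n", exascale / asci_red); /* 1,000,000x */
    return 0;
}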

Taking up 7,000 square feet (650 sq m), Sierra has 240 computing racks and 4,320 nodes. Each node has two IBM Power9 CPUs, four Nvidia V100 GPUs, a Mellanox EDR InfiniBand interconnect, and Nvidia’s NVLink interconnect. Across 24 racks of Elastic Storage Servers, Sierra has 154 petabytes of Spectrum Scale, IBM’s software-defined parallel file system. The 11 MW system is thought to be five times as power-efficient as its predecessor, Sequoia.
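
Multiplying out the specs above, and using nothing beyond the stated numbers, gives a sense of Sierra’s aggregate scale and power efficiency.

/* Aggregates derived only from the quoted specs: 4,320 nodes with
   2 Power9 CPUs and 4 V100 GPUs each, 94.6 petaflops, 11 MW. */
#include <stdio.h>

int main(void) {
    const int nodes = 4320;
    const int cpus_per_node = 2, gpus_per_node = 4;
    const double flops = 94.6e15;   /* Sierra's 94.6 petaflops */
    const double watts = 11e6;      /* 11 MW                   */

    printf("CPUs: %d   GPUs: %d\n", nodes * cpus_per_node, nodes * gpus_per_node);
    printf("Roughly %.1f gigaflops per watt\n", flops / watts / 1e9);  /* ~8.6 GF/W */
    return 0;
}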
