Source: HPCwire | Tiffany Trader | July 14, 2018

‘Big Data Challenges and Advanced Computing Solutions’ Focus of House Committee Hearing

On Thursday, July 12, the House Committee on Science, Space, and Technology heard from four academic and industry leaders – representatives from Berkeley Lab, Argonne Lab, GE Global Research and Carnegie Mellon University – on the opportunities springing from the intersection of machine learning and advanced-scale computing.

“As the nation’s largest federal sponsor of basic research in the physical sciences, with expertise in big data science, advanced algorithms, data analytics and high performance computing, the Department of Energy (DOE) is uniquely equipped to fund robust fundamental research in machine learning,” said Energy Subcommittee Chairman Randy Weber (R-Texas), who opened the meeting.

Weber noted there are broad applications for machine learning and advanced computing in the DOE mission space, including high energy physics, fusion energy sciences and nuclear weapons development. He also emphasized the importance of data-driven technologies for academia and industry, citing the Rice University researchers who are exploring machine learning-based approaches for modeling flood waters and aiding in evacuation planning. “In Texas, we are still recovering from Hurricane Harvey—the wettest storm in United States history!” he said.

Kathy Yelick, associate laboratory director for Computing Sciences at Lawrence Berkeley National Laboratory, described some of the large-scale data challenges in the DOE Office of Science and gave an overview of how machine learning, and specifically deep learning, is poised to impact scientific discovery. “Machine learning has revolutionized the field of artificial intelligence and it requires three things: large amounts of data, fast computers and good algorithms,” Yelick stated. “DOE has all of these. Scientific instruments are the eyes, ears and hands of science, but unlike artificial intelligence the goal is not to replicate human behavior but to augment it with superhuman measurement, control and analysis capabilities, empowering scientists to handle data at unprecedented scales, provide new scientific insights and solve societal challenges.”

“Machine learning does not replace the need for high-performance computing simulations, but adds a complementary tool for science,” Yelick said. “Recent earthquake simulations of the Bay Area show that just a three-mile difference in the location of an identical building makes a significant difference in the safety of that building; it really is all about location, location, location. The team that did this work is looking at taking data from embedded sensors and eventually from smart meters to give even more detailed location-specific results.”

There is tremendous enthusiasm for machine learning in science, Yelick observed, but there’s also a need for caution. “Machine learning results are often lacking in explanations, interpretations, or error bars, a frustration for scientists, and scientific data is complicated and often incomplete,” she said. Bias in algorithms is also a concern: for example, a self-driving car whose voice interface was trained on one regional dialect may not recognize drivers from another region, and a cosmic event in the Southern Hemisphere may not be recognized by a model that was trained on Northern Hemisphere data.

“Foundational research in machine learning is needed along with a network to move the data to the computers and share it with the community and make it as easy to search for scientific data as it is to find a used car online,” she said.

In her full written testimony, Yelick highlighted DOE’s investments in supercomputing that are advancing machine learning, referencing early work on the recently deployed pre-exascale systems Summit (at Oak Ridge National Lab) and Sierra (at Lawrence Livermore).

“One of the key computational kernels in deep learning is multiplying two matrices, which is also the dominant kernel in the Linpack benchmark used for the TOP500 list, where Summit and Sierra are in the #1 and #3 spots respectively,” said Yelick. “Finalists for the 2018 Gordon Bell Prize include a deep learning computation that ran at over 200 petaops/sec on Summit, a partnership between NERSC, OLCF, Nvidia, and Google, which was used to analyze data from cosmology and extreme weather events. A second finalist is a project led by Oak Ridge National Laboratory with researchers from the University of Missouri-St. Louis, which used an entirely different algorithm to learn relationships between genetic mutations across an enormous set of genomes, with potential applications in biomanufacturing and human health. This algorithm can also be mapped to matrix-multiply-like operations. It runs at an impressive 1.88 exaops/second! These are the fastest deep learning and machine learning computations reported to date.”
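Yelick’s point about a shared kernel can be sketched in a few lines: the same general matrix-multiply (GEMM) operation sits at the heart of both a dense neural-network layer and the LU factorization that Linpack times. The snippet below is an illustrative pure-Python sketch, not code from the testimony or from either benchmark; the matrices are made-up example data.

```python
# Illustrative sketch: the general matrix-multiply (GEMM) kernel that
# dominates both deep learning (activations x weights in a dense layer)
# and the Linpack benchmark (inside its LU factorization).

def matmul(A, B):
    """Naive C = A @ B for row-major lists of lists."""
    rows, inner, cols = len(A), len(B), len(B[0])
    assert len(A[0]) == inner, "inner dimensions must match"
    return [[sum(A[i][k] * B[k][j] for k in range(inner))
             for j in range(cols)]
            for i in range(rows)]

# A dense-layer forward pass is just this kernel applied to a batch of
# inputs (2x3) and a weight matrix (3x2); the values are arbitrary.
batch = [[1.0, 2.0, 3.0],
         [4.0, 5.0, 6.0]]
weights = [[0.1, 0.2],
           [0.3, 0.4],
           [0.5, 0.6]]
print(matmul(batch, weights))  # approximately [[2.2, 2.8], [4.9, 6.4]]
```

Production systems never use this triple loop directly; they call tuned BLAS or accelerator libraries, which is why hardware that excels at Linpack also tends to excel at deep learning workloads.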
