SIAM News Blog

January 2020 Prize Spotlight

Congratulations to the following members of the SIAM community who will receive awards at PP20. 

Edgar Solomonik - SIAM Activity Group on Supercomputing Early Career Prize 

Edgar Solomonik of the University of Illinois at Urbana-Champaign is the recipient of the 2020 SIAM Activity Group on Supercomputing Early Career Prize. The prize will be presented to him at the 2020 SIAM Conference on Parallel Processing for Scientific Computing (PP20), to be held February 12-15, 2020 in Seattle, Washington. Solomonik will receive the award and deliver his talk, “Scalable Algorithms for Tensor Computations,” on February 14, 2020.

The SIAM Activity Group on Supercomputing (SIAG/SC) awards the SIAG/SC Early Career Prize every two years to an individual in their early career for outstanding research contributions in the field of algorithms research and development for parallel scientific and engineering computing in the three calendar years prior to the award year. The award recognizes Solomonik for his contributions to communication-avoiding algorithms for a wide range of problems in numerical linear algebra and beyond, and to tensor contraction algorithms and software.

Edgar Solomonik is an Assistant Professor in the Department of Computer Science at the University of Illinois at Urbana-Champaign. He received his PhD at the University of California, Berkeley in 2014 and was a postdoctoral researcher at ETH Zürich prior to joining the University of Illinois. He has received the Alston S. Householder Award, David J. Sakrison Memorial Prize, and IEEE TCHPC Award for Early Career Researchers in High Performance Computing. His main research focus is the development of parallel algorithms and software for matrix and tensor computations.

Q: Why are you excited to be awarded this prize?

A: I am excited to be awarded the SIAG/SC Early Career Prize as I am passionate about the field of parallel numerical algorithms. It’s an honor to receive such an award and to be listed among the previous recipients, who are all leaders in the field.

Q: Could you tell us a bit about the research that won you the prize?

A: The nominated paper developed the first communication-optimal parallel algorithm for computing the eigenvalues of a dense symmetric matrix. To do so, we changed the underlying numerical algorithm by performing many stages of reduction to successively narrower-banded matrices and reformulating matrix updates to delay their application. Additionally, we leveraged a new communication-avoiding rectangular QR factorization algorithm to reduce the matrix from full to the first intermediate bandwidth, then proceeded with a pipeline of parallel QR factorizations to reduce to narrower bands. This approach allows us to move asymptotically as little data as square matrix multiplication, irrespective of memory footprint constraints, something previously existing algorithms could not achieve. These theoretical innovations are closely tied to practice, as similar techniques have demonstrated significant improvements in performance and parallel scalability of LU and QR algorithms over existing library implementations.
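The overall pipeline (reduce the symmetric matrix to a narrow band by orthogonal similarity transforms, then solve the reduced eigenproblem) can be illustrated with a single-stage, dense NumPy sketch. The prize-winning algorithm instead reduces through several successively narrower bands using communication-avoiding QR factorizations on a parallel machine; none of that machinery appears in this illustration.

```python
import numpy as np

def tridiagonalize(A):
    """One-stage Householder reduction of a symmetric matrix to
    tridiagonal form via orthogonal similarity transforms.
    Illustrative only: the communication-optimal algorithm reduces
    through successively narrower bands instead of a single stage."""
    T = A.astype(float).copy()
    n = T.shape[0]
    for k in range(n - 2):
        x = T[k+1:, k].copy()
        # Householder vector v so that (I - 2vv^T) x = alpha * e1
        alpha = -np.sign(x[0] if x[0] != 0 else 1.0) * np.linalg.norm(x)
        v = x
        v[0] -= alpha
        nv = np.linalg.norm(v)
        if nv == 0:
            continue
        v /= nv
        # Apply the reflector from both sides (similarity transform)
        T[k+1:, k:] -= 2.0 * np.outer(v, v @ T[k+1:, k:])
        T[:, k+1:] -= 2.0 * np.outer(T[:, k+1:] @ v, v)
    return T

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 50))
A = (A + A.T) / 2  # symmetrize

T = tridiagonalize(A)
# Similarity transforms preserve the spectrum
assert np.allclose(np.linalg.eigvalsh(T), np.linalg.eigvalsh(A))
# The reduced matrix is numerically tridiagonal
assert np.allclose(np.tril(T, -2), 0)
```

In the dense one-stage reduction above, most of the work happens in matrix-vector-like updates with low arithmetic intensity; reducing in several stages restructures the work into matrix-matrix operations, which is what makes the communication savings possible.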

Q: What does your research mean to the public?

A: The symmetric eigenvalue problem and the closely related singular value decomposition are important kernels in a variety of scientific computing applications. For example, the former is a scalability bottleneck within methods for quantum chemistry that approximate the configuration and dynamics of molecular systems at the level of electronic interactions. Improvements in performance for these fundamental linear algebra problems enable such science and engineering applications to reach new frontiers in problem size and accuracy. As architectures embrace more parallelism with limited improvements to communication throughput, innovations in parallel algorithms comprise an increasingly important avenue for advancement of computational science.

Q: What does being a SIAM member mean to you?

A: SIAM is at the heart of the applied mathematics and broader computational science community. The structure and organization of SIAM events is exceptional, and it is a source of pleasure and pride for me to be part of the SIAM community.

Steve Plimpton - SIAM Activity Group on Supercomputing Career Prize

Steve Plimpton of Sandia National Laboratories is the recipient of the 2020 SIAM Activity Group on Supercomputing Career Prize. The award will be presented to him at the SIAM Conference on Parallel Processing for Scientific Computing (PP20), to be held February 12-15, 2020 in Seattle, Washington. Plimpton will receive the award and present his talk, “The Ghosts of Parallel Computing: Past, Present, and Future,” on February 14, 2020.

The SIAM Activity Group on Supercomputing (SIAG/SC) awards the SIAG/SC Career Prize every two years to an outstanding senior researcher who has made broad and distinguished contributions to the field of algorithms research and development for parallel scientific and engineering computing. The prize is awarded to a scientist who has held a PhD or equivalent degree for at least 15 years. This award recognizes Plimpton for his seminal algorithmic and software contributions to parallel molecular dynamics, to parallel crash and impact simulations, and for leadership in modular open-source parallel software.

Steve Plimpton is a staff member at Sandia National Laboratories, in the Computational Multiscale department of the Center for Computing Research. He earned his PhD in applied & engineering physics at Cornell University, and he has been at Sandia National Laboratories since 1989. Much of his research has involved development of efficient codes and algorithms which use particles to model materials on large computers, with occasional forays into continuum modeling or biology. Open-source codes he has helped develop and support include LAMMPS (molecular dynamics), SPPARKS (kinetic Monte Carlo), SPARTA (Direct Simulation Monte Carlo), ChemCell (stochastic particle modeling of biological cells), and MR-MPI (MapReduce on top of MPI).

Q: Why are you excited to be winning the SIAG/SC Career Prize?

A: I'm very honored to receive this prize. For me, developing open-source scientific software has been a highly collaborative process. So I also view this award as recognition for the many friends and colleagues who've made key contributions to the software and algorithms we've worked on together.

Q: Could you tell us a bit about the research that won you the prize?

A: I view myself as someone who creates tools (open-source software) that enable researchers from many disciplines to tackle interesting science problems. The majority of my work has been on codes that use particle methods to model materials, from the atomistic to meso or even continuum scales. These include codes for molecular dynamics and different flavors of Monte Carlo methods. Spending my career at a US DOE national lab, I've also had many chances to collaborate with experts on algorithms and applications far afield from my own expertise! These include contact detection in structural dynamics simulations, radiation transport, combinatoric graph algorithms, and models of neuromorphic hardware. I guess the unifying theme has been figuring out ways to do all of this efficiently in parallel.

Q: What does your research mean to the public?

A: What the public probably knows about supercomputing is that machines continually get bigger and faster (and more expensive!). What they may not appreciate is that they also have become harder for many scientists to use at scale and for trying out new ideas and models. I like to think that the software I work on makes that task a little easier for the scientists who use it.

Q: What does being a SIAM member mean to you?

A: What I really like about SIAM, especially its supercomputing activity group, is the broad spectrum of disciplines that fit under its umbrella. Meetings like parallel processing (PP) and computational science and engineering (CSE) attract mathematicians, computer scientists, physical scientists, and engineers from all kinds of backgrounds. This makes the meetings both fun to attend and ideal for learning about new ideas and fostering collaborations.

Field Van Zee & Robert van de Geijn - SIAM Activity Group on Supercomputing Best Paper Prize


The SIAM Activity Group on Supercomputing Best Paper Prize will be awarded in 2020 to Field Van Zee, Robert van de Geijn, and members of the Science of High-Performance Computing Group (SHPC Group) at the Oden Institute for Computational Engineering and Sciences at The University of Texas at Austin.

The award recognizes the authors for their paper, “The BLIS Framework: Experiments in Portability,” published in ACM Transactions on Mathematical Software in 2016. The author team consists of: Field G. Van Zee (The University of Texas at Austin), Robert A. van de Geijn (The University of Texas at Austin), Tyler M. Smith (ETH Zürich), Bryan Marker (INDEED), Tze Meng Low (Carnegie Mellon University), Francisco D. Igual (Universidad Complutense Madrid), Mikhail Smelyanskiy (Facebook), Xianyi Zhang (PerfXLab Technology Co. Ltd. Beijing), Michael Kistler (IBM Austin Research Laboratory), Vernon Austel (IBM Armonk), John A. Gunnels (IBM T. J. Watson Research Center), and Lee Killough (AMD).


Field Van Zee and Robert van de Geijn will accept the award on behalf of the authors at the 2020 SIAM Conference on Parallel Processing for Scientific Computing (PP20), to be held February 12-15, 2020 in Seattle, Washington. The award will be presented on February 14, 2020, and Robert van de Geijn will then present the paper in a talk of the same title.


The SIAM Activity Group on Supercomputing (SIAG/SC) awards the SIAG/SC Best Paper Prize every two years to the authors of the most outstanding paper, as determined by the selection committee, in the field of parallel scientific and engineering computing published within the four calendar years preceding the award year. The 2020 award recognizes the authors for their paper, which validates BLIS, a framework relying on the notion of microkernels to enable both productivity and high performance. The framework will continue to have an important influence on the design and instantiation of dense linear algebra libraries.

Field Van Zee
Field Van Zee is a Research Scientist in the Science of High-Performance Computing Group at the Oden Institute for Computational Engineering and Sciences at The University of Texas at Austin, where he designs, creates, and maintains dense linear algebra libraries. He focuses on applying fundamentals of computer architecture and computer science towards the goal of creating linear algebra software that is not only portable and maintainable, but that also yields high performance on modern hardware. Aside from numerous publications exploring the science of high-performance linear algebra algorithms and implementations, he is best known as the architect and lead maintainer of the BLIS framework, an open-source, high-performance matrix computation library that has since been adopted by AMD. He also co-created libflame, a higher-level dense linear algebra library that re-implements much of LAPACK. He received his undergraduate and graduate degrees from The University of Texas at Austin.

Robert van de Geijn
Robert van de Geijn is a leading expert in linear algebra, high-performance computing, parallel computing, scientific computing, numerical analysis, software architecture of linear algebra libraries, and formal derivation of algorithms. He leads the Science of High-Performance Computing Group and is a core member of the Oden Institute for Computational Engineering and Sciences at The University of Texas at Austin. He received his Ph.D. in applied mathematics from the University of Maryland, College Park, and his B.S. in mathematics and computer science from the University of Wisconsin-Madison.

Field and Robert answered our questions about the authors’ accomplishments.

Q: Why are you excited to be awarded the SIAG/SC Best Paper Prize?

A: The award recognizes decades of fundamental computer science on algorithms for computing matrix operations, the structure of software that implements those algorithms, and the techniques that map them to hardware. It is upon these decades of insights by others as well as our own research group that the BLAS-like Library Instantiation Software (BLIS) is built. It is particularly satisfying that many in academia and industry pooled their expertise to then demonstrate the flexibility and high performance that BLIS enables. Some of these individuals became coauthors on the prize-winning paper while others have contributed to related papers or the software itself.

Receiving the award also allows us to better reach the broader scientific computing audience. When this research was in its infancy, our motto was “no users, no complaints.” Eventually, prodded along by two grants from NSF’s Software Infrastructure for Sustained Innovation (SI2) program as well as gifts from industry, we turned the foundational results into open-source software that is now available to the scientific computing community on GitHub. We have even created a Massive Open Online Course (MOOC), offered on the edX platform, that teaches the basic techniques. While we believe we have great “products,” we have always lacked in the advertising department. With this award, SIAM is helping us spread the word.

Q: Could you tell us a bit about the research that won you the prize?

A: For scientific computing applications, dense linear algebra is often at the bottom of the food chain. This was already recognized in the 1970s, when the level-1 Basic Linear Algebra Subprograms (BLAS) interface was first proposed. By casting computation in terms of this interface, portable high performance could be achieved. When computers with cache memory were introduced in the 1980s, the level-2 and -3 BLAS similarly supported libraries (e.g., LAPACK) that could cast computation in terms of matrix-matrix multiplication operations. At first, vendors were expected to deliver high-performing implementations. By the late 1990s, open-source implementations like ATLAS and GotoBLAS (later forked as OpenBLAS) supplemented these black box libraries.

BLIS is a refactoring of Goto’s approach to implementing matrix-matrix operations. It structures these implementations so that machine-specific details are limited to a “micro-kernel” that is carefully optimized in assembly language. All other parts of the code are written in C99. The portability benefits are demonstrated in the paper on a wide range of architectures: the AMD A10, Intel Sandy Bridge, IBM Power7, ARM Cortex A9, Loongson 3A, TI C6678, IBM Blue Gene/Q, and Intel Xeon Phi (Knights Corner). The paper also shows that scalable parallelism on multicore and many-core architectures can be easily achieved. The work enables new BLAS-like functionality to be investigated and supported, including more flexible matrix storage (with independent row and column strides), high-performance in-place tensor contraction, practical implementation of Strassen’s algorithm, and operations of importance to machine learning such as solving the k-Nearest Neighbor problem.
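The separation of concerns described above can be sketched in a few lines: the blocked loops around the update are portable, and only the innermost small-block update would be architecture-specific. This is a simplified NumPy caricature of the structure, not BLIS itself; the block sizes and function names are illustrative, and real BLIS adds data packing and several more loop levels.

```python
import numpy as np

MR, NR, KC = 4, 4, 64  # illustrative register- and cache-block sizes

def microkernel(Apanel, Bpanel, Cblock):
    """The only machine-specific piece in the BLIS design: an
    MR x NR update C += A @ B over a KC-long inner dimension,
    written in assembly per architecture. Shown here as NumPy."""
    Cblock += Apanel @ Bpanel

def gemm_blis_style(A, B):
    """Portable blocked loops around the microkernel (a simplified
    version of BLIS's loop structure; packing omitted)."""
    m, k = A.shape
    _, n = B.shape
    C = np.zeros((m, n))
    for pc in range(0, k, KC):           # partition the inner dimension
        for jr in range(0, n, NR):       # NR-wide column slivers of C
            for ir in range(0, m, MR):   # MR-tall row slivers of C
                microkernel(A[ir:ir+MR, pc:pc+KC],
                            B[pc:pc+KC, jr:jr+NR],
                            C[ir:ir+MR, jr:jr+NR])
    return C

rng = np.random.default_rng(1)
A = rng.standard_normal((8, 128))
B = rng.standard_normal((128, 12))
assert np.allclose(gemm_blis_style(A, B), A @ B)
```

Porting such a design to a new architecture means rewriting only the microkernel and tuning the block sizes, which is what makes the portability experiments across so many machines tractable.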

Q: What does your research mean to the public?

A: Our research and development delivers some of the fundamental software used for scientific discovery through computing. More recently, Machine Learning applications have started to transform the world in which we live. Such applications often cast much of their computation in terms of linear algebra as well. Thus, the research described in the paper enables advances that touch our everyday lives.

Q: What does participation in SIAM mean to you?

A: Robert writes: I first became a SIAM member in the early 1980s, when I was still a graduate student. I gave my first conference talk in 1985 at the Second SIAM Conference on Parallel Processing for Scientific Computing. In the 30 years since, SIAM events have enabled interaction with colleagues in academia and industry.

Field comments:  Over the years, I have rarely traveled to conferences, but when I did, I attended SIAM meetings!
