Per-Gunnar Martinsson was awarded the 2017 Germund Dahlquist Prize by the Society for Industrial and Applied Mathematics on September 11, at the 2017 International Conference on Scientific Computation and Differential Equations (SciCADE 2017) held at the University of Bath, UK. The Germund Dahlquist Prize is awarded for original contributions to fields associated with Germund Dahlquist, especially the numerical solution of differential equations and numerical methods for scientific computing.
The prize honors Martinsson for fundamental contributions to numerical analysis and scientific computing that are making a significant impact in data science applications. Specific contributions include his development of linear-time algorithms for dense matrix operations related to multidimensional elliptic PDEs and integral equations, and his deep and innovative contributions to the development of probabilistic algorithms for the rapid solution of certain classes of large-scale linear algebra problems.
Martinsson is currently Professor of Mathematics at the University of Oxford. He is a graduate of Chalmers University of Technology, Sweden, and received his Ph.D. in 2002 in Computational and Applied Mathematics at the Institute for Computational Engineering and Sciences (ICES) at the University of Texas at Austin. Hear more from Martinsson in this Q&A:
Q: Why are you excited to be winning the prize?
A: The pioneering work of Germund Dahlquist has been an inspiration to me for a long time. He played a key role in establishing numerical analysis as an area of particular research excellence in Sweden, and I was very fortunate to have the opportunity to start my studies in applied mathematics in that environment.
A: I am very enthusiastic about the new methods that my collaborators and I have developed over the last several years. Besides providing new tools for computational scientists, I hope that these results have opened up possibilities for exciting new research at the interfaces between mathematics, numerics, and applications. If this award helps to draw attention to randomized methods and fast direct solvers, then that would be very exciting.
Q: What does your research mean to the public?
A: The speed of any computational task depends in part on how fast the hardware is and in part on how effective our algorithms are. Getting faster algorithms for solving common linear algebraic tasks means in practice that we can now perform much more accurate computational simulations of physical phenomena since we can include more details and work with more realistic models. The computational tasks we've been focusing on are also core building blocks in machine learning and data mining, so this has led to better search algorithms, smarter smartphones, faster algorithms for genomics, etc.
Q: Could you tell us a bit about the research that won you the prize?
A: One project I've worked on for several years now has been to develop randomized methods for ubiquitous computations in linear algebra, with a particular emphasis on algorithms that work well for very large datasets. This started with a well-established observation (dating back to work by Johnson and Lindenstrauss and others) that randomized projections provide fantastic tools for getting a handle on problems in high-dimensional spaces. The challenge here is that the randomized projection itself introduces substantial distortions, so you need to do a bit of additional work to obtain algorithms that provide a highly accurate answer with near certainty. In the end, the algorithms we've developed turn out to be very easy to use and have proven highly effective for tasks such as factorizing matrices or solving linear systems. Curiously, although the original goal was to design methods suited for gigantic matrices, we found that large accelerations can be obtained for everyday computations involving matrices of almost every size. Let me mention that this work started as a collaboration with Vladimir Rokhlin and Mark Tygert, and that later key results came out of a collaboration with Joel Tropp.
The other area of research that was cited has been the development of solvers for linear elliptic PDEs, which model phenomena such as acoustic and electromagnetic scattering, deformations of solid bodies, gravitation, and many others. The standard paradigm in high-performance computing has been to solve such equations using iterative solvers, which construct a sequence of successively more accurate approximate solutions. My objective has been to develop algorithms that directly build an approximate solution without the need for iterations. Mathematically, these methods can be viewed as generalizations of the Fast Multipole Method (FMM) by Rokhlin and Greengard. The FMM provided us with a tremendously powerful direct solver in situations where the exact solution operator can be written down explicitly in the form of a convolution integral. Our work has led to analogous solvers that work for a more general class of problems involving, e.g., complex geometries and variable-coefficient differential equations.