# Prize Spotlight: Lek-Heng Lim

The James H. Wilkinson Prize in Numerical Analysis and Scientific Computing, established in 1979, is awarded for research in, or other contributions to, numerical analysis and scientific computing during the six years preceding the award. The purpose of the prize is to stimulate younger contributors and to help them in their careers. The prize will be awarded at the 2017 SIAM Annual Meeting in July.

**Q:** *Why are you excited about winning the prize?*

**A:** Looking at the list of past winners, I am deeply honored and humbled that the prize committee decided that I belong to this list. Usually when I get an email from Nick (Higham), he is either seeking a referee report or conveying one; but his last email (informing me of the prize) came as a very pleasant surprise. I am extremely grateful to the people who kindly nominated me.

**Q:** *What does your research mean to the public?*

**A:** I will describe one aspect related to a real-world application that the public might appreciate. In joint work with Thomas Schultz, an exceptionally talented computer/neuroscientist in Bonn, we figured out a way to get high resolution 3D images of neural fibers in the human brain from diffusion MRI measurements. An example is shown in Figure 1. Among other things, mapping major bundles of neural fibers in the human brain is vitally important in neurosurgical planning and is an integral component of the Human Connectome Project.

**Figure 1.** Bundles of neural fibers reconstructed from dMRI measurements.

**Q:** *Given that James H. Wilkinson is one of the pioneers of numerical linear algebra, could you tell us a bit about how your work relates to his?*

**A:** Together with Ke Ye, an outstanding postdoc and brilliant collaborator, we proposed an algebraic framework for systematically discovering Strassen-like algorithms for structured matrix computations, and we derived the fastest algorithms for the matrix-vector product when the matrix is Toeplitz, Hankel, f-circulant, triangular Toeplitz, Toeplitz-plus-Hankel, block-Toeplitz-Toeplitz-block, etc. Some of these algorithms (e.g., the one for the Toeplitz matrix-vector product) turn out to be classical, but others are new and contain a few surprises: for example, the fastest algorithm for multiplying a symmetric matrix by a vector requires that we first express the symmetric matrix as a sum of Hankel matrices of decreasing dimensions bordered by zeros, say,

the *tensor nuclear norm* gives, in an analogous sense, the optimal *numerical stability* of a bilinear operation.
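The Toeplitz matrix-vector product mentioned above has a classical O(n log n) algorithm: embed the Toeplitz matrix in a circulant matrix of twice the size and apply the circulant via the FFT. The sketch below illustrates that classical trick only, not the algebraic framework of Ye and Lim; the function name and calling convention are illustrative.

```python
import numpy as np

def toeplitz_matvec(c, r, x):
    """Multiply an n-by-n Toeplitz matrix T by a vector x in O(n log n).

    T is specified by its first column c and first row r (with c[0] == r[0]).
    Classical circulant embedding: T sits in the top-left block of a
    2n-by-2n circulant matrix, whose action on a padded vector is a
    circular convolution, computed with the FFT.
    """
    n = len(c)
    # First column of the circulant embedding: c, one arbitrary entry
    # (zero here), then the reversed tail of the first row.
    col = np.concatenate([c, [0.0], r[-1:0:-1]])
    x_pad = np.concatenate([x, np.zeros(n)])
    # Circulant matvec = circular convolution of col with x_pad.
    y = np.fft.ifft(np.fft.fft(col) * np.fft.fft(x_pad))
    return y[:n].real
```

For a 3-by-3 example with first column `(1, 2, 3)` and first row `(1, 4, 5)`, the result agrees with forming the dense Toeplitz matrix and multiplying directly, but the FFT route costs O(n log n) rather than O(n²).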

### Biosketch

*Linear Algebra and its Applications* and *Linear and Multilinear Algebra*. He has received an AFOSR Young Investigator Award, a DARPA Young Faculty Award, an NSF Faculty Early Career Award, and, more recently, a Smale Prize from the FoCM Society and a Director's Fellowship from DARPA.