# Convergence in Imaging Sciences

For those of us trained in the mathematical sciences, the notion of convergence has a very specific connotation of coming together without ever moving apart (you know the drill: for every \(\epsilon\), there exists a \(\delta\) such that …). Here I will focus on a more expansive idea of convergence as the basis for divergence — an explosion of new developments and opportunities, at least in the area of imaging sciences. In recent years, imaging sciences has experienced a rather marked increase in fundamentally new advances enabled by the convergence of technological capabilities and interests, some of which are far removed from the world of applied mathematics.
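Spelled out for a limit \(\lim_{x \to a} f(x) = L\), the drill reads:

```latex
\forall \epsilon > 0 \;\; \exists \delta > 0 \;\; \text{such that} \;\;
0 < |x - a| < \delta \implies |f(x) - L| < \epsilon .
```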

One source of these developments is the wealth of novel—and in many cases, challenging—sensor technologies. The role of applied mathematics in sensor data modeling and processing is certainly not new. The search for hydrocarbons in Earth’s subsurface is perhaps the quintessential example of a highly successful collaboration, dating back to at least the 1970s, between those who built sensors (seismic, acoustic, electromagnetic, etc.) and those tasked with modeling and extracting information from the resulting data. However, the quantity and diversity of sensing technologies that have emerged over the past 10 to 15 years is unprecedented. This is perhaps most evident in the general field of optics. From the single-pixel camera developed by Richard Baraniuk’s group at Rice University to the gigapixel camera created by David Brady and his team at Duke University, there is no shortage of examples that intimately tie a new sensing method’s success to a suite of associated mathematical models and processing methods. Biomedical applications are driving many of these advances. Laura Waller (University of California, Berkeley), Vasilis Ntziachristos (Technical University of Munich), and Lihong Wang (California Institute of Technology) are developing sensing systems that represent some of the most compelling instances of new imaging modalities employing light; in many cases these are “mixed” with sound, giving rise to improvements in both computational imaging methods and the mathematical analysis accompanying the resulting inverse problems. The work of Peter Kuchment (Texas A&M University) on the analysis of photoacoustic imaging problems is a good example.
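The single-pixel camera is a good case in point: its viability rests entirely on sparse-recovery mathematics. Below is a minimal, self-contained sketch of that kind of computation, using a random Gaussian measurement matrix and plain iterative soft-thresholding (ISTA) as the \(\ell_1\) solver; the dimensions and parameters are illustrative, not drawn from any particular system.

```python
import numpy as np

rng = np.random.default_rng(0)

# A sparse "scene": n unknowns, only k of them nonzero.
n, m, k = 200, 80, 5
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.normal(size=k)

# m << n random measurements, standing in for single-pixel acquisitions.
A = rng.normal(size=(m, n)) / np.sqrt(m)
b = A @ x_true

# ISTA for min 0.5*||Ax - b||^2 + lam*||x||_1.
lam = 0.01
L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
x = np.zeros(n)
for _ in range(1000):
    z = x - A.T @ (A @ x - b) / L    # gradient step on the data-fidelity term
    x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold

rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print(f"relative reconstruction error: {rel_err:.3f}")
```

With far fewer measurements than unknowns (80 versus 200), the sparse scene is still recovered to small relative error, which is the mathematical heart of the single-pixel design.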

As physicists and engineers create new sensing methods, convergence is also evident within the mathematics community proper. I would like to focus specifically on the area of inverse problems in which a physical model stands between the data one possesses and the information one desires. In some cases, researchers can develop closed-form, analytical methods for turning sensor data into images; the best known among these is convolution back-projection (also called filtered back-projection or Radon inversion), originally developed for parallel beam X-ray computed tomography and since generalized in a variety of mathematically interesting and practically useful directions, including fan beam, cone beam, and helical scan cases. Implementation of these methods generally has low computational overhead, i.e., they are “fast.” However, they also tend to apply to very specific sensing geometries and assumptions about the underlying physics, a fact that hasn’t deterred their recent, rather remarkable expansion. Using sophisticated ideas in microlocal analysis, mathematicians and their colleagues—including Todd Quinto (Tufts University), Margaret Cheney (Colorado State University), and Bill Lionheart (University of Manchester)—have developed interesting methods for solving imaging problems when the sensing geometry is less than ideal. They have demonstrated the utility of these ideas not only for X-ray imaging, but more broadly for problems of wave propagation, including sonar, radar, and more recently Compton scatter imaging. I would be remiss not to acknowledge that these recent advancements build on an existing base of work dating back at least (to the best of my knowledge) to the efforts of folks like Gregory Beylkin, Douglas Miller, Michael Oristaglio, and others who in the 1980s pioneered many of these ideas in the context of geophysical sensing for hydrocarbon exploration.
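For readers who have not met it, the filtered back-projection recipe is short enough to run on a toy problem: ramp-filter each projection in the Fourier domain, then smear the filtered projections back across the image. In this sketch the phantom, the angle set, and the use of SciPy’s image rotation as both forward projector and back-projector are simplifications chosen purely to keep the example short and runnable.

```python
import numpy as np
from scipy.ndimage import rotate

# Simple phantom: a centered square.
n = 64
img = np.zeros((n, n))
img[24:40, 24:40] = 1.0

# Parallel-beam sinogram: rotate the image, then sum columns (line integrals).
angles = np.linspace(0.0, 180.0, 90, endpoint=False)
sino = np.array([rotate(img, -a, reshape=False, order=1).sum(axis=0)
                 for a in angles])

# Ramp filter |omega| applied to each projection in the Fourier domain.
ramp = np.abs(np.fft.fftfreq(n))
filtered = np.real(np.fft.ifft(np.fft.fft(sino, axis=1) * ramp, axis=1))

# Back-projection: smear each filtered projection and rotate it into place.
recon = np.zeros((n, n))
for a, p in zip(angles, filtered):
    recon += rotate(np.tile(p, (n, 1)), a, reshape=False, order=1)
recon *= np.pi / len(angles)

inside, outside = recon[31, 31], recon[4, 4]
print(f"inside={inside:.2f}, outside={outside:.2f}")
```

Even with this crude discretization, the reconstruction clearly separates the interior of the square from the background; the “fast” character of the method is visible in the fact that the whole pipeline is FFTs and accumulation, with no iteration.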

A large body of work in the use of numerical/computational techniques for solving inverse problems also exists. The intent is to discretize the physical model and pose image formation as the answer to a variational problem in which a “good” solution balances fidelity to the data against information one possesses in addition to the data itself, often quantified mathematically in terms of some degree of smoothness of the image or its derivatives. Interpreting the variational problem through a probabilistic lens (a technique known for decades) has recently produced some rather compelling results in the area of uncertainty quantification (UQ), where the output is not a single image but an entire probabilistic model. This model offers insight into not only the most likely image but also the level of confidence in such an estimate, which is valuable information when deciding how best to collect new data. The work of Omar Ghattas’s group at the University of Texas and Youssef Marzouk’s group at the Massachusetts Institute of Technology provides great examples of this line of inquiry.
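In the simplest setting — a linear forward model with Gaussian noise and a Gaussian prior — the entire posterior is available in closed form, which makes the UQ idea easy to see in a few lines. Everything below (dimensions, noise level, prior width) is an illustrative toy, not a serious PDE-constrained problem.

```python
import numpy as np

rng = np.random.default_rng(1)

# Underdetermined linear model: b = A x + noise, with m < n.
n, m = 20, 12
A = rng.normal(size=(m, n))
x_true = rng.normal(size=n)
sigma = 0.1                                    # noise standard deviation
b = A @ x_true + sigma * rng.normal(size=m)

# Gaussian prior N(0, tau^2 I) => the posterior is Gaussian in closed form.
tau = 1.0
H = A.T @ A / sigma**2 + np.eye(n) / tau**2    # posterior precision matrix
cov = np.linalg.inv(H)                         # posterior covariance
mean = cov @ A.T @ b / sigma**2                # posterior mean = MAP estimate

# Marginal standard deviations: per-component uncertainty in the estimate.
std = np.sqrt(np.diag(cov))
print(f"largest / smallest posterior std: {std.max():.3f} / {std.min():.3f}")
```

The posterior mean coincides with the Tikhonov-regularized solution of the variational problem, while the covariance reports where the data leave the image poorly constrained, exactly the kind of information that can guide where to collect new data.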

In contrast to the analytical methods, the computational ones do provide more flexibility for addressing nonideal problems in which sensors may be arbitrarily located, the underlying medium inhomogeneous, or the physics not well approximated in a “nice” manner. The price is computational: this approach typically demands the solution of a high-dimensional, non-convex optimization problem, where both gradient information and the evaluation of the cost function require the solution of tens, hundreds, or even thousands of discretized partial differential equations. Thus, regardless of whether one seeks a single solution to a variational problem or an entire UQ model, the corresponding mathematical challenges tend to center around problems in numerical linear algebra (including fast linear system solves, preconditioning, and reduced order modeling) as well as optimization. Recent studies focus on theory and methods that use randomization as a tool for reducing system size and hence processing complexity.
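One flavor of that randomization, sketch-and-solve for a tall least-squares problem, fits in a few lines. The Gaussian sketch below is a stand-in for the more refined random projections used in practice, and the problem itself is synthetic.

```python
import numpy as np

rng = np.random.default_rng(2)

# A tall least-squares problem standing in for a large discretized model.
m, n = 5000, 50
A = rng.normal(size=(m, n))
x_true = rng.normal(size=n)
b = A @ x_true + 0.01 * rng.normal(size=m)

# Full solve for reference.
x_full, *_ = np.linalg.lstsq(A, b, rcond=None)

# Gaussian sketch: compress the m rows down to s << m, solve the small problem.
s = 400
S = rng.normal(size=(s, m)) / np.sqrt(s)
x_sketch, *_ = np.linalg.lstsq(S @ A, S @ b, rcond=None)

rel = np.linalg.norm(x_sketch - x_full) / np.linalg.norm(x_full)
print(f"relative difference from the full solution: {rel:.4f}")
```

When the dominant cost is touching all \(m\) rows, solving the \(s\)-row problem in place of the \(m\)-row one is the payoff; standard results bound the sketched residual within a small factor of the optimum, with high probability, once \(s\) is modestly larger than \(n\).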

Moving forward, I have to believe that there will be opportunities to bring together these two separate approaches to imaging—the analytical and the computational—because the strengths of one balance the shortcomings of the other. Samuli Siltanen (University of Helsinki) and his collaborators developed complex geometric optics methods for an array of inverse problems, most notably electrical impedance tomography; their approach may offer a clue to a possible union. It rests on a rather deep and analytically elegant mathematical formulation of the physics, which nonetheless requires the numerical solution of an inverse problem at one crucial point. Perhaps these ideas will lead to progress in combining some of the aforementioned areas. Or maybe a totally different variety of insight will be necessary. Regardless of the details, one thing is certain: imaging sciences will continue to provide relevant, intellectually stimulating problems that allow applied mathematicians and their collaborators to impact the field for years to come.
