# A Computationally Efficient Solution of Large-Scale Image Reconstruction Problems

During an up-close and personal encounter with medical technology, few patients likely ponder the underlying science. SIAM members might be exceptions, especially those who heard Misha Kilmer of Tufts University speak about image reconstruction at the 2017 SIAM Conference on Computational Science and Engineering (CSE17), held this February in Atlanta, Ga.

Much of Kilmer’s work is motivated by diffuse optical tomography (DOT), an imaging protocol that passes near-infrared light through tissue in search of tumors. Image reconstruction identifies a set of tissue properties that best accounts for the patterns seen in a given image. The medical question is whether those properties signal dangerous tissue anomalies.

The theme of Kilmer’s invited presentation at CSE17 was choosing “ingredients for a computationally efficient solution of image reconstruction problems.” In effect, she invited the audience to improve critical components of the image reconstruction process in order to deliver better answers more efficiently.

Kilmer identified five categories of “ingredients” drawn from medical imaging, hydrology, deblurring, and other reconstruction problems that warrant the expert attention of the CSE community:

- Representation of images to enhance reconstruction of the particular features that are important in a given setting
- Efficient approximations to the forward model, e.g., efficient computational simulation of the image produced by a hypothetical tissue sample with specific material properties
- Regularization to reduce sensitivity to measurement and computational errors
- Optimization tools to quickly and reliably find the material properties that best explain a given image
- Quantification of the uncertainty inherent in noisy inverse problems

Kilmer focused on the first two: the selection of a “natural image space” within which to describe images, and more efficient evaluation of the computational model for the physical processes that produce them. A well-chosen image space can guide the reconstruction process towards physically reasonable material properties while offering some built-in stability to random error.

Model evaluation consumes the lion’s share of the computational time in most image reconstructions. For example, the computation that simulates a DOT image for a given set of tissue properties is the solution of a discretized, time-dependent partial differential equation (PDE). The tissue properties—spatially-varying coefficients of photon diffusion and absorption—appear both as coefficients in the PDE and as the unknowns sought by the image reconstruction. Since the model could be asked to produce an image for every tissue property distribution considered, efficient evaluation of the forward model could pay huge dividends in reducing the time needed for an image reconstruction.
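To give a feel for why the forward model dominates the run time, here is a minimal 1D sketch, not Kilmer's actual DOT model: a time-dependent diffusion equation with a spatially varying coefficient, advanced by implicit Euler. Every candidate coefficient distribution costs a full PDE solve. All names and parameter values are illustrative.

```python
import numpy as np

def forward_model(diffusion, n_steps=50, dt=1e-3):
    """Sketch of a forward model: solve u_t = (D(x) u_x)_x on [0, 1]
    with zero Dirichlet boundaries, starting from a pulse, via
    implicit Euler. `diffusion` holds the coefficient D at grid points."""
    n = diffusion.size
    h = 1.0 / (n - 1)
    d_half = 0.5 * (diffusion[:-1] + diffusion[1:])  # interface coefficients
    # Assemble the tridiagonal diffusion operator on interior points.
    A = np.zeros((n, n))
    for i in range(1, n - 1):
        A[i, i - 1] = d_half[i - 1] / h**2
        A[i, i] = -(d_half[i - 1] + d_half[i]) / h**2
        A[i, i + 1] = d_half[i] / h**2
    u = np.zeros(n)
    u[n // 2] = 1.0 / h                  # initial "photon" pulse
    M = np.eye(n) - dt * A               # implicit Euler: (I - dt*A) u_new = u_old
    for _ in range(n_steps):
        u = np.linalg.solve(M, u)        # one linear solve per time step
    return u                             # the simulated measurement/"image"

# Each tissue-property candidate triggers a complete simulation:
coeffs = np.full(101, 0.1)
image = forward_model(coeffs)
```

Even this toy version requires a linear solve per time step per candidate, which is why cheaper surrogates for the forward model are so valuable.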

Kilmer and her colleagues have advanced two ideas for modeling the image space: dictionary learning and low-order parameterization. The dictionary approach extracts a catalog of vectorized prototype subimages—of fixed size—from a training image. Given an image requiring reconstruction, machine-learning techniques and a low-rank approximate factorization combine to produce a regularized, approximate representation of that image in terms of a relatively sparse set of dictionary images. The optimization problem of finding the best match to the given image is thereby reduced in size and restricted to images that are inherently better-behaved.
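The flavor of the dictionary idea can be conveyed with a simple greedy sparse-coding loop. This is a stand-in for the machine-learning and factorization machinery of the actual method, and the dictionary here is random rather than learned from training images:

```python
import numpy as np

def sparse_code(D, y, n_atoms=3):
    """Greedy matching-pursuit sketch: approximate y by a sparse
    combination of dictionary columns ("atoms"), refitting the
    coefficients on the selected support at every step."""
    r = y.astype(float).copy()
    coef = np.zeros(D.shape[1])
    support = []
    for _ in range(n_atoms):
        j = int(np.argmax(np.abs(D.T @ r)))   # most-correlated atom
        support = sorted(set(support) | {j})
        c, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        coef[:] = 0.0
        coef[support] = c
        r = y - D @ coef                      # updated residual
    return coef

rng = np.random.default_rng(0)
D = rng.normal(size=(64, 32))
D /= np.linalg.norm(D, axis=0)                # unit-norm atoms
truth = np.zeros(32)
truth[[3, 17]] = [2.0, -1.5]                  # a 2-sparse ground truth
y = D @ truth
coef = sparse_code(D, y)
```

Restricting the reconstruction to a few atoms is what shrinks the optimization problem and regularizes it at the same time.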

Figure 1 compares a more sophisticated form of dictionary reconstruction with two contenders: Filtered Back Projection, which Kilmer called “for many years, the gold standard,” and classic Tikhonov regularization, each applied to an image of peppercorns. Tensor dictionary reconstruction, the best of the three methods shown in Figure 1, extends the dictionary idea by replacing the approximate matrix factorization that slims the dictionary with a tensor counterpart.

**Figure 1.** A comparison of reconstructions of an image of peppercorns using three different techniques. **Left.** Filtered Back Projection. **Middle.** Classic Tikhonov regularization. **Right.** A tensor-based form of dictionary reconstruction. Image courtesy of [2]. Image credit: Samuli Siltanen.
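Of the contenders in Figure 1, classic Tikhonov regularization is the easiest to state: it solves $\min_x \|Ax-b\|^2 + \lambda^2\|x\|^2$, which damps the noise amplification of a naive inversion. A minimal sketch, using a made-up Gaussian "blur" operator as an ill-conditioned stand-in for a real forward matrix:

```python
import numpy as np

def tikhonov(A, b, lam):
    """Solve min_x ||Ax - b||^2 + lam^2 ||x||^2 via the equivalent
    augmented least-squares system [A; lam*I] x ~ [b; 0]."""
    n = A.shape[1]
    A_aug = np.vstack([A, lam * np.eye(n)])
    b_aug = np.concatenate([b, np.zeros(n)])
    x, *_ = np.linalg.lstsq(A_aug, b_aug, rcond=None)
    return x

rng = np.random.default_rng(3)
n = 60
# Ill-conditioned surrogate "blur" operator (Gaussian kernel).
A = np.array([[np.exp(-0.5 * ((i - j) / 2.0) ** 2) for j in range(n)]
              for i in range(n)])
x_true = np.zeros(n)
x_true[20:40] = 1.0                     # a box-shaped "inclusion"
b = A @ x_true + 0.01 * rng.normal(size=n)

x_naive, *_ = np.linalg.lstsq(A, b, rcond=None)  # noise blows up
x_reg = tikhonov(A, b, lam=0.1)                  # damped, stable
```

The augmented-system formulation keeps the problem a plain least-squares solve, at the price of a single smoothing parameter to tune.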

Low-order parameterization entirely abandons the idea of reconstructing tissue properties via point-by-point inversion of an image. In its simplest form, it turns instead to a shape-based approach that models variations in a sample’s absorption coefficient, for example, using characteristic (or indicator) functions that have value 1 within the suspect region, 0 outside. That formulation provides a low-order parametric representation of the coefficient’s level sets, which delineate the margins of a potential tumor in a computationally efficient manner.

Low-order parameterization is attractive, in part because DOT imaging of breast tissue seeks only to differentiate among three types of tissue: adipose, fibroglandular, and tumor. Knowledge of the precise material properties at every point in the tissue is unnecessary when the fundamental goal is to locate tumor tissue.

Figure 2 illustrates how a few compactly supported radial basis functions can capture potentially challenging level sets, such as the square shown on the right. Kilmer calls such representations “pretty remarkable, given that they are *radial* basis functions.” Those basis functions also provide built-in regularization and a best-match optimization problem of low dimension. Trust region methods seem well suited to the optimization itself.
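A minimal sketch of the parametric level set idea follows, using Wendland compactly supported radial basis functions; the centers, weights, radius, and threshold are made up for illustration:

```python
import numpy as np

def wendland(r):
    """Compactly supported Wendland C^2 radial basis function."""
    return np.where(r < 1.0, (1.0 - r) ** 4 * (4.0 * r + 1.0), 0.0)

# A handful of RBF bumps; only these few numbers would be optimized,
# not a per-pixel image of material properties.
centers = np.array([[0.4, 0.5], [0.6, 0.5], [0.5, 0.6]])
weights = np.array([1.0, 1.0, 0.8])
radius = 0.3

x = np.linspace(0, 1, 101)
X, Y = np.meshgrid(x, x)
phi = np.zeros_like(X)
for (cx, cy), w in zip(centers, weights):
    d = np.sqrt((X - cx) ** 2 + (Y - cy) ** 2) / radius
    phi += w * wendland(d)

# The suspect region is the level set {phi > c}.
inside = phi > 0.5
```

The unknowns are reduced from thousands of pixel values to a handful of centers, weights, and radii, which is what makes the best-match optimization problem low-dimensional.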

**Figure 2.** A parametric level set representation of the boundary of a nearly square level set (right) formed using radial basis functions (left). Image courtesy of [1].

Evaluating the forward model—finding the image produced by a given distribution of material properties—is the computational bottleneck in nonlinear image reconstruction problems like DOT and hydraulic tomography. To break that dam, Kilmer and her colleagues have studied two classes of ideas: approximation of the forward model operator and reduced-order modeling.

If the forward model is a linear operator, then low-rank approximations to the matrix operators can be practical. If the forward model is a time-dependent PDE, then transfer functions—an idea borrowed from control theory—are the silver bullets: evaluating particular transfer functions at selected frequencies provides the various function values necessary for optimization. Coupling this approach with both low-order parametrization of the image space and trust region optimization can efficiently solve DOT problems.
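For the linear case, the low-rank idea can be sketched with a truncated SVD. The operator below is synthetic, built to have the rapid singular-value decay a real forward matrix from discretized physics often exhibits:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
# Synthetic forward operator A = U diag(s) V^T with fast spectral decay.
U, _ = np.linalg.qr(rng.normal(size=(n, n)))
V, _ = np.linalg.qr(rng.normal(size=(n, n)))
s = 0.5 ** np.arange(n)
A = (U * s) @ V.T

k = 12                                   # target rank
Uk, sk, Vkt = U[:, :k], s[:k], V.T[:k]   # truncated SVD factors

x = rng.normal(size=n)
full = A @ x                             # O(n^2) per application
low_rank = Uk @ (sk * (Vkt @ x))         # O(n*k) per application

err = np.linalg.norm(full - low_rank)
```

When the spectrum decays quickly, the rank-$k$ factors apply the operator far more cheaply, with error bounded by the first discarded singular value.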

Reduced-order modeling is motivated by the recognition that it is possible to estimate the residual from a given image reconstruction using data from a much smaller collection of randomly selected detectors. Although the smaller problems are solved more quickly, the corresponding reductions in the residual may stall short of a good reconstruction. Swapping in a few well-chosen sources and detectors can restart the optimization productively. Kilmer describes this approach as “randomize, then optimize.”
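A toy version of the subsampling idea (fitting on a random subset of detector rows, then estimating the full-data residual from another small subset) might look like the following; the problem sizes and noise level are synthetic:

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 2000, 50
A = rng.normal(size=(m, n))              # rows play the role of detectors
x_true = rng.normal(size=n)
b = A @ x_true + 0.01 * rng.normal(size=m)

# Fit using only a small random subset of detectors ...
idx = rng.choice(m, size=200, replace=False)
x_hat, *_ = np.linalg.lstsq(A[idx], b[idx], rcond=None)

# ... and estimate the full residual norm from another random subset,
# rescaled to account for the sampling fraction.
probe = rng.choice(m, size=200, replace=False)
est = np.sqrt(m / probe.size) * np.linalg.norm(A[probe] @ x_hat - b[probe])
true_res = np.linalg.norm(A @ x_hat - b)
```

Solving only the subsampled problems is much cheaper per iteration; when progress stalls, swapping in a few fresh rows corresponds to the restart Kilmer describes.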

As Kilmer made clear, image reconstruction, like many other important areas of CSE, has an immense river of mathematical ideas running through it. Her careful descriptions of the power of some of those ideas amply demonstrate the potential of the CSE-wide community collaboration she espoused: “Complementary scientific computing techniques working together have the best chance of improving the speed and maintaining the accuracy of image reconstruction.”

*Kilmer’s CSE17 presentation is available from SIAM either as slides with synchronized audio, or as a PDF of slides only.*

**References**

[1] Aghasi, A., Kilmer, M., & Miller, E.L. (2011). Parametric Level Set Methods for Inverse Problems. *SIAM Journal on Imaging Sciences, 4*(2), 618-650.

[2] Soltani, S., Kilmer, M.E., & Hansen, P.C. (2016). A tensor-based dictionary learning approach to tomographic image reconstruction. *BIT Numerical Mathematics, 56*(4), 1425-1454.