About the Author

Your Companion for All Things Applied Math

By James Case

The Princeton Companion to Applied Mathematics. Ed. Nicholas J. Higham. Princeton University Press, Princeton, NJ, 2015, 1016 pages, $99.50.

In 2007, Princeton University Press published—to wide acclaim—The Princeton Companion to Mathematics (PCM), edited by Timothy Gowers. The term “companion” indicates that the volume, though broad in its coverage, is less than encyclopedic. The same may be said of The Princeton Companion to Applied Mathematics (PCAM), edited by Nicholas Higham and published in 2015.

PCAM’s 186 self-contained articles—many of them contributed by long-time SIAM members—cover subjects ranging from the obligatory PDEs, ODEs, and numerical analysis to such arcana as “Benford’s Law” and “Modeling a Pregnancy Testing Kit.”

Gil Strang has contributed an elegant discussion of the KKT matrix \[ M = \begin{pmatrix} C^{-1} & A \\ A^T & 0 \end{pmatrix} \] in which \(A\) is \(m\times n\) and \(C\) is positive-definite symmetric. Such matrices occur in (constrained) numerical optimization, finite element analysis, and graph theory, among other applications. Strang also describes an infinite-dimensional analogue in which \(A\) and \(C\) are differential or integral operators. Applications invariably exploit the fact that \(K=A^{T}CA\) is positive-definite symmetric (self-adjoint, in the operator setting).
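As a quick numerical illustration of that fact (mine, not Strang's), one can assemble a small saddle-point matrix and confirm that \(K = A^{T}CA\) is symmetric positive definite whenever \(A\) has full column rank; all dimensions and data below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 6, 3                          # hypothetical sizes: A is m x n with m >= n
A = rng.standard_normal((m, n))      # almost surely full column rank
B = rng.standard_normal((m, m))
C = B @ B.T + m * np.eye(m)          # symmetric positive-definite C

# Strang's KKT (saddle-point) matrix
M = np.block([[np.linalg.inv(C), A],
              [A.T, np.zeros((n, n))]])

# K = A^T C A is symmetric positive definite when A has full column rank
K = A.T @ C @ A
print(np.allclose(K, K.T))                 # True: K is symmetric
print(np.all(np.linalg.eigvalsh(K) > 0))   # True: all eigenvalues positive

# M itself is indefinite: m positive and n negative eigenvalues
eig = np.linalg.eigvalsh(M)
print((eig > 0).sum(), (eig < 0).sum())
```

The indefiniteness of \(M\) alongside the definiteness of \(K\) is exactly what solvers for such saddle-point systems exploit.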

Moving on, Jack Dongarra offers a state-of-the-art overview of high performance computing. His point of departure is a plot, seen in Figure 1, depicting the (very nearly) exponential growth of computing power since the dawn of the computer age. Each black dot represents the performance, in floating-point operations per second (flops), of a generation of supercomputers; speed has doubled 46 times since 1950, or about once every 1.4 years!
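The arithmetic behind that rate is worth a quick check (the 65-year span is my rounding of the period plotted): \[ \frac{2015-1950}{46} \approx 1.4 \ \text{years per doubling}, \qquad 2^{46} \approx 7\times 10^{13}, \] so the figure spans nearly fourteen orders of magnitude of peak performance.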

Figure 1. Peak performance of the fastest computer systems over the last six decades (page 840 in PCAM). Courtesy of Princeton University Press.

Petaflop computing (\(10^{15}\) flops) was achieved in June 2008, and exascale computing (\(10^{18}\) flops) is anticipated in the near future, perhaps as soon as 2020. Such speeds require new standards of fault tolerance; unlike modern personal computers, which may run for weeks or months without rebooting, the current generation of supercomputers must be rebooted every day or two. And since future supercomputers will generate an almost continuous flow of faults, it will no longer be feasible to allow every detected failure to halt whatever applications are running on the affected resources before returning to the most recent checkpoint. Additionally, the current checkpoint/restart technique will not scale to the massively parallel systems of the future, because new faults will occur before an application even has a chance to restart, throwing the machine into a continuous halt/restart cycle. “New fault-tolerant paradigms need to be integrated into both system’s software and user applications,” writes Dongarra.
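A standard back-of-envelope model makes the scaling argument concrete. The sketch below uses Young's approximation for the near-optimal checkpoint interval; the checkpoint cost and failure rates are illustrative numbers of mine, not figures from PCAM.

```python
import math

def checkpoint_overhead(mtbf_hours, checkpoint_cost_hours):
    """Rough utilization estimate under periodic checkpointing.

    Young's approximation: the near-optimal interval between checkpoints
    is sqrt(2 * checkpoint_cost * MTBF).  The fraction of time lost is
    roughly (checkpoint cost per interval) plus (half an interval of
    recomputation per failure).
    """
    tau = math.sqrt(2 * checkpoint_cost_hours * mtbf_hours)
    lost = checkpoint_cost_hours / tau + (tau / 2) / mtbf_hours
    return tau, 1.0 - lost

# Illustrative numbers: a 15-minute checkpoint on machines whose
# system-wide mean time between failures shrinks as they grow.
for mtbf in (48.0, 12.0, 2.0, 0.5):   # hours
    tau, util = checkpoint_overhead(mtbf, 0.25)
    print(f"MTBF {mtbf:5.1f} h -> checkpoint every {tau:4.2f} h, "
          f"utilization ~{util:.0%}")
```

Once the mean time between failures falls to roughly the checkpoint cost itself, the estimated utilization collapses toward zero, which is the halt/restart cycle Dongarra warns about.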

Unforeseen factors can cause difficulties in supercomputing. In one case, a pseudorandom number generator was used to fill the (lengthy) columns of a square matrix \(A\) and vector \(b\) in the linear system \(Ax=b\), to be solved by Gaussian elimination. After a run of some 20 hours, the resulting \(x\) was found to be incorrect, due apparently to the singularity of \(A\). Because the number of random elements exceeded the period of the generator, which happened to be a multiple of the dimension of \(A\), the matrix contained identical columns, leading to a fatal zero pivot that not even round-off error could avert.
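The failure mode is easy to reproduce in miniature. The deliberately tiny linear congruential generator below is hypothetical, but its period (16) is a multiple of the matrix dimension (8), so once the stream wraps around the columns repeat exactly and the matrix is singular by construction:

```python
import numpy as np

def tiny_lcg(seed, count, a=5, c=1, m=16):
    """Deliberately bad LCG with period 16 (hypothetical, for illustration)."""
    x, out = seed, []
    for _ in range(count):
        x = (a * x + c) % m
        out.append(x / m)               # pseudorandom values in [0, 1)
    return np.array(out)

n = 8                                   # matrix dimension; the period 16 is a multiple of 8
stream = tiny_lcg(seed=3, count=n * n)  # 64 values, but only 16 distinct ones
A = stream.reshape((n, n), order="F")   # fill column by column, as in the anecdote

# Columns repeat once the stream wraps around, so A is exactly singular
print(np.allclose(A[:, 0], A[:, 2]))    # True: identical columns
print(np.linalg.matrix_rank(A))         # 2, far short of 8

# Gaussian elimination (via an LU-based solver) then hits a zero pivot
try:
    np.linalg.solve(A, np.ones(n))
except np.linalg.LinAlgError as err:
    print("solve failed:", err)
```

Because the repeated columns are bit-for-bit identical, the zero pivot is exact rather than merely tiny, which is why rounding error offered no escape.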

Because modern supercomputers use enormous quantities of energy, their operation can cost upwards of a million dollars per year. Minimizing power consumption is therefore an increasingly important goal of algorithm design, and flops per watt an increasingly popular measure of computer performance. Though none of the recognized problems appear insurmountable, all require attention.

PCAM also features contributions from Barbara Lee Keyfitz, who offers articles on conservation laws and shock waves. The former is particularly interesting, as it explains how conservation laws arise and treats them as an important subclass of quasi-linear PDEs. Keyfitz warns that analysts must strive to identify “function spaces inclusive enough to admit weak solutions for general classes of conservation laws, but regular enough that solutions and their approximations can be analyzed.”

At present, no fully satisfactory theory exists for systems of conservation laws involving more than a single space variable. Nevertheless, applications abound. The process of paper chromatography (a common method of separating the chemical components of a liquid) is well modeled by an equation of the form \[c_x + (f(c))_t = 0,\] in which \(c(x,t) = (c_1(x,t),\dots, c_n(x,t))\) is an unknown vector of altitude-dependent chemical concentrations, presumably expressed in moles per liter, and \(f\) is an empirically determined vector-valued function of \(c\). The equation \[u_t + (q(u))_x = 0,\] which expresses a “law of conservation of automobiles,” provides a useful description of single-file traffic flow. Here \(u = u(x,t)\) represents the linear density of cars on the road, while \(q(u) = q(u(x,t))\) represents the flux across a transversal located at position \(x\).
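The traffic equation is also the easier of the two to experiment with numerically. Below is a minimal finite-volume sketch; the concave flux \(q(u) = u(1-u)\) (the classical Lighthill–Whitham–Richards choice) and all grid parameters are assumptions of mine, not taken from PCAM.

```python
import numpy as np

def q(u):
    """Hypothetical LWR flux: u is car density (0 = empty road, 1 = bumper to bumper)."""
    return u * (1.0 - u)

# Spatial grid and a "queue behind a traffic light" initial density profile
nx, length = 400, 10.0
dx = length / nx
x = (np.arange(nx) + 0.5) * dx
u = np.where(x < 5.0, 0.9, 0.1)          # dense queue for x < 5, light traffic ahead

# Local Lax-Friedrichs (Rusanov) finite-volume scheme for u_t + q(u)_x = 0
t, t_end, cfl = 0.0, 2.0, 0.9
while t < t_end:
    a = np.max(np.abs(1.0 - 2.0 * u))    # largest characteristic speed |q'(u)|
    dt = min(cfl * dx / max(a, 1e-12), t_end - t)
    u_right = np.roll(u, -1)             # periodic boundaries, purely for simplicity
    flux = 0.5 * (q(u) + q(u_right)) - 0.5 * a * (u_right - u)   # interface flux F_{i+1/2}
    u = u - dt / dx * (flux - np.roll(flux, 1))                  # conservative update
    t += dt

print(f"density range at t = {t_end}: [{u.min():.2f}, {u.max():.2f}]")
```

A scheme in conservation form of this kind is the standard way to capture the shocks and rarefactions that weak solutions of such laws develop.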

Irene Fonseca’s article on the calculus of variations is likewise worthy of mention. Following a brief discussion of the isoperimetric problem, the brachistochrone problem, and geodesic lines on a curved surface, Fonseca proceeds directly to the extremal surfaces and hypersurfaces that lie at the forefront of current research. To that end, she formulates the problem \[ \min_{u\in U}F(u) := \int_\Omega f(x, u(x), \nabla u(x))\, dx, \qquad (1) \] where \(\Omega\) is an open subset of \(\mathbb{R}^N\), \(U\) is a space of functions \(u\) mapping \(\Omega\) into \(\mathbb{R}^d\), \(N\) and \(d\) are fixed positive integers, \(\nabla u(x)\) denotes the gradient (Jacobian matrix) of \(u\), and \(f = f(x, u, \xi)\) maps \(\Omega \times \mathbb{R}^d \times \mathbb{R}^{N\times d}\) into \(\mathbb{R}\), before explaining how special cases are currently attacked.
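Although the review does not spell it out, the classical first step in attacking (1) is the Euler–Lagrange system, which any sufficiently smooth minimizer must satisfy: \[ \sum_{\alpha=1}^{N} \frac{\partial}{\partial x_\alpha}\Bigl(\frac{\partial f}{\partial \xi_{\alpha i}}\bigl(x, u(x), \nabla u(x)\bigr)\Bigr) = \frac{\partial f}{\partial u_i}\bigl(x, u(x), \nabla u(x)\bigr), \qquad i = 1,\dots,d. \] Much of the modern theory concerns minimizers too rough for this system to hold in the classical sense.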

If, for instance, \(f (x, u, \xi) = \sqrt{1 + |\xi|^2}\), \(\textrm{(1)}\) becomes a \(d\)-dimensional minimal surface problem, of the sort solved during the 1930s by Douglas and (independently) by Radó for \(d=3\). Or if \(\Omega\) is a compact surface \(S\) embedded in three-dimensional space while \(F = W(S) := \int_S \frac{1}{4} (k_1 + k_2)^2 d \sigma\), where \(k_1\) and \(k_2\) are the principal curvatures of \(S\), then \(F\) represents the Willmore (or bending) energy of \(S\). Willmore energy is minimized by spheres, for which \(W(S) = 4\pi\). If \(T\) is a torus in 3-space, then \(W(T) \ge 2 \pi^2\). The modern calculus of variations figures prominently in the theories of elastic shells, the surface structure of crystals, and superconductivity, among many other areas of application.
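The value \(W(S) = 4\pi\) for spheres is a one-line computation: on a sphere of radius \(r\) both principal curvatures equal \(1/r\), so \[ W(S) = \int_S \tfrac{1}{4}\Bigl(\tfrac{1}{r}+\tfrac{1}{r}\Bigr)^2\, d\sigma = \frac{1}{r^2}\cdot 4\pi r^2 = 4\pi, \] independently of \(r\), reflecting the scale invariance of the Willmore energy.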

Generally speaking, the articles on such standard topics as complex analysis, solid and fluid mechanics, and Markov chains tend to skim lightly over the basics, in order to bring the reader within striking distance of current research. In contrast, those devoted to the knotting and linking of molecules, mathematical neuroscience, and other less familiar topics tend to be more tutorial in nature.

Finally, a few words about what goes unmentioned would seem appropriate. There are two articles on interval analysis. The first describes the method itself, and the second describes the techniques by which one may, on occasion, assert not only that a given system \(F(x)=0\) of nonlinear equations has one or more solutions in each of the vector intervals \(u_i \le x \le v_i\), \(i = 1, \dots, n\), but also that no other solutions exist. Yet neither article mentions that the fourteenth of Smale’s 1998 challenge problems—the one concerning the low-dimensional attractor of solutions of the famously chaotic Lorenz equations—was solved in 2002 by an interval-analytic technique applicable to ODEs.
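A toy computation conveys the flavor of the exclusion half of that claim: if an interval extension of \(F\) over a box does not contain zero, the box can contain no solution. The scalar example \(F(x) = x^2 - 2\) and the bare-bones interval class below are mine, not PCAM's, and they ignore the outward rounding that rigorous interval arithmetic requires.

```python
# Minimal interval arithmetic sketch: just enough to evaluate F(x) = x^2 - 2
# over an interval [lo, hi] and decide whether a root can be excluded.
class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __sub__(self, c):                 # interval minus a constant
        return Interval(self.lo - c, self.hi - c)

    def sqr(self):                        # enclosure of {x^2 : x in [lo, hi]}
        cands = (self.lo * self.lo, self.hi * self.hi)
        lo = 0.0 if self.lo <= 0.0 <= self.hi else min(cands)
        return Interval(lo, max(cands))

    def contains_zero(self):
        return self.lo <= 0.0 <= self.hi

def F(x):                                 # F(x) = x^2 - 2, root at sqrt(2) ~ 1.414
    return x.sqr() - 2.0

for box in (Interval(1.0, 1.5), Interval(1.6, 1.7)):
    enclosure = F(box)
    verdict = "may contain a root" if enclosure.contains_zero() else "provably root-free"
    print(f"[{box.lo}, {box.hi}] -> F encloses [{enclosure.lo}, {enclosure.hi}]: {verdict}")
```

Asserting that a root does exist, and is unique, takes more machinery (interval Newton steps, for example), which is what the second PCAM article is about.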

More significantly, PCAM contains no biographical sketches. PCM includes nearly a hundred of them, chronicling the lives and deeds of mathematicians ranging from Pythagoras, Euclid, and Archimedes, through Fermat, Euler, Gauss, Riemann, Poincaré, and Hilbert, to Gödel, von Neumann, and Bourbaki.

Perhaps surprisingly, PCAM has no systematic discussion of game theory. PCM, in its biographical article on von Neumann, mentions that he proved the minimax and perfect-information theorems, while PCAM references the subject only in passing. One would never guess that machines now defeat human champions in most board games, including chess, Monopoly, and Scrabble, or that many-player game theory guided the design of the radio spectrum auctions that (as of 2006) had contributed more than $40 billion to the U.S. Treasury and raised more than $100 billion abroad.

Nevertheless, the above are mere quibbles. A “companion” cannot include everything, and somebody’s pet topic is bound to be omitted. On the whole, Higham and his associate editors (Mark R. Dennis, Paul Glendinning, Paul A. Martin, Fadil Santosa, and Jared Tanner) have produced an admirably readable and informative volume, which anyone interested in applied mathematics would be well advised to consult or—better still—to own!

James Case writes from Baltimore, Maryland.