
In Search of the Perfect Numerical Analysis Textbook

By Nicholas Higham

My bookshelf contains a lot of numerical analysis textbooks. The oldest is Douglas Hartree’s Numerical Analysis (Oxford University Press, 1958), and the newest is Robert Corless and Nicolas Fillion’s A Graduate Introduction to Numerical Methods: From the Viewpoint of Backward Error Analysis (Springer, 2013).[1]

The variety is wondrous, ranging from books at an introductory level to those aimed at an advanced graduate audience; from formal treatments with numerous theorems to more computationally oriented presentations; and from books tied to a particular programming language to those that are language-independent. Why do I keep acquiring them? Why do authors keep writing them?

I am not the first to ask the latter question in SIAM News. In a lighthearted 1984 article titled “On Therapy and Numerical Analysis Texts,” Paul Davis called writing a numerical analysis textbook “the leading case of overwork among academic mathematicians,” and asked, “Why are we driven to this insanity?”

These questions are certainly relevant since SIAM publishes many textbooks in numerical analysis — both general and in specific areas of the subject. Many of us have at some time struggled to find a completely satisfying textbook for a course we need to teach, an experience that seems particularly common among numerical analysts. Given that there is a fairly standard body of core material on the subject, why should we have this problem? I can see several reasons.

[Cartoon created by mathematician John de Pillis.]
First, numerical analysis continues to evolve. The introduction of the IEEE arithmetic standard in 1985 made it easier to describe floating point arithmetic (removing the requirement to discuss guard digits, for example), but also harder in that it introduced features such as NaNs and subnormal numbers that may need to be covered. Polynomial interpolation is a classic topic and it might appear that nothing has changed for over half a century, but in the last decade the barycentric representation of the interpolant has become the representation of choice in many contexts, and hardly any textbooks treat it. The evolution of computer architectures may have little effect on a first course, but it certainly can influence the state of the art in more advanced courses: method A might require more flops than method B, but A could be faster if it is more parallelizable or requires less communication. Authors of books that make use of a programming language (C, Python, etc.) or a problem-solving environment (MATLAB, Maple, Jupyter Notebook, etc.) face a constant battle to keep up with changes in the language and software, while few, if any, textbooks use newer languages (I am not aware of any textbook that uses Julia).
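To make the barycentric point concrete, here is a minimal sketch of the second (true) barycentric formula, p(x) = Σ_j (w_j f_j / (x − x_j)) / Σ_j (w_j / (x − x_j)) with weights w_j = 1/Π_{k≠j}(x_j − x_k). The choice of Python with NumPy and all the names below are purely illustrative; the article prescribes no particular language.

```python
import numpy as np

def barycentric_interp(x_nodes, f_vals, x_eval):
    """Evaluate the polynomial interpolant of (x_nodes, f_vals) at the
    points x_eval using the second (true) barycentric formula."""
    n = len(x_nodes)
    # Barycentric weights: w_j = 1 / prod_{k != j} (x_j - x_k).
    w = np.ones(n)
    for j in range(n):
        for k in range(n):
            if k != j:
                w[j] /= x_nodes[j] - x_nodes[k]
    p = np.empty(len(x_eval))
    for i, x in enumerate(x_eval):
        diff = x - x_nodes
        if np.any(diff == 0.0):
            # x coincides with a node: return the data value exactly.
            p[i] = f_vals[np.argmax(diff == 0.0)]
        else:
            terms = w / diff
            p[i] = np.sum(terms * f_vals) / np.sum(terms)
    return p

# Interpolate Runge's function 1/(1 + 25x^2) at 11 Chebyshev points on [-1, 1].
xj = np.cos(np.pi * np.arange(11) / 10)
fj = 1.0 / (1.0 + 25.0 * xj**2)
print(barycentric_interp(xj, fj, np.array([0.3, 0.7])))
```

For Chebyshev points the weights have a simple closed form, which is part of the formula’s appeal in practice.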

A second reason for dissatisfaction with numerical analysis textbooks may be that the material does not have the right balance of theory, algorithms, and computation. For example, on the topic of Runge-Kutta methods, should a general class of methods be derived, or one particular method stated? What types of error and stability should be analyzed? And to what extent should algorithmic practicalities be discussed?
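For concreteness, the classical fourth-order Runge-Kutta method is perhaps the most common choice when “one particular method” is stated. A minimal sketch follows (Python; the function names and test problem are illustrative only):

```python
import math

def rk4_step(f, t, y, h):
    """One step of the classical fourth-order Runge-Kutta method for y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# Integrate y' = -y, y(0) = 1 to t = 1; the exact answer is exp(-1).
t, y, h = 0.0, 1.0, 0.1
for _ in range(10):
    y = rk4_step(lambda t, y: -y, t, y, h)
    t += h
print(y, math.exp(-1.0))  # global error is O(h^4), here about 3e-7
```

Whether a textbook should stop at such a statement or derive the order conditions behind it is exactly the balance question raised above.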

Another reason is that the field of numerical analysis is big enough that no book can treat all the topics covered in courses: divided differences, multidimensional interpolation and integration, stationary iterative methods for linear systems, and stiff ordinary differential equations are not found in every book. And with the growing importance of stochastic computation and uncertainty quantification, there is an argument for including some relevant aspects of probability and statistics.

Application areas influence the examples included in a textbook. While the computation of PageRank is now commonly presented as a practical use of the power method, future textbooks may emphasize the relevance of the subject to machine learning, or some area that is as yet in its infancy.
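As a sketch of that standard example (Python with NumPy; the link-matrix convention, the damping factor 0.85, and all names here are illustrative assumptions, not anything from the article), PageRank can be computed by applying the power method to the Google matrix G = αP + (1 − α)(1/n)eeᵀ:

```python
import numpy as np

def pagerank(A, alpha=0.85, tol=1e-10, max_iter=1000):
    """Power method for the PageRank vector of G = alpha*P + (1-alpha)/n.

    A[i, j] = 1 if page j links to page i. Dangling pages (no out-links)
    are treated as linking to every page uniformly, making P column-stochastic.
    """
    n = A.shape[0]
    out = A.sum(axis=0)                       # out-degree of each page
    P = np.where(out > 0, A / np.where(out > 0, out, 1), 1.0 / n)
    x = np.full(n, 1.0 / n)                   # start from the uniform vector
    for _ in range(max_iter):
        x_new = alpha * (P @ x) + (1 - alpha) / n
        if np.abs(x_new - x).sum() < tol:     # 1-norm convergence test
            return x_new
        x = x_new
    return x

# Tiny 4-page web: 0 -> 1, 0 -> 2, 1 -> 2, 2 -> 0; page 3 is dangling.
A = np.array([[0, 0, 1, 0],
              [1, 0, 0, 0],
              [1, 1, 0, 0],
              [0, 0, 0, 0]], dtype=float)
print(pagerank(A))
```

The iteration converges at a rate governed by the damping factor α, which is one reason the power method is so practical in this setting.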

As well as the choice of topics, there is no agreement on the order in which to present them. The first chapter of a numerical analysis textbook has traditionally been about errors and floating point arithmetic, but some argue that this chapter should appear later because numerical analysis is not principally about floating point arithmetic.

As an instructor looking to develop a course, if you ask yourself, “Which book has the best treatment of topic X?” then you may well find that your answers span almost as many books as there are topics, and this feeds the desire—to which Paul Davis referred—to produce your own text. He also noted that “The drive to write yet another numerical analysis text may arise from computation’s curious mix of science and art,” reasoning that everyone “is anxious to share yet another insight into the art, but none dares skimp on explaining the science.” Davis observed that “the best books seem to come from the people who do the best work…apparently, there is no substitute for being there.”

For all of these reasons, the perfect numerical analysis textbook does not yet exist and probably never will. Authors will continue to write their own versions of what a numerical analysis textbook should be, and SIAM will continue to publish them, at least when the usual criteria—which include correctness, distinctiveness, market size, and lack of duplication of existing SIAM books—are met.

If you have an idea for a new textbook or research monograph—in numerical analysis or any subject that fits SIAM’s purview—please contact the SIAM acquisition editors, who will be happy to discuss the idea with you.

[1] Reviewed in the December 2016 issue of SIAM Review.

Nicholas Higham is the Richardson Professor of Applied Mathematics at the University of Manchester. He is the current president of SIAM.