# Progress by Accident: Some Reflections on My Career

*Walter Gautschi, professor emeritus at Purdue University and a leading mathematician in the areas of approximation theory, orthogonal polynomials, special functions, and numerical analysis, celebrated his 90th birthday in December 2017. A conference honoring this occasion was held at Purdue University earlier this year. In the following article, Gautschi describes how different research areas sparked his interest.*

Often in my career, my interest in the mathematical areas in which I was active came about by chance occurrences that at the time seemed rather insignificant but were reinforced by later events.

### Ordinary Differential Equations

Numerical ordinary differential equations (ODEs) piqued my interest during my first semester at the University of Basel. I enrolled in a course on “Wissenschaftliches Rechnen” (scientific computation), in which Professor Eduard Batschelet mentioned a graphical method for solving ODEs due to Richard Grammel. The method uses polar coordinates: the argument \(x\) of the solution \(y\) serves as the polar angle, and the reciprocal of the solution, \(1/y(x)\), is plotted along the radius vector at angle \(x\). It struck me as odd that the reciprocal of the solution was being approximated. Why not the solution itself?

It turned out that a geometric construction similar to the one used by Grammel indeed exists to approximate the solution. I mentioned this to Batschelet, who was pleased by my observation. He must have mentioned this to Alexander Ostrowski, who encouraged me to expand my work on Grammel’s method into a Ph.D. thesis. I was not thrilled with this suggestion, knowing that the digital computer era—which was just beginning—would demand numerical methods rather than graphical ones. But I made the most of it and developed techniques for analyzing the error of Grammel’s method.

I published this work in the *Zeitschrift für Angewandte Mathematik und Physik* (*ZAMP*) in 1951. A few years earlier, Rudolf Zurmühl had published Runge-Kutta methods that directly integrate single differential equations of \(n\)th order, i.e., without first decomposing them into a system of first-order equations. I decided to apply Bieberbach’s techniques to Runge-Kutta-Zurmühl methods to obtain local error bounds for all derivatives of order \(<n\). It was a laborious undertaking—a real tour de force—but I persisted and published the results in *ZAMP* in 1955.

Throughout my years at Oak Ridge National Laboratory (ORNL)—and still later—I was teaching myself Russian and reading Russian books and papers about approximation and computation (recall that these were the years after the Sputnik launch). This inspired me to examine numerical methods for ODEs based on trigonometric, rather than algebraic, polynomials with the expectation that they could possibly be used to solve differential equations with oscillatory solutions. I published a paper on this work in 1961, but it did not immediately have the resonance that I had hoped it would. It took some 40 years until the paper was recognized as anticipating what in the meantime had been called “exponentially fitted” methods.

### Linear Difference Equations

I must credit Milton Abramowitz with fueling my interest in the stability of linear difference equations, specifically equations of second order (three-term recurrence relations). He often spoke enthusiastically about “Miller’s method” of applying a recurrence relation in a backward direction, originally suggested by Jeffrey Charles Percy Miller and used to compute Bessel functions.

Although Miller’s method was widely perceived as a special trick, I suspected there was more to it and that continued fractions played an important role. We were interested in finding a stable algorithm for computing a minimal solution—a solution that grows more slowly than all other linearly independent solutions—of a three-term recurrence relation. I speculated that the ratio of two successive values of a minimal solution can be expressed in terms of a continued fraction formed naturally from the coefficients of the recurrence relation, a hunch that was confirmed by a theorem in Oskar Perron’s book on continued fractions. The result then allows one to formulate a stable algorithm generating the first \(N\) values of the minimal solution, given the first one or perhaps even the value of an infinite series involving the minimal solution. I worked on this for several years, applying these ideas to the recursive computation of many special functions; eventually I published a comprehensive account of this work in 1967.
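The backward-recurrence idea is easy to demonstrate today. Below is a minimal Python sketch (an illustration, not any historical program) of a Miller-type algorithm for the Bessel functions \(J_n(x)\), which form the minimal solution of \(y_{n+1} = (2n/x)\,y_n - y_{n-1}\). Arbitrary trial values are seeded at a large starting index (the offset `extra` is an illustrative choice), the recurrence is run downward, and the Neumann identity \(J_0(x) + 2J_2(x) + 2J_4(x) + \cdots = 1\) supplies the normalization: precisely a known "value of an infinite series involving the minimal solution."

```python
def bessel_j_backward(x, nmax, extra=20):
    # Downward (Miller-type) recurrence for the minimal solution J_n(x).
    # Seed arbitrary trial values at a large index N, recur downward via
    #   f_{n-1} = (2n/x) f_n - f_{n+1},
    # then normalize using J_0 + 2 J_2 + 2 J_4 + ... = 1.
    N = nmax + extra                  # starting index: an illustrative choice
    f = [0.0] * (N + 2)
    f[N + 1], f[N] = 0.0, 1e-30       # arbitrary seed values
    for n in range(N, 0, -1):
        f[n - 1] = (2.0 * n / x) * f[n] - f[n + 1]
    s = f[0] + 2.0 * sum(f[k] for k in range(2, N + 1, 2))
    return [f[n] / s for n in range(nmax + 1)]
```

Because the downward recurrence damps out the dominant solution, the arbitrary seed is forgotten after a few steps; for \(x = 1\) this reproduces \(J_0(1) \approx 0.7651976866\) to essentially machine precision, while the same recurrence run forward loses all accuracy once \(n\) exceeds \(x\).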

### Special Functions

When I arrived at the National Bureau of Standards in Washington, D.C. (now the National Institute of Standards and Technology) in 1956, a major project entailed the preparation of the *Handbook of Mathematical Functions*. Abramowitz was directing the project and asked me whether I would be interested in writing the as-yet unassigned chapter on the error function. I accepted, despite my complete lack of experience with special functions; I felt that my background in classical analysis was strong enough for me to be up to the task. I diligently began to study the literature on special functions, particularly the confluent hypergeometric function. At Milton’s request, I also helped with the chapter on the exponential integral. The experience I gained through this project came in handy when I delivered an invited seminar on the computation of special functions in 1975. This resulted in a lengthy survey paper published that same year.

I was later invited to the 100th anniversary celebration of Francesco Giacomo Tricomi’s birth in Rome, which presented me with another occasion to delve into special functions. The incomplete gamma function—one of Tricomi’s favored functions—was the focus here. This work culminated in an extensive, partly historical paper published in the *Atti dei Convegni Lincei* in 1998.

### Orthogonal Polynomials and Gaussian Quadrature

During my time at ORNL in the late 1950s and early 1960s, an ORNL chemist asked Alston Householder if a member of his group could help with the computation of a definite integral that resisted accurate evaluation. Householder felt that I was the best person for the job. The integral in question turned out to be an integral over \([-1, 1]\) with a logarithmically singular factor in the denominator, something like \(\pi^2 + \log^2\big((1+x)(1-x)\big)\). That seemed easy enough: use Gaussian quadrature with the reciprocal of this singular factor as a weight function and take \(n\), the number of Gauss points, large enough to yield any desired accuracy. Having carefully studied Francis Hildebrand’s book on numerical analysis, I knew how the required orthogonal polynomials could be generated from the moments of the weight function and was able to compute them to any order. With full confidence, I wrote the necessary short program and ran it on ORACLE, the world’s fastest computer in 1954. I failed miserably! Investigating the underlying reason for my failure—ill conditioning—gave rise to many papers on the constructive theory of orthogonal polynomials.
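This kind of failure is easy to reproduce. The following Python sketch (an illustration, not the original ORACLE program) implements the classical moment-based algorithm for the coefficients \(\alpha_k, \beta_k\) of the three-term recurrence satisfied by the monic orthogonal polynomials. Even for the harmless weight \(w(x) = 1\) on \([0, 1]\), with moments \(\mu_k = 1/(k+1)\) and exact coefficients \(\alpha_k = 1/2\), \(\beta_k = k^2/\big(4(4k^2 - 1)\big)\), the computed \(\beta_k\) lose all accuracy in double precision well before \(k = 20\): the map from moments to recurrence coefficients is severely ill conditioned.

```python
def moments_to_recurrence(mom):
    # Chebyshev's moment-based algorithm: from the first 2n moments of a
    # weight function, compute the coefficients alpha_k, beta_k of the
    # recurrence  p_{k+1}(x) = (x - alpha_k) p_k(x) - beta_k p_{k-1}(x)
    # satisfied by the monic orthogonal polynomials.
    n = len(mom) // 2
    alpha = [0.0] * n
    beta = [0.0] * n
    sig_prev = [0.0] * (2 * n)   # row k-2 of mixed moments
    sig = list(mom)              # row k-1 (row 0 = the moments themselves)
    alpha[0] = mom[1] / mom[0]
    beta[0] = mom[0]
    for k in range(1, n):
        sig_new = [0.0] * (2 * n)
        for l in range(k, 2 * n - k):
            sig_new[l] = (sig[l + 1] - alpha[k - 1] * sig[l]
                          - beta[k - 1] * sig_prev[l])
        alpha[k] = sig_new[k + 1] / sig_new[k] - sig[k] / sig[k - 1]
        beta[k] = sig_new[k] / sig[k - 1]
        sig_prev, sig = sig, sig_new
    return alpha, beta
```

In exact rational arithmetic the algorithm itself is exact; the floating-point failure is purely one of conditioning, not a defect of the recursion. Running it on `mom = [1.0 / (k + 1) for k in range(40)]` returns accurate \(\beta_1 = 1/12\) and \(\beta_2 = 1/15\), but by \(\beta_{18}\) the result bears no resemblance to the true value.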

The 150th anniversary of Elwin Christoffel’s birth in 1979 greatly intensified my preoccupation with orthogonal polynomials, especially Gaussian quadrature. Christoffel was instrumental in generalizing the Gaussian quadrature rule to arbitrary weight functions and developed the underlying theoretical machinery involving orthogonal polynomials. The speaker slated to deliver the occasion’s plenary talk on Christoffel’s contributions to numerical integration had to withdraw unexpectedly, and I was asked to step in. Against all odds I pulled it off and presented the lecture, which was published in 1981.

### History

I have continually enjoyed reading about the masters of centuries past, and whenever I suspected that some contemporary results may have been realized much earlier, perhaps in the 19th century, I eagerly dug into the older literature to confirm my hunches. The first time this happened was in connection with a result in Oskar Perron’s book on computing solutions to three-term recurrences using continued fractions, which I speculated was much older. Despite perusing many books on difference equations, I could not find any mention of this result. But I did find many references to Italian mathematician Salvatore Pincherle, so I browsed through Pincherle’s collected works. There, in an 1894 paper on hypergeometric functions, I found exactly the result that Perron stated in his book. I attributed this result to Pincherle, and it became known as Pincherle’s theorem.

I have always admired Leonhard Euler. During a visit to Basel in the early 2000s, Emil Fellmann, a well-known Euler biographer, handed me a copy of a letter that Euler had written to his close friend Daniel Bernoulli. It dealt with the somewhat bizarre (and hence failed) attempt to interpolate the common logarithm at all powers of 10. It took me a while to figure out what was being described in the letter, but I was eventually able to explain the matter both in a 2008 paper and a short commentary in a correspondence volume. A year earlier, I was invited to speak about Euler on the 300th anniversary of his birth at the 2007 International Congress on Industrial and Applied Mathematics, which took place in Zurich. It took me—not really an expert on Euler’s work and life—a whole year to prepare the talk, an expanded version of which appeared in *SIAM Review* in 2008.

**Acknowledgments:** Thanks to Alex Pothen for his help in editing this article.