The applied mathematics community lost a giant when mathematician Roger Fletcher passed away in June 2016 while hiking in his beloved Scottish Highlands. A professor in the Department of Mathematics at the University of Dundee, he was known both for his contributions to the field of optimization and for his humble and approachable manner.
Roger Fletcher, 1939-2016, pictured here in his beloved Scottish Highlands. Image courtesy of Sven Leyffer.
Roger began his work in optimization at the dawn of nonlinear programming. Having completed his undergraduate degree at the University of Cambridge on a state scholarship, he pursued his Ph.D. at Leeds University. At the time, Leeds had just established one of the first computing labs in the U.K. While at Leeds, Roger first met Mike Powell, the start of a lifelong competitive friendship. Mike had come to Leeds to present a seminar on derivative-free optimization, but changed his mind and chose instead to talk about a recent technical report from Argonne National Laboratory by Bill Davidon. As it happened, Roger had been given that same report by his advisor, Colin Reeves. When Mike gave his seminar, he found that Roger already knew about Davidon’s then-new method and even had a working code. The two joined forces to write Roger’s first paper, which appeared in the Computer Journal in 1963, on what was later known as the Davidon-Fletcher-Powell (DFP) method.
The DFP paper was not the only hugely influential paper to come out of Roger’s time at Leeds. Following a suggestion by Reeves, Roger investigated the use of his line search to extend the conjugate gradient method to nonlinear functions. The result was the Fletcher-Reeves method. While neither of these two methods is widely used today, they represent a transformational moment in nonlinear programming. Roger modestly attributed his first two papers to his coauthors. “I had two huge ideas given to me by people,” he said. He described his time at Leeds as “probably the happiest period of [my] life.” Many years later, as a professor at Dundee, he would create a similarly happy atmosphere for his students and visitors.
Roger and Mike remained competitive friends throughout their careers, and Roger once recounted a story about when they went hiking in the Scottish Highlands. As Roger was slightly younger, he was leaving Mike behind, a situation that did not sit well with Mike’s competitive nature. So Mike cleverly asked, “Roger, tell me about the proof of the conjugate gradient method” – and deftly managed to catch up with an out-of-breath Roger.
Roger emphasized the use of applications to drive his research. He believed that applications and numerical work help us understand a problem’s challenges and an algorithm’s limitations. During his time at Leeds, Roger explored molecular dynamics calculations that required the solution of an unconstrained optimization problem; these applications directly motivated his work on the DFP and conjugate gradient methods. Roger later worked closely with the School of Chemical Engineering at the University of Edinburgh. He believed that the problems people want to solve should ultimately drive applied mathematics research, and valued simple examples to expose a method’s limitations. In 2005, he and Yuhong Dai developed a small example demonstrating that the Barzilai-Borwein method can fail for box-constrained problems.
The second foundation of Roger’s research philosophy was software: he believed that software validates theory and simultaneously guides us to good methods. He wrote a powerful and robust LP solver and later bqpd, a Fortran 77 package for solving indefinite QPs under degeneracy. These solvers supported both single- and double-precision arithmetic, because numerical difficulties manifested themselves first in the single-precision version. His solver shows some ingenious object-orientesque coding in Fortran 77 (even though I am sure Roger never knew anything about object-oriented programming or C++)! The QP solver relies on a matrix algebra “class” that implements the factorization of the basis matrix. Roger provided both dense and sparse instantiations of this “class” and left open the possibility of other instantiations – for example, for people wishing to exploit the structure of their problem.
Throughout his career, Roger distrusted textbooks. While working towards his Ph.D., he implemented the steepest descent method, then the method of choice. It failed to solve his problem, and Roger grew doubtful of things written in books:
I read in a couple of books (Householder’s book, I think it was; another book, Hildebrand, perhaps). They seemed to suggest that steepest descent was a very good method. So I tried it, and it generated reams and reams of paper, punched paper tape output as the iterations progressed, and didn’t make a lot of progress.
However, Roger was just as suspicious of his own opinions, and not above changing his mind. When he refereed a paper by the young Dai on the Barzilai-Borwein method, he initially rejected the idea as useless. Luckily, Dai persisted; eventually Roger not only changed his mind but also coauthored a number of papers with him [1-3].
This self-doubt and suspicion enabled Roger to stay fresh and produce groundbreaking results, even late in his career. When I last met him, he was excited about his recent work on an augmented Lagrangian method for nonnegative QP. He was using a clever transformation of variables that also allowed him to store a single vector of size \(n\) that combines the primal and dual variables, thereby exploiting their natural complementarity.
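A minimal sketch of the kind of transformation he described (my reconstruction, not Roger’s notation): the primal variables \(x \ge 0\) of a nonnegative QP and the corresponding dual multipliers \(y \ge 0\) satisfy complementarity, so at most one member of each pair \((x_i, y_i)\) is nonzero and both fit in a single vector:

```latex
z = x - y, \qquad
x_i = \max(z_i, 0), \quad
y_i = \max(-z_i, 0), \qquad
x_i \, y_i = 0 \ \text{for all } i .
```

Storing only \(z \in \mathbb{R}^n\) thus recovers both \(x\) and \(y\) exactly, halving the storage by exploiting their natural complementarity.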
Roger was widely recognized in the mathematics community. He won the George B. Dantzig Prize with Stephen M. Robinson in 1997 and shared the 2006 Lagrange Prize in Continuous Optimization, awarded jointly by SIAM and the Mathematical Optimization Society, with Philippe Toint and myself. Roger was a fellow of the Royal Society of Edinburgh, the Royal Society of London, and SIAM. Despite these accolades, he remained humble and approachable throughout his career, and never lost the sheer joy of solving problems and making discoveries (he acknowledged that his discovery of filter methods made him “tingle when [he] thought about it”).
Roger had a great, deadpan sense of humor (if a somewhat limited reservoir of jokes). On one occasion, he complimented me and said, “Nice pullover, is it new?” This confused me, given Roger’s lack of fashion sense and the fact that the pullover was quite old. The mystery was solved when I took the pullover off and discovered a gaping hole in its sleeve.
Roger was what Americans would call a no-nonsense applied mathematician who believed in simple arguments and proofs. At the University of Dundee he fostered a happy environment for his many Ph.D. students and visitors. Roger selflessly provided guidance to his students, passing on to a new generation of researchers the luck and good ideas he felt he was given. He will be missed.
[1] Dai, Y.H., & Fletcher, R. (2005). On the asymptotic behaviour of some new gradient methods. Mathematical Programming, 103(3), 541-559.
[2] Dai, Y.H., & Fletcher, R. (2005). Projected Barzilai-Borwein methods for large-scale box-constrained quadratic programming. Numerische Mathematik, 100(1), 21-47.
[3] Dai, Y.H., & Fletcher, R. (2006). New algorithms for singly linearly constrained quadratic programs subject to lower and upper bounds. Mathematical Programming, 106(3), 403-421.
[4] Fletcher, R. (1993). Resolving degeneracy in quadratic programming. Annals of Operations Research, 46(2), 307-334.
[5] Fletcher, R., & Leyffer, S. (2002). Nonlinear programming without a penalty function. Mathematical Programming, 91, 239-270.
[6] Fletcher, R., & Powell, M.J.D. (1963). A rapidly convergent descent method for minimization. The Computer Journal, 6(2), 163-168.
[7] Fletcher, R., & Reeves, C.M. (1964). Function minimization by conjugate gradients. The Computer Journal, 7(2), 149-154.
[8] Hildebrand, F.B. (1956). Introduction to Numerical Analysis. New York, NY: McGraw-Hill.
[9] Householder, A.S. (1953). Principles of Numerical Analysis. New York, NY: McGraw-Hill.
[10] Leyffer, S. (2015). An interview with Roger Fletcher. Optima, 99.