SIAM News Blog

May 2020 Prize Spotlight

The SIAM Activity Group on Optimization awards a Best Paper Prize and Early Career Prize every three years. The 2020 awardees of these two prizes are Hamza Fawzi, James Saunderson, Pablo A. Parrilo, and John C. Duchi. Additional information about each recipient, including a Q&A with SIAM News, can be found below!

SIAM Activity Group on Optimization Best Paper Prize

The 2020 SIAM Activity Group on Optimization Best Paper Prize is awarded to Hamza Fawzi, James Saunderson, and Pablo A. Parrilo for their paper, “Semidefinite Approximations of the Matrix Logarithm,” published in Foundations of Computational Mathematics (April 2019). The award recognizes them for proposing an efficient semidefinite programming approximation of the matrix logarithm. Due to the cancellation of the 2020 SIAM Conference on Optimization, the award presentation date is to be determined.

The SIAM Activity Group on Optimization (SIAG/OPT) awards the SIAG/OPT Best Paper Prize every three years to the authors of the most outstanding paper, as determined by the prize committee, on a topic in optimization in the four calendar years preceding the award year. The qualifying paper must have been published in English in a peer-reviewed journal and must contain research contributions to the field of optimization, as commonly defined in the mathematical literature, with direct or potential applications. 

Hamza Fawzi is a lecturer in the Department of Applied Mathematics and Theoretical Physics (DAMTP) at the University of Cambridge. He received his undergraduate degree from the École des Mines de Paris, his MS from UCLA, and his PhD from the Massachusetts Institute of Technology (MIT). His research interests lie broadly in convex optimization and its applications. The current focus of his work is on semidefinite programming and its applications in polynomial optimization and quantum information theory.

James Saunderson is a lecturer in the Department of Electrical and Computer Systems Engineering at Monash University. He received undergraduate degrees in electrical engineering and mathematics, both from the University of Melbourne, and master's and PhD degrees in electrical engineering and computer science from MIT. His research is primarily in developing and analyzing computational tools, based on mathematical optimization, for problems in science and engineering.

Pablo A. Parrilo is the Joseph F. and Nancy P. Keithley Professor of Electrical Engineering and Computer Science at MIT, with a joint appointment in Mathematics. He is affiliated with the Laboratory for Information and Decision Systems (LIDS) and the Operations Research Center (ORC). He received a degree in electrical engineering from the University of Buenos Aires, Argentina, and a PhD in control and dynamical systems from the California Institute of Technology. His research interests include mathematical optimization, control and identification, machine learning, and the development and application of computational tools based on convex optimization and algorithmic algebra to practically relevant engineering problems. He is a Fellow of IEEE and SIAM. 

The authors collaborated on their answers to our questions.

Q: Why are you excited to receive the SIAG/OPT Best Paper Prize?

A: We are humbled and honored that the committee chose to recognize our work with this prize. It is a wonderful acknowledgement of the importance of semidefinite optimization, and the role it can play in rapidly evolving areas such as quantum information. 

Q: Could you tell us a bit about the research that won you the prize?

A: The paper develops a new method, based on semidefinite programming, to solve optimization problems involving the matrix logarithm function. The logarithm of a positive definite Hermitian matrix plays an important role in many branches of applied mathematics and satisfies some remarkable properties, such as operator concavity. The main technical novelty of the paper is to show that this function, and other functions derived from it, admit (approximate) representations using semidefinite programming. We do this by showing that certain rational approximations of the logarithm function are also operator concave and admit an exact semidefinite representation. One of the main application areas of our work is quantum information theory, where the quantum relative entropy function, defined in terms of the matrix logarithm, plays a fundamental role.
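The scalar case gives a feel for the construction. The integral representation log(y) = ∫₀¹ (y − 1)/(1 + t(y − 1)) dt, discretized by Gauss-Legendre quadrature, yields a rational approximation whose terms are each operator concave and semidefinite-representable; combining this with the identity log(x) = 2^k log(x^(1/2^k)) keeps the argument close to 1, where the quadrature is accurate. The following is a minimal numerical sketch of the scalar version of this idea (the parameter choices m = 3 and k = 4 and the test values are illustrative; the paper's contribution is the matrix analogue and its exact semidefinite representation):

```python
import numpy as np

def log_approx(x, m=3, k=4):
    # Bring the argument close to 1, where the quadrature is accurate,
    # using log(x) = 2^k * log(x^(1/2^k)).
    y = x ** (1.0 / 2**k)
    # Gauss-Legendre nodes/weights on [-1, 1], mapped to [0, 1].
    u, w = np.polynomial.legendre.leggauss(m)
    t, w = (u + 1) / 2, w / 2
    # Quadrature of log(y) = int_0^1 (y - 1) / (1 + t*(y - 1)) dt.
    # Each quadrature term is operator concave and SDP-representable,
    # which is what makes the matrix version of this tractable.
    r = np.sum(w * (y - 1) / (1 + t * (y - 1)))
    return 2**k * r

for x in [0.5, 2.0, 10.0]:
    print(f"x = {x}: approx = {log_approx(x):.10f}, log = {np.log(x):.10f}")
```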

Q: What does your research mean to the public?

A: Our work develops new ways to rewrite certain mathematical optimization problems in a special way that allows them to be approximately solved using standard computer algorithms. One of the key areas where our methods can be used is in designing, and evaluating the quality of, quantum technologies. For example, our work has been used to estimate the amount of information that can be transmitted reliably through a quantum channel. It has also been used to evaluate the security of communication protocols that are safe against attacks by quantum computers, known as quantum key distribution schemes.

Q: What does being a SIAM member mean to you?

A: Being a SIAM member means being part of a community that values intellectual rigor and depth alongside applications and impact. SIAM’s events are welcoming and collegial, its journals are well managed and of the highest quality, and the society is particularly supportive of its younger members. 


SIAM Activity Group on Optimization Early Career Prize

John C. Duchi of Stanford University is the recipient of the 2020 SIAM Activity Group on Optimization Early Career Prize. The award recognizes Duchi for his deep and important contributions to convex, nonconvex, and stochastic optimization as well as to the statistical foundations of optimization methods for data science.

This is the first award of this new prize. The SIAM Activity Group on Optimization (SIAG/OPT) awards the SIAG/OPT Early Career Prize at the SIAM Conference on Optimization. Due to the cancellation of the 2020 conference, the presentation date is yet to be determined.

The SIAG/OPT Early Career Prize is awarded every three years to an outstanding early career researcher in the field of optimization for distinguished contributions to the field in the six calendar years prior to the award year. The award recognizes an individual who has made outstanding, influential, and potentially long-lasting contributions to the field of optimization within six years of receiving the PhD or equivalent degree as of January 1 of the award year. The contributions for which the award is given must be publicly available and may belong to any aspect of optimization in its broadest sense. The contributions may include a paper or papers published in English in peer-reviewed journals or conference proceedings, or high-quality, freely available open-source software.

John C. Duchi completed his PhD in computer science at the University of California, Berkeley and is currently an assistant professor of Statistics and Electrical Engineering and (by courtesy) Computer Science at Stanford University. His work spans statistical learning, optimization, information theory, and computation, with a few driving goals: (1) to discover statistical learning procedures that optimally trade off real-world resources (computation, communication, privacy provided to study participants) while maintaining statistical efficiency; (2) to build efficient large-scale optimization methods that address the spectrum of optimization, machine learning, and data analysis problems we face, allowing us to move beyond bespoke solutions to methods that work robustly; and (3) to develop tools to assess and guarantee the validity of, and the confidence we should have in, machine-learned systems.

Q: Why are you excited to receive the SIAG/OPT Early Career Prize?

A: I'm thrilled to be a recipient of the prize. I've been fortunate to have a number of great collaborators, both more senior colleagues and my students at Stanford, and they have really pulled my research in all sorts of interesting directions. I think the prize is a reflection of their hard work, and frankly, I just feel lucky to have been able to work with them. A few shout-outs are important here: Stephen Boyd's courses (when I was an undergraduate at Stanford University) and his mentorship were what got me into optimization in the first place, and I had a remarkable chance to work closely with Yoram Singer, making large-scale optimization for machine learning more practical at Google and, I hope, in industry more broadly. My PhD advisors at Berkeley (Michael I. Jordan and Martin J. Wainwright) really encouraged a view bridging optimization, computation, and statistics, and seeing this viewpoint recognized through the prize and the community more broadly is very rewarding. Of course, now it's my PhD students working in optimization who bring in the exciting ideas: Hilal Asi, Yair Carmon, and Feng Ruan have really taken this vein of research and run with it.

Q: Could you tell us a bit about the research that won you the prize?

A: The work this prize recognizes has been about building efficient large-scale methods for the spectrum of optimization, machine learning, and data analysis problems we face. The paper of mine that is probably best known, AdaGrad (“Adaptive Subgradient Methods for Online Learning and Stochastic Optimization,” Journal of Machine Learning Research, 2011), brings ideas of conditioning to online and large-scale stochastic optimization and approximation problems, rescaling the problem adaptively to better reflect the geometry of the actual problem at hand. The idea was to take methods that work well even when data is sparse or ill-conditioned, as frequently happens in large-scale learning problems, especially with text data, and make them more robust. Since then, my collaborators and I have tried to build methods that work well in distributed and parallel optimization, and to understand how to make methods that just work for the optimization problems in statistics and machine learning. Many modern methods appear fairly sensitive to their parameters, meaning that to fit large-scale models one wastes hundreds or thousands of CPU hours, and building robust methods that get around that is exciting and fun.
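As a rough illustration of the adaptive rescaling idea, here is a minimal sketch of the diagonal AdaGrad update applied to a toy ill-conditioned quadratic (the step size, iteration count, and toy objective are illustrative assumptions, not from the paper):

```python
import numpy as np

def adagrad(grad, x0, lr=0.1, eps=1e-8, steps=500):
    # Diagonal AdaGrad: scale each coordinate's step by the inverse
    # root of its accumulated squared gradients, so poorly scaled or
    # rarely updated (e.g. sparse) coordinates take relatively larger steps.
    x = np.array(x0, dtype=float)
    accum = np.zeros_like(x)
    for _ in range(steps):
        g = grad(x)
        accum += g * g
        x -= lr * g / (np.sqrt(accum) + eps)
    return x

# Toy ill-conditioned quadratic: f(x) = 0.5 * (x_1^2 + 100 * x_2^2),
# with gradient (x_1, 100 * x_2).
grad = lambda x: np.array([1.0, 100.0]) * x
print(adagrad(grad, [1.0, 1.0]))  # both coordinates approach 0
```

Because each coordinate's step is normalized by that coordinate's own accumulated gradient magnitudes, the badly scaled coordinate makes progress at essentially the same rate as the well-scaled one, which is the sense in which the method adapts to the problem's geometry.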

Q: What does your work mean to the public?

A: Several of the optimization algorithms I developed ended up implemented at web companies (Google, Facebook, Microsoft), though, of course, the engineers and researchers there often made substantial and important further modifications. At some level, the optimization algorithms we've developed are part of the plumbing behind the large-scale model fitting each of these companies does. So when you send a text message and it autocompletes (correctly or incorrectly), or when you ask for an automatic translation and the model translates "The spirit is willing but the flesh is weak" into something a bit more reasonable than "The vodka is good, but the meat is rotten," large-scale stochastic optimization methods are behind the scenes, fitting these prediction models.

Q: What does being a SIAM member mean to you?

A: I get some great journals and conferences! Of course I love to read SIOPT, but I also like the more eclectic features in SIAM Review and SIAM News; they fall exactly in that great spot between super nerdy and super interesting.
