Many large-scale problems stemming from the most diverse areas of science and engineering have become tractable only because of the existence of data-sparse representations or approximations, and their exploitation in suitable computational methods. In this exciting and rapidly developing area, researchers exploit the fundamental observation that data—as well as functions and operators—can often be represented, or highly accurately approximated, by only a small number of relevant features. Groundbreaking advances in this context have been seen in recent years using, for instance, tensor-based methods, as well as low-rank and low-order representations and approximations.
The eighth Gene Golub SIAM Summer School, held in late May, focused on this emerging area and was titled “Data Sparse Approximation and Algorithms.” The Akademie Berlin-Schmöckwitz, on the southeastern tip of Berlin, Germany, hosted an international and diverse group of 45 highly qualified master’s and Ph.D. students from 18 different countries. The school was held in conjunction with the SIAG/LA International Summer School on Numerical Linear Algebra.
Students took part in four one-week courses, with lectures in the morning and exercise sessions in the afternoon. Deanna Needell (University of California, Los Angeles) discussed topics in optimization that rely on sparsity, with a wide range of applications that include medical imaging, sensor network monitoring, and video. Bernhard G. Bodmann (University of Houston) taught a course devoted to sparse recovery and the geometry of high-dimensional random matrices. He led the students from the restricted isometry property of Emmanuel Candès, Justin Romberg, and Terence Tao to results on the randomized construction of sensing matrices due to Mark Rudelson and Roman Vershynin. Lars Grasedyck (RWTH Aachen University) introduced participants to low-rank tensor formats for the data-sparse representation of higher-order tensors and multivariate functions. The students learned how to apply representations such as Canonical Polyadic, Tucker, Tensor Train, and Hierarchical Tucker in model reduction, uncertainty quantification, high-dimensional partial differential equations, and the analysis of big data. Serge Gratton (University of Toulouse, INP-IRIT) treated numerical methods for inverse problems in the geosciences. His lectures focused specifically on reducing computational complexity by exploiting problem structure.
Fittingly, the school was held in the spirit of the late Gene Golub, with many interactions between lecturers and participants. More than 20 students exhibited their own work in the poster session. Ample opportunities for joint activities existed thanks to the Akademie’s location and the beautiful early-June weather. Volleyball and swimming in the lake were especially popular among the students. The “Gene Golub Class of 2017” also mastered challenges of real-world engineering and the forces of nature. During one activity, participants built rafts from scratch with provided materials. When the moment of truth arrived, all five rafts built by the group floated smoothly on the water. During a canoe trip in the second week, the students paddled through some serious headwinds while crossing two large lakes. One certainly does not have to worry about the skills and determination of this new generation of computational scientists!
In addition to the generous funds provided by SIAM from Gene Golub’s bequest, the school was supported by the U.S. National Science Foundation, the Einstein Center for Mathematics Berlin, the Berlin International Graduate School in Model and Simulation based Research, and MathWorks.