Weinan E of Princeton University is the 2019 recipient of the Peter Henrici Prize. He is being recognized for breakthrough contributions in various fields of applied mathematics and scientific computing, particularly nonlinear stochastic (partial) differential equations (PDEs), computational fluid dynamics, computational chemistry, and machine learning. E’s scientific work has led to the resolution of many long-standing scientific problems. His signature achievements include novel mathematical and computational results in stochastic differential equations; design of efficient algorithms to compute multiscale and multiphysics problems, particularly those arising in fluid dynamics and chemistry; and his recent pioneering work on the application of deep learning techniques to scientific computing.
Peter Henrici, whom the prize honors, was a Swiss numerical analyst and teacher at the Eidgenössische Technische Hochschule Zürich (ETH Zurich) for 25 years. The award is given jointly by SIAM and ETH Zurich for contributions to applied and numerical analysis and/or exposition appropriate for applied mathematics and scientific computing.
E is currently a professor in the Department of Mathematics and the Program in Applied and Computational Mathematics at Princeton. He received his Ph.D. from the University of California, Los Angeles in 1989, after which he held visiting positions at New York University (NYU) and the Institute for Advanced Study. He was a member of the faculty of NYU’s Courant Institute of Mathematical Sciences from 1994 to 1999.
E has worked in a wide range of areas, including homogenization theory, computational fluid dynamics, PDEs, stochastic PDEs, weak Kolmogorov-Arnold-Moser theory, soft condensed matter physics, computational chemistry, and machine learning. The main themes of his work have been applied analysis and multiscale modeling.
E was awarded the Collatz Prize of the International Council for Industrial and Applied Mathematics in 2003, and SIAM’s Ralph E. Kleinman Prize and Theodore von Kármán Prize in 2009 and 2014, respectively. He became a fellow of the Institute of Physics in 2005, an inaugural SIAM Fellow in 2009, and a fellow of the American Mathematical Society in 2012. He was also elected as a member of the Chinese Academy of Sciences in 2011.
E will present his prize lecture at the 9th International Congress on Industrial and Applied Mathematics (ICIAM 2019), to be held in Valencia, Spain, July 15–19, 2019.
Application of machine learning to multiscale modeling. Deep Potential–Smooth Edition (DeepPot-SE) is an end-to-end machine learning-based potential energy surface (PES) model capable of efficiently representing the PES of a range of systems with the accuracy of ab initio quantum mechanics models. DeepPot-SE is extensive and continuously differentiable, scales linearly with system size, and preserves all of the system’s natural symmetries. It also characterizes finite and extended systems, such as organic molecules, metals, semiconductors, and insulators, with high fidelity, as seen here. Bulk systems, which contain many different phases or atomic components, present more challenges. The figure depicts two types of systems for the dataset and results obtained from both DeepPot-SE and deep potential molecular dynamics methods. Image courtesy of Weinan E.
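The symmetry requirement mentioned in the caption can be made concrete with a small sketch. The snippet below is purely illustrative and is not the DeepPot-SE construction: it uses sorted pairwise distances, a simple descriptor that is automatically invariant under translation, rotation, and permutation of identical atoms, which are the natural symmetries a PES model must preserve.

```python
import numpy as np

def descriptor(coords):
    # Illustrative symmetry-preserving descriptor (NOT DeepPot-SE):
    # sorted pairwise distances are invariant under translation,
    # rotation, and permutation of identical atoms.
    diffs = coords[:, None, :] - coords[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    i, j = np.triu_indices(len(coords), k=1)
    return np.sort(dists[i, j])

rng = np.random.default_rng(0)
atoms = rng.normal(size=(5, 3))              # 5 atoms in 3-D

# Random rotation (orthogonal matrix from a QR factorization)
q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
perm = rng.permutation(5)                    # relabel the atoms

d_ref = descriptor(atoms)
d_sym = descriptor(atoms[perm] @ q.T + 1.0)  # rotate, permute, translate
assert np.allclose(d_ref, d_sym)
```

DeepPot-SE builds far richer learnable descriptors than this, but the invariance check at the end illustrates the property the caption refers to: the model's output cannot depend on how the atoms are labeled or how the system is oriented.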
Peter Henrici Prize Lecture: Machine Learning and Multiscale Modeling, Monday, July 15, 2019, 7:15 PM
Modern machine learning has had remarkable success in a variety of artificial intelligence applications and is poised to fundamentally change the way we perform physical modeling. In his talk, Weinan E will offer an overview of some of the important theoretical and practical issues in this exciting area.
The first part of E’s lecture will focus on the following question: How can modern machine learning tools help build reliable and practical physical models? This section will address two topics: development of machine learning models that satisfy physical constraints, and the integration of machine learning and multiscale modeling.
The second portion of the talk will cover the mathematical foundation of modern machine learning. Serious challenges arise because the underlying dimensionality is high and neural network models are non-convex and highly over-parametrized. E will review the mathematical theory that has emerged from exploration of these issues. He will specifically discuss the representation of high-dimensional functions, optimal a priori estimates of the generalization error for neural networks, and gradient descent dynamics.
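One tractable setting often used to study gradient descent dynamics in over-parametrized models is random-feature regression, where only the output layer of a wide network is trained. The toy below is an illustration of that general idea, not a rendering of E's specific results: with far more features than data points, plain gradient descent on the squared loss still drives the training error down.

```python
import numpy as np

# Toy random-feature regression: m = 200 features, only n = 10 data
# points, so the model is heavily over-parametrized. Illustrative only.
rng = np.random.default_rng(1)
n, m = 10, 200
x = np.linspace(-1.0, 1.0, n)[:, None]
y = np.sin(np.pi * x)                        # target values

W = rng.normal(size=(1, m))                  # frozen random weights
b = rng.normal(size=(1, m))
h = np.maximum(x @ W + b, 0.0) / np.sqrt(m)  # ReLU features, 1/sqrt(m) scaling

a = np.zeros((m, 1))                         # trainable output weights
lr = 0.5
loss0 = float(np.mean((h @ a - y) ** 2))
for _ in range(3000):
    err = h @ a - y
    a -= lr * (h.T @ err) / n                # gradient step (up to a constant)

loss = float(np.mean((h @ a - y) ** 2))
```

Because only the output layer is trained, the loss here is a convex quadratic and the dynamics can be analyzed mode by mode; the fully non-convex case with all layers trained, which E's lecture addresses, is where the serious mathematical challenges arise.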