Machine Learning for Supernova Turbulence

By Jillian Kunze

Though many areas of theoretical astrophysics depend heavily on simulations, there is not yet a significant machine learning presence within the field. In a minisymposium presentation at the 2021 SIAM Conference on Computational Science and Engineering, which is taking place virtually this week, Platon Karpov (University of California, Santa Cruz and Los Alamos National Laboratory) discussed how he aims to bring machine learning into numerical astrophysics. He focused specifically on turbulence in supernova models, as turbulence plays a significant role in an explosion’s progression but can be difficult to simulate.

Supernovae are some of the most energetic events in the universe and are responsible for creating most of the elements heavier than hydrogen and helium. Under normal circumstances, the fusion of elements within a star creates enough outward pressure to balance the inward pull of gravity from the star’s mass. But during a core-collapse supernova, the star begins to run out of material that it can fuse efficiently and the outward pressure decreases. The star thus collapses inward, then bounces: a pressure wave explodes outward, expelling a large amount of matter and leaving behind a neutron star or a black hole. “This process may sound simple, but it is not,” said Karpov.

Figure 1. Sapsan provides a machine learning framework for modeling turbulence in supernovae. Further information and documentation on Sapsan can be found online. The image of the "supernova-man" was created by Phil Plait.
Karpov specifically investigated what occurs when the pressure wave begins to extend outward after the bounce; turbulence seems to be a key player during this time. The shock wave can also stall, a phase in which both neutrino heating and turbulence seem to play major roles. This is a highly asymmetric problem, so it must be modeled in three dimensions to account for all of the effects. Though the simulated timescale is only on the order of one second, running these models can cost hundreds of millions of CPU-hours. This immense computational expense makes it very difficult to obtain statistically significant results. The modeled stars also sometimes fail to explode; there are many parameters and a lot of physics to consider, which can lead to mistakes.

The question thus becomes how best to study supernovae when three-dimensional simulations are so expensive and one-dimensional simulations do not exhibit the turbulent behavior. Karpov raised a possible solution: turbulence subgrid modeling using machine learning. Many cutting-edge simulations of supernovae use subgrid models, such as large eddy simulations, to represent turbulence; while these models are less expensive to run than direct numerical simulations, they are also less accurate. Machine learning approaches may be able to help close that gap.
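To make the closure problem concrete: in a large eddy simulation, the unresolved scales enter through a subgrid stress tensor that the model must supply. The sketch below — built on a hypothetical random velocity field and an arbitrary box-filter width, not Karpov’s actual setup — shows the quantity a data-driven closure is trained to predict:

```python
import numpy as np
from scipy.ndimage import uniform_filter

rng = np.random.default_rng(0)
# Hypothetical high-resolution 3D velocity field: three components on a 32^3 grid
u = rng.standard_normal((3, 32, 32, 32))

width = 4  # box-filter width in grid cells (illustrative choice)
filt = lambda f: uniform_filter(f, size=width, mode="wrap")

# Filtered velocity and the subgrid stress tensor
# tau_ij = filt(u_i u_j) - filt(u_i) filt(u_j): what a closure model must predict
u_bar = np.array([filt(u[i]) for i in range(3)])
tau = np.array([[filt(u[i] * u[j]) - u_bar[i] * u_bar[j]
                 for j in range(3)] for i in range(3)])

print(tau.shape)  # (3, 3, 32, 32, 32)
```

A trained subgrid model would predict tau from the filtered (resolved) field alone, sparing the simulation from resolving the smallest scales directly.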

There is currently not much machine learning work being done in theoretical astrophysics, and few accessible platforms on which to build. With this situation in mind, Karpov began the development of his machine learning approach with several key goals. As coding practices and project designs in some academic fields tend to be chaotic, he hoped to bring industry tools and standards to his project; working collaboratively with the company Provectus helped him achieve this goal. Karpov also intended to move quickly from paper to application for the physics-informed machine learning templates.

The pipeline he developed to study supernova turbulence through physics-based machine learning is called Sapsan (see Figure 1). It is built on PyTorch, which offers a high degree of flexibility, and uses MLflow to track experiments neatly. The primary interface for Sapsan is a Jupyter notebook, but an online demo through a graphical interface is also available at sapsan.app. This interface is fairly limited, but it can provide a flavor of the application. Sapsan’s general pathway is that input data is filtered and batched; then the model is trained and analyzed. The entire process is encapsulated within the Docker platform, and the pipeline can be run with several varieties of frameworks. Users can choose between physics-informed frameworks and more conventional ones, such as kernel ridge regression or three-dimensional convolutional neural networks, or create their own. “Sapsan is made to be highly customizable,” said Karpov. Full descriptions and tutorials for these frameworks and more are available on Sapsan’s GitHub wiki.
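As a rough sketch of what one of the conventional frameworks involves — the data and feature choices here are invented for illustration and do not reflect Sapsan’s actual API — a kernel ridge regression maps per-point features of the resolved field to a closure term:

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(1)
# Invented training set: per-grid-point features of the filtered field (inputs)
# and one subgrid stress component per point (target)
X = rng.standard_normal((500, 6))                       # e.g., velocity + gradients
y = X[:, 0] * X[:, 1] + 0.1 * rng.standard_normal(500)  # synthetic closure term

model = KernelRidge(kernel="rbf", alpha=1e-2)
model.fit(X, y)

# Predict the closure term at unseen grid points
pred = model.predict(rng.standard_normal((10, 6)))
print(pred.shape)  # (10,)
```

The appeal of such kernel methods for this problem is that they need far less training data than deep networks, at the cost of scaling poorly to very large training sets.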

Figure 2. Statistical distributions of stress tensor components for supernovae modeled by Sapsan.
Karpov outlined a number of goals he has for future frameworks. He hopes to extend the list of physics-informed machine learning models, possibly to include compressible magnetohydrodynamics and coefficient prediction for analytic turbulence models. He also aims to include non-machine learning turbulence subgrid models, as well as analytical tools such as intermittency and parity. And more tutorials should be coming in the future!

In his presentation, Karpov provided some sample results obtained with Sapsan. The input was a three-dimensional magnetohydrodynamic direct numerical simulation dataset that covered the first 10 milliseconds after a supernova’s bounce at several resolutions ranging from 50 to 500 meters. The result was a closure model, which attempts to predict phenomena on a smaller scale than the one at which the input was resolved. Sapsan produced an accurate distribution prediction for the time between 5 and 10 milliseconds, which made up half of the total simulation time. The program was also able to accurately predict the statistical distribution of stress tensor components for the modeled supernovae (see Figure 2).
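A distribution-level comparison of the kind shown in Figure 2 could, for instance, be quantified with a two-sample Kolmogorov–Smirnov test; the samples below are synthetic stand-ins, not Sapsan’s outputs:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(2)
# Synthetic stand-ins for one stress tensor component: "true" values from a
# high-resolution run versus a closure model's predictions
tau_true = rng.normal(loc=0.0, scale=1.0, size=5000)
tau_pred = rng.normal(loc=0.0, scale=1.05, size=5000)

# A small KS statistic means the two empirical distributions are close
res = ks_2samp(tau_true, tau_pred)
print(f"KS statistic = {res.statistic:.3f}, p-value = {res.pvalue:.3f}")
```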

To conclude, Karpov noted that he had not found any other examples in the literature of three-dimensional magnetohydrodynamic results that were accurate for core-collapse supernovae. He went on to explain that Sapsan could be used to apply machine learning in astrophysical simulations beyond just supernovae. Sapsan’s framework flexibility, reproducibility, and inclusion of industry tools will hopefully make it useful in a variety of astrophysical settings and help produce exciting results for turbulence in a number of fields.

Jillian Kunze is the associate editor of SIAM News.