
Machine Learning Accelerates Multiscale Materials Modeling

By Jillian Kunze

Researchers can gain fundamental insights into the microscopic mechanisms of different materials by modeling them at multiple scales. Multiscale materials modeling has a number of important real-world applications, including the creation of radiation-resistant semiconductors, the improvement of advanced manufacturing techniques, and the development of energetic materials. However, this multiscale process is very computationally expensive because it is based on first-principles calculations of microscopic behavior. Materials scientists often use the Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS), a molecular dynamics code that runs on the Sierra supercomputer at Lawrence Livermore National Laboratory, to gain fundamental information about molecular structure. The Spectral Neighbor Analysis Potential (SNAP) is a component of this code that focuses on the energy potential between atoms and can ultimately be used to predict the properties of materials.

In order to make these property predictions, SNAP requires first-principles density functional theory (DFT) calculations. “In DFT calculations, we try to find the electronic structure of atomic configurations to build interatomic potentials for molecular dynamics simulations,” said J. Austin Ellis of Oak Ridge National Laboratory. DFT is very popular in materials science and chemistry modeling, and there is considerable enthusiasm for machine learning (ML) among materials scientists as well. Unfortunately, DFT calculations can be prohibitively expensive: their computational cost scales as the cube of the number of atoms being simulated, i.e., \(\mathcal{O}(N^3)\) for \(N\) atoms.
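To see what this scaling gap means in practice, consider a back-of-the-envelope comparison. This is a sketch only; the 256-atom baseline matches the snapshot size discussed later, but the unit cost and larger atom counts are illustrative rather than measurements of any particular code.

```python
# Back-of-the-envelope comparison of O(N^3) DFT cost versus an O(N) surrogate.
# The unit cost at the 256-atom baseline is arbitrary; only the growth rates matter.
baseline = 256
for n_atoms in (256, 2_560, 25_600):
    dft_cost = (n_atoms / baseline) ** 3   # cubic scaling in atom count
    ml_cost = n_atoms / baseline           # linear scaling in atom count
    print(f"{n_atoms:>6} atoms: DFT ~{dft_cost:>9,.0f}x, ML surrogate ~{ml_cost:>4,.0f}x")
```

Going from 256 to 25,600 atoms inflates the cubic DFT cost by a factor of a million, while a linearly scaling surrogate grows only a hundredfold.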

During a minisymposium presentation at the 2021 SIAM Conference on Computational Science and Engineering, which took place virtually last week, Ellis described the development of a machine learning model that reduces the time that SNAP takes to run. The goal of this project was to develop a physics-informed machine learning model with a computational cost of only \(\mathcal{O}(N)\), i.e., linear in the number of atoms being simulated, that could accelerate the generation of first-principles DFT data. Ellis performed the bulk of this research as a postdoctoral researcher at Sandia National Laboratories, though the project has since spread to Oak Ridge National Laboratory and the Center for Advanced Systems Understanding in Germany.

Figure 1. The workflow of the machine learning density functional theory (ML-DFT) model for multiscale materials modeling.

Few ML-DFT surrogate models currently exist, though several research groups are actively working on the problem. Ellis provided a diagram of the ML-DFT workflow that he and his collaborators created (see Figure 1), which they developed with scalability and performance in mind. The researchers used a grid-based approach to contend with the large amount of data, dividing the simulation volume into a three-dimensional Cartesian grid with dimensions \(200 \times 200 \times 200\). They focused on materials as opposed to molecules, which introduced many near-symmetries and redundancies into the process.
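To get a feel for the data volume this grid implies, here is a minimal NumPy sketch. Only the \(200 \times 200 \times 200\) resolution comes from the talk; the cubic cell and its edge length are assumptions for illustration.

```python
import numpy as np

n = 200                                   # grid points per dimension (from the talk)
cell_edge = 20.0                          # assumed cubic cell edge length, in angstroms

# Cell-centered Cartesian grid: one sample point per voxel.
axis = (np.arange(n) + 0.5) * (cell_edge / n)
x, y, z = np.meshgrid(axis, axis, axis, indexing="ij")
grid_points = np.stack([x, y, z], axis=-1).reshape(-1, 3)

print(grid_points.shape)                  # (8000000, 3): eight million grid points
```

Eight million grid points per snapshot, each of which will eventually carry a fingerprint vector and a local density of states vector, is what makes scalability a first-order design concern.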

The first step of the ML-DFT workflow is fingerprint generation, in which the local environment around each grid point is represented as a vector. The model uses a grid-centered SNAP descriptor that was developed at Sandia with a heavily-optimized implementation in LAMMPS; the original SNAP descriptor is atom-centered and has a rich set of features, which has made it effective for ML interatomic potentials. SNAP calculates a unique and descriptive fingerprint vector at each grid point, which then forms the input for the ML model. A key feature of these fingerprints is rotational invariance: they do not change when the local environment is rotated. This physical requirement is critical for DFT, so it is built directly into the formulation.
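The bispectrum algebra behind SNAP is beyond a short sketch, but the invariance property itself is easy to demonstrate with a toy descriptor. The histogram-of-distances fingerprint below is a hypothetical stand-in, not the real SNAP descriptor: because it depends only on grid-point-to-atom distances, rotating the local environment leaves it unchanged.

```python
import numpy as np

def toy_fingerprint(point, atom_positions, cutoff=5.0, n_bins=16):
    """Toy grid-centered descriptor (NOT the actual SNAP bispectrum).

    A histogram of distances from the grid point to nearby atoms depends only
    on those distances, so it is invariant under rotations of the local
    environment about the grid point -- the property SNAP builds in exactly.
    """
    d = np.linalg.norm(atom_positions - point, axis=1)
    hist, _ = np.histogram(d[d < cutoff], bins=n_bins, range=(0.0, cutoff))
    return hist.astype(float)

# Sanity check: rotate the atoms about the grid point; the fingerprint is unchanged.
rng = np.random.default_rng(0)
point = np.zeros(3)
atoms = rng.uniform(-4.0, 4.0, size=(64, 3))
theta = 0.7
rot_z = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                  [np.sin(theta),  np.cos(theta), 0.0],
                  [0.0,            0.0,           1.0]])
assert np.allclose(toy_fingerprint(point, atoms),
                   toy_fingerprint(point, atoms @ rot_z.T))
```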

Figure 2. A map of the feed-forward neural network used to bridge the input fingerprints and output local density of states (LDOS) data in the machine learning density functional theory (ML-DFT) workflow.
The target output for the machine learning model is the local density of states (LDOS), which describes the discretized energy levels local to each grid point. To find this quantity, the researchers ran DFT electronic structure calculations with the open-source Quantum ESPRESSO code. These calculations used 30 atomic configuration snapshots of aluminum from a molecular dynamics trajectory, each consisting of 256 atoms at a temperature of either 298 or 933 Kelvin. The researchers then post-processed the DFT results to obtain the local density of states at the fingerprint grid locations; each output snapshot was about 15 gigabytes in size. Based on the local density of states, it was then possible to predict the total energy per atom of the material in question.
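One reason the LDOS is a convenient learning target is that physically meaningful quantities follow from it by straightforward quadrature. The sketch below shows how a band-energy contribution could be assembled from LDOS data, assuming a uniform energy grid in electron volts and Fermi-Dirac occupations; the array shapes and names are assumptions, and a full total-energy evaluation would include further terms (e.g., entropy and double-counting corrections).

```python
import numpy as np

K_B = 8.617333e-5  # Boltzmann constant in eV/K

def band_energy_from_ldos(ldos, energies, grid_weight, mu, temp_k):
    """Band energy from a local density of states (a minimal sketch).

    ldos:        (n_grid, n_energy) array, the LDOS at each grid point
    energies:    (n_energy,) array of discretized energy levels, in eV
    grid_weight: real-space volume per grid point
    mu:          chemical potential in eV; temp_k: electronic temperature in K
    """
    x = np.clip((energies - mu) / (K_B * temp_k), -60.0, 60.0)
    occupation = 1.0 / (1.0 + np.exp(x))        # Fermi-Dirac occupation
    dos = ldos.sum(axis=0) * grid_weight        # integrate the LDOS over the grid
    d_energy = energies[1] - energies[0]        # uniform energy spacing assumed
    return float(np.sum(energies * dos * occupation) * d_energy)
```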

To bridge the input with the desired output, Ellis and his collaborators trained the ML-DFT model with a feed-forward neural network. As Figure 2 shows, this ML mapping connects the input fingerprint features to the output local density of states data. The model is purely local: each fingerprint vector maps one-to-one to a local density of states vector, with no contributions from nearby grid points and all vectors treated as independent of each other. This allows the ML-DFT workflow to attain the accuracy that researchers who perform molecular dynamics simulations require.
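A purely local fingerprint-to-LDOS map is, structurally, a multilayer perceptron applied independently at every grid point. A minimal PyTorch sketch follows; the layer widths, the 91-component fingerprint, and the 250-level LDOS discretization are illustrative assumptions rather than the tuned architecture from the talk.

```python
import torch
from torch import nn

N_FINGERPRINT = 91   # assumed SNAP fingerprint width per grid point
N_LDOS = 250         # assumed number of discretized energy levels per grid point

# One fingerprint vector in, one LDOS vector out; no coupling between grid points.
model = nn.Sequential(
    nn.Linear(N_FINGERPRINT, 400), nn.LeakyReLU(),
    nn.Linear(400, 400), nn.LeakyReLU(),
    nn.Linear(400, N_LDOS),
)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

def train_step(fingerprints: torch.Tensor, ldos_targets: torch.Tensor) -> float:
    """One optimization step; both tensors hold one row per grid point."""
    optimizer.zero_grad()
    loss = loss_fn(model(fingerprints), ldos_targets)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Because each grid point is an independent training example, a single snapshot already supplies eight million samples, and inference parallelizes trivially across grid points.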

Ellis and his collaborators tested their ML-DFT workflow on two different cases. The first test used aluminum at an ambient room temperature of 298 Kelvin, with 10 snapshots of aluminum in its solid form as input. This case required only a single snapshot each for training and validation to attain high accuracy. In the second test case, the aluminum was at a higher temperature of 933 Kelvin. This required 20 total snapshots as input: 10 of aluminum in its liquid form and 10 in its solid form. Given the higher complexity of this situation, eight training snapshots were necessary to reach the desired accuracy, along with a single validation snapshot. For training in this case, one could use only solid snapshots, only liquid snapshots, or a hybrid of the two.

Figure 3. The performance of the machine learning density functional theory (ML-DFT) workflow for the test case of aluminum at 933 Kelvin. The training used a hybrid of snapshots of aluminum in both solid and liquid form, leading to the model’s good performance.
There was far more variability in the case of aluminum at 933 Kelvin, particularly when the aluminum was in its liquid form, so the optimal machine learning architecture in that situation needed a more descriptive, and thus more complex, network. Ellis and his collaborators evaluated the workflow with several different accuracy metrics, including the density of states, electron density, band energy, and total energy and forces. For predictions of aluminum at 933 Kelvin, the hybrid training formulation obtained a final relative accuracy greater than 99.8% (see Figure 3). This meets the stringent accuracy requirements that chemists need for molecular dynamics simulations, and approaches the accuracy of the best machine learning interatomic potentials in use in molecular dynamics simulations today.
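As a concrete reading of that figure of merit, relative accuracy can be interpreted as one minus the relative error of a predicted quantity such as the total energy; this is an assumed definition for illustration, not one stated in the talk.

```python
def relative_accuracy(predicted: float, reference: float) -> float:
    """Relative accuracy = 1 - relative error (an assumed, illustrative definition)."""
    return 1.0 - abs(predicted - reference) / abs(reference)

# A prediction within 0.2% of the DFT reference clears the 99.8% bar:
assert relative_accuracy(100.1, 100.0) > 0.998
```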

A public code release of the ML-DFT workflow is coming soon. In the future, Ellis hopes to improve the workflow in a number of ways. “We want to be able to train on smaller systems and evaluate on even larger systems,” Ellis said. Such efforts could enable molecular dynamics simulations to run at even lower computational cost, further advancing materials scientists’ ability to model materials at multiple scales.

Jillian Kunze is the associate editor of SIAM News.
