SIAM News Blog

Machine Learning Predicts Solar Flare Activity

By Jillian Kunze

Though solar flares occur on the surface of the Sun, they can have a major impact on Earth. A flare's radiation arrives just eight minutes after the event (the light travel time from the Sun to Earth) and can cause radio blackouts; solar radiation storms follow after about 30 minutes, and geomagnetic storms may reach the Earth two to four days later. These events can affect critical operations such as aviation and communication, but with early enough predictions, decision-makers could act to mitigate the worst impacts. In a minisymposium presentation at the 2021 SIAM Conference on Computational Science and Engineering, which took place virtually last week, Cristina Campi of the Università di Genova described a machine learning model for solar flare prediction and explained how she assessed the model’s results.

Solar flare dynamics are very difficult to model from first principles, so machine learning is useful in this context. Flares occur when magnetic field lines on the surface of the Sun reconnect with each other; this extremely energetic event emits a bright flash and ejects clouds of accelerated particles into space. The regions of the Sun where magnetic fields are stronger on average—known as active regions—were of special interest for the machine learning model. There is a huge amount of data available for this problem, as multiple satellites take measurements of the Sun every five minutes. This extensive resource could help a machine learning algorithm capture the physical aspects of solar flares in order to predict them.

Figure 1. Images of the Sun taken using several different detection methods — the image on the right is a magnetogram. The most active region is circled in red. The data was measured by the Helioseismic and Magnetic Imager aboard NASA’s Solar Dynamics Observatory.

Campi and her collaborators used convolutional neural networks on full magnetograms of the Sun—images that record the strength of its magnetic fields (see Figure 1). Using pattern recognition techniques based on assumptions about solar magnetic fields, it is possible to extract hundreds of features from the active regions. However, the researchers wanted to focus on just the few most important features. They took a supervised machine learning approach, which began with dividing the data into a training set and a test set. They then trained the algorithms on the training set using labels, which Campi highlighted as a crucial part of their application. The labels depended on what kind of prediction they wanted to make, constrained by the relevant time window and the classes of flares: C-class flares have comparatively low energy, M-class flares intermediate energy, and X-class flares the highest energy. The labeled data revealed an imbalance among the flare classes.
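As a rough illustration of this labeling and splitting step, consider the Python sketch below. Everything in it is hypothetical: the feature matrix, the window length, and the class frequencies are invented stand-ins rather than Campi's data. It shows how a task-dependent label (here, a flare of class C or above within 24 hours) is built, and how the class imbalance becomes visible.

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Hypothetical setup: each row describes an active region at one time step,
# and max_class is the strongest flare class ("N" for none, "C", "M", "X")
# observed in the following 24-hour window. Sizes and probabilities are
# invented for illustration.
rng = np.random.default_rng(0)
max_class = rng.choice(["N", "C", "M", "X"], size=1000,
                       p=[0.80, 0.15, 0.04, 0.01])

# Labels depend on the prediction task; here the task is
# "will a flare of class C or above occur within the window?"
rank = {"N": 0, "C": 1, "M": 2, "X": 3}
labels = np.array([rank[c] >= rank["C"] for c in max_class])

X = rng.normal(size=(1000, 200))          # placeholder extracted features

# Supervised setup: split the labeled data into training and test sets.
X_train, X_test, y_train, y_test = train_test_split(
    X, labels, test_size=0.3, stratify=labels, random_state=0)

# The imbalance Campi described: far more "no flare" rows than "flare" rows.
print("positive fraction:", labels.mean())
```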

To obtain a threshold that would separate the outcomes of their machine learning model into two classes, the researchers applied fuzzy clustering to the regression outcomes of the least absolute shrinkage and selection operator (Lasso) method in a “hybrid Lasso” approach. The hybrid Lasso promotes sparsity, selecting a few important features from the hundreds in the dataset. Instead of the continuous values that a regression method would normally produce, the result was a yes or no prediction of whether there would be a flare.
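The sketch below outlines this two-stage idea, assuming NumPy and scikit-learn. The data are simulated, and a small one-dimensional fuzzy c-means routine stands in for the fuzzy clustering step; it is an illustration of the pipeline, not Campi's implementation.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

# Simulated stand-ins: hundreds of magnetic features per active region and
# a continuous flaring index in which only a few features truly matter.
X = rng.normal(size=(500, 200))
y = 2.0 * X[:, 0] - X[:, 1] + 0.5 * X[:, 2] + 0.1 * rng.normal(size=500)

# Stage 1: Lasso regression promotes sparsity, retaining a few features.
lasso = Lasso(alpha=0.1).fit(X, y)
scores = lasso.predict(X)                 # continuous regression outcomes
print("features retained:", int(np.sum(lasso.coef_ != 0)))

# Stage 2: two-cluster fuzzy c-means (fuzzifier m = 2) on the 1-D scores;
# the midpoint between the cluster centers becomes the flare/no-flare
# threshold, turning the regression output into a yes/no prediction.
def fuzzy_cmeans_1d(x, n_iter=100, m=2.0):
    v = np.percentile(x, [25, 75])                    # initial centers
    for _ in range(n_iter):
        d = np.abs(x[:, None] - v[None, :]) + 1e-12   # point-center distances
        w = d ** (-2.0 / (m - 1.0))
        u = w / w.sum(axis=1, keepdims=True)          # fuzzy memberships
        v = (u ** m).T @ x / (u ** m).sum(axis=0)     # update centers
    return v

centers = fuzzy_cmeans_1d(scores)
threshold = centers.mean()
flare_predicted = scores > threshold      # binary flare forecast
```

The appeal of the clustering step is that the flare/no-flare threshold emerges from the structure of the regression outputs themselves rather than from a hand-picked cutoff.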

Once the model was trained, the researchers applied it to a test set and checked its performance (thanks once again to the labels). They used data from the large solar flare that occurred in September 2017—the first event of such magnitude in several years—to investigate whether the machine learning model would have been able to predict that event. The model was trained and made predictions on that data in six-hour time segments. Figure 2 displays a comparison of the machine learning predictions with the data taken at the time.

Figure 2. The machine learning predictions for the September 2017 solar flare as compared to the data. The thin blue rectangles represent the data, and the green, yellow, and red boxes show the machine learning predictions.

To validate the forecast, Campi used a contingency table. Though it would be possible to simply score the overall accuracy of the predictions, this is not a completely realistic measure of the model’s performance. If the model produced many true negatives—correct predictions that there would not be a flare—but few true positives, its accuracy would still be high even though it failed at its main job of predicting actual flares. Campi therefore employed several better measures of model validation, such as the critical success index, true skill statistic, and Heidke skill score.
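These scores all derive from the four counts in the contingency table: true positives (TP), false positives (FP), false negatives (FN), and true negatives (TN). The short sketch below uses invented counts, not Campi's results, to show how a rare-event forecast can achieve near-perfect accuracy while the skill scores remain far more modest.

```python
# Skill scores from a 2x2 contingency table. The counts are invented to
# mimic a rare-event forecast in which true negatives dominate.
TP, FP, FN, TN = 8, 5, 4, 983

accuracy = (TP + TN) / (TP + FP + FN + TN)            # misleadingly high
csi = TP / (TP + FP + FN)                             # critical success index
tss = TP / (TP + FN) - FP / (FP + TN)                 # true skill statistic
hss = 2 * (TP * TN - FP * FN) / (
    (TP + FN) * (FN + TN) + (TP + FP) * (FP + TN))    # Heidke skill score

print(f"accuracy = {accuracy:.3f}")   # ~0.991, driven by the many TNs
print(f"CSI = {csi:.3f}, TSS = {tss:.3f}, HSS = {hss:.3f}")
```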

The contingency table revealed that the model did well at predicting C and M flares, but had more false positives for X flares: it predicted that flares of this magnitude would occur when they did not actually happen. This highlights the potential of machine learning methods as a warning machine whose alerts decision-makers can evaluate, deciding whether to implement emergency measures given their knowledge of the model's possible inaccuracies. To conclude, Campi noted that validation measures for machine learning models are necessary but do not always capture the whole picture. In the future, she hopes to train convolutional neural networks on the full solar disk—instead of just the most active areas—to learn whether using all of the available information improves the results.

Jillian Kunze is the associate editor of SIAM News.