# Predicting Hurricane Trajectories

#### A Comparison Between Classical Statistical Modeling and Contemporary Machine Learning

Natural disasters such as hurricanes and cyclones cause significant losses of infrastructure, habitats, and life in many countries around the world. Generating accurate predictions of the trajectories of such storms is a critical part of mitigating their associated risks. Researchers have employed various techniques to predict hurricane trajectories, including classical statistical modeling and contemporary machine learning (ML) methods. Classical statistical models generally attempt to determine the relationships between variables for inferential testing. In contrast, ML models compromise interpretability in favor of better predictive power.

Here, I compare classical statistical modeling with contemporary ML methods and evaluate their effectiveness for the prediction of hurricane trajectories. Specifically, I seek to extend ML’s applicability to the domain of climate change and natural disasters. I will overview aspects such as accuracy, variable importance, interpretability, and computation time, then assess the likelihood of impact and assumed trajectories based on empirical evidence and predictions. These metrics are crucial for the development of more robust models to predict the paths of future hurricanes. Additionally, this research and analysis aims to provide a compelling narrative about model robustness based on a sensitive use case that involves previous states.

**Figure 1.** The structures of two different modeling frameworks.

**1a.** Typical layout of a hidden Markov model (HMM).

**1b.** Typical layout of a long short-term memory (LSTM) network. Figure 1a courtesy of Tanmay Binaykiya on GitHub, and Figure 1b courtesy of [1].

### Methods

Our study investigated two potential methods for the prediction of hurricane trajectories: hidden Markov models (HMMs) and long short-term memory (LSTM) networks. An HMM is a stochastic model of a system that randomly changes between states and possesses the Markov property (see Figure 1a): at any given time, the next state depends only on the current state and is independent of the past. One can express a Markov model as a transition matrix or graph. A transition matrix indicates the probability of moving from each state to every other state; the current states are listed as rows and the next states as columns. Each cell then contains the probability of moving from the corresponding current state to the corresponding next state, and the cell values in any given row must sum to one.
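The transition matrix described above is easy to sketch in code. The following example uses three hypothetical discretized storm headings as states; the probabilities are purely illustrative, not fitted values from the study.

```python
import numpy as np

# Hypothetical states: discretized storm headings (illustrative only).
states = ["NW", "N", "NE"]

# Rows are current states, columns are next states; each row sums to one.
P = np.array([
    [0.7, 0.2, 0.1],   # from NW
    [0.3, 0.4, 0.3],   # from N
    [0.1, 0.3, 0.6],   # from NE
])

def next_state(current, rng):
    """Sample the next state given only the current one (Markov property)."""
    i = states.index(current)
    return rng.choice(states, p=P[i])

# Verify that P is a valid transition matrix and draw one sample step.
assert np.allclose(P.sum(axis=1), 1.0)
print(next_state("NW", np.random.default_rng(0)))
```

Because the next state depends only on the current row of the matrix, simulating a trajectory is just a loop over repeated calls to `next_state`.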

In contrast, an LSTM network is a type of recurrent neural network (RNN) that is capable of learning order dependence in sequence prediction problems (see Figure 1b). This behavior is required in complex problem domains like machine translation and speech recognition. Bidirectional RNNs combine two independent RNNs, thus providing the network with both backward and forward information about the sequence at each time step. The LSTM that runs backward preserves information from the future, so the combination of the two hidden states preserves information from both the past and the future.
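At the core of an LSTM is a gated cell update. The minimal sketch below implements a single LSTM step in NumPy with random placeholder weights (not trained values) and toy two-dimensional inputs standing in for latitude/longitude observations; the dimensions and names are assumptions for illustration only.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, b):
    """One LSTM step: gates decide what to forget, store, and output.

    W maps the concatenated [h_prev, x] to four stacked gate
    pre-activations (forget, input, output, candidate).
    """
    z = W @ np.concatenate([h_prev, x]) + b
    f, i, o, g = np.split(z, 4)
    c = sigmoid(f) * c_prev + sigmoid(i) * np.tanh(g)  # updated cell state
    h = sigmoid(o) * np.tanh(c)                        # hidden state output
    return h, c

rng = np.random.default_rng(0)
n_in, n_hid = 2, 4   # toy sizes: 2 input features, 4 hidden units
W = rng.normal(size=(4 * n_hid, n_hid + n_in))  # random placeholder weights
b = np.zeros(4 * n_hid)

h = c = np.zeros(n_hid)
for x in rng.normal(size=(5, n_in)):  # a length-5 toy input sequence
    h, c = lstm_step(x, h, c, W, b)
print(h.shape)  # (4,)
```

A bidirectional LSTM would run a second copy of this loop over the reversed sequence and concatenate the two hidden states at each time step.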

### Process

We preprocessed and wrangled the data that we collected—dealing with missing data, removing duplicates, and encoding categorical data—to ensure that it was ready for input in the models. The next step was feature engineering, which involved selecting the relevant features for model training and scaling the chosen features to ensure that each one contributes equally to the model.
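The feature-scaling step mentioned above can be sketched with simple standardization. The feature matrix below is a made-up toy example (columns might represent latitude, longitude, and wind speed); it is not data from the study.

```python
import numpy as np

# Toy feature matrix: columns could be latitude, longitude, wind speed.
X = np.array([
    [25.4, -71.8,  85.0],
    [26.1, -73.0,  90.0],
    [27.0, -74.5, 100.0],
    [27.8, -75.9, 110.0],
])

# Standardize each column so that every feature contributes on a
# comparable scale (zero mean, unit variance).
mu = X.mean(axis=0)
sigma = X.std(axis=0)
X_scaled = (X - mu) / sigma

print(X_scaled.mean(axis=0))  # each feature is centered near 0
print(X_scaled.std(axis=0))   # each feature has unit spread
```

Without scaling, a feature measured in large units (such as wind speed) could dominate features measured in small units during training.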

**Figure 2.** Performance comparison for the hidden Markov model (HMM) and long short-term memory (LSTM) network. Figure courtesy of Kevan Rajaram.

### Results

After running the models, we evaluated their performance. Figure 2 displays the results of our comparison between the HMM and LSTM methods. The HMM performed better than the LSTM RNN in terms of accuracy, achieving 81 percent accuracy compared to the LSTM network's 77 percent (see Figure 3). However, the LSTM RNN had a faster computation time of 10.3 seconds versus the HMM's 14.2 seconds. The HMM works quite well if the states and transitions are easily measured. In many cases, however, only the outputs are observable and a one-to-one correspondence between output and internal state does not exist.

Both models are difficult to interpret due to the latent nature of the modeling process, but the HMM is a bit more explainable because we can discover patterns in previous states. Some minor scalability issues occur because the required operations are somewhat memory intensive.

**Figure 3.** Actual data compared to predicted values for two different modeling frameworks.

**3a.** Long short-term memory (LSTM) network.

**3b.** Hidden Markov model (HMM). Figure courtesy of Kevan Rajaram.

### Conclusions and Future Work

This research reveals several avenues for future work. First, we can extend the proposed framework to support other types of algorithms—such as time-sequential models—and explore various activation functions for the RNN’s parameters in order to observe the effect on the output layer.

Second, we can evaluate the framework’s performance by testing it on real-world data sets. This assessment will require that we gather real-world data, preprocess it, and use the framework to analyze it. We can then compare the results to the outcomes of other data analysis techniques to determine the effectiveness of the proposed framework.

Lastly, we can broaden the framework so that it finds cost matrices that determine the cost of disasters to help minimize storm costs to governments, loss of life, and so on. The accurate prediction of hurricane trajectories will certainly have many future societal implications.

Ultimately, the comparison between the HMM and LSTM network demonstrated that both methods have their strengths and weaknesses. The LSTM network showed promising results, but the HMM achieved better accuracy. Further research in this area will help us understand which of the two techniques is better suited for particular applications.

*Kevan Rajaram delivered a minisymposium presentation on this research at the 2022 SIAM Conference on Mathematics of Data Science, which took place in San Diego, Calif., last year.*

**References**

[1] Yildirim, Ö. (2018). A novel wavelet sequence based on deep bidirectional LSTM network model for ECG signal classification. *Comput. Biol. Med.*, *96*, 189-202.

**Further Reading**

Alemany, S., Beltran, J., Perez, A., & Ganzfried, S. (2019). Predicting hurricane trajectories using a recurrent neural network. *Proc. AAAI Conf. Artif. Intell.*, *33*(01), 468-475.

Asthana, T., Krim, H., Sun, X., Roheda, S., & Xie, L. (2021). Atlantic hurricane activity prediction: A machine learning approach. *Atmosphere*, *12*(4), 455.

Chen, R., Zhang, W., & Wang, X. (2020). Machine learning in tropical cyclone forecast modeling: A review. *Atmosphere*, *11*(7), 676.

Richman, M.B., Leslie, L.M., Ramsay, H.A., & Klotzbach, P.J. (2017). Reducing tropical cyclone prediction errors using machine learning approaches. *Procedia Comp. Sci.*, *114*, 314-323.

Wang, Z., Yuan, G., Pei, H., Zhang, Y., & Liu, X. (2020). Unsupervised learning trajectory anomaly detection algorithm based on deep representation. *Int. J. Distrib. Sens. Netw.*, *16*(12).

Wang, Z., Zhao, J., Huang, H., & Wang, X. (2022). A review on the application of machine learning methods in tropical cyclone forecasting. *Front. Earth Sci.*, *10*, 902596.

Kevan Rajaram has worked as a data professional at major companies throughout the Caribbean region for over 10 years and currently serves as the Director of Data Services at PwC Caribbean. He received a Bachelor of Science in mathematics and a Master of Science in statistics from the University of the West Indies (UWI). At present, Rajaram is pursuing a Ph.D. in computer science at UWI St. Augustine with a focus on applications of artificial intelligence and machine learning. Rajaram is also a Certified Data Management Professional through the International Data Management Association.