Anomaly Detection in Unmanned Underwater Vehicles

By Lina Sorg

Unmanned underwater vehicles (UUVs) operate underwater without human occupants. These autonomous vessels collect data and are especially helpful for security and oceanographic survey missions. As such, they are particularly valuable to the U.S. Navy. Like any operator of an oceanographic survey vehicle, the Navy is interested in making sure that its UUVs do not break down. Rather than simply conducting preventative maintenance at regular intervals, operators could utilize predictive maintenance models that anticipate when a vehicle will fail and even allow the vehicle to autonomously reconfigure its mission and return to the mothership to avoid crashing or sinking. To do so, UUVs must learn to identify anomalies in their sensor time series. During the 2021 SIAM Conference on Applications of Dynamical Systems, which is taking place virtually this week, Kyle Gustafson of the Naval Surface Warfare Center’s Carderock Division presented recent work with predictive maintenance models that detect anomalies in UUVs.

Gustafson’s talk focused on the Littoral Battlespace Sensing – Autonomous Undersea Vehicle (LBS-AUV), which is a small UUV that is roughly three to five meters long and 30 centimeters in diameter (see Figure 1). It can remain underwater for up to 24 hours and has a limited sense of autonomy; the operator specifies a velocity and path based on a GPS lock on the vehicle. The Naval Oceanographic Office maintains a fleet of LBS-AUVs to gather data. “They do sort of a lawnmower pattern up to 600 meters underwater and collect salinity, temperature, and depth measurements,” Gustafson said. The Navy utilizes these measurements when conducting intelligence preparations of the operational environment. All of the resulting data is then stored in datasets, though researchers have yet to analyze it in much detail.

Figure 1. The Littoral Battlespace Sensing – Autonomous Undersea Vehicle (LBS-AUV).

Gustafson noted that roughly 90 percent of problems are actually due to user error rather than equipment failure. For instance, the operator might miscalibrate the ballast or velocity, or fail to measure ocean conditions properly. “It’s less often happening that these vehicles are really breaking, but more often that something is programmed wrongly into them,” Gustafson said. “We need to make these vehicles smart enough to make up for that sort of user error and change or cancel their mission before failure.”

As with most data-driven methods, the metadata of UUVs is extremely important. But Navy vehicles are typically well maintained, meaning that metadata pertaining to vehicle failure is scarce. If failure does occur, the data is often lost. For this reason, Gustafson focused on counting and characterizing the anomalies rather than correlating them with known failures. Interpreting and prioritizing the different anomaly types (point, collective, and contextual) is a challenge. He introduced three machine-learning-based techniques: clustering, long short-term memory autoencoders, and generative adversarial networks (GANs). Gustafson paid particular attention to GANs and uses a software package called TadGAN (time series anomaly detection using generative adversarial networks) to determine what anomalies look like.
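As a toy illustration of the simplest of these anomaly types, a point anomaly is a single sample that is extreme on its own and can be caught with a global z-score. The sketch below uses synthetic data and a hypothetical threshold; it is not Gustafson's pipeline, only an illustration of the concept.

```python
# Illustrative only: a point anomaly in a synthetic periodic signal,
# detected with a global z-score. Contextual and collective anomalies
# would require window- or sequence-aware methods instead.
import numpy as np

rng = np.random.default_rng(2)
series = np.sin(np.linspace(0, 20 * np.pi, 1000)) + rng.normal(0, 0.1, 1000)
series[300] += 4.0  # inject a point anomaly

# Standardize the series and flag samples far from the mean.
z = (series - series.mean()) / series.std()
flagged = np.flatnonzero(np.abs(z) > 4)
print(flagged)  # the injected index 300 should appear here
```

A contextual anomaly, by contrast, has an unremarkable global value and only looks unusual relative to its local neighborhood, which is one reason sequence models such as LSTM autoencoders and GANs are attractive here.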

Gustafson’s dataset consists of 128 missions that each lasted roughly 24 hours. The data was strongly biased towards successful missions, meaning that any detected anomalies in the dataset did not correspond to substantial failures.

UUVs can easily identify a ground fault interruption within the circuit, which is a common fault. If a leak occurs, a detector will determine the percent of ground fault. Anything greater than 40 percent is considered a serious leak, though a few drops of seawater on the sensor or the connecting circuit can incorrectly resemble a ground fault because seawater is highly conductive. Figure 2 represents one such example and tracks the ground fault percentage and depth of the UUV over the course of several hours. Even though the ground fault percentage jumps up to nearly 80 percent on the scale, the vehicle continues to operate correctly. “This is an example of an anomaly that you could easily detect,” Gustafson said. “You could just program an if/then statement. If the ground fault percentage is greater than 50 percent, then you have a serious anomaly.” Though this fault is likely just a drop of salt water in the circuit and does not actually lead to failure, an operator might choose to investigate it regardless.

Figure 2. Ground fault interruption: water intrusion.
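The if/then rule Gustafson describes is simple enough to sketch directly. The threshold and function names below are illustrative placeholders, not the vehicle's actual software.

```python
# A minimal sketch of the rule-based ground-fault check Gustafson
# describes: flag a serious anomaly when the detector's reading
# exceeds a fixed percentage threshold.

def check_ground_fault(ground_fault_pct, threshold=50.0):
    """Return True when the ground fault percentage exceeds the threshold."""
    return ground_fault_pct > threshold

readings = [2.1, 3.0, 78.5, 4.2]  # percent ground fault over time
alarms = [check_ground_fault(r) for r in readings]
# alarms -> [False, False, True, False]
```

As the article notes, a rule this simple cannot distinguish a drop of conductive seawater on the sensor from a genuine leak, which is why an alarm would still warrant operator investigation.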

Gustafson then presented results from convolutional networks, courtesy of his colleague who conducted regression between different time series of pitch and motor speed. This regression accounts for the pitch angle and the fin angles as they control the vehicle’s depth. The only time that a large “error” occurs that signifies an anomaly within these results is when the vehicle is moving slowly on the water’s surface (see Figure 3). Controlling the pitch on the surface will inevitably result in some kind of error because of the presence of waves. “This is a case where the anomaly detection algorithm is working,” Gustafson said. “It’s finding these large deviations, but in this case they are easily explainable and somewhat uninteresting.” Nevertheless, this example serves as a control case that validates the utility of a temporal convolutional network with regression.

Figure 3. Navigation control: anomaly explained by surfacing.
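The underlying idea, regress one telemetry channel on another and treat large prediction errors as anomalies, can be sketched with an ordinary linear model in place of the temporal convolutional network. All data, coefficients, and thresholds below are synthetic stand-ins, not LBS-AUV telemetry.

```python
# Residual-based anomaly detection sketch: predict pitch from motor
# speed, then flag samples whose prediction error is unusually large.
# A linear least-squares fit stands in for the convolutional network.
import numpy as np

rng = np.random.default_rng(0)
motor_speed = rng.uniform(100, 300, size=200)
pitch = 0.05 * motor_speed + rng.normal(0, 1, size=200)
pitch[50] += 15.0  # inject an anomaly, e.g. wave-driven error at the surface

# Fit pitch ~ a * motor_speed + b via least squares.
A = np.column_stack([motor_speed, np.ones_like(motor_speed)])
coef, *_ = np.linalg.lstsq(A, pitch, rcond=None)
residual = pitch - A @ coef

# Flag residuals more than 4 standard deviations from the mean.
z = (residual - residual.mean()) / residual.std()
anomalies = np.flatnonzero(np.abs(z) > 4)
print(anomalies)  # the injected index 50 should be flagged
```

As in the surfacing example, the detector flags the large deviation correctly; whether the flagged sample is interesting is a separate, domain-specific question.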

Finally, Gustafson introduced the results of TadGAN. Here the operator set a goal of a certain number of revolutions per minute (RPM) for the propeller, allowing Gustafson to compare the goal RPM with the actual RPM. Several anomalies (deviations from zero) occurred in the time series. Though they did not lead to mission failure, they might indicate that something strange, such as an unexpected water current or an underwater obstacle, was happening. Gustafson plotted the frequency of the deviation values from the anomalous mission and compared it to a more “normal” mission with a much tighter deviation cluster. The GAN identified some anomalies but not all of them. “We don’t know exactly what those mean,” Gustafson said. “So we’re going to need some further experiments to determine if these anomalies are actually interpretable as something special, or if the GAN is just picking out a few examples.”
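The goal-versus-actual comparison itself is straightforward to sketch: compute the deviation series for each mission and compare the spread of the two distributions. The data below is synthetic, and TadGAN itself is not reproduced here; this only illustrates why the anomalous mission's deviation histogram looks so much wider.

```python
# Sketch of the goal-vs-actual RPM deviation comparison. A "normal"
# mission tracks the commanded RPM closely; an anomalous one contains
# a stretch of large deviations (e.g. an unexpected current).
import numpy as np

rng = np.random.default_rng(1)
goal_rpm = np.full(500, 1200.0)  # commanded propeller speed

normal_actual = goal_rpm + rng.normal(0, 5, size=500)
anomalous_actual = goal_rpm + rng.normal(0, 5, size=500)
anomalous_actual[200:220] -= 80  # propeller falls short of its goal

normal_dev = normal_actual - goal_rpm
anom_dev = anomalous_actual - goal_rpm

# The anomalous mission shows a much wider deviation distribution.
print(normal_dev.std(), anom_dev.std())
```

A GAN-based detector such as TadGAN goes further than this spread comparison by learning what normal sequences look like and scoring how poorly each window can be reconstructed, but the deviation series above is the input it would examine.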

Ultimately, machine learning methods like convolutional networks and GANs show significant promise in identifying erratic or unexpected behavior in UUVs. Gustafson concluded his presentation by reiterating the importance of more metadata. “I think we’re going to need a lot more data on what meaningful anomalies are before we can implement one of these methods on a vehicle to change mission parameters or do anything with maintenance,” he said. “We’re just getting started with this data set and with these approaches for anomaly detection.”


Lina Sorg is the managing editor of SIAM News.