
Bayesian Search for Missing Aircraft, Ships, and People

Finding a Needle in a Haystack

By J. Van Gurley and Lawrence D. Stone

On July 23, 2013, the lobster boat Anna Mary was heading out from Montauk, New York, to the fishing grounds off Long Island. Around 9:00 p.m., Anthony Sosinski went below to get some sleep while his partner, John Aldridge, stayed on deck to prepare for the next day’s fishing. When Sosinski awoke at 6:00 a.m., the boat was still motoring out on autopilot, but Aldridge was gone, with no visible sign of when or where he had fallen overboard [9]. Given the many uncertainties in the circumstances under which Aldridge fell overboard, the motion of the boat, and the influence of tides, currents, and winds, how does one even begin to mount an effective search? The U.S. Coast Guard faces this type of scenario daily. The problem also occasionally arises for major aviation accidents, such as the disappearance of Air France Flight 447 over the Atlantic Ocean in 2009 and Malaysia Airlines Flight 370 in the Indian Ocean in 2014. Fortunately, Bayesian search theory provides a principled approach for problems of this nature.

Developed by the U.S. Navy in response to the German submarine threat in the Atlantic Ocean during World War II, Bayesian search theory is a systematic mathematical method for planning searches for lost objects. It has been used to plan successful searches for lost submarines (USS Scorpion [5]), aircraft (Air France Flight 447 [7]), and treasure ships (SS Central America [6]). Bayesian search theory is the analytic core of the U.S. Coast Guard’s national Search and Rescue Optimal Planning System (SAROPS), which is credited with helping save scores of lives, including that of John Aldridge.

Key Concepts

Complicated searches, such as those listed above, are one-time events. One cannot recreate the conditions 1,000 times and record the distribution of locations to obtain an empirical probability distribution based on the relative frequency notion of probability. Instead, the probability distribution for the likely location of the missing object (person, plane, ship) must be estimated in the presence of uncertainties and conflicting information, which often necessitates the use of subjective probabilities. Time is critical in such search and rescue operations, so one cannot wait for more data to resolve these uncertainties before taking action. Bayesian search analysis offers a principled method for handling these problems. It produces a prior probability distribution for the target (search object) location that is a synthesis of objective data and subjective information based on the analyst’s best understanding of the scenario. This prior represents a fused understanding of all available information and can be used directly to plan the search and estimate the amount of effort/time required to meet a given probability of success. More importantly, the Bayesian approach provides a framework to continually update this prior, in the form of a sequence of posterior distributions, as additional information becomes available.

The key concepts are (1) the use of objective and subjective information to produce a prior probability distribution on target location, (2) the application of Bayes’ rule to update the prior as new information is obtained during the search, and (3) the use of the prior and posterior distributions to plan future search efforts in a way that maximizes the probability of success in the shortest possible time.

Bayesian Search Planning Process

Collect Information. To initialize the problem, one must collect all the information about the last known location of the missing object, quantifying uncertainties as probability distributions [1]. The use of subjective information is often crucial in this phase. During the search for John Aldridge, Sosinski provided critical subjective information that Coast Guard search planners used to build a probability distribution on when and where Aldridge likely fell overboard. Even though large uncertainties remained, this information allowed a reduction in the search area and concentration of search assets in the correct region.

Figure 1. Prior distribution for the Air France Flight 447 search. Red represents high probability, while blue represents low probability. Image courtesy of [7].
Compute Prior Distribution. Searchers often face several competing but plausible scenarios. In this case, one can individually simulate the distribution of the target location under each scenario and combine the resultant distributions according to subjective weights, representing the credibility of the scenarios, into a single probability distribution for the location of the target. Usually one divides the search area into \(J\) cells and computes the prior probability \(p(j)\) that the target is in cell \(j\) for \(j=1,\ldots ,J\), where \(\sum\nolimits_{j=1}^{J}{p(j)}=1\). Figure 1 is the prior distribution map that Metron developed for the Air France Flight 447 search team, combining multiple scenarios (each representing a separate loss of flight control mode) and independent data sources (e.g., the aircraft’s last known position and the pattern of debris found on the ocean surface).
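To make this step concrete, the following sketch (in Python) builds such a gridded prior by Monte Carlo simulation: each scenario is sampled, the samples are binned into grid cells, and the per-scenario histograms are mixed using the subjective credibility weights. The scenario samplers, grid, and weights here are hypothetical placeholders, not the models actually used to produce Figure 1.

```python
import numpy as np

def gridded_prior(scenario_samplers, weights, grid_edges, n_samples=100_000):
    """Mix per-scenario location distributions into a single prior p(j) on a grid.

    scenario_samplers: list of callables; each returns an (n, 2) array of
                       simulated (x, y) target positions under one scenario.
    weights:           subjective credibility weights, one per scenario.
    grid_edges:        (x_edges, y_edges) arrays defining the search-area cells.
    """
    x_edges, y_edges = grid_edges
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()                 # normalize credibilities

    prior = np.zeros((len(x_edges) - 1, len(y_edges) - 1))
    for sampler, w in zip(scenario_samplers, weights):
        pts = sampler(n_samples)                      # Monte Carlo draws for this scenario
        hist, _, _ = np.histogram2d(pts[:, 0], pts[:, 1], bins=(x_edges, y_edges))
        prior += w * hist / hist.sum()                # weighted scenario histogram
    return prior / prior.sum()                        # cell probabilities sum to 1
```

Each entry of the returned array corresponds to one \(p(j)\) in the notation above.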

Using models of the search sensor(s), one estimates \({{b}_{j}}(z)\), the probability of detecting the target given it is in cell \(j\) with \(z\) effort for \(z\ge 0\). If \(Z=({{z}_{1}},\ldots ,{{z}_{J}})\) is an allocation of search effort, then \(P(Z)\), the probability of detection using that allocation, is given by \[P(Z)=\sum\nolimits_{j=1}^{J}{{{b}_{j}}({{z}_{j}})p(j)}.\]

Allocate Search Effort. Suppose one has \(K\) effort available for the first increment of search. Then one can find the allocation \({{Z}^{*}}\) that maximizes \(P(Z)\), subject to \(C(Z)=\sum\nolimits_{j=1}^{J}{{{z}_{j}}}\le K\). There are well-developed methods for doing this (see [8]).
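A common modeling assumption (used here purely for illustration, and not specific to [8] or SAROPS) is an exponential detection function \(b_j(z)=1-e^{-\alpha_j z}\), where \(\alpha_j\) captures sensor performance and conditions in cell \(j\). Because each \(b_j\) is concave, a greedy scheme that repeatedly places the next small increment of effort in the cell with the largest marginal gain in \(P(Z)\) approaches the optimal allocation as the increment shrinks. The sketch below illustrates the idea.

```python
import numpy as np

def greedy_allocation(prior, alpha, total_effort, n_steps=1000):
    """Approximately maximize P(Z) = sum_j p(j) * (1 - exp(-alpha_j * z_j))
    subject to sum_j z_j <= total_effort, by placing effort in small increments."""
    prior = np.asarray(prior, dtype=float).ravel()
    alpha = np.asarray(alpha, dtype=float).ravel()
    z = np.zeros_like(prior)
    dz = total_effort / n_steps                       # size of each effort increment
    for _ in range(n_steps):
        # marginal gain in detection probability from adding dz to each cell
        gain = prior * (np.exp(-alpha * z) - np.exp(-alpha * (z + dz)))
        z[np.argmax(gain)] += dz                      # spend the increment where it helps most
    detection_prob = float(np.sum(prior * (1.0 - np.exp(-alpha * z))))
    return z, detection_prob
```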

Compute the Posterior Distribution Using Bayes’ Rule. If the target is not found during this first increment of search, Bayes’ rule computes the posterior target location distribution as follows:

\[\tilde{p}(j)=\Pr \left\{ \text{target in cell }j \mid \text{failure to detect} \right\}=\frac{p(j)\left( 1-{{b}_{j}}(z_{j}^{*}) \right)}{\sum\nolimits_{{j}'=1}^{J}{p({j}')\left( 1-{{b}_{{{j}'}}}(z_{{{j}'}}^{*}) \right)}} \quad \textrm{for} \enspace j=1,\ldots ,J.\]
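In code, this update is simply a reweighting of the cell probabilities by the likelihood of the observed outcome (here, failure to detect), followed by renormalization. The minimal sketch below assumes the gridded probabilities and the exponential detection model of the earlier sketches.

```python
import numpy as np

def bayes_update(p, likelihood):
    """Posterior over cells: prior times the likelihood of the observation
    given the target is in each cell, renormalized to sum to one."""
    posterior = np.asarray(p, dtype=float) * np.asarray(likelihood, dtype=float)
    return posterior / posterior.sum()

# For an unsuccessful search, the "observation" is failure to detect; its
# likelihood in cell j is 1 - b_j(z_j*), which under the exponential model
# above is exp(-alpha_j * z_j*):
# posterior = bayes_update(prior, np.exp(-alpha * z_star))
```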

Figure 2 illustrates the posterior distribution map for the Air France Flight 447 search, updated with the results of the first three unsuccessful search expeditions during 2009 and 2010.

Figure 2. Posterior distribution for the Air France Flight 447 search used to plan the fourth expedition in 2011. The aircraft wreckage was found on the ocean bottom in the grid cell just above the center point. Image courtesy of [7].
Iterate. The posterior distribution, now incorporating both information from the prior and results of all previous unsuccessful searches, is used to determine the optimal allocation of effort for the next increment of searching. This cycle of updating and reallocating continues until the target is found or the search has exhausted all or most of the information in the prior distribution. Such exhaustion may be determined by calculating the prior probability that the cumulative effort from all the search increments would have detected the target. When this reaches a threshold, say 95%, it is reasonable to stop the search.
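Under the same illustrative exponential-detection assumption, this stopping criterion can be evaluated directly from the prior and the cumulative effort placed in each cell:

```python
import numpy as np

def cumulative_detection_prob(prior, alpha, efforts):
    """Prior probability that the search increments carried out so far would
    have detected the target, assuming exponential detection functions.
    `efforts` is a sequence of per-cell effort allocations, one per increment."""
    total_effort = np.sum(efforts, axis=0)            # effort accumulates cell by cell
    return float(np.sum(prior * (1.0 - np.exp(-alpha * total_effort))))

# A search might reasonably be suspended once this value exceeds, say, 0.95.
```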

Incorporating Other Information. One can integrate other information obtained during the search into the posterior distribution. For example, [3] demonstrates how to incorporate the knowledge that debris from a wreck is found sometime later. The method for doing this involves computing the likelihood of finding a piece of debris at the reported location and time given the target is in cell \(j\) for \(j=1,\ldots ,J\), and applying that likelihood function to compute a new posterior distribution.
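In the notation of the earlier bayes_update sketch, incorporating such a report is the same reweight-and-renormalize step; the debris likelihood array below is an assumed input from some drift model and is not derived here.

```python
# debris_likelihood[j]: probability of recovering that piece of debris at the
# reported place and time, given the target is in cell j (from a drift model,
# which is outside the scope of this sketch).
posterior = bayes_update(posterior, debris_likelihood)
```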

Operational Application. These methods have been successfully applied to a large number of real-world search operations. While Bayesian search theory provides an optimal plan for directing search operations in the face of high degrees of uncertainty, it is no guarantee of success. Sometimes the scope of the effort may dwarf available search assets, or uncertainties may be too large. In these circumstances, Bayesian search theory offers guidance on how to “stack the odds” in one’s favor. It also presents a way to determine when one reaches the point where additional search effort is unlikely to significantly improve the chance of success.

For example, the team directing the international effort to find Malaysia Airlines Flight 370 used this Bayesian approach to calculate the prior distribution of the aircraft location by fusing information from flight-dynamics modeling and the Inmarsat signals from the aircraft [2]. That analysis generated the set of high-probability ocean bottom regions that have been methodically searched over the last two years. Given the lack of positive results, the governments involved have announced that the operation will most likely be suspended in the near future, awaiting new evidence [4]. As was the case in the Air France Flight 447 search, such a pause allows for independent reanalysis of the data that formed the prior, the calculation of a posterior that incorporates information embedded in the last two years of search operations, and the potential addition of new evidence, like the pieces of aircraft debris that have been found off eastern Africa over the last year. Together, these factors might yield new insights that could lead to the discovery of the aircraft’s location.

Bayesian search theory provides a principled, methodical, and effective approach to planning searches for lost objects. It maximizes the probability of success for a given amount of effort, estimates the effort required to find the missing object, and gives guidance on when to call off a search. Though simply stated, it is a powerful analytic framework that has been used successfully in many complicated and difficult situations.

References 
[1] Ayyub, B.M. (2001). Elicitation of Expert Opinions for Uncertainty and Risks. New York, NY: CRC Press.

[2] Davey, S., Gordon, N., Holland, I., Rutten, M., & Williams, J. (2016). Bayesian Methods in the Search for MH370. In SpringerBriefs in Electrical and Computer Engineering. Australia: Springer.

[3] Gurley, V., & Stone, L. (2015, August 24). What does the recovery of floating debris tell us about the location of a wreck? Metron Memorandum to MH 370 file.

[4] “MH370 Tripartite Meeting Joint Communiqué.” (2016, July 22). Retrieved from http://jacc.gov.au/media/communiques/2016/com004.aspx.

[5] Richardson, H.R., & Stone, L.D. (1971). Operations Analysis during the Underwater Search for Scorpion. Naval Research Logistics Quarterly, 18, 141-157.

[6] Stone, L.D. (1992). Search for the SS Central America: Mathematical Treasure Hunting. Interfaces, 22, 32-54.

[7] Stone, L.D., Kratzke, T.M., Keller, C.M., & Strumpfer, J.P. (2014). Search for Wreckage of Air France AF 447. Statistical Science, 29, 69-80.

[8] Stone, L.D., Royset, J.O., & Washburn, A.R. (2016). Optimal Search for Moving Targets. New York, NY: Springer.

[9] Tough, P. (2014, January 5). A Speck in the Sea. New York Times Magazine.

Van Gurley ([email protected]) is a Senior Vice President at Metron, Inc., an applied mathematics and scientific consulting firm in Washington, D.C. Prior to joining Metron, he completed a 26-year Navy career as a submarine warfare officer and naval oceanography specialist. Lawrence Stone ([email protected]) is Metron’s Chief Scientist and author of several books on search theory and Bayesian tracking. He is a member of the National Academy of Engineering and a fellow of the Institute for Operations Research and the Management Sciences.
