SIAM News Blog

Formulating Natural Hazard Policies Under Uncertainty

By Jerome L. Stein and Seth Stein

Uncertainty issues are crucial in assessing the risk posed by natural hazards and in developing strategies to mitigate their consequences for society. The challenges are illustrated by the giant earthquake that struck Japan’s Tohoku coast in March 2011; much larger than predicted by sophisticated hazard models, the earthquake caused a tsunami that overtopped 5- to 10-meter seawalls and damaged the Fukushima nuclear facilities. Together, these events were responsible for more than 15,000 deaths and $210 billion in damage. Deciding whether to rebuild the defenses and, more generally, what strategies to employ against such rare events depends on estimates of the balance between the costs and benefits of mitigation. Finding that balance is a complex challenge at the intersection of geoscience, mathematics, and economics.

We have developed a general stochastic model for use in selecting an optimal mitigation strategy against future tsunamis; the model minimizes the sum of the expected present value of the damage, the costs of mitigation, and a risk premium reflecting the variance of the hazard. The probabilities, as discussed below, either are constant with time or depend on the previous history. We then considered whether new nuclear power plants should be built in Japan, using a deterministic model that does not require estimates of essentially unknown probabilities. These models can be generalized to the mitigation of other natural hazards.

Hazard Mitigation: A Stochastic Model

To illustrate our approach to inferring optimal policy for natural hazard mitigation, we begin with the question of how Tohoku’s tsunami defenses should be rebuilt. For some point on the coast, we denote the cost of defense construction as \(C(n)\), where \(n\) is the height of a seawall (an alternative measure, corresponding to a different method of increasing resilience, is the width of a no-construction zone). For a tsunami of height \(h\), the present value of the future economic loss is \(L(h - n)\), where \(h - n\) is the height by which the tsunami overtops the seawall or exceeds some other design parameter. \(L(h - n)\) is zero for a tsunami smaller than the design value \(n\) and increases for larger tsunamis. \(L\) includes both the direct damage and the resulting indirect economic losses, such as those from the destruction of the Fukushima nuclear power plant, including the relocation of residents and loss of income. The probability of an overtopping of height \(h - n\) is \(p(h - n)\); the expected present value of the loss from the possible tsunamis over the life of the seawall is the sum of the losses from tsunamis of different heights weighted by their probabilities:

\[\begin{equation}\tag{1} Q(n) = E\{L(n)\} = \sum_h p(h - n)\,L(h - n). \end{equation}\]

Thus, \(p(h - n)\) describes the hazard, the occurrence of tsunamis of a certain size, and \(Q(n)\) reflects the present value of the resulting risk, which also depends on the mitigation level \(n\).
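
As a concrete illustration of equation (1), the sum can be evaluated numerically once the tsunami heights are discretized. The sketch below uses entirely hypothetical probabilities and an assumed convex loss function; it is meant only to show the bookkeeping, not to give values appropriate for any real coastline.

```python
# Minimal sketch of equation (1): expected present value of the loss Q(n)
# for a seawall of height n. All numbers and the loss function are
# hypothetical placeholders, not values from the article.

heights = [5.0, 10.0, 15.0, 20.0]   # possible tsunami heights h, in meters
prob = {5.0: 0.10, 10.0: 0.03, 15.0: 0.01, 20.0: 0.002}   # assumed probabilities over
                                     # the life of the wall; prob[h] plays the role of
                                     # p(h - n) once the wall height n is fixed

def loss(overtop):
    """Present value of the loss L(h - n); zero if the wall is not overtopped."""
    if overtop <= 0.0:
        return 0.0
    return 1.0e9 * overtop**2        # assumed convex loss, in dollars

def expected_loss(n):
    """Q(n) = sum over h of p(h - n) * L(h - n), as in equation (1)."""
    return sum(prob[h] * loss(h - n) for h in heights)

print(expected_loss(10.0))           # expected loss with a 10-meter wall
```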

The optimal level of mitigation \(n^*\) minimizes the total cost \(K(n)\), the sum of the expected loss \(Q(n)\) and mitigation cost \(C(n)\):

\[\begin{equation}\tag{2} K(n^*) = \min_n\, [Q(n) + C(n)].\end{equation}\]

Because increasingly high levels of mitigation are progressively more costly, the first and second derivatives \(C'(n)\) and \(C''(n)\) are positive. Conversely, because increasing mitigation reduces the expected loss, the derivative \(Q'(n)\) is negative. \(K(n)\) illustrates the tradeoff between mitigation and damage: More mitigation reduces the expected damage but raises the construction cost, whereas less mitigation lowers the construction cost but increases the expected damage; the total cost is minimized between these extremes. The solution to equation (2) is

\[\begin{equation}\tag{3} C'(n^*) = -Q'(n^*), \end{equation}\]

where \(n^* > 0\) is the optimal mitigation level.

Figure 1. The optimal level of mitigation is \(n^*\) when risk aversion and uncertainty are not considered and increases to \(n^{**}\) when these effects are included.
The derivatives \(-Q'(n)\) and \(C'(n)\) intersect at the optimal point \(n^*\), the highest level to which it pays to build the wall, as shown in Figure 1. If the intersection occurs where \(n^*\) is positive, it pays to build a wall. However, if even at zero wall height the incremental cost \(C'(0)\) exceeds the incremental reduction in expected loss \(-Q'(0)\), it does not pay to build a wall.

This approach requires estimating the probability of a tsunami of a certain height and the effectiveness of the defenses, which is often less than planned. The resulting uncertainty in the expected loss is included by adding a risk term \(R(n)\), the product of a risk aversion factor and the variance of the estimated loss, to the loss term \(Q(n)\). This increases the optimum to \(n^{**}\). 
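
To make the tradeoff in equation (2), and its risk-adjusted version, concrete, the sketch below finds the minimizing wall height by a simple grid search. The cost function \(C\), expected-loss function \(Q\), risk term \(R\), and all constants are hypothetical, chosen only so that \(C\) is increasing and convex and \(Q\) is decreasing, as assumed above.

```python
# Sketch of equation (2) and its risk-adjusted variant: choose the wall
# height n that minimizes total cost. All functional forms and constants
# are hypothetical, chosen only to reproduce the qualitative tradeoff.

import math

def C(n):                        # mitigation cost: increasing and convex in n
    return 50.0 * n**2

def Q(n):                        # expected loss: decreasing in n
    return 4.0e4 * math.exp(-0.3 * n)

def R(n, risk_aversion=0.5):     # risk premium: risk aversion times an assumed
    return risk_aversion * (0.2 * Q(n))**2 / 1.0e4   # variance of the estimated loss

grid = [i * 0.01 for i in range(3001)]               # candidate wall heights, 0-30 m

n_star = min(grid, key=lambda n: Q(n) + C(n))            # optimum n* without the risk term
n_2star = min(grid, key=lambda n: Q(n) + C(n) + R(n))    # optimum n** with the risk term

print(n_star, n_2star)           # n** comes out slightly higher than n*
```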

Probability Estimates of Extreme Events

As the Tohoku earthquake illustrates, it is very difficult to estimate the probabilities of the extreme events that pose the greatest hazards. For any site, there are few observations of such events—e.g., earthquakes of magnitude greater than 8 or tsunamis higher than 10 meters. In many places, no geological records of such events are available, but it seems plausible that they might occur, at a rate that can be extrapolated from the rate of smaller events. Hence, it is often unclear how to describe their occurrence via a probability density function.
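
One common way to make such an extrapolation, offered here only as an illustration and not as the method behind any particular hazard map, is the Gutenberg-Richter relation, in which the logarithm of the annual rate of earthquakes falls off linearly with magnitude. A minimal sketch, with hypothetical parameters \(a\) and \(b\):

```python
# Illustrative extrapolation of the rate of rare large earthquakes from
# the observed rate of smaller ones, using the Gutenberg-Richter relation
# log10 N(M) = a - b*M, where N(M) is the annual rate of events with
# magnitude >= M. The parameter values are hypothetical.

a, b = 4.5, 1.0                      # hypothetical regional parameters

def annual_rate(M):
    """Annual rate of earthquakes with magnitude >= M."""
    return 10.0 ** (a - b * M)

rate_m9 = annual_rate(9.0)           # extrapolated rate of magnitude >= 9 events
print(rate_m9, 1.0 / rate_m9)        # events per year, and the implied mean recurrence time
```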

This uncertainty is one reason for the frequent occurrence of large earthquakes in areas predicted to have low hazard. In the Japanese government’s earthquake hazard map shown in Figure 2, the probability of strong ground shaking was presumed to be much lower off the Tohoku coast than in many other areas. The map reflects assumed probabilities of earthquakes of different magnitudes in different areas; the probability of an earthquake as large as that of March 2011 off Tohoku was assumed to be zero.

Figure 2. Comparison of Japanese government hazard map to the locations of earthquakes since 1979 that caused 10 or more fatalities, all of which are shown as having relatively low hazard (Geller, Nature, Vol. 472, 2011, 407-409).

Two general approaches have been taken to estimating the probabilities of such rare events. The basic choice is between a time-independent Poisson process with no “memory,” in which a future earthquake is equally likely immediately after and long after the previous one, and various probability density functions for time-dependent models, in which the probability of the next large earthquake is small shortly after the previous one and increases with time. For many places, neither approach captures the complexity of the earthquake history.
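
The contrast between the two approaches can be seen in a small numerical sketch. The code below compares the probability of a large earthquake in the next 30 years under a time-independent (Poisson) model with the conditional probability under one common time-dependent choice, a lognormal distribution of recurrence times; the mean recurrence time, aperiodicity, and elapsed times are hypothetical.

```python
# Sketch contrasting the two recurrence models discussed above: a
# memoryless Poisson model versus a time-dependent (lognormal) model in
# which the probability grows with time since the last event. All
# parameter values are hypothetical.

import math

def poisson_prob(rate_per_yr, window_yr):
    """Time-independent model: probability of at least one event in the window."""
    return 1.0 - math.exp(-rate_per_yr * window_yr)

def lognormal_cdf(t, mu, sigma):
    """CDF of a lognormal recurrence-time distribution."""
    return 0.5 * (1.0 + math.erf((math.log(t) - mu) / (sigma * math.sqrt(2.0))))

def conditional_prob(elapsed_yr, window_yr, mu, sigma):
    """P(event within the window | no event during the elapsed time)."""
    f_now = lognormal_cdf(elapsed_yr, mu, sigma)
    f_later = lognormal_cdf(elapsed_yr + window_yr, mu, sigma)
    return (f_later - f_now) / (1.0 - f_now)

mean_recurrence = 500.0                              # hypothetical mean recurrence time (yr)
sigma = 0.5                                          # hypothetical aperiodicity
mu = math.log(mean_recurrence) - 0.5 * sigma**2      # lognormal mean matches mean_recurrence

print(poisson_prob(1.0 / mean_recurrence, 30.0))     # memoryless estimate, same at all times
print(conditional_prob(100.0, 30.0, mu, sigma))      # small shortly after the last event
print(conditional_prob(450.0, 30.0, mu, sigma))      # larger long after the last event
```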

Nuclear Power in Japan: A Deterministic Model

The destruction of the Fukushima nuclear power plant has prompted intense debate in Japan about whether to continue using nuclear power. The problem is to find an optimal cost/benefit balance for building nuclear plants in Japan. In comparing the costs and benefits, the challenge lies in the uncertainty in estimating the probability or likelihood of great earthquakes and megatsunamis. Because the stochastic model requires probability estimates, we consider an alternative deterministic model. 

The benefit of nuclear power is its effect upon GDP (real gross domestic product) and its growth, described by the net return on the capital invested less its cost. Our strategy for determining the optimal investment in nuclear plants has two stages. In the first, we identify the worst “expectation” or “likelihood” of the loss due to large earthquakes or tsunamis, which for simplicity we term “shocks.” This is not the actual worst outcome, but the likely or expected worst outcome given a quadratic risk function. In the second stage, we determine the scale of nuclear plant construction that maximizes the minimum expected real income.

To do this, we let the logarithm of the gross domestic product \(X\) equal the capital invested \(k\) multiplied by \(b - r - vs\), where \(b\) is the productivity of capital in the absence of shocks, \(r\) is the interest rate reflecting the opportunity cost of using the capital, \(s\) is a measure of shocks, and \(v\) represents vulnerability:

\[\begin{equation}\tag{4} \log X = (b - r - vs)k. \end{equation}\]

For simplicity of exposition, we deal with constant values of the variables. The “expected” GDP is \(X \times q\), where \(q\) is an inverse measure of the likelihood of shocks of various sizes:

\[\begin{equation}\tag{5} q = \exp[(1/2)s^2]. \end{equation}\]

This term is in effect an inverse measure of probability, even though we cannot precisely specify the probabilities. The logarithm of the expected GDP is then

\[\begin{equation}\tag{6}
Z = \log qX = (b - r - vs)k + (1/2)s^2.
\end{equation}\]

We imagine society playing a game against nature. The worst case of expected loss in real income arises for the value of the shock parameter \(s\) that produces the minimum value of \(Z\). Given this situation, society selects a capital stock \(k\) that maximizes this minimum value of \(Z\). This optimization,

\[\begin{equation}\tag{7}
\max_k \min_s Z,
\end{equation}\]

leads to

\[\begin{equation}\tag{8}
k = (b - r)/v^2.
\end{equation}\]

This max-min of the expected outcome gives the optimal scale of investment, conditional on the expected worst outcome. The optimal capital stock is positively related to the net return on the capital invested less the interest rate, and negatively related to the square of the plant’s vulnerability to shocks. Equation (8) bears a remarkable similarity to the optimal ratio of risky assets to net worth in models of mathematical finance.
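
For completeness, here is a brief sketch of how equation (8) follows from (6) and (7). Minimizing \(Z\) over the shock parameter \(s\) gives nature’s worst case, \(s^* = vk\); substituting this value and maximizing the resulting expression over \(k\) yields (8):

\[
\frac{\partial Z}{\partial s} = -vk + s = 0 \;\Rightarrow\; s^* = vk, \qquad
Z(s^*) = (b - r)k - \tfrac{1}{2}v^2k^2, \qquad
\frac{\partial Z}{\partial k} = b - r - v^2k = 0 \;\Rightarrow\; k = \frac{b - r}{v^2}.
\]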

Research Challenges

These simple models illustrate opportunities for and challenges to the applied mathematics and computational science communities. New approaches are needed to improve our ability to assess natural hazards, including those associated with climate change. A key need is better quantification of the uncertainties in estimating the occurrence and effects of such extreme events and the resulting losses, from both a societal and an economic perspective.  Also crucial is the development of methods for evaluating the costs and benefits of alternative adaptation and mitigation approaches, which will help society formulate strategies to address these problems. These are among the topics on which the new Consortium for Mathematics in the Geosciences seeks to promote research.

Jerome Stein is a professor of economics, the Eastman Professor of Political Economy (emeritus), and a visiting professor in the Division of Applied Mathematics at Brown University. Seth Stein is the William Deering Professor in the Department of Earth and Planetary Sciences at Northwestern University.
