
Modeling the Tohoku Earthquake

Coinciding with the third anniversary of the Tohoku earthquake and tsunami in northern Japan, Jeremy Kozdon, an assistant professor of applied mathematics at the Naval Postgraduate School in Monterey, California, narrated a video about his collaborative work in earthquake modeling with Eric Dunham (an assistant professor of geophysics at Stanford University). Periodically, written questions appear in the video; they were posed by interviewer Margot Gerritsen of Stanford, where Kozdon was a postdoc in geophysics before moving to NPS. Kozdon’s story of the unexpected, and gratifying, use of his group’s code and models by other scientists should be inspiring to applied and computational mathematicians, especially young people in search of an application area in which they can put their knowledge and interests to good use. This article presents highlights from the video.

Kozdon, who grew up in the Bay Area, came by his interest in earthquakes naturally: he was in elementary school at the time of the 1989 Loma Prieta Earthquake, which he remembers as occurring during the baseball World Series then underway, and as a cause of widespread devastation, including the partial collapse of the Bay Bridge. Years later, upon receiving his PhD, he realized that the geosciences are a rich area for computational and applied mathematicians: "You can have a real impact in that field" by applying many of the techniques already in wide use in applied and computational mathematics.

Dunham and Kozdon's team studies earthquakes by means of dynamic rupture modeling. Kozdon describes the complexity of the approach with relish: The group models the fault response, as well as the waves propagating away from the fault, which carry perturbations in velocity and stress; the waves then feed back into the fault interface, which is governed by highly nonlinear friction laws that couple the stresses on the fault to the velocity discontinuities across it. "We want to model what happens at that level," he says.
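
As a concrete illustration of the kind of "highly nonlinear friction law" involved, here is a minimal sketch in Python of one widely used formulation, rate-and-state friction; the functional form and every parameter value are illustrative assumptions, not the specific law or values used in the group's code.

```python
import numpy as np

# Minimal sketch of a rate-and-state friction law (Dieterich-Ruina form).
# NOTE: the functional form and all parameter values are illustrative
# assumptions, not the law or values used in Kozdon and Dunham's code.

f0, V0 = 0.6, 1e-6   # reference friction coefficient and reference slip velocity (m/s)
a, b = 0.010, 0.015  # direct-effect and state-evolution parameters
L = 0.02             # characteristic state-evolution slip distance (m)

def friction(V, theta):
    """Friction coefficient as a nonlinear function of slip velocity V and state theta."""
    return f0 + a * np.log(V / V0) + b * np.log(V0 * theta / L)

def dtheta_dt(V, theta):
    """'Aging law' governing the evolution of the state variable theta."""
    return 1.0 - V * theta / L

# Fault shear strength = friction coefficient times effective normal stress.
sigma_n = 50e6                      # effective normal stress (Pa), assumed
V, theta = 1.0, L / V0              # seismic slip rate of 1 m/s, state at steady slip V0
tau = friction(V, theta) * sigma_n  # shear stress resisting slip (Pa)
print(f"shear strength at V = {V} m/s: {tau / 1e6:.1f} MPa")
```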

Loma Prieta Earthquake, California, 1989.

The group's modeling has been informed by high-velocity friction experiments performed elsewhere, in which rocks slide past each other at slip velocities on the order of meters per second. A major modeling challenge arises in linking lab results to real-world events. Kozdon is now beginning to explore the use of adaptive mesh refinement techniques to study what happens in moving up from the lab scale (centimeters) to the field scale, where the dimensions of faults can reach hundreds of kilometers.
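
The essential idea behind adaptive mesh refinement is easy to sketch: concentrate grid resolution where the solution varies rapidly, for instance near the fault. The flagging rule, field, and threshold below are assumptions made purely for illustration, not the group's implementation.

```python
import numpy as np

# Sketch of a gradient-based refinement criterion, the basic idea behind
# adaptive mesh refinement (AMR). The field, threshold, and rule below are
# illustrative assumptions only.

def flag_cells_for_refinement(field, dx, threshold):
    """Return a boolean mask marking cells whose local gradient exceeds threshold."""
    grad = np.abs(np.gradient(field, dx))
    return grad > threshold

# Example: a sharp, slip-rate-like profile concentrated near x = 0 (the "fault").
x = np.linspace(-50e3, 50e3, 1001)   # 100 km domain with 100 m cells
dx = x[1] - x[0]
field = np.exp(-(x / 2e3) ** 2)      # rapid variation within ~2 km of the fault

flags = flag_cells_for_refinement(field, dx, threshold=1e-5)
print(f"{flags.sum()} of {flags.size} cells flagged for refinement near the fault")
```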

The Tohoku earthquake and tsunami occurred along a subduction zone. Subduction zones, in which one plate goes under another, tend to be the sites of the largest earthquakes and the greatest tsunami hazards. "One thing we've been exploring with our model," Kozdon says, is "Why did the tsunami occur?" In answer, he points to the large sea floor uplift caused by the enormous slip on the fault; estimates of the fault displacement run as high as 80 meters, with a consensus that it was in the tens of meters.
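
A rough back-of-the-envelope calculation suggests why slip of that size matters for tsunami generation; the numbers and the simple geometric relation below are assumptions for illustration, not results from the group's simulations. If the hanging wall moves a distance s along a fault plane dipping at angle δ, its vertical motion is roughly s·sin δ, so even a shallow dip turns tens of meters of slip into meters of sea floor uplift.

```python
import math

# Back-of-the-envelope estimate (assumed numbers, not simulation results):
# vertical sea floor uplift from thrust slip s on a fault dipping at angle
# delta is roughly s * sin(delta).

slip = 50.0     # fault slip in meters (assumed, within the "tens of meters" consensus)
dip_deg = 10.0  # shallow megathrust dip angle in degrees (assumed)

uplift = slip * math.sin(math.radians(dip_deg))
print(f"approximate vertical uplift: {uplift:.1f} m")  # about 8.7 m
```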

This was surprising to earthquake scientists, who had not believed that the top section of the fault would slip during a large earthquake; the assumption was that the energy to drive such slip simply wasn't there. "What we were able to show with our models," Kozdon says, is that even if the energy to release is assumed not to be there on that segment of the fault, that view neglects the dynamics of what's going on: wave energy released by deep slip on the fault comes up, is reflected from the sea floor, and is channeled onto that top portion of the fault, driving the rupture through that region.

He identifies the heart of the modeling challenge: "How do you set up initial conditions? You can't take measurements of the state of stress of the Earth kilometers under the surface." The group proceeded by running an ensemble of simulations to try to understand how uncertainties in their understanding of the physics and in the initial conditions affect the final result.
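
A minimal sketch of that ensemble idea follows; the uncertain parameters, their distributions, and the stand-in "model" are hypothetical, since the article does not say which quantities the group varied or how.

```python
import numpy as np

# Sketch of an ensemble study: sample the uncertain inputs, run the model once
# per sample, and examine the spread of the outputs. Parameter names,
# distributions, and the toy model below are hypothetical.

rng = np.random.default_rng(0)
n_runs = 100

def run_rupture_model(initial_stress, friction_param):
    """Stand-in for one full rupture simulation; returns a scalar output of interest."""
    return 10.0 * initial_stress / friction_param  # toy relation, for runnability only

# Uncertain inputs: initial shear stress (MPa) and a friction parameter (both assumed).
initial_stress = rng.normal(loc=30.0, scale=5.0, size=n_runs)
friction_param = rng.uniform(low=0.2, high=0.6, size=n_runs)

outputs = np.array([run_rupture_model(s, f)
                    for s, f in zip(initial_stress, friction_param)])

print(f"mean output: {outputs.mean():.1f}, spread (std): {outputs.std():.1f}")
```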

The group first got involved in modeling the Tohoku earthquake soon after the event. At the time, they didn’t see their codes as set up for large-scale simulations: “We had a big parallel code, but had only used it on simplified geometries.”

The Japan Trench Fast Drilling Project.

In the midst of these feelings of doubt, another scientist (Emily Brodsky, a professor at UC Santa Cruz) approached them; she wanted to drill across the fault in order to take measurements. The Japan Trench Fast Drilling Project (JFAST) scientists thought that the methods developed by Dunham and Kozdon could help, answering such questions as: What do we expect to see? If we see various things, what does that mean?

This was a proud moment for him and the team. “Seeing our code do simulations, putting in extremely challenging and complicated geometries, with extremely small angles, which make it a difficult problem to simulate, seeing the geoscience community get excited about your results—I’ve had only a few moments like that in my career.”

At this point in the video, Gerritsen brings Kozdon down to earth with a question about his “most embarrassing moment” in his work on earthquakes. He’s ready with an answer: “One of the silliest things I did was in bringing the model into our code. We took some published data, to give us the geometry, the angles at which the fault is dipping and where the various material layers are. I had misread the caption, didn’t realize that there was a 1.5 scaling in the vertical direction.”

The misstep led him to a deeper appreciation of working as part of a team. “Fortunately, my colleagues were forgiving.” . . . “Being part of a collaborative team is great; having someone who understands things looking at them alongside me” is invaluable.

“Three years ago,” Kozdon says, “I didn’t know that much about earthquake modeling.” What he did have was “a deep understanding of the numerics and computing, and also good physical intuition.” His advice to people wishing to get involved in the field is to master the fundamentals, “to get down in the trenches with physicists.”

“There’s a big push right now in geoscience to use computation,” he says. “That community is extremely welcoming to computational folks; they know that you can offer them something.”

For Further Reading 
[1] J.E. Kozdon and E.M. Dunham, Constraining shallow slip and tsunami excitation in megathrust ruptures using seismic and ocean acoustic waves recorded on ocean-bottom sensor networks, Earth Planet. Sci. Lett., 396 (2014), 55–65.
[2] J.E. Kozdon and E.M. Dunham, Rupture to the trench: Dynamic rupture simulations of the 11 March 2011 Tohoku earthquake, Bull. Seismol. Soc. Am., 103:2B (2013), 1275–1289.
[3] J.E. Kozdon, E.M. Dunham, and J. Nordström, Simulation of dynamic earthquake ruptures in complex geometries using high-order finite difference methods, J. Sci. Comput., 55:1 (2013), 92–124.

Gail Corbett is the managing editor of SIAM News.