
Adaptive Mesh Refinement: An Essential Ingredient in Computational Science

By Paul Davis

Adaptive mesh refinement may be to computational science and engineering (CSE) what Bolognese sauce is to Italian cooking: part of many meals and integral to the repertoire of most cooks, nearly all of whom are happy to share and serve their special recipes. In that spirit, adaptive mesh refinement was widely served at invited presentations, minisymposia, and poster sessions at the 2017 SIAM Conference on Computational Science and Engineering (CSE17) in Atlanta, Ga., this February; “adaptive mesh refinement” and “AMR” are mentioned on 52 distinct occasions in the meeting’s abstracts!

Like Bolognese, AMR has but a few simple ingredients. The trick is managing them well — across many, many pots on a very large stove. AMR aims to efficiently accommodate the vast variations in scale inherent in most physical phenomena by using smaller scales only when and where required.

As the components of a set of partial differential equations’ approximate solution advance in time, AMR refines selected portions of the spatial grid to meet predetermined error bounds (and perhaps takes smaller time steps, too). The mesh refinement is dictated adaptively by a local estimate of the error in the computed solution; the sous-chef de précision is automated — embedded in the code. Clever bookkeeping tracks the mesh hierarchy and the approximate solution values as mesh points appear and disappear.
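To make that mechanism concrete, here is a minimal Python sketch of error-driven refinement flagging. The centered-difference error indicator, the threshold, and the one-dimensional grid are illustrative stand-ins, not the estimator used by Enzo, Clawpack, or any particular package.

```python
import numpy as np

def estimate_error(u, dx):
    """Crude local error indicator: magnitude of the centered first
    difference, a stand-in for whatever estimator a production AMR
    code actually uses."""
    err = np.zeros_like(u)
    err[1:-1] = np.abs(u[2:] - u[:-2]) / (2.0 * dx)
    return err

def flag_cells_for_refinement(u, dx, tol):
    """Return a boolean mask of cells whose estimated error exceeds tol.
    A real code would also buffer the flagged region and enforce proper
    nesting of fine grids inside coarse ones."""
    return estimate_error(u, dx) > tol

# Toy usage: a steep front triggers refinement only where it is needed.
x = np.linspace(0.0, 1.0, 101)
u = np.tanh((x - 0.5) / 0.01)          # sharp feature near x = 0.5
flags = flag_cells_for_refinement(u, x[1] - x[0], tol=10.0)
print("cells flagged for refinement:", np.flatnonzero(flags))
```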

Abel, Adaptive Methods, and Astrophysics

Tom Abel of Stanford University began his invited presentation, “Making Sense of the Universe with Supercomputers,” with a cosmological problem that incorporates the “mother of all scales”: tracing the formation “via ab initio modeling” of the very first stars from the soup of hydrogen, helium, dark matter, etc. that constituted the universe a mere 400,000 years after the Big Bang.

“We can’t experiment,” Abel remarked dryly, because “there are no black holes in the laboratory.” But, he suggested, we do know what that soup looked like 400,000 years post-Big Bang because we can measure now the background radiation it emitted then. “Think initial conditions,” Abel said. Of course, the point of the computational modeling is determining what transpired in the “dark ages” between the time of that discordant brew and the nearby universe we observe today.

The scales in this computation are certainly daunting. Spatial variations on the order of 10¹² and a dynamic range up to 10¹⁵ require AMR in both space and time. The model itself is another soup, with multi-physics ingredients such as gravity, gas dynamics, gas chemistry, radiation, and magneto-hydrodynamics.

Abel and his colleagues’ computational package of choice is Enzo, “a community-developed AMR simulation code designed for rich, multi-physics hydrodynamic astrophysical calculations.” With 300,000 lines of C++ and Fortran 77, the code itself is a massive undertaking of computational engineering, never mind the computational science underlying core algorithms (like AMR) that it implements, or the astrophysics of the daunting questions it seeks to answer.

Figure 1 illustrates one of the successes of these efforts: isodensity and isothermal profiles computed at three levels of spatial resolution during the evolution of the very first stars. Abel noted that while average properties like mass and temperature converge reasonably well in these computations, measures of vorticity, turbulence, and magnetic field generation are less accurate.

Figure 1. One of the successes of astrophysical computation via adaptive methods: isodensity and isothermal profiles at three levels of spatial resolution during the evolution of the very first stars. “Pc” abbreviates parsec, an astronomical unit of length equivalent to about 3.26 light-years, or 31 × 10¹² km. Image credit: Matthew Turk and Tom Abel (KIPAC/Stanford).

Moving from the birth of the first stars to the evolution of the second generation adds the computational complexity of ray-tracing. Further up the scientific mountain being scaled by Abel and his colleagues—and even more spectacular—are predictions of the distribution of dark matter in the universe. These suggest that “galaxies are arranged in a cosmic web of voids, sheets, filaments, and holes,” none of which can be predicted analytically. Public television programs often use visualizations of aspects of the universe’s evolution contributed by Abel and his colleagues, some of which he included in his talk.

“Few problems of astrophysics can be addressed in the lab,” Abel said, emphasizing the field’s reliance on computation. Since complex solutions require more memory, efficient adaptive tools like AMR are essential, although AMR is but one tool among many that his teams employ. In its development of Enzo, the astrophysics community was also an early adopter of many of the ground rules of distributed code development—like version control—which are now commonplace.

Almgren and the Next Generation of AMR

Ann Almgren of Lawrence Berkeley National Laboratory focused her invited presentation exclusively on the development of the next generation of AMR algorithms. Her core message, aimed at a CSE17 audience of AMR cognoscenti, is that block-structured AMR provides natural opportunities for parallelism because it manages data efficiently. 

In principle, AMR could arbitrarily subdivide parts of individual coarse grid cells higgledy-piggledy—one here, another there—so that the refined grid presents no discernible regular pattern within the original coarse grid. Block-structured (or tile-based) AMR instead organizes the grid cells into a set of non-overlapping tiles of fixed size. An individual tile is refined by subdividing it uniformly and by appending so-called ghost cells to store solution values from neighboring tiles that the algorithm will need to advance the refined tile’s solution in time.
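A minimal sketch of that tile-plus-ghost-cell bookkeeping follows, assuming a hypothetical Tile class with an invented layout; it is not the actual data structure of ForestClaw, Enzo, or any block-structured AMR framework.

```python
import numpy as np

class Tile:
    """A fixed-size block of cells plus a ring of ghost cells.
    Sizes and names are illustrative only."""
    def __init__(self, nx, ny, nghost=2):
        self.nghost = nghost
        # interior cells plus ghost layers on every side
        self.data = np.zeros((nx + 2 * nghost, ny + 2 * nghost))

    def interior(self):
        g = self.nghost
        return self.data[g:-g, g:-g]

    def fill_left_ghosts_from(self, left_neighbor):
        """Copy the left neighbor's rightmost interior rows into this
        tile's left ghost rows (one face only, for brevity)."""
        g = self.nghost
        self.data[:g, g:-g] = left_neighbor.data[-2 * g:-g, g:-g]

def refine(tile, ratio=2):
    """Refine a tile by piecewise-constant injection: each coarse cell
    becomes a ratio-by-ratio block of fine cells.  Real codes use
    conservative interpolation and keep the coarse tile in a hierarchy."""
    nx, ny = tile.interior().shape
    fine = Tile(nx * ratio, ny * ratio, tile.nghost)
    fine.interior()[...] = np.kron(tile.interior(), np.ones((ratio, ratio)))
    return fine

# Usage: refine one 8x8 tile by a factor of two in each direction.
coarse = Tile(8, 8)
coarse.interior()[...] = np.random.rand(8, 8)
fine = refine(coarse)
print(coarse.interior().shape, "->", fine.interior().shape)   # (8, 8) -> (16, 16)
```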

Almgren argued that block-structured AMR offers “a natural framework for reducing memory use,” and that its “infrastructure naturally supports hierarchical parallelism.” Put crudely, if the code can track the grid refinements of AMR and the coders know enough about the machine and the model being solved, then code and coders can together make efficient decisions about using storage and assigning tasks to nodes of the machine.  Of course, the code’s “decisions” are adaptive algorithms, which coders decide to incorporate during design and development.

Almgren comprehensively addressed six issues facing next-generation AMR: single-core versus single-node performance, programming models, load balancing, synchronicity, the possibility of new equations and corresponding new algorithms, and the trade-offs between in situ and in transit visualization and analysis.

Load balancing, for example, is usually based on the number of cells. But if the particles in the simulation are unevenly distributed on the grid, then redistributing work by particle might be more efficient, depending on the relative costs of computation and data transfer.
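A toy illustration of that trade-off: the greedy balancer below distributes tiles by whatever work measure it is handed, and the hypothetical tile names and counts show how balancing by cell count can leave the particle work badly skewed. (This is a sketch of the general idea, not any AMR framework’s partitioner.)

```python
def assign_tiles(work_per_tile, n_ranks):
    """Greedy load balancing: give each tile, heaviest first, to the rank
    with the least accumulated work so far."""
    loads = [0.0] * n_ranks
    owner = {}
    for tile_id in sorted(work_per_tile, key=work_per_tile.get, reverse=True):
        rank = loads.index(min(loads))
        owner[tile_id] = rank
        loads[rank] += work_per_tile[tile_id]
    return owner

def per_rank_total(owner, work_per_tile, n_ranks):
    """Total a given work measure over the tiles owned by each rank."""
    totals = [0.0] * n_ranks
    for tile_id, rank in owner.items():
        totals[rank] += work_per_tile[tile_id]
    return totals

# Hypothetical tiles: identical cell counts, very unequal particle counts.
cells     = {"A": 4096, "B": 4096, "C": 4096, "D": 4096}
particles = {"A": 90000, "B": 50000, "C": 30000, "D": 10000}

by_cells     = assign_tiles(cells, 2)      # balance on cell count
by_particles = assign_tiles(particles, 2)  # balance on particle count

print(per_rank_total(by_cells, particles, 2))      # [120000.0, 60000.0]
print(per_rank_total(by_particles, particles, 2))  # [90000.0, 90000.0]
```

If the particle work dominates the cost, the cell-balanced assignment leaves one rank with twice the load of the other, which is exactly the situation the particle-based redistribution is meant to avoid.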

Although Almgren acknowledges that “‘Synchronicity’ means different things to different people,” she uses it to label a range of coordination trade-offs that might be productive. Some could take place at a very low level, entirely invisible to the application, e.g., floating point computations on some cells while doing simple bookkeeping on others. Other coordination strategies might be quite brazen—say, changing the order of high-level tasks within the algorithm—provided one tile doesn’t step far ahead of others in time, a situation that would postpone solution updates and inflate memory demands.

New algorithms might be implemented differently to remove synchronization points like norm calculations. Or new algorithms might solve different equations, perhaps permitting asynchronous combustion or a particle description of a fluid embedded within a continuum model. In the face of so many possibilities, Almgren encouraged computational scientists to “Keep the options open!”
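As a cartoon of the “take the norm out of the inner loop” idea, the toy Jacobi iteration below evaluates its globally synchronizing residual norm only every few sweeps. It illustrates the principle under that simplifying assumption; it is not any production solver’s strategy.

```python
import numpy as np

def jacobi(A, b, tol=1e-8, check_every=10, max_iter=10_000):
    """Toy Jacobi iteration that computes the residual norm (a global
    reduction, hence a synchronization point) only every 'check_every'
    sweeps instead of after every sweep."""
    D = np.diag(A)
    R = A - np.diagflat(D)
    x = np.zeros_like(b)
    for k in range(1, max_iter + 1):
        x = (b - R @ x) / D
        if k % check_every == 0:              # infrequent global reduction
            if np.linalg.norm(b - A @ x) < tol:
                return x, k
    return x, max_iter

# Diagonally dominant test system, so Jacobi converges.
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 4.0, 1.0],
              [0.0, 1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])
x, iters = jacobi(A, b)
print(iters, np.allclose(A @ x, b, atol=1e-6))
```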

AMR Elsewhere

AMR appeared throughout CSE17 in a myriad of sessions. As but one example, it was often part of simulations of surface flooding using AMR-based tools from the Conservation Laws Package (Clawpack), a community-supported collection originally developed by Randall LeVeque of the University of Washington. LeVeque organized a six-poster session at CSE17 describing some of Clawpack’s applications.

Donna Calhoun¹ of Boise State University; Yu-Hsuan Melody Shih, then at Columbia University, now at Boise State; Kyle Mandli of Columbia; Carsten Burstedde of the University of Bonn; and collaborators from Idaho National Laboratory used the GeoClaw extension of ForestClaw, a parallel adaptive quadtree code for patch-based AMR, to simulate the June 1976 flooding that ensued from the catastrophic failure of Idaho’s Teton Dam. Eleven people were killed and approximately $2 billion in property destroyed as the flood spread 88 kilometers downstream in the eight hours following the dam’s collapse.

One of the investigators’ goals was assessing the suitability of these tools “for a potential study of flooding of nuclear power plants.” In their simulation, they sought to match records of the crest’s arrival times and the flood’s geographic spread, though not the dynamics of the dam’s actual failure.

Calhoun, Shih, and Mandli found that they needed a better model of the dam’s burst to accurately modulate the water’s initial flow and volume. Changes in these initial conditions affected the agreement between simulated and actual arrival times and between the predicted and actual spread of flood waters (see Figure 2). Despite the uncertain initial data, the algorithm performed well; the team could see the grid refinement as the water advanced. About 50–70% of CPU time was spent actually solving the problem.

Figure 2. Simulation of flooding resulting from the 1976 Teton Dam failure in eastern Idaho, done using ForestClaw, a parallel adaptive quadtree code for patch-based adaptive mesh refinement. The flood waters reached Sugar City (shown in the figure) as a 15-foot wall of water that inundated the city. The thicker red line is a digitized flood boundary, taken from historical records, and shows good agreement between the ForestClaw results and the historical record. The simulated flood arrival times (not shown) at Sugar City, Rexburg, and beyond also show excellent agreement with historical data. © Google, Digital Globe. Image credit: Donna Calhoun.

In a related study, Mandli, working with Colton Conroy and Jiao Li of Columbia, used patch-based AMR to solve the shallow water equations with added terms in order to assess the risk of storm-surge flooding in coastal cities. The trio sought efficient, reliable answers to some of the fundamental questions of flood management design and forecasting: How high should we build surge protection barriers? Will the water deflected by the barriers cause flooding elsewhere? They found partial answers by solving a Riemann problem over a region where barriers were modeled as thin walls at cell boundaries, although much work remains.
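The “thin wall at a cell boundary” idea can be sketched in one dimension. The Rusanov interface flux and the reflection trick below are a simplified stand-in for the Riemann-problem treatment the Columbia group actually used; overtopping of the barrier is ignored.

```python
import numpy as np

g = 9.81  # gravitational acceleration (m/s^2)

def swe_flux(h, hu):
    """Physical flux of the 1D shallow water equations."""
    return np.array([hu, hu**2 / h + 0.5 * g * h**2])

def rusanov(qL, qR):
    """Simple Rusanov (local Lax-Friedrichs) approximate Riemann flux."""
    (hL, huL), (hR, huR) = qL, qR
    smax = max(abs(huL / hL) + np.sqrt(g * hL),
               abs(huR / hR) + np.sqrt(g * hR))
    return (0.5 * (swe_flux(hL, huL) + swe_flux(hR, huR))
            - 0.5 * smax * (np.array(qR) - np.array(qL)))

def interface_flux(qL, qR, barrier):
    """Treat a flagged interface as a thin wall by reflecting the left
    state (depth unchanged, momentum negated), which zeroes the mass
    flux through that interface.  Overtopping is deliberately ignored."""
    if barrier:
        qR = (qL[0], -qL[1])
    return rusanov(qL, qR)

# Deep still water on the left, shallow on the right, as (depth, momentum).
deep, shallow = (2.0, 0.0), (0.5, 0.0)
print(interface_flux(deep, shallow, barrier=False)[0])  # > 0: water spills right
print(interface_flux(deep, shallow, barrier=True)[0])   # 0.0: the wall holds
```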

A Large Cast

Of course, AMR is but one among many arms that CSE has built for itself and repeatedly flexed in the service of other areas of human endeavor, ranging from the astronomic to the atomic. AMR’s frequent appearances in so many settings at CSE17 make it a compelling lead character—but only one among a very large cast—in this story about computational science and engineering as a fruitful and rapidly-growing part of applied mathematics.


Abel’s and Almgren’s CSE17 presentations are available from SIAM either as slides with synchronized audio or as PDFs of slides only.

¹ Calhoun maintains an extensive list of adaptive mesh resources.

Paul Davis is professor emeritus of mathematical sciences at Worcester Polytechnic Institute.