SIAM News Blog

An In-Depth Exploration of the Cave Automatic Virtual Environment

By Lina Sorg

Given recent technological advancements and increasingly large amounts of available data, the U.S. President's Council of Advisors on Science and Technology is pushing for advancements in high-performance computing (HPC). In mid-2015, the White House released the following statement: "HPC must now assume a broader meaning, encompassing not only flops but also the ability, for example, to efficiently manipulate vast and rapidly increasing quantities of both numerical and non-numerical data." While real-time and in situ approaches allow users to interact with applications during runtime, hardware and algorithmic restrictions prevent truly effective visual exploration of dynamic data sets. Immersive virtual reality techniques, which yield superior visual comprehension of computed results and data experimentation, overcome these limitations. Ralf-Peter Mundani of the Technische Universität München used this knowledge, as well as the aforementioned statement from the Council of Advisors, as motivation for his research.

During a contributed presentation at the 2019 SIAM Conference on Computational Science and Engineering, currently taking place in Spokane, Wash., Mundani spoke about the Cave Automatic Virtual Environment (CAVE), an immersive projection-based virtual reality tool that lends itself to advanced interactive visual data exploration. Researchers in the Electronic Visualization Laboratory at the University of Illinois at Chicago first developed this visualization system in the early 1990s to surmount the restrictions associated with head-mounted virtual reality displays. 

A CAVE typically manifests as a video theater within a larger space, in which users can manipulate, study, and naturally experience complicated three-dimensional (3D) models at a realistic one-to-one human scale. Its walls are composed of either projection screens or flat panel displays. The high-resolution computer-based projection systems employed in a CAVE require small pixels to maintain the illusion of reality. Users wear 3D glasses to witness the generated “floating” graphics as they would appear in actuality; infrared cameras make this vision possible. Sensors attached to a user’s glasses track their movement around the environment, and videos continually adjust to retain the viewer’s perspective.

Interactive visual data exploration in a CAVE follows a level-of-detail concept that progresses from coarse global scales to increasingly fine local scales. A CAVE system begins with different types of high-resolution, high-fidelity data, such as geographic information system data (including statistics about land usage, elevation, parcels, and streets) and building information modeling data (pertaining to a structure’s physical infrastructure). This information is combined in a data repository, from which a corresponding model for numerical multiscale simulation is derived. Such a model uses a Navier-Stokes-type flow solver for the nonlinear equations of incompressible flow, with thermal coupling taken into account. This manner of parallel numerical simulation allows researchers to crunch numbers quickly, though the resulting applications (particularly thermal applications) are often complicated. All data is stored in Hierarchical Data Format version 5 (HDF5), which can be implemented and optimized for different architectures.
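The coarse-to-fine progression can be illustrated with a toy sketch. Everything below is hypothetical (the function names, the averaging scheme, and the one-dimensional stand-in for a simulation field are not Mundani's actual pipeline); it only shows the idea of deriving coarser global views from a fine-resolution data set.

```python
# Hypothetical level-of-detail sketch: a fine-resolution field is stored once,
# and coarser global views are derived by block averaging. Illustrative only.

def coarsen(grid, factor=2):
    """Average non-overlapping blocks of `factor` cells into one coarse cell."""
    return [sum(grid[i:i + factor]) / factor
            for i in range(0, len(grid) - factor + 1, factor)]

def build_levels(fine_grid, num_levels=3):
    """Build a coarse-to-fine hierarchy: level 0 is the coarsest (global) view."""
    levels = [fine_grid]
    for _ in range(num_levels - 1):
        levels.append(coarsen(levels[-1]))
    return list(reversed(levels))  # coarse first, fine last

fine = [float(i) for i in range(16)]   # stand-in for one field of one time step
hierarchy = build_levels(fine)
for lvl, grid in enumerate(hierarchy):
    print(f"level {lvl}: {len(grid)} cells")  # 4, 8, then 16 cells
```

A global overview only needs the coarsest level, while a zoomed-in local inspection pulls the finest one.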

The next task is retrieval of this data. “The connection from the supercomputer to the visualization device is pretty small and limited,” Mundani said. In response, he introduced the sliding window concept, in which a CAVE user is given a window that initially covers the entire visual domain. Making the window smaller ensures that more data points are transmitted to the system’s front end. “The smaller I make the window, the less points are discarded,” Mundani said. “You can do some quantitative analysis of the data this way, while in the larger windows you can only do some qualitative assessment. It’s a neat way to bridge HPC with visualization.” Once the window is resized, the system forwards the change to the information server, and subsequent data updates are based on the new window. On the back end, a collector node handles user queries about the data and sends them to the server. Simulation responses return the data to the collector, which then transmits it to the user for analysis.
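The sliding-window trade-off can be sketched as follows. This is a minimal illustration, not Mundani's implementation: the function name, the fixed transmission budget, and the uniform subsampling scheme are all assumptions made for the example.

```python
# Hypothetical sliding-window sketch: the link to the front end carries at most
# `budget` points, so shrinking the window means fewer points are discarded.

def query_window(points, window, budget):
    """Return at most `budget` of the points whose coordinate lies in `window`."""
    lo, hi = window
    inside = [p for p in points if lo <= p[0] <= hi]
    if len(inside) <= budget:
        return inside                        # small window: nothing discarded
    stride = -(-len(inside) // budget)       # ceiling division
    return inside[::stride]                  # large window: uniform subsampling

points = [(x / 100.0, x) for x in range(1000)]   # (coordinate, value) samples
budget = 100                                     # limited link to the CAVE

full = query_window(points, (0.0, 10.0), budget)  # whole domain: 9 in 10 dropped
zoom = query_window(points, (0.0, 0.99), budget)  # small window: every point kept

print(len(full), len(zoom))
```

The full-domain query supports only a qualitative overview, while the zoomed query delivers every point in its window and thus allows quantitative analysis, mirroring the distinction Mundani describes.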

All of this data is then visualized in a five-sided CAVE that is run by a cluster with 12 computing nodes and two projectors per side (one each for the left and right views). Users retrieve data via the sliding window concept, then employ the Visualization Toolkit—open-source software for displaying scientific data—to streamline the information and render simulation results. Doing so yields 10 different pictures in the CAVE for user exploration. Researchers are also able to switch from “explorer mode” to “overview mode” and resize the window for a different interpretation. Mundani calls this the “human in the loop” approach because users can interact with the visual simulation, change the boundary conditions, and load or discard different geometries. They are also able to activate time-reversal steering. In the case of major flooding, for example, one could go backwards in stored discrete time steps and view snapshots in reverse to witness the flood’s evolution. They could then change conditions and geometry and restart the simulation. “In a playful fashion, you can see the effects of your changes,” Mundani said.
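The time-reversal steering idea can be sketched in a few lines. The toy update rule, the class, and its method names below are invented for illustration and are not the actual flow solver; only the pattern (store snapshots at discrete time steps, step backwards, change a boundary condition, restart) reflects the article's description.

```python
# Hypothetical time-reversal steering sketch: snapshots of the simulation state
# are stored at discrete time steps; the user can rewind, change a boundary
# condition, and restart. The "simulation" is a toy stand-in, not a flow solver.

class SteerableSimulation:
    def __init__(self, state, inflow):
        self.state = state
        self.inflow = inflow          # a steerable boundary condition
        self.snapshots = []           # stored discrete time steps

    def step(self):
        self.snapshots.append(self.state)
        self.state = self.state + self.inflow   # toy update rule

    def rewind(self, steps):
        """Step backwards through stored snapshots, as in time-reversal steering."""
        for _ in range(steps):
            self.state = self.snapshots.pop()

sim = SteerableSimulation(state=0.0, inflow=1.0)
for _ in range(5):
    sim.step()                 # advance to t = 5; state is 5.0

sim.rewind(3)                  # view snapshots in reverse back to t = 2
sim.inflow = 0.5               # change a boundary condition ...
for _ in range(3):
    sim.step()                 # ... and restart the simulation from there

print(sim.state)               # 2.0 + 3 * 0.5 = 3.5
```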

He concluded his presentation with a brief discussion and demonstration of a non-immersive multi-resolution flood simulation for the city of Munich. The simulation portrayed water flowing throughout the city and into the university, and depicted the damage at different levels of detail. “As time evolves, the water starts to suddenly run down the streets more and more,” Mundani said. “It’s something that can happen.”

Lina Sorg is the associate editor of SIAM News.