
Overcoming Structural Uncertainty in Computer Models

A computer model is a representation of the functional relationship between one set of parameters, which forms the model input, and a corresponding set of target parameters, which forms the model output. A true model for a particular problem can rarely be defined with certainty. The most we can do to mitigate error is to quantify the uncertainty in the model.

In a recent paper published in the SIAM/ASA Journal on Uncertainty Quantification, authors Mark Strong and Jeremy Oakley offer a method for incorporating into a model judgments about the structural uncertainty that results from building an “incorrect” model.

“Given that ‘all models are wrong,’ it is important that we develop methods for quantifying our uncertainty in model structure such that we can know when our model is ‘good enough’,” author Mark Strong says. “Better models mean better decisions.”

When making predictions with computer models, we encounter two sources of uncertainty: uncertainty in the model inputs and uncertainty in the model structure. Input uncertainty arises when we do not know the true values of the input parameters used in model simulations. If we are uncertain about the true structural relationships within a model—that is, the relationship between the set of quantities that form the model input and the set that represents the output—the model is said to display structural uncertainty. Such uncertainty persists even if the model is run with input values estimated from a perfect study with infinite sample size.
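To make the distinction concrete, here is a toy sketch in Python (the functions and numbers are invented for illustration and are not taken from the paper). Input uncertainty spreads the prediction because the input is only known up to a distribution; structural error remains even when the input is known exactly, because the chosen model form omits part of the true relationship.

import numpy as np

rng = np.random.default_rng(1)

# Hypothetical "true" relationship versus the analyst's chosen model form.
def truth(x):        return 2.0 * x + 0.3 * x ** 2
def chosen_model(x): return 2.0 * x            # structurally wrong: quadratic term omitted

# Input uncertainty: the input x is only known up to a distribution.
x_samples = rng.normal(3.0, 0.2, 10_000)
print("prediction spread from input uncertainty:", chosen_model(x_samples).std())

# Structural uncertainty: even with the input known exactly, the prediction is off.
x_exact = 3.0
print("structural error at a perfectly known input:", truth(x_exact) - chosen_model(x_exact))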

“Perhaps the hardest problem in assessing uncertainty in a computer model prediction is to quantify uncertainty about the model structure, particularly when models are used to predict in the absence of data,” says author Jeremy Oakley. “The methodology in this paper can help model users prioritize where improvements are needed in a model to provide more robust support to decision making.”

While methods for managing input uncertainty are well described in the literature, methods for quantifying structural uncertainty are far less developed. This is especially true in health economic decision making, the focus of this paper, where models are used to predict the future costs and health consequences of competing options in order to inform resource allocation decisions.

Left: Hypothetical model with ten inputs and one output, decomposed to reveal six intermediate parameters. Right: Possible structural errors in the subfunctions that produce Y1, Y5, and Y6 are corrected with discrepancy terms δ1, δ2, and δ3. Figure credit: Mark Strong and Jeremy E. Oakley.

“In health economics decision analysis, the use of ‘law-based’ computer models is common. Such models are used to support national health resource allocation decisions, and the stakes are therefore high,” says Strong. “While it is usual in this setting to consider the uncertainty in model inputs, uncertainty in model structure is almost never formally assessed.”

There are several approaches to managing model structural uncertainty. One is ‘model averaging’, in which the predictions of a number of plausible models are averaged, with weights based on each model’s likelihood or predictive ability. Another is ‘model calibration’, which assesses a model through its external discrepancies, that is, the differences between its output quantities and real, observed values. In the context of healthcare decisions, however, neither approach is usually feasible: typically only one model is available, so there is nothing to average over, and observations of the model outputs are not available for calibration.
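For contrast, here is a minimal sketch of the model-averaging idea in Python; the two competing models, the data, and the Gaussian likelihood with a fixed noise level are all invented for the example rather than drawn from the paper. Predictions are combined with weights proportional to how well each model explains the observed data.

import numpy as np

# Observed data and two competing (hypothetical) models' predictions of it.
observed = np.array([2.1, 2.9, 4.2, 5.0])
model_a  = np.array([2.0, 3.0, 4.0, 5.0])
model_b  = np.array([2.5, 3.5, 4.5, 5.5])

# Gaussian log-likelihood of the data under each model (fixed noise sd assumed).
def log_likelihood(pred, obs, sd=0.5):
    return -0.5 * np.sum(((obs - pred) / sd) ** 2)

logs = np.array([log_likelihood(m, observed) for m in (model_a, model_b)])
weights = np.exp(logs - logs.max())
weights /= weights.sum()

# Each model's prediction of a new quantity, combined as a weighted average.
new_prediction = weights[0] * 6.1 + weights[1] * 6.6
print("weights:", weights, "averaged prediction:", new_prediction)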

The authors therefore take a novel approach based on discrepancies within the model, or “internal discrepancies” (as opposed to the external discrepancies that are the focus of model calibration). The model is first decomposed into a series of subunits, or subfunctions, whose outputs are intermediate parameters that are potentially observable in the real world. Next, each subfunction is judged on whether its output would equal the true, real-world value of the parameter it produces; wherever a potential structural error is anticipated, a discrepancy term is introduced, and beliefs about the size and direction of the error are expressed. Since judgments about internal discrepancies are likely to be crude at best, the expressed uncertainty should be generous, covering a wide range of possible values. Finally, the authors determine the sensitivity of the model output to the internal discrepancies, which indicates the relative importance of structural uncertainty within each model subunit.
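The following Python sketch is a rough illustration of that workflow, not the paper’s actual model or computation: a toy model is decomposed into two subfunctions, each intermediate output receives a wide additive discrepancy term, and Monte Carlo sampling gives a crude measure of how much each discrepancy term moves the final output. All distributions and functions here are invented for the example.

import numpy as np

rng = np.random.default_rng(0)
N = 100_000

# Toy model decomposed into two subfunctions with intermediate outputs y1 and y2.
# Inputs x1 and x2 are uncertain; delta1 and delta2 are internal discrepancy terms
# expressing possible structural error in each subfunction (deliberately wide beliefs).
x1 = rng.normal(1.0, 0.2, N)          # input uncertainty
x2 = rng.normal(0.5, 0.1, N)
delta1 = rng.normal(0.0, 0.5, N)      # generous belief about error in subfunction 1
delta2 = rng.normal(0.0, 0.5, N)      # generous belief about error in subfunction 2

y1 = x1 ** 2 + delta1                 # intermediate parameter 1 (potentially observable)
y2 = np.exp(x2) + delta2              # intermediate parameter 2 (potentially observable)
output = y1 * y2                      # final model output

# Crude sensitivity measure: the squared correlation of each discrepancy term with
# the output, a rough proxy for the share of output variance it accounts for.
for name, delta in [("delta1", delta1), ("delta2", delta2)]:
    share = np.corrcoef(delta, output)[0, 1] ** 2
    print(f"approximate output-variance share from {name}: {share:.2f}")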

“Traditional statistical approaches to handling uncertainty in computer models have tended to treat the models as ‘black boxes’. Our framework is based on ‘opening’ the black box and investigating the model’s internal workings,” says Oakley. “Developing and implementing this framework, particularly in more complex models, will need closer collaboration between statisticians and mathematical modelers.”

Source Article:
When Is a Model Good Enough? Deriving the Expected Value of Model Improvement via Specifying Internal Model Discrepancies
Mark Strong and Jeremy E. Oakley
SIAM/ASA Journal on Uncertainty Quantification, 2(1), 106–125 (Online publish date: February 6, 2014). The paper is available for free download at the link above through December 31, 2014.

About the authors: Mark Strong is a clinical senior lecturer in public health and the deputy director of the Public Health Section in the School of Health and Related Research at the University of Sheffield, and Jeremy Oakley is a professor of statistics in the School of Mathematics and Statistics at the University of Sheffield.
