Bayesian assessment of conceptual uncertainty in hydrosystem modeling

Project description


This project aims to improve uncertainty assessment for hydrosystem models subject to uncertainty in model structure, parameters, and forcing terms. To account explicitly for conceptual uncertainty, Bayesian model averaging (BMA) is used as an integrated modeling framework. BMA is a formal statistical approach that rests on Bayesian probability theory: weights are assigned to a set of alternative conceptual models based on their individual goodness of fit to observed data and the principle of parsimony. With these weights, model ranking, model selection, or model averaging can be performed. Further, the conceptual uncertainty within the set of considered models can be quantified as the so-called between-model variance.
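As a minimal numerical sketch of these ideas, the following Python snippet computes BMA weights from hypothetical (marginal) model likelihoods, forms the model-averaged prediction, and splits the predictive variance into within-model and between-model parts. All numbers are illustrative and not taken from the project.

```python
import numpy as np

# Hypothetical log marginal likelihoods ln p(D|M_k) for three
# alternative conceptual models (illustrative values only).
log_ml = np.array([-120.0, -118.5, -125.0])

# Uniform prior model probabilities p(M_k).
log_prior = np.log(np.full(3, 1.0 / 3.0))

# Posterior model weights: w_k ∝ p(D|M_k) p(M_k), normalized in log space.
log_post = log_ml + log_prior
log_post -= log_post.max()          # shift to avoid numerical underflow
weights = np.exp(log_post)
weights /= weights.sum()

# Hypothetical per-model posterior predictive means and variances.
means = np.array([2.1, 2.4, 1.8])
within_var = np.array([0.10, 0.12, 0.20])

# BMA prediction: weight-averaged mean.
bma_mean = np.sum(weights * means)

# Total predictive variance = within-model + between-model variance;
# the between-model term quantifies conceptual uncertainty.
between_var = np.sum(weights * (means - bma_mean) ** 2)
total_var = np.sum(weights * within_var) + between_var
```

Here the second model receives the largest weight because it has the highest marginal likelihood; disagreement among the model means shows up as a non-zero between-model variance.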

A major obstacle to the widespread use of BMA is the computational challenge of evaluating BMA weights accurately and efficiently. We have addressed this challenge by assessing and comparing different methods to evaluate the BMA equations, considering both mathematical approximations and numerical schemes (Schöniger et al., 2014). Results of two synthetic test cases and of a hydrological case study show that the choice of evaluation method substantially influences the accuracy of the obtained weights and, consequently, the final model ranking and model-averaged results.

If correctly evaluated, BMA weights point the modeler to an optimal trade-off between model performance and complexity. To determine which level of complexity can be justified by the available calibration data, we have isolated the complexity component of the Bayesian trade-off from its performance counterpart. This model justifiability analysis (Schöniger et al., 2015a) is demonstrated for model selection among groundwater models of vastly different complexity.
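The performance-complexity trade-off can be illustrated with a generic Bayesian information criterion (BIC) approximation to the marginal likelihood; this is only a hedged illustration of the general principle, not the project's justifiability analysis, and all values are invented.

```python
import numpy as np

n = 50                                      # number of calibration data points
max_ll = np.array([-40.0, -35.0, -34.5])    # maximized log-likelihoods (fit)
k = np.array([2, 5, 12])                    # number of parameters (complexity)

# BIC = -2 * max log-likelihood + k * ln(n); lower is better.
# The k*ln(n) term penalizes complexity, implementing parsimony.
bic = -2.0 * max_ll + k * np.log(n)

# Approximate BMA weights from BIC differences (uniform model prior).
delta = bic - bic.min()
w = np.exp(-0.5 * delta)
w /= w.sum()
```

In this toy setting the 12-parameter model fits best in pure likelihood terms, yet the 2-parameter model receives the highest weight: with only 50 data points, the extra complexity is not justified.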

Finally, we have addressed the question of whether model weights are reliable under uncertain model input or calibration data. The proposed sensitivity analysis allows the modeler to assess confidence in the resulting model ranking (Schöniger et al., 2015b). The impact of noisy calibration data on model ranking has been investigated in an application to soil-plant model selection. Results show that model weights can be highly sensitive to the realization of random measurement errors, which compromises the significance of model ranking.
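A simple Monte Carlo experiment conveys the flavor of such a sensitivity check: repeatedly redraw the measurement errors on a synthetic data set and observe how the BMA weight of one model varies across noise realizations. The setup below (two toy models, Gaussian errors with known standard deviation) is an assumed illustration, not the soil-plant application from the project.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: two competing models predict the same 20 observations,
# which are corrupted by Gaussian measurement noise of known sigma.
truth = np.linspace(0.0, 1.0, 20)
pred = {"M1": truth + 0.02, "M2": truth - 0.03}   # illustrative model outputs
sigma = 0.05

def bma_weights(data):
    """Gaussian log-likelihood per model, normalized to posterior
    model weights under a uniform model prior."""
    ll = np.array([
        -0.5 * np.sum((data - p) ** 2) / sigma**2 for p in pred.values()
    ])
    ll -= ll.max()                  # stabilize before exponentiating
    w = np.exp(ll)
    return w / w.sum()

# Redraw the measurement errors many times; record the weight of M1.
w1 = np.array([
    bma_weights(truth + rng.normal(0.0, sigma, truth.size))[0]
    for _ in range(1000)
])

# A wide spread of w1 across noise realizations signals that the
# model ranking is not robust to measurement error.
print(f"weight of M1: mean {w1.mean():.2f}, std {w1.std():.2f}")
```

If the standard deviation of the weight across realizations is large relative to its mean, a ranking derived from any single noisy data set should be interpreted with caution.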

The findings from this research project also have important implications for the population and extension of the model set, for further model improvement, and for optimal design of experiments toward maximum confidence in model ranking.

More info
Researcher: Anneli Schöniger
Principal investigators: Prof. Dr.-Ing. Wolfgang Nowak; Prof. Dr.-Ing. Olaf A. Cirpka (Universität Tübingen)
Partners: Dr. Thomas Wöhling; Prof. Walter Illman (University of Waterloo, Canada); Dr. Luis Samaniego (UFZ Leipzig); Dr. Sebastian Gayler (Universität Hohenheim)
Duration: 06/2012 - 08/2015
Funding: International Research Training Group "HYDROMOD" (DFG IRTG 1829)