For modeling catchment-scale flow processes in the subsurface coupled with surface waters, many modeling schools exist. They differ vastly in complexity, ranging from conceptual hydrological models through distributed hydrological models to full partial differential equation (PDE)-based approaches.
A well-known problem in this context is: the more complex a model approach is, the more data are required to suitably constrain its increasing number of parameters and thus to make the model legitimate. The goal of this project is to answer the following related questions:
- Can the data demand of highly resolved PDE-based models ever be satisfied with a reasonable extent of catchment investigation?
- Do we need to switch back to zonation-based models, to other concepts that use simple parameterizations for geological heterogeneity, or even to extremely simplified hydrological-style (i.e., bucket-type) models?
- Can optimal collection of data help to meet the data demand at reasonable cost?
As a first step towards answering these questions, we must establish how model complexity can be measured for large, non-linear models in a Bayesian context. Following that, Bayesian model analysis will be applied to hydrogeological models of vastly different complexity and conceptual design. Using Bayesian analysis tools in cross-model numerical studies, we will identify the level of data availability that marks the transition point in model legitimacy between the competing models.
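As a rough illustration of the kind of Bayesian comparison described above (the models, priors, noise level, and synthetic data below are invented for this sketch and are not from the project), one can estimate the marginal likelihood (Bayesian model evidence) of two competing models of different complexity by brute-force Monte Carlo averaging of the likelihood over prior samples; the resulting Bayes factor indicates which model the data support:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "observations" from a mildly non-linear process
# (purely illustrative; not related to any real catchment data).
x = np.linspace(0.0, 1.0, 20)
sigma = 0.1                                  # observation noise std. dev.
y_obs = 1.5 * x + 0.3 * x**2 + rng.normal(0.0, sigma, size=x.size)

def log_likelihood(y_model):
    """Gaussian i.i.d. log-likelihood of the observations given a model output."""
    r = y_obs - y_model
    return -0.5 * np.sum((r / sigma) ** 2) - x.size * np.log(sigma * np.sqrt(2 * np.pi))

def log_evidence_mc(simulator, prior_sampler, n=20_000):
    """Brute-force Monte Carlo estimate of the log marginal likelihood:
    p(y) ~= mean over prior samples theta of p(y | theta)."""
    logL = np.array([log_likelihood(simulator(t)) for t in prior_sampler(n)])
    m = logL.max()                           # log-sum-exp for numerical stability
    return m + np.log(np.mean(np.exp(logL - m)))

# "Simple" model: one slope parameter, uniform prior on [0, 3].
def simple_model(theta):
    return theta[0] * x

def simple_prior(n):
    return rng.uniform(0.0, 3.0, size=(n, 1))

# "Complex" model: slope + curvature, uniform priors on [0, 3] x [-3, 3].
def complex_model(theta):
    return theta[0] * x + theta[1] * x**2

def complex_prior(n):
    return np.column_stack([rng.uniform(0.0, 3.0, n),
                            rng.uniform(-3.0, 3.0, n)])

log_ev_simple = log_evidence_mc(simple_model, simple_prior)
log_ev_complex = log_evidence_mc(complex_model, complex_prior)
print(f"log-evidence simple : {log_ev_simple:.2f}")
print(f"log-evidence complex: {log_ev_complex:.2f}")
print(f"log Bayes factor (complex vs simple): {log_ev_complex - log_ev_simple:.2f}")
```

The broader prior of the more complex model acts as an automatic complexity penalty ("Occam factor"): as the amount and quality of data grow, the evidence shifts between models, which is one way to probe the data-dependent transition point in model legitimacy. Real applications would replace brute-force sampling with more efficient evidence estimators.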
| Principal investigator | Prof. Dr.-Ing. Wolfgang Nowak |
| Partners | Dr. Thomas Wöhling, Universität Dresden; Prof. Walter Illman, University of Waterloo (Canada) |
| Duration | 10/2015 - 09/2018 |
| Financing | International Research Training Group "HYDROMOD" (DFG IRTG 1829) |