Wallach, D., Thorburn, P., Asseng, S., Challinor, A. J., Ewert, F., Jones, J. W., et al. (2016). Estimating model prediction error: Should you treat predictions as fixed or random? Environmental Modelling & Software, 84, 529–539.
Abstract: Crop models are important tools for impact assessment of climate change, as well as for exploring management options under current climate. It is essential to evaluate the uncertainty associated with predictions of these models. We compare two criteria of prediction error: MSEPfixed, which evaluates the mean squared error of prediction for a model with fixed structure, parameters and inputs, and MSEPuncertain(X), which evaluates the mean squared error averaged over the distributions of model structure, inputs and parameters. Comparison of model outputs with data can be used to estimate the former. The latter has a squared bias term, which can be estimated using hindcasts, and a model variance term, which can be estimated from a simulation experiment. The separate contributions to MSEPuncertain(X) can be estimated using a random effects ANOVA. It is argued that MSEPuncertain(X) is the more informative uncertainty criterion, because it is specific to each prediction situation.
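The decomposition described in the abstract — MSEPuncertain(X) splitting into a squared bias term plus a model variance term — can be illustrated with a minimal synthetic sketch. This is not the authors' estimation procedure (they use hindcasts and a random effects ANOVA); the observations, ensemble size, and bias value below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical observations and an ensemble of model predictions
# (rows: draws over model structure/parameters/inputs; columns: prediction situations X).
obs = np.array([5.2, 6.1, 4.8, 7.0, 5.5])
preds = rng.normal(loc=obs, scale=0.4, size=(200, obs.size)) + 0.3  # +0.3 = systematic bias

# MSEP for one fixed model version (here: the first ensemble member).
msep_fixed = np.mean((preds[0] - obs) ** 2)

# MSEP averaged over model uncertainty, per prediction situation X:
# squared bias (estimable from hindcasts) + model variance
# (estimable from a simulation experiment).
bias_sq = (preds.mean(axis=0) - obs) ** 2
model_var = preds.var(axis=0)
msep_uncertain = bias_sq + model_var
```

Because mean squared error decomposes exactly into squared bias plus variance, `msep_uncertain` equals the ensemble-mean squared error computed directly, which is a useful self-check when implementing the criterion.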
Van Oijen, M., Cameron, D., Levy, P. E., & Preston, R. (2017). Correcting errors from spatial upscaling of nonlinear greenhouse gas flux models. Environmental Modelling & Software, 94, 157–165.
Houska, T., Kraft, P., Liebermann, R., Klatt, S., Kraus, D., Haas, E., et al. (2017). Rejecting hydro-biogeochemical model structures by multi-criteria evaluation. Environmental Modelling & Software, 93, 1–12.
Highlights: • New method to investigate biogeochemical model structure performance. • Process-based hydrological modelling can improve biogeochemical model predictions. • Modelling efficiency drops dramatically with multiple objectives.
Abstract: This work presents a novel way of assessing and comparing different hydro-biogeochemical model structures and their performances. We used the LandscapeDNDC modelling framework to set up four models of differing complexity, combining two soil-biogeochemical and two hydrological modules. The performance of each model combination was assessed using long-term (8 years) data and applying different thresholds, considering multiple criteria and objective functions. Our results show that each model combination had its strengths for particular criteria. However, only 0.01% of all model runs passed the complete rejectionist framework. In contrast, our comparative assessment of single thresholds, as frequently used in other studies, led to a much higher acceptance rate of 40–70%. Our study therefore indicates that models can be right for the wrong reasons, i.e., matching GHG emissions while at the same time failing to simulate other criteria such as soil moisture or plant biomass dynamics.
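The core mechanism behind the sharp drop in acceptance rate (40–70% under a single threshold versus 0.01% under the full framework) is simply that a rejectionist scheme requires a run to pass every criterion simultaneously. A minimal synthetic sketch, not the authors' LandscapeDNDC workflow — the criteria names, score distribution, and threshold below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical performance scores (e.g. an efficiency metric in [-1, 1]) for
# many model runs against several evaluation criteria; names are illustrative.
criteria = ["GHG flux", "soil moisture", "plant biomass"]
scores = rng.uniform(-1.0, 1.0, size=(10_000, len(criteria)))
threshold = 0.5  # behavioural threshold applied to each criterion

# Single-criterion assessment: a run is accepted if it matches one criterion.
single_rate = np.mean(scores[:, 0] >= threshold)

# Multi-criteria rejectionist assessment: a run must pass ALL criteria at once.
multi_rate = np.mean(np.all(scores >= threshold, axis=1))
```

With independent criteria, the joint acceptance rate is roughly the product of the individual rates, so adding objectives shrinks the behavioural set multiplicatively — the same qualitative effect the study reports.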