Wallach, D., Thorburn, P., Asseng, S., Challinor, A. J., Ewert, F., Jones, J. W., et al. (2016). Estimating model prediction error: Should you treat predictions as fixed or random? Environmental Modelling & Software, 84, 529–539.
Abstract: Crop models are important tools for impact assessment of climate change, as well as for exploring management options under current climate. It is essential to evaluate the uncertainty associated with predictions of these models. We compare two criteria of prediction error: MSEP_fixed, which evaluates the mean squared error of prediction for a model with fixed structure, parameters and inputs, and MSEP_uncertain(X), which evaluates the mean squared error averaged over the distributions of model structure, inputs and parameters. Comparison of model outputs with data can be used to estimate the former. The latter has a squared bias term, which can be estimated using hindcasts, and a model variance term, which can be estimated from a simulation experiment. The separate contributions to MSEP_uncertain(X) can be estimated using a random effects ANOVA. It is argued that MSEP_uncertain(X) is the more informative uncertainty criterion, because it is specific to each prediction situation.
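For orientation, the decomposition referred to in the abstract can be sketched in the usual bias-variance form; the notation below is illustrative and not necessarily the paper's:

\[
\mathrm{MSEP}_{\mathrm{uncertain}}(X)
\;=\;
\underbrace{\bigl(E[\hat{Y}(X)] - y(X)\bigr)^{2}}_{\text{squared bias}}
\;+\;
\underbrace{\mathrm{Var}\bigl[\hat{Y}(X)\bigr]}_{\text{model variance}}
\]

where \(\hat{Y}(X)\) denotes the model prediction for input situation \(X\), \(y(X)\) is the quantity being predicted (treated as fixed for that situation), and the expectation and variance are taken over the distributions of model structure, parameters and inputs.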