Wallach, D., Nissanka, S. P., Karunaratne, A. S., Weerakoon, W. M. W., Thorburn, P. J., Boote, K. J., et al. (2016). Accounting for both parameter and model structure uncertainty in crop model predictions of phenology: A case study on rice. European Journal of Agronomy.
Abstract: We consider predictions of the impact of climate warming on rice development times in Sri Lanka. The major emphasis is on the uncertainty of the predictions, and in particular on the estimation of mean squared error of prediction. Three contributions to mean squared error are considered. The first is parameter uncertainty that results from model calibration. To take proper account of the complex data structure, generalized least squares is used to estimate the parameters and the variance-covariance matrix of the parameter estimators. The second contribution is model structure uncertainty, which we estimate using two different models. An ANOVA is used to separate the contributions of parameter and model uncertainty to mean squared error. The third contribution is model error, which is estimated using hindcasts. Mean squared error of prediction of time from emergence to maturity, for baseline +2 °C, is estimated as 108 days², with model error contributing 86 days², followed by model structure uncertainty, which contributes 15 days², and parameter uncertainty, which contributes 7 days². We also show how prediction uncertainty is reduced if prediction concerns development time averaged over years, or the difference in development time between baseline and warmer temperatures.
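The reported error budget is additive, so the figures quoted in the abstract can be checked with simple bookkeeping. This sketch only restates the reported numbers; it does not reproduce the GLS calibration or the ANOVA partition behind them:

```python
# Component estimates (days^2) as reported in the abstract.
components = {
    "model error": 86.0,
    "model structure uncertainty": 15.0,
    "parameter uncertainty": 7.0,
}

msep_total = sum(components.values())  # 108 days^2, the reported total
shares = {name: value / msep_total for name, value in components.items()}
# Model error dominates, at roughly 80% of the prediction error budget.
```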
Wallach, D., Thorburn, P., Asseng, S., Challinor, A. J., Ewert, F., Jones, J. W., et al. (2016). Estimating model prediction error: Should you treat predictions as fixed or random? Environmental Modelling & Software, 84, 529–539.
Abstract: Crop models are important tools for impact assessment of climate change, as well as for exploring management options under current climate. It is essential to evaluate the uncertainty associated with predictions of these models. We compare two criteria of prediction error: MSEP_fixed, which evaluates mean squared error of prediction for a model with fixed structure, parameters and inputs, and MSEP_uncertain(X), which evaluates mean squared error averaged over the distributions of model structure, inputs and parameters. Comparison of model outputs with data can be used to estimate the former. The latter has a squared bias term, which can be estimated using hindcasts, and a model variance term, which can be estimated from a simulation experiment. The separate contributions to MSEP_uncertain(X) can be estimated using a random effects ANOVA. It is argued that MSEP_uncertain(X) is the more informative uncertainty criterion, because it is specific to each prediction situation.
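The MSEP_uncertain(X) bookkeeping described in the abstract (squared bias from hindcasts plus model variance from a simulation experiment, with the variance partitioned by a random-effects ANOVA) can be sketched as follows. All names, ensemble sizes, and numbers here are illustrative assumptions, not the paper's data or code:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ensemble: 4 model structures x 50 parameter draws, each
# predicting days to maturity for one prediction situation X.
preds = rng.normal(loc=100.0, scale=3.0, size=(4, 50))

bias = 2.0  # would come from comparing hindcasts with observations

# Random-effects-style partition of the model variance (equal group sizes):
between_structures = preds.mean(axis=1).var(ddof=0)   # structure component
within_structures = preds.var(axis=1, ddof=0).mean()  # parameter component
model_variance = preds.var(ddof=0)                    # total = between + within

msep_uncertain = bias**2 + model_variance
```

With equal group sizes and population (ddof=0) variances, the between- and within-structure components sum exactly to the total model variance, which is the identity the random-effects decomposition relies on.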