Wallach, D., Thorburn, P., Asseng, S., Challinor, A. J., Ewert, F., Jones, J. W., et al. (2016). A framework for evaluating uncertainty in crop model predictions. Berlin (Germany).
Wallach, D., Thorburn, P., Asseng, S., Challinor, A. J., Ewert, F., Jones, J. W., et al. (2016). Estimating model prediction error: Should you treat predictions as fixed or random? Env. Model. Softw., 84, 529–539.
Abstract: Crop models are important tools for impact assessment of climate change, as well as for exploring management options under current climate. It is essential to evaluate the uncertainty associated with predictions of these models. We compare two criteria of prediction error: MSEP_fixed, which evaluates mean squared error of prediction for a model with fixed structure, parameters and inputs, and MSEP_uncertain(X), which evaluates mean squared error averaged over the distributions of model structure, inputs and parameters. Comparison of model outputs with data can be used to estimate the former. The latter has a squared bias term, which can be estimated using hindcasts, and a model variance term, which can be estimated from a simulation experiment. The separate contributions to MSEP_uncertain(X) can be estimated using a random-effects ANOVA. It is argued that MSEP_uncertain(X) is the more informative uncertainty criterion, because it is specific to each prediction situation.
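In generic notation (a sketch of what the abstract describes, not the paper's exact definitions: Y is the observed outcome and f_S the model with structure S, parameters theta and inputs X), the two criteria contrast roughly as:

    \[
    \mathrm{MSEP}_{\mathrm{fixed}} \;=\; \mathbb{E}\!\left[\left(Y - f_S(X;\theta)\right)^2\right],
    \qquad S,\ \theta,\ X \text{ held fixed,}
    \]
    \[
    \mathrm{MSEP}_{\mathrm{uncertain}}(X) \;=\;
    \mathbb{E}_{S,\theta,X}\!\left[\left(Y - f_S(X;\theta)\right)^2\right]
    \;=\; \underbrace{\text{squared bias}}_{\text{estimated from hindcasts}}
    \;+\; \underbrace{\text{model variance}}_{\text{estimated by simulation}}.
    \]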
Wallach, D., Thorburn, P., Asseng, S., Challinor, A. J., Ewert, F., Jones, J. W., et al. (2016). Overview paper on a comprehensive framework for assessment of error and uncertainty in crop model predictions (Vol. 8).
Abstract: Crop models are important tools for impact assessment of climate change, as well as for exploring management options under current climate. It is essential to evaluate the uncertainty associated with predictions of these models. Several ways of quantifying prediction uncertainty have been explored in the literature, but there have been no studies of how the different approaches are related to one another, and how they are related to some overall measure of prediction uncertainty. Here we show that all the different approaches can be related to two different viewpoints about the model: either the model is treated as a fixed predictor with some average error, or the model can be treated as a random variable with uncertainty in one or more of model structure, model inputs and model parameters. We discuss the differences, and show how mean squared error of prediction can be estimated in both cases. The results can be used to put uncertainty estimates into a more general framework and to relate different uncertainty estimates to one another and to overall prediction uncertainty. This should lead to a better understanding of crop model prediction uncertainty and the underlying causes of that uncertainty. This study was published as Wallach et al. (2016).
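As a concrete, purely hypothetical illustration of the two viewpoints, the minimal Python sketch below first treats a toy model as a fixed predictor scored against hindcast data, then treats it as random by averaging squared error over an assumed parameter distribution; the model, parameters and data are all invented for illustration and are not from the deliverable.

    import numpy as np

    rng = np.random.default_rng(0)

    def crop_model(x, theta):
        """Toy stand-in for a crop model: predicts yield from one input."""
        return theta[0] + theta[1] * x

    # Hypothetical hindcast data: inputs and observed yields.
    x_obs = rng.uniform(0, 10, size=50)
    y_obs = 1.0 + 2.0 * x_obs + rng.normal(0, 1.0, size=50)

    # Viewpoint 1: model as fixed predictor -> MSEP from hindcasts.
    theta_hat = np.array([1.1, 1.9])  # fixed calibrated parameters
    msep_fixed = np.mean((y_obs - crop_model(x_obs, theta_hat)) ** 2)

    # Viewpoint 2: model as random -> average squared error over an
    # (assumed Gaussian) parameter distribution, by Monte Carlo simulation.
    n_draws = 2000
    thetas = theta_hat + rng.normal(0, [0.2, 0.1], size=(n_draws, 2))
    preds = np.array([crop_model(x_obs, th) for th in thetas])  # (n_draws, n_obs)
    msep_uncertain = np.mean((y_obs[None, :] - preds) ** 2)

    # Decomposition: squared bias (hindcasts) + model variance (simulation).
    bias_sq = np.mean((y_obs - preds.mean(axis=0)) ** 2)
    model_var = np.mean(preds.var(axis=0))
    print(msep_fixed, msep_uncertain, bias_sq, model_var)

By construction, the averaged criterion splits exactly into the squared-bias and model-variance terms computed at the end, mirroring the decomposition described in the abstracts above.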
Wallach, D., & Rivington, M. (2013). Development of a common set of methods and protocols for assessing and communicating uncertainties (Vol. 2).
Abstract: This report sets out an outline approach to defining uncertainty and how it might be classified. It is not prescriptive; rather, it should be seen as a starting point from which further development can be made by consensus with CropM partners and across MACSUR Themes. We propose both a numerical quantification of uncertainty and a text-based classification scheme. The rationale is to establish the terms and definitions used in quantifying the impact of uncertainty on model estimates, and to provide a scheme for identifying the connectivity between types and sources of uncertainty. The aim is to establish a common set of terms, and a structure within which they operate, that can be used to guide work within CropM.
Wallach, D., & Rivington, M. (2014). A framework for assessing the uncertainty in crop model predictions (Vol. 3).
Abstract: It is of major importance in modeling to understand and quantify the uncertainty in model predictions, both in order to know how much confidence to have in those predictions and as a first step toward model improvement. Here we show that there are essentially three different approaches to evaluating uncertainty, and we explain the advantages and drawbacks of each. This is a necessary first step toward developing protocols for the evaluation of uncertainty, and so toward obtaining a clearer picture of the reliability of crop models.