Wallach, D., Nissanka, S. P., Karunaratne, A. S., Weerakoon, W. M. W., Thorburn, P. J., Boote, K. J., et al. (2016). Accounting for both parameter and model structure uncertainty in crop model predictions of phenology: A case study on rice. European Journal of Agronomy.
Abstract: We consider predictions of the impact of climate warming on rice development times in Sri Lanka. The major emphasis is on the uncertainty of the predictions, and in particular on the estimation of mean squared error of prediction. Three contributions to mean squared error are considered. The first is parameter uncertainty that results from model calibration. To take proper account of the complex data structure, generalized least squares is used to estimate the parameters and the variance-covariance matrix of the parameter estimators. The second contribution is model structure uncertainty, which we estimate using two different models. An ANOVA analysis is used to separate the contributions of parameter and model uncertainty to mean squared error. The third contribution is model error, which is estimated using hindcasts. Mean squared error of prediction of time from emergence to maturity, for baseline +2 °C, is estimated as 108 days², with model error contributing 86 days², followed by model structure uncertainty which contributes 15 days² and parameter uncertainty which contributes 7 days². We also show how prediction uncertainty is reduced if prediction concerns development time averaged over years, or the difference in development time between baseline and warmer temperatures.
Wallach, D., & Rivington, M. (2013). Development of a common set of methods and protocols for assessing and communicating uncertainties (Vol. 2).
Abstract: This report sets out an outline approach to creating definitions of uncertainty and how it might be classified. This is not a prescriptive approach; rather, it should be seen as a starting point from which further development can be made by consensus with CropM partners and across MACSUR Themes. We propose both a numerical quantification of uncertainty and a text-based classification scheme. The rationale is to be able both to establish the terms and definitions used in quantifying the impact of uncertainty on model estimates and to have a scheme that enables identification of connectivity between types and sources of uncertainty. The aim is to establish a common set of terms, and a structure within which they operate, that can be used to guide work within CropM.
Wallach, D., & Rivington, M. (2014). A framework for assessing the uncertainty in crop model predictions (Vol. 3).
Abstract: It is of major importance in modeling to understand and quantify the uncertainty in model predictions, both in order to know how much confidence to have in those predictions, and as a first step toward model improvement. Here we show that there are basically three different approaches to evaluating uncertainty, and we explain the advantages and drawbacks of each. This is a necessary first step toward developing protocols for evaluation of uncertainty and so obtaining a clearer picture of the reliability of crop models.
Rivington, M., & Wallach, D. (2015). Quantified Evidence of Error Propagation (Vol. 6).
Abstract: Error propagation within models is an issue that requires a structured approach involving the testing of individual equations and the evaluation of the consequences of errors arising from imperfect equations and model structure on the estimates of interest made by a model. This report briefly covers some of the key issues in error propagation and sets out several concepts, across a range of complexity, that may be used to organise an investigation into error propagation.
Rivington, M., & Wallach, D. (2015). Information to support input data quality and model improvement (Vol. 6).
Abstract: Data quality is a key factor in determining the quality of model estimates and hence a model's overall utility. Good models run with poor-quality explanatory variables and parameters will produce meaningless estimates. Many models are now well developed and have been shown to perform well where and when good-quality data are available. Hence a major limitation to further use of models in new locations and applications is now likely to be the availability of good-quality data. Improvements in data quality may be seen as the starting point of further model improvement, in that better data will itself lead to more accurate model estimates (i.e. through better calibration), and it will facilitate reduction of model residual error by enabling refinements to model equations. This report sets out why data quality is important, as well as the basis for additional investment in improving data quality.