Rivington, M., & Wallach, D. (2015). Quantified Evidence of Error Propagation (Vol. 6).
Abstract: Error propagation within models is an issue that requires a structured approach, involving the testing of individual equations and evaluation of the consequences of errors arising from imperfect equations and model structure on the estimates of interest made by a model. This report briefly covers some of the key issues in error propagation and sets out several concepts, across a range of complexity, that may be used to organise an investigation into error propagation.
|
Rivington, M., & Wallach, D. (2015). Information to support input data quality and model improvement (Vol. 6).
Abstract: Data quality is a key factor in determining the quality of model estimates and hence a model's overall utility. Good models run with poor-quality explanatory variables and parameters will produce meaningless estimates. Many models are now well developed and have been shown to perform well where and when good-quality data is available. Hence a major limitation to the further use of models in new locations and applications is now likely to be the availability of good-quality data. Improvements in data quality may be seen as the starting point of further model improvement, in that better data will itself lead to more accurate model estimates (i.e. through better calibration), and it will facilitate the reduction of model residual error by enabling refinements to model equations. This report sets out why data quality is important, as well as the basis for additional investment in improving data quality.
|
Wallach, D., & Rivington, M. (2014). A framework for assessing the uncertainty in crop model predictions (Vol. 3).
Abstract: It is of major importance in modeling to understand and quantify the uncertainty in model predictions, both in order to know how much confidence to have in those predictions, and as a first step toward model improvement. Here we show that there are basically three different approaches to evaluating uncertainty, and we explain the advantages and drawbacks of each. This is a necessary first step toward developing protocols for evaluation of uncertainty and so obtaining a clearer picture of the reliability of crop models.
|
Wallach, D., & Rivington, M. (2013). Development of a common set of methods and protocols for assessing and communicating uncertainties (Vol. 2).
Abstract: This report sets out an outline approach to creating definitions of uncertainty and how it might be classified. This is not a prescriptive approach; rather, it should be seen as a starting point from which further development can be made by consensus with CropM partners and across MACSUR Themes. We propose both a numerical quantification of uncertainty and a text-based classification scheme. The rationale is to establish the terms and definitions used in quantifying the impact of uncertainty on model estimates, and to have a scheme that enables identification of connectivity between types and sources of uncertainty. The aim is to establish a common set of terms, and a structure within which they operate, that can be used to guide work within CropM.
|
Ewert, F., van Bussel, L. G. J., Zhao, G., Hoffmann, H., Gaiser, T., Specka, X., et al. (2015). Uncertainties in Scaling up Crop Models for Large Area Climate-change Impact Assessments. In C. Rosenzweig & D. Hillel (Eds.), Handbook of Climate Change and Agroecosystems: The Agricultural Model Intercomparison and Improvement Project (AgMIP) Integrated Crop and Economic Assessments — Joint Publication with American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America (In 2 Parts) (pp. 261–277). ICP Series on Climate Change Impacts, Adaptation. London: Imperial College Press.
|