Wang, E., Martre, P., Zhao, Z., Ewert, F., Maiorano, A., Rötter, R. P., et al. (2017). The uncertainty of crop yield projections is reduced by improved temperature response functions. Nature Plants, 3, 17102.
Abstract: Increasing the accuracy of crop productivity estimates is a key element in planning adaptation strategies to ensure global food security under climate change. Process-based crop models are effective means to project climate impact on crop yield, but have large uncertainty in yield simulations. Here, we show that variations in the mathematical functions currently used to simulate temperature responses of physiological processes in 29 wheat models account for >50% of uncertainty in simulated grain yields for mean growing season temperatures from 14 °C to 33 °C. We derived a set of new temperature response functions that when substituted in four wheat models reduced the error in grain yield simulations across seven global sites with different temperature regimes by 19% to 50% (42% average). We anticipate the improved temperature responses to be a key step to improve modelling of crops under rising temperature and climate change, leading to higher skill of crop yield projections. Erratum: doi: 10.1038/nplants.2017.125
|
Wallach, D., Thorburn, P., Asseng, S., Challinor, A. J., Ewert, F., Jones, J. W., et al. (2016). A framework for evaluating uncertainty in crop model predictions. Berlin (Germany).
|
Wallach, D., Thorburn, P., Asseng, S., Challinor, A. J., Ewert, F., Jones, J. W., et al. (2016). Estimating model prediction error: Should you treat predictions as fixed or random? Environmental Modelling & Software, 84, 529–539.
Abstract: Crop models are important tools for impact assessment of climate change, as well as for exploring management options under current climate. It is essential to evaluate the uncertainty associated with predictions of these models. We compare two criteria of prediction error: MSEPfixed, which evaluates mean squared error of prediction for a model with fixed structure, parameters and inputs, and MSEPuncertain(X), which evaluates mean squared error averaged over the distributions of model structure, inputs and parameters. Comparison of model outputs with data can be used to estimate the former. The latter has a squared bias term, which can be estimated using hindcasts, and a model variance term, which can be estimated from a simulation experiment. The separate contributions to MSEPuncertain(X) can be estimated using a random effects ANOVA. It is argued that MSEPuncertain(X) is the more informative uncertainty criterion, because it is specific to each prediction situation.
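The bias/variance split described in this abstract can be illustrated with a toy decomposition. This is a minimal sketch, not the authors' code or method; all yield values below are made up, and the ensemble of five predictions simply stands in for predictions drawn from the uncertainty distribution of model structure, inputs and parameters.

```python
# Toy illustration of decomposing mean squared error of prediction (MSEP)
# into a squared bias term and a model variance term.
# All numbers are hypothetical.
from statistics import fmean

# Hypothetical yields (t/ha) predicted by 5 model variants at one
# prediction situation, plus the observed yield.
preds = [6.1, 5.4, 6.8, 5.9, 6.3]
obs = 6.0

# MSEP treating the model as random: squared error averaged over the ensemble
msep_uncertain = fmean((obs - p) ** 2 for p in preds)

# Decomposition: squared bias of the ensemble mean + variance across models
mean_pred = fmean(preds)
squared_bias = (obs - mean_pred) ** 2
model_variance = fmean((p - mean_pred) ** 2 for p in preds)

# The identity MSEP = squared bias + model variance holds exactly
assert abs(msep_uncertain - (squared_bias + model_variance)) < 1e-12
```

In this toy case the squared bias (0.01) is small relative to the model variance (0.212), so most of the prediction uncertainty here comes from spread across the ensemble rather than systematic bias.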
|
Wallach, D., Thorburn, P., Asseng, S., Challinor, A. J., Ewert, F., Jones, J. W., et al. (2016). Overview paper on comprehensive framework for assessment of error and uncertainty in crop model predictions (Vol. 8).
Abstract: Crop models are important tools for impact assessment of climate change, as well as for exploring management options under current climate. It is essential to evaluate the uncertainty associated with predictions of these models. Several ways of quantifying prediction uncertainty have been explored in the literature, but there have been no studies of how the different approaches are related to one another, and how they are related to some overall measure of prediction uncertainty. Here we show that all the different approaches can be related to two different viewpoints about the model: either the model is treated as a fixed predictor with some average error, or the model can be treated as a random variable with uncertainty in one or more of model structure, model inputs and model parameters. We discuss the differences, and show how mean squared error of prediction can be estimated in both cases. The results can be used to put uncertainty estimates into a more general framework and to relate different uncertainty estimates to one another and to overall prediction uncertainty. This should lead to a better understanding of crop model prediction uncertainty and the underlying causes of that uncertainty. This study was published as (Wallach et al. 2016).
|
Wallach, D., Mearns, L. O., Ruane, A. C., Rötter, R. P., & Asseng, S. (2016). Lessons from climate modeling on the design and use of ensembles for crop modeling. Climatic Change.
Abstract: Working with ensembles of crop models is a recent but important development in crop modeling which promises to lead to better uncertainty estimates for model projections and predictions, better predictions using the ensemble mean or median, and closer collaboration within the modeling community. There are numerous open questions about the best way to create and analyze such ensembles. Much can be learned from the field of climate modeling, given its much longer experience with ensembles. We draw on that experience to identify questions and make propositions that should help make ensemble modeling with crop models more rigorous and informative. The propositions include defining criteria for acceptance of models in a crop multi-model ensemble (MME); exploring criteria for evaluating the degree of relatedness of models in an MME; studying the effect of the number of models in the ensemble; development of a statistical model of model sampling; creation of a repository for MME results; studies of possible differential weighting of models in an ensemble; creation of single-model ensembles, based on sampling from the uncertainty distribution of parameter values or inputs, specifically oriented toward uncertainty estimation; the creation of super ensembles that sample more than one source of uncertainty; the analysis of super ensemble results to obtain information on total uncertainty and the separate contributions of different sources of uncertainty; and, finally, further investigation of the use of the multi-model mean or median as a predictor.
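The last proposition in this abstract, using the multi-model median as a predictor, can be sketched with a toy comparison. This is an illustration under invented data, not an analysis from the paper: three hypothetical models predict yields at four sites, and the per-site median of their predictions is evaluated against each individual model.

```python
# Toy illustration of a multi-model ensemble median as a predictor.
# All yield values (t/ha) are hypothetical.
from statistics import fmean, median

obs = [5.0, 6.2, 4.1, 7.3]          # observed yields at 4 sites
model_preds = [
    [5.6, 6.9, 4.9, 7.9],           # model A: biased high
    [4.2, 5.5, 3.4, 6.6],           # model B: biased low
    [5.1, 6.0, 4.3, 7.1],           # model C: close to observations
]

def rmse(pred, observed):
    """Root mean squared error across sites."""
    return fmean((p - o) ** 2 for p, o in zip(pred, observed)) ** 0.5

# Ensemble median prediction at each site
ens = [median(site_preds) for site_preds in zip(*model_preds)]

errors = [rmse(p, obs) for p in model_preds]
ens_error = rmse(ens, obs)
# Here the median matches the best individual model, since the other two
# models' opposite biases are filtered out site by site.
assert ens_error <= min(errors) + 1e-12
```

The design point the toy makes is that the median is robust to individual outlier models: a model with a large bias in one direction cannot pull the ensemble prediction far, which is one reason the multi-model mean or median is attractive as a predictor.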
|