Nendel, C., Ewert, F., Rötter, R. P., Rosenzweig, C., Jones, J. W., Hatfield, J. L., et al. (2013). Addressing challenges and uncertainties for the use of agro-ecosystem models to assess climate change impact and food security across scales.
|
Asseng, S., Ewert, F., Rosenzweig, C., Jones, J. W., Hatfield, J. L., Ruane, A. C., et al. (2013). Uncertainty in simulating wheat yields under climate change. Nat. Clim. Change, 3(9), 827–832.
Abstract: Projections of climate change impacts on crop yields are inherently uncertain(1). Uncertainty is often quantified when projecting future greenhouse gas emissions and their influence on climate(2). However, multi-model uncertainty analysis of crop responses to climate change is rare because systematic and objective comparisons among process-based crop simulation models(1,3) are difficult(4). Here we present the largest standardized model intercomparison for climate change impacts so far. We found that individual crop models are able to simulate measured wheat grain yields accurately under a range of environments, particularly if the input information is sufficient. However, simulated climate change impacts vary across models owing to differences in model structures and parameter values. A greater proportion of the uncertainty in climate change impact projections was due to variations among crop models than to variations among downscaled general circulation models. Uncertainties in simulated impacts increased with CO2 concentrations and associated warming. These impact uncertainties can be reduced by improving temperature and CO2 relationships in models and better quantified through use of multi-model ensembles. Less uncertainty in describing how climate change may affect agricultural productivity will aid adaptation strategy development and policymaking.
|
Rötter, R. P., Asseng, S., Ewert, F., Rosenzweig, C., Jones, J. W., Hatfield, J. L., et al. (2013). Quantifying uncertainties in modeling crop water use under climate change.
|
Wallach, D., Thorburn, P., Asseng, S., Challinor, A. J., Ewert, F., Jones, J. W., et al. (2016). Overview paper on comprehensive framework for assessment of error and uncertainty in crop model predictions (Vol. 8).
Abstract: Crop models are important tools for impact assessment of climate change, as well as for exploring management options under current climate. It is essential to evaluate the uncertainty associated with predictions of these models. Several ways of quantifying prediction uncertainty have been explored in the literature, but there have been no studies of how the different approaches are related to one another, or how they are related to some overall measure of prediction uncertainty. Here we show that all the different approaches can be related to two different viewpoints about the model: either the model is treated as a fixed predictor with some average error, or the model is treated as a random variable with uncertainty in one or more of model structure, model inputs, and model parameters. We discuss the differences, and show how mean squared error of prediction can be estimated in both cases. The results can be used to put uncertainty estimates into a more general framework and to relate different uncertainty estimates to one another and to overall prediction uncertainty. This should lead to a better understanding of crop model prediction uncertainty and the underlying causes of that uncertainty. This study was published as Wallach et al. (2016).
|
Wallach, D., Thorburn, P., Asseng, S., Challinor, A. J., Ewert, F., Jones, J. W., et al. (2016). Estimating model prediction error: Should you treat predictions as fixed or random? Environ. Model. Softw., 84, 529–539.
Abstract: Crop models are important tools for impact assessment of climate change, as well as for exploring management options under current climate. It is essential to evaluate the uncertainty associated with predictions of these models. We compare two criteria of prediction error; MSEPfixed, which evaluates mean squared error of prediction for a model with fixed structure, parameters and inputs, and MSEPuncertain(X), which evaluates mean squared error averaged over the distributions of model structure, inputs and parameters. Comparison of model outputs with data can be used to estimate the former. The latter has a squared bias term, which can be estimated using hindcasts, and a model variance term, which can be estimated from a simulation experiment. The separate contributions to MSEPuncertain(X) can be estimated using a random effects ANOVA. It is argued that MSEPuncertain(X) is the more informative uncertainty criterion, because it is specific to each prediction situation.
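The decomposition described in this abstract (MSEPfixed as plain mean squared error for one fixed model, and MSEPuncertain(X) as a squared-bias term plus a model-variance term averaged over an ensemble of model variants) can be sketched numerically. The sketch below uses a simulated multi-model ensemble; all data, ensemble sizes, and error magnitudes are illustrative assumptions, not the paper's data or code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: observed yields at N sites and predictions from an
# ensemble of M model variants (differing in structure and parameters).
N, M = 20, 8
obs = rng.normal(6.0, 1.0, size=N)  # observed yields, e.g. t/ha (illustrative)

# Each variant = truth + shared bias + variant-level offset + residual noise
# (all magnitudes are assumptions for illustration only).
shared_bias = 0.4
preds = (obs[None, :]
         + shared_bias
         + rng.normal(0.0, 0.5, size=(M, 1))   # between-model variation
         + rng.normal(0.0, 0.3, size=(M, N)))  # residual error

# MSEP_fixed: mean squared error for one fixed model (variant 0),
# estimable directly from a comparison of model output with data.
msep_fixed = np.mean((preds[0] - obs) ** 2)

# MSEP_uncertain(X): squared bias of the ensemble mean (estimable from
# hindcasts) plus the variance across model variants (estimable from a
# simulation experiment, e.g. via a random effects ANOVA).
ens_mean = preds.mean(axis=0)
sq_bias = np.mean((ens_mean - obs) ** 2)
model_var = preds.var(axis=0, ddof=1).mean()
msep_uncertain = sq_bias + model_var

print(f"MSEP_fixed (variant 0): {msep_fixed:.3f}")
print(f"MSEP_uncertain(X) = bias^2 + var = {sq_bias:.3f} + {model_var:.3f}"
      f" = {msep_uncertain:.3f}")
```

This mirrors the abstract's point that MSEPuncertain(X) separates the contribution of systematic error from the spread induced by uncertainty in model structure and parameters; a full analysis would replace the simulated ensemble with real hindcasts and a random effects ANOVA over the variance components.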
|