Wallach, D., Mearns, L. O., Ruane, A. C., Rötter, R. P., & Asseng, S. (2016). Lessons from climate modeling on the design and use of ensembles for crop modeling. Clim. Change, 139(3-4), 551–564.
Abstract: Working with ensembles of crop models is a recent but important development in crop modeling which promises to lead to better uncertainty estimates for model projections and predictions, better predictions using the ensemble mean or median, and closer collaboration within the modeling community. There are numerous open questions about the best way to create and analyze such ensembles. Much can be learned from the field of climate modeling, given its much longer experience with ensembles. We draw on that experience to identify questions and make propositions that should help make ensemble modeling with crop models more rigorous and informative. The propositions include: defining criteria for acceptance of models into a crop multi-model ensemble (MME); exploring criteria for evaluating the degree of relatedness of models in an MME; studying the effect of the number of models in the ensemble; developing a statistical model of model sampling; creating a repository for MME results; studying possible differential weighting of models in an ensemble; creating single-model ensembles, based on sampling from the uncertainty distribution of parameter values or inputs, specifically oriented toward uncertainty estimation; creating super-ensembles that sample more than one source of uncertainty; analyzing super-ensemble results to obtain information on total uncertainty and the separate contributions of different sources of uncertainty; and, finally, further investigating the use of the multi-model mean or median as a predictor.
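As a toy illustration of the last proposition (using the multi-model mean or median as a predictor), the following sketch compares per-model error against ensemble-mean and ensemble-median error. All data, sizes, and names here are synthetic and invented for illustration; no real crop-model output is involved.

```python
import numpy as np

rng = np.random.default_rng(0)
n_models, n_sites = 10, 25

truth = rng.normal(6.0, 1.0, size=n_sites)             # "observed" yields (t/ha)
bias = rng.normal(0.0, 0.5, size=(n_models, 1))        # per-model systematic bias
noise = rng.normal(0.0, 0.8, size=(n_models, n_sites)) # per-model random error
preds = truth + bias + noise                           # one row per model

rmse = lambda p: np.sqrt(np.mean((p - truth) ** 2))
rmse_models = np.array([rmse(p) for p in preds])       # each model alone
rmse_mean = rmse(preds.mean(axis=0))                   # ensemble mean predictor
rmse_median = rmse(np.median(preds, axis=0))           # ensemble median predictor

print(f"mean per-model RMSE : {rmse_models.mean():.2f}")
print(f"ensemble-mean RMSE  : {rmse_mean:.2f}")
print(f"ensemble-median RMSE: {rmse_median:.2f}")
```

With independent per-model errors, averaging cancels part of the random component, which is why the ensemble mean and median typically beat the average individual model in this toy setup.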
|
Ruane, A. C., Hudson, N. I., Asseng, S., Cammarano, D., Ewert, F., Martre, P., et al. (2016). Multi-wheat-model ensemble responses to interannual climate variability. Env. Model. Softw., 81, 86–101.
Abstract: We compare 27 wheat models’ yield responses to interannual climate variability, analyzed at locations in Argentina, Australia, India, and The Netherlands as part of the Agricultural Model Intercomparison and Improvement Project (AgMIP) Wheat Pilot. Each model simulated 1981–2010 grain yield, and we evaluate results against the interannual variability of growing season temperature, precipitation, and solar radiation. The amount of information used for calibration has only a minor effect on most models’ climate response, and even small multi-model ensembles prove beneficial. Wheat model clusters reveal common characteristics of yield response to climate; however, models rarely share the same cluster at all four sites, indicating substantial independence. Only a weak relationship (R² ≤ 0.24) was found between the models’ sensitivities to interannual temperature variability and their response to long-term warming, suggesting that additional processes differentiate climate change impacts from observed climate variability analogs and motivating continuing analysis and model development efforts.
|
Wallach, D., Thorburn, P., Asseng, S., Challinor, A. J., Ewert, F., Jones, J. W., et al. (2016). Estimating model prediction error: Should you treat predictions as fixed or random? Env. Model. Softw., 84, 529–539.
Abstract: Crop models are important tools for impact assessment of climate change, as well as for exploring management options under current climate. It is essential to evaluate the uncertainty associated with predictions of these models. We compare two criteria of prediction error: MSEPfixed, which evaluates the mean squared error of prediction for a model with fixed structure, parameters, and inputs; and MSEPuncertain(X), which evaluates the mean squared error averaged over the distributions of model structure, inputs, and parameters. Comparison of model outputs with data can be used to estimate the former. The latter has a squared bias term, which can be estimated using hindcasts, and a model variance term, which can be estimated from a simulation experiment. The separate contributions to MSEPuncertain(X) can be estimated using a random-effects ANOVA. It is argued that MSEPuncertain(X) is the more informative uncertainty criterion, because it is specific to each prediction situation.
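The bias²-plus-variance decomposition described in the abstract can be sketched numerically. The example below uses synthetic data and a plain moment-based decomposition in place of the paper's random-effects ANOVA; the variable names (msep_fixed, sq_bias, model_var) are illustrative assumptions, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(1)
n_variants, n_cases = 8, 40

obs = rng.normal(5.0, 1.0, size=n_cases)              # synthetic observations
# Ensemble of model variants: shared structural bias plus variant-specific scatter
shared_bias = 0.4
variant_offset = rng.normal(0.0, 0.3, size=(n_variants, 1))
sims = obs + shared_bias + variant_offset + rng.normal(0.0, 0.5, (n_variants, n_cases))

# MSEP for one fixed model (variant 0) evaluated against observations
msep_fixed = np.mean((sims[0] - obs) ** 2)

# Decomposition for the uncertain-model criterion:
# squared bias of the ensemble mean (hindcast-style term)
# plus variance of simulations across variants (simulation-experiment-style term)
ens_mean = sims.mean(axis=0)
sq_bias = np.mean((ens_mean - obs) ** 2)
model_var = sims.var(axis=0, ddof=1).mean()
msep_uncertain = sq_bias + model_var

print(f"MSEP (fixed model): {msep_fixed:.2f}")
print(f"squared bias      : {sq_bias:.2f}")
print(f"model variance    : {model_var:.2f}")
print(f"MSEP (uncertain)  : {msep_uncertain:.2f}")
```

The point of the sketch is the additivity: the uncertain-model error splits into a bias term shared by the ensemble and a variance term measuring spread across model variants.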
|
Ewert, F., Rötter, R. P., Bindi, M., Webber, H., Trnka, M., Kersebaum, K. C., et al. (2015). Crop modelling for integrated assessment of risk to food production from climate change. Env. Model. Softw., 72, 287–303.
Abstract: The complexity of risks posed by climate change and possible adaptations for crop production has called for integrated assessment and modelling (IAM) approaches linking biophysical and economic models. This paper provides an overview of the present state of crop modelling for assessing climate change risks to food production and of the extent to which crop models comply with IAM demands. Considerable progress has been made in modelling effects of climate variables, where crop models best satisfy IAM demands. Demands are partly satisfied for simulating commonly required assessment variables. However, progress on the number of simulated crops, uncertainty propagation related to model parameters and structure, adaptations, and scaling is less advanced and lags behind IAM demands. The limitations are considered substantial and apply to a different extent to all crop models. Overcoming them will require joint efforts and consideration of novel modelling approaches.
|
Sanna, M., Bellocchi, G., Fumagalli, M., & Acutis, M. (2015). A new method for analysing the interrelationship between performance indicators with an application to agrometeorological models. Env. Model. Softw., 73, 286–304.
Abstract: The use of a variety of metrics is advocated to assess model performance but correlated metrics may convey the same information, thus leading to redundancy. Starting from this assumption, a method was developed for selecting, from among a collection of performance indicators, one or more subsets providing the same information as the entire set. The method, based on the definition of “stable correlation”, was applied to 23 performance indicators of agrometeorological models, calculated on large sets of simulated and observed data of four agronomic and meteorological variables: above-ground biomass, leaf area index, hourly air relative humidity and daily solar radiation. Two subsets were determined: {Squared Bias, Root Mean Squared Relative Error, Coefficient of Determination, Pattern Index, Modified Modelling Efficiency}, {Persistence Model Efficiency, Root Mean Squared Relative Error, Coefficient of Determination, Pattern Index}. The method needs corroboration but is statistically founded and can support the implementation of standardized evaluation tools.
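The redundancy idea behind the method can be illustrated with a simple greedy correlation filter: keep an indicator only if it is not strongly correlated with any indicator already kept. This is a hedged sketch on synthetic indicator values, not the paper's "stable correlation" procedure; the indicator names and the 0.9 threshold are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
n_runs = 200

# Synthetic indicator values over many model runs; "rmse" and "mse" are
# deliberately redundant, while "bias" and "r2" carry independent information.
base = rng.normal(size=n_runs)
indicators = {
    "rmse": np.abs(base) + rng.normal(0, 0.05, n_runs),
    "mse": (np.abs(base) + rng.normal(0, 0.05, n_runs)) ** 2,
    "bias": rng.normal(size=n_runs),
    "r2": rng.uniform(0, 1, n_runs),
}

def select_subset(ind, threshold=0.9):
    """Greedily keep indicators, dropping any whose absolute correlation
    with an already-kept indicator exceeds the threshold."""
    kept = []
    for name, vals in ind.items():
        if all(abs(np.corrcoef(vals, ind[k])[0, 1]) < threshold for k in kept):
            kept.append(name)
    return kept

print(select_subset(indicators))
```

The greedy filter is order-dependent (it favors whichever redundant indicator appears first), which is one reason a statistically founded criterion such as the paper's stable correlation is preferable in practice.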
|