Challenges in Combining Projections from Multiple Climate Models. Knutti, R., Furrer, R., Tebaldi, C., Cermak, J., & Meehl, G. A. Journal of Climate, 23(10):2739–2758, May 2010. doi: 10.1175/2009JCLI3361.1

Abstract: Recent coordinated efforts, in which numerous general circulation climate models have been run for a common set of experiments, have produced large datasets of projections of future climate for various scenarios. Those multimodel ensembles sample initial conditions, parameters, and structural uncertainties in the model design, and they have prompted a variety of approaches to quantifying uncertainty in future climate change. International climate change assessments also rely heavily on these models. These assessments often provide equal-weighted averages as best-guess results, assuming that individual model biases will at least partly cancel and that a model average prediction is more likely to be correct than a prediction from a single model based on the result that a multimodel average of present-day climate generally outperforms any individual model. This study outlines the motivation for using multimodel ensembles and discusses various challenges in interpreting them. Among these challenges are that the number of models in these ensembles is usually small, their distribution in the model or parameter space is unclear, and that extreme behavior is often not sampled. Model skill in simulating present-day climate conditions is shown to relate only weakly to the magnitude of predicted change. It is thus unclear by how much the confidence in future projections should increase based on improvements in simulating present-day conditions, a reduction of intermodel spread, or a larger number of models. Averaging model output may further lead to a loss of signal—for example, for precipitation change where the predicted changes are spatially heterogeneous, such that the true expected change is very likely to be larger than suggested by a model average. Last, there is little agreement on metrics to separate “good” and “bad” models, and there is concern that model development, evaluation, and posterior weighting or ranking are all using the same datasets. While the multimodel average appears to still be useful in some situations, these results show that more quantitative methods to evaluate model performance are critical to maximize the value of climate change projections from global models.
@article{knutti_challenges_2010,
title = {Challenges in {Combining} {Projections} from {Multiple} {Climate} {Models}},
volume = {23},
issn = {0894-8755, 1520-0442},
url = {http://journals.ametsoc.org/doi/abs/10.1175/2009JCLI3361.1},
doi = {10.1175/2009JCLI3361.1},
abstract = {Recent coordinated efforts, in which numerous general circulation climate models have been run for a common set of experiments, have produced large datasets of projections of future climate for various scenarios. Those multimodel ensembles sample initial conditions, parameters, and structural uncertainties in the model design, and they have prompted a variety of approaches to quantifying uncertainty in future climate change. International climate change assessments also rely heavily on these models. These assessments often provide equal-weighted averages as best-guess results, assuming that individual model biases will at least partly cancel and that a model average prediction is more likely to be correct than a prediction from a single model based on the result that a multimodel average of present-day climate generally outperforms any individual model. This study outlines the motivation for using multimodel ensembles and discusses various challenges in interpreting them. Among these challenges are that the number of models in these ensembles is usually small, their distribution in the model or parameter space is unclear, and that extreme behavior is often not sampled. Model skill in simulating present-day climate conditions is shown to relate only weakly to the magnitude of predicted change. It is thus unclear by how much the confidence in future projections should increase based on improvements in simulating present-day conditions, a reduction of intermodel spread, or a larger number of models. Averaging model output may further lead to a loss of signal—for example, for precipitation change where the predicted changes are spatially heterogeneous, such that the true expected change is very likely to be larger than suggested by a model average. Last, there is little agreement on metrics to separate “good” and “bad” models, and there is concern that model development, evaluation, and posterior weighting or ranking are all using the same datasets. While the multimodel average appears to still be useful in some situations, these results show that more quantitative methods to evaluate model performance are critical to maximize the value of climate change projections from global models.},
language = {en},
number = {10},
urldate = {2017-07-20},
journal = {Journal of Climate},
author = {Knutti, Reto and Furrer, Reinhard and Tebaldi, Claudia and Cermak, Jan and Meehl, Gerald A.},
month = may,
year = {2010},
keywords = {GA, Untagged},
pages = {2739--2758},
}
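Annotation: the abstract's point about averaging washing out spatially heterogeneous change can be illustrated with a small synthetic sketch (not from the paper; all numbers below are made up for demonstration). Each toy "model" projects a change of similar magnitude but with a shifted spatial pattern, so local changes of opposite sign cancel in the equal-weighted multimodel mean.

```python
# Illustrative sketch only: synthetic data, not an analysis from Knutti et al. (2010).
import numpy as np

rng = np.random.default_rng(0)
n_models, n_gridpoints = 20, 500

# Each synthetic "model" projects a change whose spatial pattern is shifted
# relative to the others, so local changes of opposite sign partially cancel.
x = np.linspace(0.0, 2.0 * np.pi, n_gridpoints)
phase = rng.uniform(0.0, 2.0 * np.pi, size=n_models)     # pattern shift per model
amplitude = rng.uniform(0.8, 1.2, size=n_models)          # per-model change magnitude
projections = amplitude[:, None] * np.sin(x[None, :] + phase[:, None])

# Equal-weighted multimodel mean at each grid point (the common "best guess").
ensemble_mean = projections.mean(axis=0)

# Typical magnitude of change in an individual model vs. in the model average.
typical_model_magnitude = np.abs(projections).mean()
ensemble_mean_magnitude = np.abs(ensemble_mean).mean()

print(f"mean |change| in a single model:    {typical_model_magnitude:.2f}")
print(f"mean |change| of the model average: {ensemble_mean_magnitude:.2f}")
# The second number is much smaller: averaging patterns that disagree on where
# (or in which direction) the change occurs washes out the signal, so the
# expected local change is larger than the multimodel mean suggests.
```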
{"_id":"bRA9FahNhxBApwA9m","bibbaseid":"knutti-furrer-tebaldi-cermak-meehl-challengesincombiningprojectionsfrommultipleclimatemodels-2010","author_short":["Knutti, R.","Furrer, R.","Tebaldi, C.","Cermak, J.","Meehl, G. A."],"bibdata":{"bibtype":"article","type":"article","title":"Challenges in Combining Projections from Multiple Climate Models","volume":"23","issn":"0894-8755, 1520-0442","url":"http://journals.ametsoc.org/doi/abs/10.1175/2009JCLI3361.1","doi":"10.1175/2009JCLI3361.1","abstract":"Recent coordinated efforts, in which numerous general circulation climate models have been run for a common set of experiments, have produced large datasets of projections of future climate for various scenarios. Those multimodel ensembles sample initial conditions, parameters, and structural uncertainties in the model design, and they have prompted a variety of approaches to quantifying uncertainty in future climate change. International climate change assessments also rely heavily on these models. These assessments often provide equal-weighted averages as best-guess results, assuming that individual model biases will at least partly cancel and that a model average prediction is more likely to be correct than a prediction from a single model based on the result that a multimodel average of present-day climate generally outperforms any individual model. This study outlines the motivation for using multimodel ensembles and discusses various challenges in interpreting them. Among these challenges are that the number of models in these ensembles is usually small, their distribution in the model or parameter space is unclear, and that extreme behavior is often not sampled. Model skill in simulating present-day climate conditions is shown to relate only weakly to the magnitude of predicted change. It is thus unclear by how much the confidence in future projections should increase based on improvements in simulating present-day conditions, a reduction of intermodel spread, or a larger number of models. Averaging model output may further lead to a loss of signal—for example, for precipitation change where the predicted changes are spatially heterogeneous, such that the true expected change is very likely to be larger than suggested by a model average. Last, there is little agreement on metrics to separate “good” and “bad” models, and there is concern that model development, evaluation, and posterior weighting or ranking are all using the same datasets. 
While the multimodel average appears to still be useful in some situations, these results show that more quantitative methods to evaluate model performance are critical to maximize the value of climate change projections from global models.","language":"en","number":"10","urldate":"2017-07-20","journal":"Journal of Climate","author":[{"propositions":[],"lastnames":["Knutti"],"firstnames":["Reto"],"suffixes":[]},{"propositions":[],"lastnames":["Furrer"],"firstnames":["Reinhard"],"suffixes":[]},{"propositions":[],"lastnames":["Tebaldi"],"firstnames":["Claudia"],"suffixes":[]},{"propositions":[],"lastnames":["Cermak"],"firstnames":["Jan"],"suffixes":[]},{"propositions":[],"lastnames":["Meehl"],"firstnames":["Gerald","A."],"suffixes":[]}],"month":"May","year":"2010","keywords":"GA, Untagged","pages":"2739–2758","bibtex":"@article{knutti_challenges_2010,\n\ttitle = {Challenges in {Combining} {Projections} from {Multiple} {Climate} {Models}},\n\tvolume = {23},\n\tissn = {0894-8755, 1520-0442},\n\turl = {http://journals.ametsoc.org/doi/abs/10.1175/2009JCLI3361.1},\n\tdoi = {10.1175/2009JCLI3361.1},\n\tabstract = {Recent coordinated efforts, in which numerous general circulation climate models have been run for a common set of experiments, have produced large datasets of projections of future climate for various scenarios. Those multimodel ensembles sample initial conditions, parameters, and structural uncertainties in the model design, and they have prompted a variety of approaches to quantifying uncertainty in future climate change. International climate change assessments also rely heavily on these models. These assessments often provide equal-weighted averages as best-guess results, assuming that individual model biases will at least partly cancel and that a model average prediction is more likely to be correct than a prediction from a single model based on the result that a multimodel average of present-day climate generally outperforms any individual model. This study outlines the motivation for using multimodel ensembles and discusses various challenges in interpreting them. Among these challenges are that the number of models in these ensembles is usually small, their distribution in the model or parameter space is unclear, and that extreme behavior is often not sampled. Model skill in simulating present-day climate conditions is shown to relate only weakly to the magnitude of predicted change. It is thus unclear by how much the confidence in future projections should increase based on improvements in simulating present-day conditions, a reduction of intermodel spread, or a larger number of models. Averaging model output may further lead to a loss of signal—for example, for precipitation change where the predicted changes are spatially heterogeneous, such that the true expected change is very likely to be larger than suggested by a model average. Last, there is little agreement on metrics to separate “good” and “bad” models, and there is concern that model development, evaluation, and posterior weighting or ranking are all using the same datasets. 
While the multimodel average appears to still be useful in some situations, these results show that more quantitative methods to evaluate model performance are critical to maximize the value of climate change projections from global models.},\n\tlanguage = {en},\n\tnumber = {10},\n\turldate = {2017-07-20},\n\tjournal = {Journal of Climate},\n\tauthor = {Knutti, Reto and Furrer, Reinhard and Tebaldi, Claudia and Cermak, Jan and Meehl, Gerald A.},\n\tmonth = may,\n\tyear = {2010},\n\tkeywords = {GA, Untagged},\n\tpages = {2739--2758},\n}\n\n\n\n","author_short":["Knutti, R.","Furrer, R.","Tebaldi, C.","Cermak, J.","Meehl, G. A."],"key":"knutti_challenges_2010","id":"knutti_challenges_2010","bibbaseid":"knutti-furrer-tebaldi-cermak-meehl-challengesincombiningprojectionsfrommultipleclimatemodels-2010","role":"author","urls":{"Paper":"http://journals.ametsoc.org/doi/abs/10.1175/2009JCLI3361.1"},"keyword":["GA","Untagged"],"metadata":{"authorlinks":{}},"html":""},"bibtype":"article","biburl":"http://bibbase.org/zotero-group/ajello/2099979","dataSources":["EndJaSpcpPJCQnJDH"],"keywords":["ga","untagged"],"search_terms":["challenges","combining","projections","multiple","climate","models","knutti","furrer","tebaldi","cermak","meehl"],"title":"Challenges in Combining Projections from Multiple Climate Models","year":2010}