Meeting
2016 SIPN Workshop
Presentation Type
plenary
Presentation Theme
Predictions and Dynamical Predictive Systems II
Abstract Authors

François Massonnet, Barcelona Supercomputing Center, francois.massonnet [at] bsc.es
Omar Bellprat, Barcelona Supercomputing Center, omar.bellprat [at] bsc.es
Virginie Guemas, Barcelona Supercomputing Center, virginie.guemas [at] bsc.es
Francisco Doblas-Reyes, Barcelona Supercomputing Center, francisco.doblas-reyes [at] bsc.es

Abstract

Seasonal forecast quality assessment relies on two sources of information: a forecast (produced by a model) and a reference used for verification. Various forecasts are frequently evaluated against a common reference, for example to detect the effect of model improvements on prediction skill. The emergence of multiple observational datasets allows the dual question to be asked: how does the skill of an individual forecast depend on the quality of the underlying reference dataset? We address this question for the seasonal prediction of summer Arctic sea ice extent over 1993-2008, using six different observational products. We find that skill scores, measured as the sample correlation between forecast and reference, can vary substantially (by up to 0.1) from product to product for the same forecast. The reasons behind these variations, which are sometimes as large as the variations seen when upgrading the model itself, are discussed.
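The verification approach described above can be sketched as follows. This is a minimal illustration with entirely synthetic data: the series, the number of verification years, and the product names are stand-ins, not the actual 1993-2008 sea-ice extents or the six observational products used in the study.

```python
import numpy as np

# Synthetic stand-in data: one forecast of summer Arctic sea-ice extent,
# verified against several hypothetical observational products.
rng = np.random.default_rng(0)

years = np.arange(1993, 2009)             # 16 verification years
truth = 7.5 - 0.1 * (years - 1993)        # declining "true" extent (10^6 km^2)
forecast = truth + rng.normal(0.0, 0.3, years.size)

# Each observational product observes the same truth plus its own error.
products = {f"obs_{i}": truth + rng.normal(0.0, 0.2, years.size)
            for i in range(6)}

def skill(forecast, reference):
    """Skill score as the sample (Pearson) correlation."""
    return float(np.corrcoef(forecast, reference)[0, 1])

# The same forecast yields a different skill score for each reference.
scores = {name: skill(forecast, obs) for name, obs in products.items()}
spread = max(scores.values()) - min(scores.values())
print({k: round(v, 3) for k, v in scores.items()})
print("spread across products:", round(spread, 3))
```

The spread printed at the end is the quantity of interest here: how much the apparent skill of a single, unchanged forecast varies purely because of the choice of verification product.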
