In the final post of this background series I want to write about the need for new ideas in verification. Verification is essential in geo- and climate science, as it gives validity to our work of predicting the future, whether on short or long timescales. Especially in long-term prediction we face the huge challenge of verifying our predictions on a low number of cases. We are happy when we have our 30+ events to identify our skill, but we have to find ways to make quality statements on potentially much lower numbers of cases. When we investigate, for example, El Niño events over the satellite period, we might have a time series of fewer than 10 time steps at hand and come to a dead end with classical verification techniques. Contingency tables require many more cases, because otherwise the potential uncertainties become so large that they cannot be controlled. Correlation measures likewise depend on having many cases: anything below 30 is not really acceptable, as shown by the rather high thresholds needed to reach significance. Still, most long-term prediction evaluation relies on such methods.
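To illustrate how high those significance thresholds get, here is a small sketch computing the critical Pearson correlation for a two-sided test at the 5 % level, using the standard t-test with n − 2 degrees of freedom (the function name and the sample sizes are illustrative, not from the paper):

```python
# Sketch: smallest |r| that reaches two-sided significance at alpha = 0.05,
# via the standard t-test for a Pearson correlation (n - 2 degrees of freedom).
from scipy import stats

def critical_r(n, alpha=0.05):
    """Critical correlation value for a sample of size n."""
    df = n - 2
    t = stats.t.ppf(1 - alpha / 2, df)
    return t / (t ** 2 + df) ** 0.5

for n in (10, 30, 100):
    print(n, round(critical_r(n), 2))  # 10 -> 0.63, 30 -> 0.36, 100 -> 0.2
```

With only 10 cases, a correlation has to exceed roughly 0.63 before it is even nominally significant, which shows why short time series run into a dead end here.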
An alternative idea was proposed by DelSole and Tippett, which I first saw at the S2S2D conference in 2018. In this approach we do not investigate a whole time series at once, as we would for correlations, but single events. This allows us to evaluate the effect of every single time step on the verification and therefore provides new information beside that on the whole time series.
I have shown in the new paper that this approach also allows a paradigm shift in evaluating forecasts. While many previous approaches looked at situations where the evaluation of one year depends on the evaluation of other years, counting the successes of each single year makes a prediction evaluation much more valuable. Often we do not ask how good a forecast is, but whether it is better than another forecast. And we want to know, at the time of forecasting, how likely it is that one forecast will be better than another. This information is not given by many standard verification techniques, because they take into account the magnitude of the difference between two forecasts at each time step. That is certainly important information, but it limits our view on essential questions of our evaluation. It is often possible that one single year decides whether one forecast is better than another. Or, more extreme: in a correlation, when one forecast is really bad in one year but better in all other years, it can still be dominated by the other forecast. These consequences have to be taken into account when we verify our models with these techniques.
As such, it is important to collect new ideas about how we want to verify forecast quality, and quantify its uncertainties, for the new challenges posed to us. The new paper applies new approaches in many of these departments, but there is certainly quite some room for new ideas in this field, which will be important for the future.
The idea behind sub-sampling is that dynamical ensemble predictions on long-term time scales have too large a spread. To counter that, a couple of years ago (Dobrynin et al. 2018) we introduced a technique called sub-sampling, which combines statistical with dynamical predictions. To understand the post-processing paper and its intentions, it is key to understand at least the basics of the sub-sampling procedure, as the paper is in essence a generalisation of this methodology.
So how does it work? First of all we need a dynamical model which predicts our chosen phenomenon. It does not necessarily have to have skill on the chosen time frame, but that is something I will discuss when another paper, currently in review, is published. As we use the North Atlantic Oscillation (NAO) in winter in our papers, let's take this as an example. In that case the NAO to be predicted is the NAO over December, January and February (DJF). The predictions are made at the beginning of November and show reasonable prediction skill when measured with correlation measures, but they have a large spread. At this point we introduce the statistical prediction. For this we need physically motivated predictors. For example, the sea-surface temperature in parts of the North Atlantic in September or October is well connected to the NAO in DJF. Meaning: a high temperature in those areas in autumn will with some probability lead to a high NAO value in DJF, and the other way round. Consequently, when we choose those areas and take a normalised mean over their autumn SST values, we generate a predictor of the NAO in the following winter. The same is true for other variables, like sea ice or stratospheric temperature, where the literature has established the connections. It is essential that we can trust these connections, as their validity matters when we want to trust the final predictions.
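The predictor construction described above can be sketched as follows. The region bounds and the SST data are purely illustrative placeholders; the real predictors use physically motivated North Atlantic regions:

```python
# Illustrative sketch: turn an autumn SST field into a scalar NAO predictor
# by averaging over a chosen region and standardising the yearly series.
import numpy as np

rng = np.random.default_rng(0)
# Fake SST anomalies with shape (years, lat, lon) standing in for real data.
sst = rng.normal(size=(20, 10, 15))

# Hypothetical predictor region (index slices standing in for lat/lon bounds).
region = sst[:, 2:6, 4:9]

raw = region.mean(axis=(1, 2))              # area mean, one value per year
predictor = (raw - raw.mean()) / raw.std()  # normalised predictor time series
print(predictor.shape)
```

The result is one standardised value per autumn, on the same scale as a standardised NAO index, which is what makes the comparison with ensemble members in the next step possible.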
Having several predictor values for a prediction of the DJF-NAO now allows us to select those ensemble members of the dynamical prediction which are close to at least one of the predictors. In the published paper we chose, for each predictor, the 10 ensemble members closest to it. This leads to a minimum of 10 selected members (when all predictors select the same ensemble members) and up to all members of the 30-member ensemble (when the predictors deliver widely spread predictions). Taking the mean over these selected (or better, sub-selected) ensemble members has proven to have much more predictive skill than the ensemble mean over all members.
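A minimal sketch of this selection step, under assumed toy numbers (the member values and predictor values are invented for illustration):

```python
# Sub-sampling sketch: for each predictor, keep the 10 ensemble members whose
# predicted NAO index is closest to that predictor, then average the union.
import numpy as np

rng = np.random.default_rng(1)
members = rng.normal(size=30)        # predicted NAO index of 30 members (toy)
predictors = [0.4, 0.6, -0.1]        # statistical predictor values (toy)

selected = set()
for p in predictors:
    closest = np.argsort(np.abs(members - p))[:10]
    selected.update(closest.tolist())

# Between 10 members (all predictors agree) and 30 (predictors widely spread).
sub_mean = members[list(selected)].mean()
print(len(selected), round(sub_mean, 2))
```

The sub-sampled mean is then used in place of the full ensemble mean; because whole members are kept, the same selection carries over to all model fields, not just the NAO index.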
An advantage of the approach is that we now have a better prediction not only for the NAO in DJF, but also for other variables. Because the statistical predictors select not just NAO values but full model fields, variables connected to the NAO in DJF also have the chance to become better predictable. All this allows us to make a better prediction for the chosen phenomenon (the DJF-NAO) as well as for the full dynamical fields of different variables in selected areas.
As such it is a powerful tool and has proven very stable in other applications with different modifications. But several questions remained from the review processes, which were until now unanswered. The most important one is why it works. Others have looked at the physical argumentation in recent years, while the new paper investigates the statistical argumentation. This will be explored further in the next background post.