Post-processing paper background: Why does sub-sampling work?

One main aspect of the new paper is the question of why sub-sampling works. In many review rounds for the original paper (Dobrynin et al. 2018) we got questions about a proper statistical model of the method, and many claims that it should not work even though it does (in other words: cheating). This is where this manuscript comes into play. Instead of selecting a (possibly varying) number of ensemble members close to one or more predictors, everything is transferred to probability density functions (pdfs). Of course, those are not easily available without making a large number of assumptions, so I went the hard way. Bootstrapping EOF fields is certainly no easy task in terms of computational cost, but it does work. It provides a pdf for every ensemble member and every predictor, as well as for the observations of the North Atlantic Oscillation (NAO).
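To illustrate the general idea (not the exact procedure of the paper), here is a minimal sketch in Python. All data and the index definition are toy stand-ins: the years are resampled with replacement, the leading EOF is recomputed for each replicate, and the projections of a target winter onto those patterns form an empirical pdf.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy stand-in for winter sea-level pressure anomalies:
# n_years x n_gridpoints, area-weighted and de-meaned.
n_years, n_grid = 30, 500
slp_anom = rng.standard_normal((n_years, n_grid))

def leading_eof(field):
    """Leading EOF pattern of the anomaly field (first right-singular vector)."""
    _, _, vt = np.linalg.svd(field - field.mean(axis=0), full_matrices=False)
    return vt[0]

# Reference pattern from the full sample, used to fix the arbitrary
# sign of the EOF in each bootstrap replicate.
ref_pattern = leading_eof(slp_anom)
target = slp_anom[-1]            # the winter we want a pdf for

# Bootstrap: resample years with replacement, recompute the EOF,
# and project the target winter onto it to get one index sample.
n_boot = 1000
samples = np.empty(n_boot)
for b in range(n_boot):
    pattern = leading_eof(slp_anom[rng.integers(0, n_years, n_years)])
    if pattern @ ref_pattern < 0:   # EOFs are sign-ambiguous
        pattern = -pattern
    samples[b] = target @ pattern

samples /= samples.std()            # samples of the empirical NAO pdf
print(f"bootstrap NAO pdf: mean {samples.mean():.2f}, std {samples.std():.2f}")
```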

Based on those pdfs it is now possible to look for the reason why the sub-sampling method has better prediction skill than the case without sub-sampling. The first step is to show that the distribution view and the sub-sampling are at least similar. In the end, making use of pdfs is not a pure selection but rather a weighting: ensemble members close to a predictor are weighted higher than those far away. Of course there are differences between the two approaches, but the results are remarkably similar. That gave us more confidence that, in the many tests we have run on the sub-sampling methodology in the past, the exact way we select does not have a huge influence (but that will be explained in detail in an upcoming paper). Consequently, we can accept that showing how the pdf approach works will give us insights into the sub-sampling approach itself.
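A toy sketch of the difference between the two views, with made-up numbers and an assumed Gaussian kernel width (the paper derives its weights from the bootstrapped pdfs, not from a fixed kernel):

```python
import numpy as np

members = np.array([-1.2, -0.4, 0.1, 0.3, 0.8, 1.5])   # ensemble NAO values
predictor = 0.5                                          # statistical prediction
sigma = 0.5                                              # assumed kernel width

# Weighting view: every member keeps a weight that decays with its
# distance to the predictor value.
weights = np.exp(-0.5 * ((members - predictor) / sigma) ** 2)
weights /= weights.sum()
print(f"weighted prediction: {weights @ members:.2f}")

# Hard sub-sampling is the limiting case: weight 1/k for the k
# closest members, 0 for all others.
k = 3
closest = np.argsort(np.abs(members - predictor))[:k]
print(f"sub-sampled prediction: {members[closest].mean():.2f}")
```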

The new paper shows that the key to understanding the mechanism is understanding the spread. While the seasonal prediction has acceptable correlation skill for its ensemble mean, the prediction of each single ensemble member is rubbish. As a consequence, the overall ensemble has a huge spread of quite uninformed members. We have learned in the past to work with such problems, which requires great care in how we evaluate predictions on these long timescales. Filtering this broad spread, and with it the highly variable distribution function, with informed and sharper predictor functions sharpens the combined prediction while at the same time improving it overall. In other (simplified) words: we weight down the influence of those ensemble members that have drifted away from the correct path and concentrate on those that are consistent with the overall state of the climate system.
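A simplified Gaussian toy case (my own illustration, not a computation from the paper) shows the sharpening effect: multiplying a broad ensemble pdf with a sharper predictor pdf yields a combined pdf that is narrower than either input and shifted towards the informed predictor.

```python
# Product of two Gaussian pdfs: the result is Gaussian with a
# precision-weighted mean and a smaller variance than either input.
mu_ens, sig_ens = 0.0, 1.5    # broad, nearly uninformative ensemble
mu_pred, sig_pred = 0.8, 0.5  # sharper statistical predictor

prec = 1 / sig_ens**2 + 1 / sig_pred**2
mu_comb = (mu_ens / sig_ens**2 + mu_pred / sig_pred**2) / prec
sig_comb = prec ** -0.5

print(f"combined pdf: mean {mu_comb:.2f}, std {sig_comb:.2f}")
# -> mean close to the predictor, spread smaller than either input
```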

As a consequence, the resulting prediction is in its properties quite similar to a statistical prediction, but it still has many advantages of a dynamical prediction. It is probably not the best of both worlds, but an acceptable compromise. To establish that, however, we need tools to evaluate the resulting predictions, and that proved to be harder than expected. But that is the story of the next post, on why we need verification tools for uncertain observations.

Post-processing paper background: What is sub-sampling?

The idea behind sub-sampling is that dynamical ensemble predictions on long time scales have too large a spread. To counter that, a couple of years ago (Dobrynin et al. 2018) we introduced a technique called sub-sampling, which combines statistical with dynamical predictions. To understand the post-processing paper and its intentions, it is key to understand at least the basics of the sub-sampling procedure, as the paper is in essence a generalisation of that methodology.

So how does it work? First of all, we need a dynamical model that predicts our chosen phenomenon. It does not necessarily have to have skill on the chosen time frame, but that is something I will discuss when another paper currently in review is published. As we use the North Atlantic Oscillation (NAO) in winter in our papers, let’s take this as an example. In that case the NAO to be predicted is the NAO over December, January and February (DJF). The predictions are made at the beginning of November and show reasonable prediction skill when measured with correlation measures, but they have a large spread. At this point we introduce the statistical prediction. For this we need physically motivated predictors. For example, the sea-surface temperature in parts of the North Atlantic in September or October is well connected to the NAO in DJF. Meaning: a high temperature in those areas in autumn will with some probability lead to a high NAO value in DJF, and the other way round. Consequently, when we choose those areas and take a normalised mean over their autumn SST values, we generate a predictor of the NAO in the following winter. The same is true for other variables, like sea ice or stratospheric temperature, where the literature has demonstrated the connections. It is essential that we can trust these connections, as their validity is important when we want to trust the final predictions.
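A minimal sketch of this predictor construction, with placeholder region bounds and random toy data (the actual boxes used in Dobrynin et al. 2018 differ):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy autumn SST anomalies: years x lat x lon
n_years = 30
sst = rng.standard_normal((n_years, 40, 80))
lats = np.linspace(30, 70, 40)
lons = np.linspace(-80, 0, 80)

# Pick a predictor box in the North Atlantic (placeholder bounds).
lat_sel = (lats >= 45) & (lats <= 60)
lon_sel = (lons >= -50) & (lons <= -20)

# Area mean over the box (the toy field already represents the
# September/October mean), then standardise over the hindcast years.
box_mean = sst[:, lat_sel][:, :, lon_sel].mean(axis=(1, 2))
predictor = (box_mean - box_mean.mean()) / box_mean.std()

print(predictor[-1])  # predictor value for the most recent autumn
```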

Having several predictor values for a prediction of the DJF NAO now allows us to select those ensemble members of the dynamical prediction that are close to at least one of the predictors. In the published paper we chose, for each predictor, the 10 ensemble members closest to it. This leads to a minimum of 10 selected ensemble members (when all predictors select the same members) and at most all members of the 30-member ensemble (when the predictors deliver widely spread predictions). Taking the mean over these selected (or better: sub-selected) ensemble members has proven to have much more predictive skill than the ensemble mean over all members.
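The selection step itself is simple; here is a minimal sketch with random toy numbers, following the published setup of a 30-member ensemble and the 10 closest members per predictor:

```python
import numpy as np

rng = np.random.default_rng(1)
members = rng.standard_normal(30)          # toy ensemble NAO predictions
predictors = np.array([0.6, 0.4, 0.9])     # toy statistical NAO predictors
k = 10

# Union of the k members closest to each predictor: between k members
# (all predictors agree) and the full ensemble (predictors spread out).
selected = set()
for p in predictors:
    closest = np.argsort(np.abs(members - p))[:k]
    selected.update(closest.tolist())

sub_mean = members[list(selected)].mean()
print(f"{len(selected)} members selected, sub-sampled mean {sub_mean:.2f}")
```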

An advantage of the approach is that we now have a better prediction not only for the NAO in DJF, but also for other variables. Because the statistical predictors are used to choose not just NAO values but full model fields, variables connected to the NAO in DJF also have the chance to become better predictable. All this allows us to make a better prediction for the chosen phenomenon (the DJF NAO) as well as for the full dynamical fields of different variables in selected areas.

As such it is a powerful tool and has proven very stable in other applications with different modifications. But several questions in the review processes remained unanswered until now. The most important one is why it works. Others have looked at the physical argumentation in recent years, while the new paper investigates the statistical argumentation. This will be explored further in the next background post.

Background to “Seasonal statistical-dynamical prediction of the North Atlantic Oscillation by probabilistic post-processing and its evaluation”

Recently my paper “Seasonal statistical-dynamical prediction of the North Atlantic Oscillation by probabilistic post-processing and its evaluation” was accepted in “Nonlinear Processes in Geophysics”, and as it is now published, I will use this blog, as is tradition (see here, here and here), to explain in more detail what it is about and what the problems are. So I will highlight some background to the paper in the upcoming week and show why some of the points I raise therein will be important for the development of the seasonal and decadal prediction community as well as for the wider climate science community.

So my background stories will take a look at the following topics:

  1. What is sub-sampling?
  2. Why does sub-sampling work?
  3. Why do we need verification with uncertain observations?
  4. EMD and IQD? What are they about?
  5. Do we need new approaches in verification?

As always, these topics are of course just an addition to the regular paper and represent my personal view. As it is my first (and, I somehow hope, only) single-author paper, the manuscript of course mostly reflects my view anyway. Still, there are limits to what you can do in the scientific literature, and that is what these blog posts are about. And of course, it is statistical literature, so explaining it in more detail for those who are not fans of equations is certainly of added value.

IMSC 2019: The final day

With a half day of talks the IMSC 2019 ended today in Toulouse. It was again a quite warm day, and not far away the French temperature record was broken today. So it was fitting that the final day started with event attribution talks, covering among other things heat waves and their attribution to climate change. The next session was the final parallel session, and I stayed in the event attribution session. It addressed more events and discussed the limits of these techniques.

After lunch the conference officially ended and it was time to look back at the past days here in southern France. The conference was well organised and fulfilled the expectations: good food, a good location, interesting scientific content. The main topics were extremes and detection and attribution, which took up quite a big chunk of the conference and, for my taste, pushed the other topics a bit too far into a corner. The biggest issue in verification and forecast evaluation was the handling of uncertain observations. Apart from that, the conference covered good statistical practice, some talks about data and many good discussions about statistical topics. So it was fun to join this conference again, even with the very hot weather. Let’s see where the next conference will be, in three years or so.

IMSC 2019: My presentation

The fourth day of the IMSC 2019 was the day when the heat wave finally hit Toulouse with full force. The temperature measurements showed around 40 °C, and it felt a bit overwhelming. The morning started again with plenary sessions and talks about uncertainty separation and downscaling. Afterwards followed the poster session in a tent outside, which got warmer and warmer over time, as the wind was not as strong as on the previous days.

After lunch the parallel sessions for the day started and I chose the one on forecast evaluation. The first part was reserved for the development of new verification procedures, and I had my own talk in this section. It went alright: I presented two new skill scores based on the EMD (earth mover’s distance) and demonstrated them in different seasonal prediction applications. The second half of the session was on the application of verification procedures and covered many different fields.

With the end of the talks it was time for the social events. The choice was either a wine tasting on a ship or a walking tour through town. I chose the latter, and it was a challenge to always find shade so as not to get too warm in the sunshine. Tomorrow will be the last day, and the weather will still be warm enough to be a challenge.