Post-processing paper background: Why does sub-sampling work?

One main aspect of the new paper is the question of why sub-sampling works. In many review rounds for the original paper (Dobrynin et al 2018) we got questions about a proper statistical model of the method and many claims that it should not work even though it does (in other words: cheating). This is where the new manuscript comes into play. Instead of selecting a (seemingly arbitrary) number of ensemble members close to one or more predictors, everything is transferred to probability density functions (pdfs). Of course those are not easily available without making a large number of assumptions, so I went the hard way. Bootstrapping EOF fields is certainly no easy task in terms of computational cost, but it works. It yields a pdf for every ensemble member and every predictor, as well as for the observations of the North Atlantic Oscillation (NAO).
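The bootstrap idea itself can be sketched in a few lines. This is a toy illustration only: it uses a one-dimensional index instead of full EOF fields, and all values are made up.

```python
import random
import statistics

def bootstrap_pdf(samples, n_boot=2000, seed=0):
    """Approximate the sampling distribution (pdf) of the mean of an
    NAO-like index by resampling with replacement. A cheap stand-in
    for the much costlier bootstrap of full EOF fields."""
    rng = random.Random(seed)
    n = len(samples)
    return [statistics.mean(rng.choices(samples, k=n)) for _ in range(n_boot)]

# invented NAO index values for one ensemble member
nao = [0.3, -0.8, 1.2, 0.1, -0.4, 0.9, -1.1, 0.5]
dist = bootstrap_pdf(nao)  # 2000 bootstrap replicates approximating the pdf
```

From `dist` one can then read off any quantile or density estimate of interest, which is exactly the role the pdfs play in the paper.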

Based on those pdfs, it is now possible to look for the reason for the better prediction skill of the sub-sampling method compared to the no-sub-sampling case. The first step is to show that the distribution view and sub-sampling are at least similar. In the end, making use of pdfs is not a pure selection but rather a weighting: ensemble members close to a predictor are weighted higher than those far away. Of course there are differences between the two approaches, but the results are remarkably similar. That gave us more confidence that, in the many tests we did in the past on the sub-sampling methodology, the exact way of selecting does not have a huge influence (this will be explained in detail in an upcoming paper). Consequently, if we can show how the pdf approach works, we gain insight into the sub-sampling approach itself.
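The relation between the two views can be sketched with made-up numbers. The member values, the Gaussian predictor pdf, and the choice of k below are all hypothetical; the real method works on EOF-based NAO values of a 30-member ensemble.

```python
import math

def gauss_pdf(x, mu, sigma):
    # value of the normal density at x
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

members = [-1.4, -0.6, -0.1, 0.2, 0.5, 0.9, 1.3, 1.8]  # toy NAO forecasts of 8 members
mu, sigma = 0.6, 0.4                                   # hypothetical predictor pdf

# pdf view: weight every member by how likely it is under the predictor pdf
w = [gauss_pdf(m, mu, sigma) for m in members]
weighted_mean = sum(wi * mi for wi, mi in zip(w, members)) / sum(w)

# sub-sampling view: hard selection of the k members closest to the predictor
k = 3
selected = sorted(members, key=lambda m: abs(m - mu))[:k]
subsample_mean = sum(selected) / k
```

With these numbers the weighted mean and the sub-sample mean land close together, and both sit much nearer to the predictor than the raw mean over all eight members, which is the similarity the paragraph describes.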

The new paper shows that the key to understanding the mechanism is understanding the spread. While seasonal prediction has acceptable correlation skill for its ensemble mean, each prediction by a single ensemble member is rubbish. Consequently, the overall ensemble has a huge spread of quite uninformed members. We have learned in the past to work with such problems, which requires great care in how predictions on long time scales are evaluated. Filtering this broad and therefore highly variable distribution with informed, sharper predictor distributions sharpens the combined prediction, while at the same time improving the prediction overall. In other (simplified) words: we down-weight the influence of those ensemble members that drifted away from the correct path and concentrate on those that are consistent with the overall state of the climate system.
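Assuming, purely for illustration, that both the broad ensemble pdf and a sharp predictor pdf are Gaussian, the sharpening effect follows directly from the product of two Gaussians:

```python
def combine(mu1, s1, mu2, s2):
    """Product of two Gaussian pdfs N(mu1, s1) and N(mu2, s2): up to
    normalisation this is again a Gaussian, with a precision-weighted
    mean and a variance smaller than either input variance."""
    var = 1.0 / (1.0 / s1 ** 2 + 1.0 / s2 ** 2)   # combined variance
    mu = var * (mu1 / s1 ** 2 + mu2 / s2 ** 2)    # precision-weighted mean
    return mu, var ** 0.5

# toy numbers: broad ensemble centred near zero, sharp predictor at 0.6
mu_c, s_c = combine(0.1, 1.0, 0.6, 0.4)
```

The combined spread is smaller than that of either input, and the combined mean is pulled towards the sharper (more informed) predictor pdf, which is the filtering effect described above.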

As a consequence, the resulting prediction is in its properties quite similar to a statistical prediction, but it still has many advantages of a dynamical prediction. It is probably not the best of both worlds, but an acceptable compromise. To establish that, however, we need tools to evaluate the predictions, and that proved to be harder than expected. But that is the story of the next post, on why we need verification tools for uncertain observations.

Post-processing paper background: What is sub-sampling?

The idea behind sub-sampling is that dynamical ensemble predictions on long time scales have too large a spread. To counter that, a couple of years ago we introduced a technique called sub-sampling (Dobrynin et al 2018), which combines statistical with dynamical predictions. To understand the post-processing paper and its intentions, it is key to understand at least the basics of the sub-sampling procedure, as the paper is in essence a generalisation of that methodology.

So how does it work? First of all, we need a dynamical model that predicts our chosen phenomenon. It does not necessarily have to have skill on the chosen time frame, but that is something I will discuss when another paper currently in review is published. As we use the North Atlantic Oscillation (NAO) in winter in our papers, let's take this as an example. In that case, the NAO to be predicted is the NAO over December, January and February (DJF). The predictions are made at the beginning of November and show reasonable prediction skill when measured with correlation measures, but they have a large spread. At that point we introduce the statistical prediction. For this we need physically motivated predictors. For example, the sea-surface temperature (SST) in parts of the North Atlantic in September or October is well connected to the NAO in DJF. Meaning: a high temperature in those areas in autumn will, with some probability, lead to a high NAO value in DJF, and vice versa. Consequently, when we choose those areas and take a normalised mean over their autumn SST values, we generate a predictor of the NAO in the following winter. The same is true for other variables, like sea ice or stratospheric temperature, where the literature has established the connections. It is essential that we can trust these connections, as their validity matters when we want to trust the final predictions.
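The predictor construction can be sketched as follows. The region and all SST values are invented for illustration; the real predictors use specific, physically motivated North Atlantic areas.

```python
import statistics

def sst_predictor(sst_by_year):
    """Build a toy NAO predictor from autumn SSTs: average the grid
    cells of a chosen region per year, then standardise the resulting
    time series to zero mean and unit variance (the normalised mean)."""
    area_means = [statistics.mean(grid) for grid in sst_by_year]
    mu = statistics.mean(area_means)
    sd = statistics.stdev(area_means)
    return [(x - mu) / sd for x in area_means]

# each inner list: autumn SST values (degC) of the region's grid cells, one list per year
sst = [[10.2, 11.0, 10.6], [9.8, 10.1, 9.9], [11.4, 11.9, 11.6], [10.0, 10.4, 10.2]]
predictor = sst_predictor(sst)  # one standardised predictor value per year
```

Each value in `predictor` then serves as a statistical forecast of the NAO in the following DJF season, to be compared against the ensemble members.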

Having several predictor values for a prediction of the DJF NAO now allows us to select those ensemble members of the dynamical prediction that are close to at least one of the predictors. In the published paper we chose, for each of the used predictors, the 10 ensemble members closest to it, which leads to a minimum of 10 ensemble members (when all predictors select the same members) and a maximum of all members of the 30-member ensemble (when the predictors deliver widely spread predictions). Taking the mean over those selected (or better: sub-selected) ensemble members has proven to have much more predictive skill than the ensemble mean over all members.
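The selection step can be sketched with made-up numbers; here a 6-member ensemble and k = 2 stand in for the 30 members and k = 10 of the paper.

```python
def subsample(members, predictors, k=2):
    """For each predictor, pick the k ensemble members whose predicted
    NAO value lies closest to it; the union of all picks forms the
    sub-sample, so with p predictors we keep between k and k*p members."""
    chosen = set()
    for p in predictors:
        nearest = sorted(range(len(members)), key=lambda i: abs(members[i] - p))[:k]
        chosen.update(nearest)
    return sorted(chosen)

members = [-1.2, -0.5, 0.0, 0.4, 0.7, 1.1]  # toy NAO forecasts of 6 members
predictors = [0.5, 0.6]                     # toy statistical predictor values
idx = subsample(members, predictors)
sub_mean = sum(members[i] for i in idx) / len(idx)
```

Because the two toy predictors agree, both pick the same two members and the sub-sample stays at the minimum size k; disagreeing predictors would enlarge it up to k times the number of predictors.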

An advantage of the approach is that we now have a better prediction not only for the NAO in DJF, but also for other variables. Because the statistical predictors select not just NAO values but full model fields, variables connected to the DJF NAO also have the chance to become more predictable. All this allows us to make a better prediction for the chosen phenomenon (the DJF NAO) as well as for the full dynamical fields of different variables in selected areas.

As such it is a powerful tool and has proven very stable in other applications with different modifications. But several questions remained unanswered in the review processes. The most important one is why it works. Others have looked at the physical argumentation in recent years, while the new paper investigates the statistical argumentation. This will be explored further in the next background post.

 

Three new papers out

A new year has started, and in recent months three new papers have been published that have my name in the author list. In all three cases my contributions were more in the sense of statistical assistance, so I will only briefly introduce the topics.

Skilful Seasonal Prediction of Ocean Surface Waves in the Atlantic Ocean

This paper predicts ocean surface waves on the seasonal scale. It uses the enhanced prediction of the NAO with the sub-sampling algorithm to generate prediction skill for wave height in the North Atlantic. As the prediction enhances not only wind waves but also swell, it is a consistent prediction enhancement for the total wave height.

Dobrynin, M.; Kleine, T.; Düsterhus, A.; Baehr, J. (2019): Skilful Seasonal Prediction of Ocean Surface Waves in the Atlantic Ocean, GRL, 46, 1731–1739

Seasonal predictability of European summer climate re-assessed

The second paper investigates the predictability of European summer climate with a physics-based sub-sampling. For this it uses a connection from tropical Atlantic SST anomalies, via a wave train in the upper troposphere, to the second mode of North Atlantic surface pressure. Unlike in European winter, the second mode is as important as the first mode for European summer climate. As a consequence, the predictability of surface temperature and other atmospheric variables over Europe is enhanced.

Neddermann, N.-C.; Müller, W. A.; Dobrynin, M.; Düsterhus, A.; Baehr, J. (2019): Seasonal predictability of European summer climate re-assessed, Climate Dynamics

Atlantic Inflow to the North Sea Modulated by the Subpolar Gyre in a Historical Simulation With MPI‐ESM

This study uses a global model to show that the strength of the subpolar gyre (SPG) has a profound influence on North Sea water properties. Up to now, regional models suggested that most of the modulation happens due to atmospheric influence. The modulations by the SPG happen on a decadal scale and can be followed on their way from the Atlantic into the North Sea.

Koul, V.; Schrum, C.; Düsterhus, A.; Baehr, J. (2019): Atlantic Inflow to the North Sea Modulated by the Subpolar Gyre in a Historical Simulation With MPI‐ESM, JGR Oceans

EGU 2017: Medal lectures

The second day of the conference was a quiet day for me, as no must-see sessions were scheduled. It started again with the North Atlantic session, which this time focussed more on the oscillations, like the NAO. Afterwards, I visited a medal lecture on SAR. This topic is quite far away from my daily work, but such conferences are always a chance to see things you are usually not confronted with. Important for me was the statement that in times when data can be generated in huge amounts, data management becomes more and more important. Big data requires new ideas on workflows, might have to include cloud services, and poses new questions on data availability.

After lunch I visited a palaeo session on the Common Era, which also addressed in many points the long-term variability of our climate system. In the last session another medal lecture was scheduled, and again the Southern Ocean was the topic. This time it was about the circumpolar current, and the talk gave a good overview of the methods used to understand this important part of the global circulation. A good thing about medal lectures is that you can see a whole topic in a compact way. Even when you know bits and pieces about it, it helps you get deeper into it by having it introduced by a real expert in the research field. The final stage of the day was the traditional poster session. Tomorrow will be half-time, and it will start the busy part of the week for me.