IMSC 2019: My presentation

The fourth day of the IMSC 2019 was the day the heat wave finally hit Toulouse with full force. The temperature measurements showed around 40 °C, and it felt a bit overwhelming. The morning started again with plenary sessions, with talks about uncertainty separation and downscaling. Afterwards came the poster session in a tent outside, which got warmer and warmer over time, as the wind was not as strong as on the previous days.

After lunch the parallel sessions for the day started, and I chose the one on forecast evaluation. The first part was reserved for the development of new verification procedures, and my own talk was in this section. It went alright: I presented two new skill scores based on the EMD and demonstrated them in different seasonal prediction applications. The second half of the session was on the application of verification procedures and covered many different fields.
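To give an idea of what an EMD-based score can look like: for ordered categories, the 1D earth mover's distance reduces to the L1 distance between the two cumulative distributions, and a skill score can be built from it in the usual way. The sketch below is only my illustration of that idea; the function names, the tercile setup and the exact score definition are my assumptions, not the scores from the talk.

```python
import numpy as np

def emd_1d(p, q):
    # For ordered categories, the 1D earth mover's distance is the
    # L1 distance between the two cumulative distributions.
    return np.abs(np.cumsum(p) - np.cumsum(q)).sum()

def emd_skill_score(p_forecast, p_observed, p_reference):
    # Usual skill-score form: 1 - score / score of a reference forecast.
    return 1.0 - emd_1d(p_forecast, p_observed) / emd_1d(p_reference, p_observed)

# Toy example with tercile categories (below / near / above normal):
p_obs = np.array([0.0, 0.0, 1.0])    # 'above normal' was observed
p_fc = np.array([0.2, 0.3, 0.5])     # forecast probabilities
p_clim = np.array([1/3, 1/3, 1/3])   # climatological reference

print(emd_skill_score(p_fc, p_obs, p_clim))  # 0.3: better than climatology
```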

With the end of the talks it was time for the social events. The choice was either a wine tasting on a ship or a walking tour through town. I chose the latter, and it was a challenge to always find shade and not get too warm in the sunshine. Tomorrow will be the last day, and the weather will still be warm enough to be a challenge.

International Conference on S2S2D

Last week NCAR in Boulder, Colorado, hosted the second edition of the International Conference on Subseasonal to Decadal prediction. It covered climate prediction on timescales from a few weeks up to a few years and, with around 350 scientists, hosted a good representation of the community in this field. During most of the days the conference was split into a subseasonal-to-seasonal (S2S) and a seasonal-to-decadal (S2D) session.

The International Conference on S2S2D poster

I personally attended only the S2D part, as my current work focuses on this topic. The first day looked into the mechanisms of predictability, and the typical candidates, like the ocean, soil moisture and the stratosphere, were discussed. The second day then shifted more towards the modelling of these phenomena: the weather services presented their new prediction systems, and new approaches to modelling were discussed. The third topic covered the handling of the predictions, looking at calibration and other techniques to make the predictions really useful. This led to the fourth topic, which discussed decision-making processes based on the predictions; here the applications were the main focus, and many different phenomena and their predictability were shown. Topic number five looked at statistical verification and presented new approaches to assess the skill of the models. The final S2D session looked at the frontiers of Earth system prediction, and therein especially at the handling of carbon within the models. Afterwards, in a combined session of both parts, many different aspects of the future of research in this field were brought up. Among others, the topics of temporal dependence of forecast skill and the so-called ‘signal-to-noise paradox’ led to a lively discussion.

My personal contributions were threefold. On a poster in the first session I showed how the Summer NAO can be predicted using ensemble sub-sampling. In the second session I presented a poster arguing that sub-sampling can be viewed as a post-processing procedure, which helps explain why it works. My talk in the fifth session then covered the 2D categorical EMD score.
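For readers unfamiliar with the technique: ensemble sub-sampling keeps only those ensemble members whose diagnosed index (here an NAO index) lies close to a separately predicted target value, and averages over those. The sketch below is a generic illustration of that selection step under my own assumptions; the names, the nearest-k rule and the toy data are mine, not the method from the posters.

```python
import numpy as np

def subsample_ensemble(members, member_index, target, k=10):
    # Keep the k members whose diagnosed index is closest to the
    # statistically predicted target, then average only those members.
    members = np.asarray(members)
    member_index = np.asarray(member_index)
    keep = np.argsort(np.abs(member_index - target))[:k]
    return members[keep].mean(axis=0), keep

# Toy usage: 30 forecast 'maps', each with a diagnosed NAO index.
rng = np.random.default_rng(0)
fields = rng.normal(size=(30, 5))   # stand-in for 30 forecast fields
nao = rng.normal(size=30)           # NAO index diagnosed per member
predicted_nao = 0.8                 # e.g. from a statistical model
mean_field, kept = subsample_ensemble(fields, nao, predicted_nao, k=10)
```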

All in all it was a great conference, with many interesting discussions and a great overview of this interesting field. Certainly many impulses will come from it, giving not only my own research a new push.

Let’s play: HadCRUT 4

Playing around with data can be quite fun and sometimes delivers interesting results. I have done this a lot in the past, mainly out of necessity during my PhD. For it I had developed some methods for quality assurance of data, which of course needed some interesting applications. So every time a nice dataset came to life, I ran it through my methods, and usually the results were quite boring. The main reason for this is that these methods are designed to identify inhomogeneities, and a lot of the published data nowadays is already quality controlled (homogenised), which makes it quite hard to identify new properties within a dataset. Model data especially is often quite smooth, so it is necessary to look at quite old data to find something really interesting.
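As an illustration of what such an inhomogeneity check can look like: a classic choice is the standard normal homogeneity test (SNHT), which scans a series for the most likely break point. The sketch below is a minimal, generic version of that test; it is not one of the methods from my PhD, and the toy series is made up.

```python
import numpy as np

def snht_statistic(x):
    # Standard Normal Homogeneity Test: T(k) compares the mean of the
    # first k values with the mean of the rest; a large maximum of T
    # hints at a break (inhomogeneity) at that position.
    x = np.asarray(x, dtype=float)
    z = (x - x.mean()) / x.std(ddof=1)
    n = len(z)
    t = np.empty(n - 1)
    for k in range(1, n):
        t[k - 1] = k * z[:k].mean() ** 2 + (n - k) * z[k:].mean() ** 2
    return t

# Toy example: white noise with an artificial +1.5 shift after index 50.
rng = np.random.default_rng(42)
series = rng.normal(0.0, 1.0, 100)
series[50:] += 1.5
t = snht_statistic(series)
print("most likely break after element:", t.argmax() + 1)  # expected ~50
```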