International Conference on S2S2D

Last week NCAR in Boulder (Colorado) hosted the second edition of the International Conference on Subseasonal to Decadal Prediction. It covered climate prediction on timescales from a few weeks up to a decade and, with around 350 scientists attending, gathered a good representation of the community in this field. On most days the conference was split into a subseasonal to seasonal (S2S) and a seasonal to decadal (S2D) session.

The International Conference on S2S2D poster

I personally attended only the S2D part, as my current work focuses on this topic. The first day looked into the mechanisms of predictability, and the typical candidates, like the ocean, soil moisture and the stratosphere, were discussed. The second day then shifted more towards the modelling of these phenomena. The weather services presented their new prediction systems, and new approaches to modelling were discussed. The third topic covered the handling of the predictions: it looked at calibration and other techniques to make the predictions really useful. This led to the fourth topic, which discussed decision-making processes based on the predictions. Here the applications were the main focus, and many different phenomena and their predictability were shown. Topic number five looked at statistical verification and presented new approaches to assess the skill of the models. The final session of the S2D part looked at the frontiers of Earth system prediction, especially the handling of carbon within the models. Afterwards, in a combined session of both parts, many different aspects of the future of research in this field were brought up. Among others, the temporal dependence of forecast skill and the so-called ‘signal-to-noise paradox’ led to a lively discussion.

My personal contributions were threefold. In the first session I showed on a poster how the summer NAO can be predicted using ensemble sub-sampling. In the second session I presented a poster arguing that sub-sampling can be viewed as a post-processing procedure, which helps explain why it works. My talk in the fifth session then covered the 2D categorical EMD score.
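For readers unfamiliar with the general idea, here is a minimal, hypothetical Python sketch of ensemble sub-sampling (the numbers, names and the simple nearest-to-target selection rule are my own illustration, not the exact method from the posters): only those ensemble members whose diagnosed predictor value lies close to an independent statistical prediction are kept, which is why the step can be read as a post-processing of the raw ensemble.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy stand-ins: 30 ensemble members, each with a forecast value and a
# diagnosed predictor value (think of an NAO index per member).
forecasts = rng.normal(loc=0.0, scale=1.0, size=30)
predictor = forecasts + rng.normal(scale=0.5, size=30)

# Independent statistical first guess for the predictor (in reality this
# would come from a separate statistical prediction of the summer NAO).
target = 0.8

def subsample(members: np.ndarray, index: np.ndarray,
              target: float, n_keep: int = 10) -> np.ndarray:
    """Keep the n_keep members whose predictor value is closest to the
    statistically predicted target."""
    order = np.argsort(np.abs(index - target))
    return members[order[:n_keep]]

selected = subsample(forecasts, predictor, target)
print(f"full-ensemble mean: {forecasts.mean():.2f}")
print(f"sub-sampled mean:   {selected.mean():.2f}")
```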

All in all it was a great conference, with many interesting discussions and a great overview of this interesting field. Certainly many impulses will come out of it and will give not only my own research a new push.


German science scandal on “fake science”

Today the two German public broadcasters WDR and NDR and the newspaper SZ published articles about “fake science”, which they describe as a scandal. They highlight that scientists, among others from German universities, have published a huge number of papers in “fake journals” and attended “fake scientific conferences”. They give several examples of “fake articles” they submitted to those publishers, which were published without real peer review.

As someone who works in science, I am used to these invitations. On average, five to ten mails for conferences or journals reach me every day. Some seem to have fitting topics, some are just arbitrary conference invitations that basically go out to everybody who claims to be a scientist. These things of course come with a hefty price tag, but compared to the regular conferences and journals they are often quite cheap. They promise a quick review and also a quick publication of the submitted content.

Why could these things be attractive for some scientists? Because scientific publications are important for every scientist; they basically work as unemployment insurance. When you have enough publications per year, you have quite a good chance of staying in science; when not, you will quite likely lose your job or not get a new one. What makes the offers especially tempting for some is that renowned journals often need a lot of time to publish research. Review processes with unpaid reviewers often take a year or more. When you have a three-year project, need a year to do the science, half a year to write it up with many coauthors, and then submit it with a review process of over a year, you have to be lucky to be able to put the paper into your CV in time for the next applications. Offers that guarantee publication, even in a journal that is not famous, might therefore be of interest even for those doing real science. That some people also use these journals as a platform for their b***s*** publications is of course even more damaging for real science.

So what can be done? Information is of course the first thing; it is currently scarce, and you have to work out for yourself that it might not be good for you to interact with these journal and conference providers. We also have to rethink how we fund the publication of scientific literature. Especially in Germany, the amount of money available for publications is low. When you remember that a paper in a journal might cost several thousand euros/dollars/pounds, especially when you want your paper published as open access, then money is key. Some countries, like the UK, have in the past forced publishers to make papers open access after a certain time. This would certainly help, because only accessible science is of long-term benefit to the authors. As Germany has not yet implemented such a law, it is time for politics to act.

The market for scientific literature and conferences comes with high profits. The profit margins of the renowned providers are enormous, so it is to be expected that fake providers enter the market. In the long term it will be a tough fight to keep an eye on what is real and what is “fake”. Let’s hope most real scientists get this done and that working and publication conditions improve over the long term. Otherwise, science as we have known it for 400 years is in danger.

Top-down or bottom-up

When you program in science, your projects usually evolve over time. Often you get an idea, create a quick and dirty solution and test it on data you know. This works for a while, but after several amendments, future-proofing and newly incorporated ideas, the code becomes unbearable. This is the point where bottom-up approaches break down and where you think about reprogramming everything. In these cases the new programs are no longer bottom-up: you have an idea in mind of what to achieve and often reuse some code snippets from before. We have reached the world of top-down.
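As a toy illustration of the contrast (a hypothetical Python sketch; the file name, column index and function names are my own invention, not from any real project): the first part is the typical bottom-up quick-and-dirty script, the second restructures the same task top-down around the steps it is known to need.

```python
import csv
from statistics import mean

# --- Bottom-up: the quick-and-dirty one-off that tends to grow unbearable ---
values = []
with open("data.csv") as f:
    for row in csv.reader(f):
        values.append(float(row[1]))   # file name and column index hard-coded
print("mean:", sum(values) / len(values))

# --- Top-down: the same task, designed around load -> summarise -> report ---
def load_column(path: str, column: int) -> list[float]:
    """Read one numeric column from a CSV file."""
    with open(path) as f:
        return [float(row[column]) for row in csv.reader(f)]

def summarise(values: list[float]) -> float:
    """The summary statistic; easy to replace when requirements change."""
    return mean(values)

def report(value: float) -> None:
    print(f"mean: {value:.3f}")

report(summarise(load_column("data.csv", column=1)))
```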


The first sub-sampling paper: It was a long way

Over the spring a new paper finally made it and is now published in full form in GRL: our first sub-sampling paper. I will not go into too much detail about what the paper covers, as I will have plenty to explain when my other papers on this topic come out. Rather, I would like to talk a bit about the long path papers sometimes have to take, and about the late addition of new coauthors (like me in this case).


EGU 2018: Final day

And here it is: the end of the EGU 2018. The final day is always a quite relaxed one. Many scientists have already left Vienna, so everything is a bit calmer. Anyway, for me the day was quite busy, as many sessions of interest to me were running in parallel.

I spent the two morning slots in a session on climate archives and proxies, which was quite interesting. The topics were quite diverse, so it was a nice mix to start the day. After lunch, two sessions on sea level were on the schedule. The first was on ice sheets during the Quaternary and mainly focused on the European ice sheets during the Last Glacial Maximum. The second was on sea level from minutes to millennia and was dominated by talks on the creation of sea-level index points. And finally there was of course the poster session, as always something of a highlight of the day, with many interesting discussions.

So the EGU is over, and it was again a very interesting conference. Looking back, I have to say some things changed this year. For example, I had the impression that the queues for the free coffee were much longer. The poster boards, about which I had complained a lot earlier this week, were also new. Over the days, the problem with the hiding place for the poster tubes certainly got better, as most of the tables beside the boards came loose and could be taken away to open up the hiding place. We had great weather the whole week and most things were well organised, but in my impression most sessions were too full. It seems the Conference Centre in Vienna has reached its limit, and should the EGU grow even more, I doubt it will still be the right place to host the event.

So all in all, for me it was a successful EGU. Let’s see whether I manage to get here next year and how I handle the other conferences coming up this year. So long, bye bye Vienna!

EGU 2018: Presentation day

Fourth day of the EGU this year, and it was the day of my own presentation. But let’s start at the beginning of the day. I spent the morning in a session on climate variability across different scales. The talks were interesting and quite varied in topic, and, like most sessions this year, it was quite busy.

After lunch I had my talk, which was quite misplaced in its session. As there was no verification session available this year, I had to look for a session that had this field as a side topic. The one I found was connected to the data centre session, so I was the only speaker with a statistical method as the main topic, while the other talks were purely software related. Nevertheless, the talk went alright.

The final presentation session for me was the medal talk by Tim Palmer, who gave a great overview of the development of probabilistic forecasts over the last century. The packed lecture hall got both a historical and a topical overview of these developments, which have led to the current ensemble systems. The day ended with the poster session at the end of the schedule. One day is left before the EGU 2018 comes to an end.

EGU 2018: Poster day

Day three of the EGU 2018 in Vienna, and today was the day of my poster. But beforehand, an interesting day of presentation sessions was on the schedule. It started for me with a session on data assimilation in palaeoclimatology. As I originally come from meteorology, it is always interesting to see how statistical methodology once developed for short-term prediction is applied to completely different timescales. Next up was the GIA session, which included some sea-level talks.

In the afternoon, the first session was the one on post-processing, in which I also had my poster. Various statistical methodologies and workflows were presented to generate more gain from dynamical (weather) forecasts. The final presentation session was then on corals and their ability to give us information mainly about ENSO in the past.

The final session of the day was then the poster session. I had nice discussions on my topic of statistical-dynamical prediction and my take on why it works. Tomorrow will be the day of my talk, in which I will present an alternative to the commonly used ACC and RMSE.