Recently my paper “Seasonal statistical-dynamical prediction of the North Atlantic Oscillation by probabilistic post-processing and its evaluation” was accepted by “Nonlinear Processes in Geophysics”, and now that it is published, I will use this blog, as is tradition (see here, here and here), to explain in more detail what it is about and what the problems are. So I will highlight some background to the paper in the upcoming weeks and show why some of the points I raise therein are important for the development of the seasonal and decadal prediction community as well as for the wider climate science community.
My background stories will take a look at the following topics:
- What is sub-sampling?
- Why does sub-sampling work?
- Why do we need verification with uncertain observations?
- EMD and IQD? What is it about?
- Do we need new approaches in verification?
As always, these topics are just an addition to the regular paper and represent my personal view. As it is my first (and, I somehow hope, only) single-author paper, the manuscript of course mostly reflects my view anyway. Still, there are limits to what you can do in the scientific literature, and that is what these blog posts are for. And since it is statistical literature, explaining it in more detail for those who are not fans of equations is certainly a bonus.
Today the two German public television stations WDR and NDR and the newspaper SZ published articles about “fake science”, which they describe in their reporting as a scandal. They highlight that scientists at German universities (among others) published a huge number of papers in “fake journals” and visited “fake scientific conferences”. They give several examples of “fake articles” they submitted to those publishers, which got published without real peer review.
As someone who works in science, I am used to these invitations. On average, five to ten such mails reach me every day for conferences or journals. Some seem to have fitting topics; some are just arbitrary conference invitations where basically everybody who claims to be a scientist is invited. These things of course come with a hefty price tag, but compared to the regular conferences and journals they are often quite cheap. They promise a quick review and also a quick publication of the submitted content.
Why could these things be attractive for some scientists? Because scientific publications are important for every scientist; they basically work as unemployment insurance. If you have enough publications per year, you have quite a good chance to stay in science; if not, you will quite likely lose your job or not get a new one. What makes the offers especially tempting for some is that renowned journals often need a lot of time to publish the research. Review processes with unpaid reviewers often take a year or more. When you have a three-year project, need a year to do the science, write it up for half a year with many coauthors and then submit it to a review process of over a year, you have to be lucky to be able to put the paper into your CV in time for the next applications. Offers with a guarantee to have something published, even when the journal is not famous, might be of interest even for those working in real science. That some people also use these journals to give their b***s*** publications a platform is of course even more damaging for real science.
So what can be done? Information is of course the first thing; this currently hardly happens, and you have to realise by yourself that it might not be good for you to interact with these journal and conference providers. We also have to rethink our funding for scientific publication. Especially in Germany, the amount of money available for publications is low. When you remember that a paper in a journal might cost several thousand euros/dollars/pounds, especially when you want to have your paper published as open access, then money is key. Some countries like the UK have reacted in the past by forcing publishers to make papers open access after a certain time. This would certainly help, because only accessible science is of long-term benefit to the authors. As Germany has not yet implemented such a law, it is time for policymakers to act.
The market for scientific literature and conferences generates high profits. The profit margins for the renowned providers are enormous, so it is no surprise that fake providers enter the market. In the long term it will be a tough fight to keep an eye on what is real and what is “fake”. Let’s hope most real scientists get this done and the working and publication conditions improve over the long term. Otherwise, science as we have known it for 400 years is in danger.
Last week several journals published an agreement made at a National Institutes of Health (NIH) workshop in June 2014. It focuses on preclinical trials, but allows a wider view on the development of the publication of research in general. Furthermore, large journals like Science and Nature have accompanied this with further remarks on their view of the future of proper documentation of scientific research, which heads in the direction I called “Open methods, open data, open models.” a while ago. In this post I would like to comment on the agreement and some reactions from these major journals.
In philosophy, several great minds have addressed the way scientists should work to gain their knowledge. Among others, Bacon (1620) and Popper (1934) showed different ways to gain information and how it can be evaluated to become science. During my PhD I developed a relatively simple and general working scheme for scientists, which was published in Quadt et al (2012). The paper analysed how this general scientific working scheme could be represented in scientific publications.
The way scientists should work (Quadt et al 2012)
While the traditional journal paper, which has existed since the Philosophical Transactions of the Royal Society, edited by Henry Oldenburg in 1665, covers the whole scientific process, new forms have emerged in the last decade. Data papers (Pfeiffenberger & Carlson, 2011), short journal articles focusing on the experimental design and presenting the data from the experiment, filled a gap and should simplify the use of data. Another approach is the publication of data and metadata at a data centre itself, without an accompanying journal article.
This type of publication was part of my project at that time. A general question therein was how such a publication can be made comparable to the other types. The comparison showed that it is quite comparable, but that one important element is missing: peer review.
Two years. Two years between the first ideas and the submission of the paper, which went on its journey today. Sounds like a long time, but to be honest it is not. To show this, I would like to explain in this post the generalised basic steps of my work towards a paper. I will not take the paper submitted today as an example, because its creation was quite unusual. Instead, I will stick with the general approach, which is divided into several phases: