Today the two German public television stations WDR and NDR and the newspaper SZ published articles about “fake science”, which they describe as a scandal. They highlight that scientists, among others from German universities, have published a huge number of papers in “fake journals” and visited “fake scientific conferences”. They give several examples of “fake articles” they submitted to those publishers, which got published without real peer review.
As someone who works in science, I am used to these invitations. On average, five to ten mails reach me every day for conferences or journals. Some seem to have fitting topics; some are just arbitrary conference invitations that basically address everybody who claims to be a scientist. These offers of course come with a hefty price tag, but compared to the regular conferences and journals they are often quite cheap. They promise a quick review and a quick publication of the submitted content.
Why could these things be attractive for some scientists? Because scientific publications are important for every scientist; they basically work as an unemployment insurance. When you have enough publications per year, you have quite a good chance to stay in science; when not, you will quite likely lose your job or not get a new one. What makes these offers especially tempting for some is that renowned journals often need a lot of time to publish research. Review processes with unpaid reviewers often take a year or more. When you have a three-year project, need a year to do the science, write it up for half a year with many coauthors and then submit it to a review process of over a year, you have to be lucky to get the paper into your CV in time for the next applications. An offer that guarantees publication, even in a journal that is not famous, might be of interest even for those doing real science. That some people also use these journals to give their b***s*** publications a platform is of course even more damaging for real science.
So what can be done? Information is of course the first thing; currently this hardly happens, and you have to work out for yourself that it might not be good to interact with these journals and conference providers. We also have to rethink our funding for scientific publishing. Especially in Germany, the amount of money available for publications is low. When you remember that a paper in a journal might cost several thousand euros, dollars or pounds, especially when you want your paper published as open access, then money is key. Some countries like the UK have reacted in the past by forcing publishers to make papers open access after a certain time. This would certainly help, because only accessible science is of long-term benefit to the authors. As Germany has not yet implemented such a law, it is time for politics to act.
The market for scientific literature and conferences is highly profitable. The profit margins for the renowned providers are enormous, so it is no surprise that fake providers enter the market. In the long term it will be a tough fight to keep an eye on what is real and what is “fake”. Let’s hope most real scientists get this done and that the working and publication conditions improve over time. Otherwise, science as we have known it for 400 years is in danger.
Over the spring a new paper finally made it and is now published in full form in GRL: our first sub-sampling paper. I will not go into too much detail on what the paper is about, as I will have plenty to explain when my other papers on this topic come out. Rather, I would like to talk a bit about the long path papers sometimes have to take and the late addition of new coauthors (like me in this case).
After talking about the place of data publications among the other scientific publication types, I want to give an overview of what a data publication might look like in the future. As I have stated before, to gain trust in a data peer review, it should be comparable to the peer review of other publication forms. The simplest way to achieve this is to build it up as similarly as possible, while including the changes that are necessary due to the form of the publication entity.
The paper “Automated quality evaluation for a more effective data peer review”, which my co-author and I published in the Data Science Journal this week, started as a common background theme for my PhD thesis. The task was to find a way to bring the loose chapters on quality tests together.
The basic idea was to take a closer look at the publication process in general and, since it was the topic of the project at that time, at how it can be applied to data. This approach led to a lot of questions, especially about how scientists work, how they interact through their publications and how they should work. The latter is quite philosophical and was in part addressed in Quadt et al. (2012).
In the upcoming week I want to give some insights into the general topic of the paper and how it tries to address the problems that arise. The topics are:
- The philosophical problem of a missing data peer review
- What a data peer review could look like
- Statistical quality evaluation? What is that?
- Why quality is a decisive information for data
- Chances for the future
I hope these topics will show a little of what is behind this paper and how it fits into the scientific landscape.
To fully understand that paper, it has to be seen in connection with Quadt et al. (2012). In that paper we showed that traditional publications and data publications can be published in a comparable way, but that one major element is still missing for this: data peer review.
Less than a week remains until the European Geosciences Union General Assembly 2014, EGU 2014 for short, starts in Vienna. I will attend the conference, with its more than 10,000 visitors, for the third time, and it will once again be a great opportunity to see new things, people and ideas. Once again I have the opportunity to contribute two entries to the program, and I would like to introduce them below with a short overview.
This talk will present a statistical approach to estimating the sea level history during the last interglacial. It is based on a massive ensemble approach, which is evaluated with Bayesian statistics. The presentation will show some preliminary results and their uncertainties. Furthermore, it will be demonstrated how these uncertainties can be explained.
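To give a rough idea of what a Bayesian evaluation of a large ensemble can look like, here is a minimal sketch: each ensemble member is weighted by its Gaussian likelihood against observations, and the weights give a posterior estimate. All numbers, variable names and the simple likelihood form are my own illustrative assumptions, not the actual method or data of the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: each ensemble member is a simulated sea-level curve
# (metres above present) at a few time slices; obs are proxy estimates
# with known uncertainties. All numbers are made up for illustration.
n_members, n_times = 1000, 5
ensemble = rng.normal(loc=6.0, scale=2.0, size=(n_members, n_times))
obs = np.array([5.5, 6.0, 6.5, 6.0, 5.0])        # proxy sea level (m)
obs_sigma = np.array([0.8, 0.6, 0.7, 0.6, 0.9])  # proxy uncertainty (m)

# Gaussian log-likelihood of each member given the observations
log_like = -0.5 * np.sum(((ensemble - obs) / obs_sigma) ** 2, axis=1)

# Bayesian weights: flat prior over members, normalised likelihoods
weights = np.exp(log_like - log_like.max())
weights /= weights.sum()

# Posterior mean and spread of the sea-level history
post_mean = weights @ ensemble
post_std = np.sqrt(weights @ (ensemble - post_mean) ** 2)
```

The posterior spread here is one simple way to express the uncertainty of such an estimate; a real analysis would of course use a physically motivated prior and likelihood.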
What has to be done to make data publications comparable to traditional publications? This is the question this contribution tries to answer. We think one main factor will be an effective peer review scheme. A probable candidate will be described and illustrated with an application to data from a meteorological climate station.
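As a rough illustration of what an automated quality test on station data could look like, here is a small sketch that flags values outside a physical range and sudden spikes. The checks, thresholds and the function name are hypothetical examples, not the ones from the paper.

```python
import numpy as np

def quality_flags(temps, valid_range=(-60.0, 60.0), max_step=10.0):
    """Flag each value: 0 = pass, 1 = out of physical range, 2 = spike."""
    temps = np.asarray(temps, dtype=float)
    flags = np.zeros(temps.shape, dtype=int)

    # Range check: physically impossible temperatures (degrees Celsius)
    flags[(temps < valid_range[0]) | (temps > valid_range[1])] = 1

    # Step check: a jump larger than max_step between consecutive
    # readings marks the later value as a suspected spike
    steps = np.abs(np.diff(temps))
    spike = np.zeros(temps.shape, dtype=bool)
    spike[1:] = steps > max_step
    flags[spike & (flags == 0)] = 2
    return flags

hourly = [12.1, 12.3, 12.2, 45.0, 12.4, -99.9, 12.5]
print(quality_flags(hourly).tolist())  # [0, 0, 0, 2, 2, 1, 2]
```

Such simple flags are of course only a starting point; the point is that they can be computed automatically and attached to the data before any human reviewer looks at it.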
So I am looking forward to an interesting week and hope for some nice discussions. When time and WiFi permit, I will write up some impressions of the conference here. Until then: see you in Vienna!