During this week, discussions on Twitter and on blogs focused on a discussion paper by James Hansen et al. in Earth System Dynamics Discussions. The paper forms part of a legal case in the US and essentially states that the current and expected warming over the coming decades is unprecedented since the last interglacial. In this context, the Guardian ran an article on the paper. While they state that the science is not yet peer-reviewed, the authors have given a series of interviews and comments, which usually happens only when a paper is actually published. As usual, I refrain in this blog from commenting on climate politics, but since I wrote my PhD on scientific publication processes, I would like to focus here on the implications of media scrutiny during the discussion-paper phase of a scientific publication.
The environment for the publication of data is currently changing rapidly. New data journals are emerging, such as Scientific Data from Nature two weeks ago or the Geoscience Data Journal from Wiley. The latter was also the focus of the PREPARDE project, which delivered a nice paper on data peer review a couple of weeks ago (Mayernik et al., 2014). Furthermore, more and more funding agencies require the publication of data, and it is to be expected that this demand will put more pressure on scientists to make their work publicly available.
These developments are great, but at this point I would like to think further into the future. Where should we be in five or ten years, and what is possible in, let's say, 30 or more years? A lot is the answer, but let's go into a little more detail.
A basic point of the new paper is the introduction of quality evaluation. But what does this mean, and why do I think it is important? For the first question, I have to give a little background. The terms we commonly use together with quality are assurance and control. Depending on their definition, they focus on making the product, or the processes which lead to the product, better. Since the products we are talking about here are data, both aim to deliver better datasets.
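To make the idea of a formal quality test on data a bit more concrete, here is a minimal sketch (not taken from the paper; the function name, thresholds, and sample values are purely illustrative): an automated check that reports what fraction of a dataset passes a physically plausible range test, rather than silently correcting or discarding values.

```python
def range_test(values, lower, upper):
    """Return the fraction of values inside a plausible [lower, upper] range.

    A hypothetical building block for automated quality evaluation:
    it quantifies quality as a pass rate instead of fixing the data.
    """
    flags = [lower <= v <= upper for v in values]
    return sum(flags) / len(flags)

# Hypothetical example: daily 2 m air temperatures in degrees Celsius,
# where 95.0 is an implausible outlier.
temperatures = [12.3, 14.1, 13.7, 95.0, 11.9]
score = range_test(temperatures, lower=-80.0, upper=60.0)
print(f"range test pass rate: {score:.2f}")  # 4 of 5 values pass -> 0.80
```

Such a score is of course only one ingredient; a real evaluation would combine many tests, but the principle of quantifying rather than repairing stays the same.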
Nevertheless, with peer review we are dealing with a different stage, since we are now in the phase in which we want to quantify the quality. To do this, some points have to be made clear. The first is that quality is subjective. Especially when we think about the peer review process, it is important to keep in mind that this is not an objective process. The quality of the published entity is defined by the opinions of the reviewers and the editor and therefore inevitably has a personal touch. Of course, the same is true for data peer review.
The paper “Automated quality evaluation for a more effective data peer review“, which my co-author and I published in the Data Science Journal this week, started as a common background theme for my PhD thesis. The task was to find a way to bring the loose chapters on quality tests together.
The basic idea was to take a closer look at the publication process in general and, since it was the topic of the project at that time, at how it can be applied to data. This approach led to a lot of questions, especially about how scientists work, how they interact through their publications, and how they should work. The latter is quite philosophical and was addressed in part in Quadt et al. (2012).
In the upcoming week I want to give some insights into the general topic of the paper and how it tries to address the problems that arise. The topics are:
- The philosophical problem of a missing data peer review
- What a data peer review could look like
- Statistical quality evaluation? What is that?
- Why quality is decisive information for data
- Chances for the future
I hope these topics will show a little of what is behind this paper and how it fits into the scientific landscape.
To fully understand the paper, it has to be seen in connection with Quadt et al. (2012). In that paper we showed that traditional publications and data publications can be published in a comparable way, but that one major element is missing for this: data peer review.