Background to “Automated quality evaluation for a more effective data peer review”

The paper “Automated quality evaluation for a more effective data peer review”, which my co-author and I published in the Data Science Journal this week, started as a common background theme for my PhD thesis. The task was to find a way to tie the loose chapters on quality tests together.

The basic idea was to take a closer look at the publication process in general and, since data was the topic of the project at that time, at how this process can be applied to data. This approach led to a lot of questions, especially about how scientists work, how they interact through their publications, and how they should work. The latter is quite philosophical and was addressed in part in Quadt et al. (2012).

Over the upcoming week I want to give some insights into the general topic of the paper and how it tries to address the problems that arise. The topics are:

  1. The philosophical problem of a missing data peer review
  2. What a data peer review could look like
  3. Statistical quality evaluation? What is that?
  4. Why quality is a decisive piece of information for data
  5. Chances for the future

I hope these topics will show a little of what lies behind this paper and how it fits into the scientific landscape.

To fully understand the paper, it has to be seen in connection with Quadt et al. (2012). In that paper we showed that traditional publications and data publications can be published in a comparable way, but that one major element is still missing for this: data peer review.
