Why media reporting on discussion papers can lead to manipulated science

This week, discussions on Twitter and on the blogs focused on a discussion paper by James Hansen et al. in Earth System Dynamics Discussions. The paper forms part of a legal case in the US and essentially states that the current and expected warming over the coming decades is unprecedented since the last interglacial. In this context the Guardian has run an article on the paper. While they state that the science has not yet been peer-reviewed, the authors have given a series of interviews and comments, which usually happens only when a paper is actually published. As usual, I refrain in this blog from commenting on climate politics, but as I wrote my PhD on scientific publication processes, I would like to focus here on the implications of media scrutiny during the discussion-paper phase of a scientific publication.

Computers are the basis of modern science

Computers are the basis of most modern science today. Especially for those who, like me, work mainly on theoretical topics, they are the centre of daily work. As such, workflows are developed over time to make one's own research possible and effective, which of course requires the right equipment. As this blog is about making the work of a scientist a bit more transparent, I want to explain what my personal setup looks like.

A typical work place

For the past couple of years the scheme I use has remained more or less unchanged. It has received some additions over time, but they happened mainly in the attached workflows (e.g. reading a paper digitally instead of printing it out).

The workflow is basically divided into three levels, each represented by one or several computers. Each level has its own tasks and requirements, and all of them are essential to keep one's own workflow running. To understand the basics of the idea I describe here, my blog post about my general schematic for work might be of interest. Several work phases are active at the same time, and so the different levels of the computer infrastructure have to handle different projects simultaneously.

The first level is responsible for all the typical work that requires active user interaction. This includes programming, the write-up of the performed work (papers & documentation) and all other parts of presenting and organising the projects I am working on. A laptop is of course a useful tool on this level, as work does not always take place in the office (conferences, train journeys etc.). That is also the main downfall of this level: it is not continuously connected to the network.

This disadvantage is covered by the second level. It is a machine that sits within the network and does the basic job of a distribution centre for tasks and jobs. Optionally it can calculate simple things itself, but the main point is that it has connections to the storage facilities and to the third level. These machines do not necessarily have to be fast; it is more important that they are robust. I use a simple Linux desktop computer for this.

On the last level sit external servers, clusters and supercomputers. Their job is to run the given programs and write the results to storage facilities, which can be accessed by the second level. Over time I have had different machines in this category, ranging from desktop machines of co-workers, borrowed overnight, to the really big ones offering several hundred cores. The art is usually to coordinate one or several external computers at the same time, which is again the task of the second level.
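
To make this coordination a bit more concrete, below is a minimal sketch of how a second-level machine could dispatch jobs to third-level machines. The host names, paths and job scripts are hypothetical placeholders, and the sketch assumes passwordless SSH access and a SLURM scheduler (sbatch) on the remote side; the actual workflow differs in its details.

```python
"""Hypothetical sketch: the second-level machine as a dispatch centre.

Assumes passwordless SSH and a SLURM scheduler on the remote hosts;
host names and script paths are placeholders, not my actual setup.
"""
import subprocess

# Third-level machines and the job scripts they should run (placeholders).
JOBS = {
    "cluster.example.org": "/home/me/runs/experiment_a/job.slurm",
    "bigiron.example.org": "/home/me/runs/experiment_b/job.slurm",
}


def submit(host: str, script: str) -> str:
    """Submit a batch job on a remote host via SSH and return sbatch's reply."""
    result = subprocess.run(
        ["ssh", host, "sbatch", script],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()


if __name__ == "__main__":
    for host, script in JOBS.items():
        print(f"{host}: {submit(host, script)}")
```

The design point is that only this one machine needs to know where the jobs run and where the results end up; the first level merely pushes code and text to it.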

As you can see, every level has its purpose and is important to get work done. Especially in crunch times, when projects have to be finished, the hope is often that each level simply works and that the backup workflows, which come into play when one level is not available, are not needed. Computers are essential in modern research and scientists depend on them heavily. Everybody has their own idea of how computers should be used and, especially, how backups of the data should be handled. In effect, everyone hopes that their system works in case the backups are required, but most will only know when it happens.
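
As an illustration of the backup idea, here is an equally minimal sketch that mirrors a results directory to a separate storage location. The paths are placeholders and the sketch assumes rsync is installed; it stands in for whatever backup scheme one actually trusts.

```python
"""Hypothetical sketch: mirror a results directory to a backup location.

Paths are placeholders; assumes rsync is available on the machine.
"""
import subprocess

SOURCE = "/data/results/"          # results written by the third level
BACKUP = "/mnt/backup/results/"    # separate disk or network share

# --archive keeps permissions and timestamps, --delete mirrors removals.
subprocess.run(["rsync", "--archive", "--delete", SOURCE, BACKUP], check=True)
```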

Application processes in science

In the past months the blog has been quite calm, and as so often when scientific blogs fall silent, it had something to do with the author's job. That was also the case here (apart from the election censorship in the UK), and so the time usually spent writing posts was needed to write applications, prepare for and attend job interviews, and move to the new position. Usually scientists do not talk much about this topic, as it is of course highly sensitive. Nevertheless, I wrote in the opening statement of this blog that I want to give insight into the job of a scientist. And without doubt, working on fixed-term contracts and switching to a new position is an essential part of the job of an early-career scientist. But don't worry, I will keep it very general and just make some statements on how the process generally works and on some of the problems scientists can encounter.

Most post-doc contracts in geoscience (as you can see on the job-listing sites) have a length in the range of two to five years. In some cases it is possible to extend them, but in general it has to be assumed that once the time is up, the money is up. Depending on the country and its social systems that can be a real problem, so it is essential to find a new job before the contract ends. Unfortunately, on short-term contracts this coincides with the time in which the final papers are written, which makes it a special case of multi-tasking for some. As a consequence, several months before the end of a contract the scientist has to look to the future. It can start even earlier when certain deadlines for future perspectives are on the wish-list, e.g. fellowships or writing a proposal.

A major thought process in the application phase is the decision about which jobs to apply for. Generally, there are two dimensions that dominate the process (at least in my theory). The first is the location. This can be the continent, the country or the type of institution (university or research institution). The other dimension is the research topic. Of course there is also a dimension for the level of the position, but that is something I would count towards the location dimension. With these two dimensions it is like the uncertainty principle: when one is constrained by a personal decision (e.g. family/relationship or the best-fitting topic for the future research path), the other one will widen.

After deciding on the jobs you want to apply for, a similar process starts for every position: working through the job description, writing the cover letter and fighting through the application process. Many institutions nowadays build their own web environments, and the more complex they are, the more problems they bring. Of course they force you to think about whether you want to apply at all, but in the phase in which you usually apply for a job you become picky anyway about the positions you apply for (simply because of time constraints). These systems might make sense for the institutions, but definitely not for the applicants. In a few cases there is still the option to send the cover letter and CV directly via mail, which is definitely the best solution from the applicant's point of view.

Having done that, the waiting time starts, and this teaches one a lot about the potential future employer. Some institutions react quickly and inform the applicants in great detail about the process and whether they are short-listed or not. This can happen within a week, which is great for everyone involved. Nevertheless, there are institutions that need a month to reply, and some do not reply at all. I do not want to comment on that, but as I said, it tells you a lot about the institutions.

Being lucky and getting invited to the interviews is usually the first part of the process from which the applicant benefits. Be it in a video interview or in person, it is a great chance to get to know the potential future employer. Whether you get the job or not is secondary in that respect (at least in this moment), because you still learn something from the experience. Preferences regarding the format of the interview differ from person to person; both have their advantages and disadvantages. Nothing can replace real contact, but sometimes it is better for everyone when some details stay hidden behind a screen.

From then on the wait begins, which is in most cases quite short. Many panels decide within hours on their preferred candidate, and s/he will be the only one who gets informed. For the others it is usually a long wait until the people ranked above them have declined or accepted the position.

All in all, the application phase during a job is an exciting time; the problem is only that it costs a lot of time and effort, while you have neither to spare. So in the end I am happy that this phase has come to an end for me, but I am well aware that it waits just around the corner, again at a time when it will certainly not fit into my current job.

A stronger publication regime

Last week several journals published an agreement made at a National Institutes of Health (NIH) workshop in June 2014. It focuses on preclinical trials, but it allows a wider view on the development of the publication of research in general. Furthermore, large journals like Science and Nature have accompanied this with further remarks on their view on the future of proper documentation of scientific research, which head in the direction I called "Open methods, open data, open models" a while ago. In this post I would like to comment on the agreement and on some reactions from these major journals.


All observations are models

Doing statistics between the two worlds of observations and model results often leads to the assumption that the two are completely different things. There are the observations, where real people went into the field, drilled, dug and measured, and delivered the pure truth about the world we want to describe. In contrast to this stands the clean laboratory of a computer, which takes all our knowledge and creates a virtual world. This world need not necessarily have anything to do with its real counterpart, but at least it delivers us nice information and visualisations. But this contrast between the dirty observations and the clean models usually exists only in our heads; in reality the two are much more connected to each other.
