Let’s play: HadCRUT 4

Playing around with data can be quite fun and sometimes delivers interesting results. I did this a lot in the past, mainly out of necessity during my PhD, for which I developed some methods for the quality assurance of data; those methods of course needed interesting applications. So every time a nice dataset came to life, I ran it through my methods, and usually the results were quite boring. The main reason is that these methods are designed to identify inhomogeneities, and much of the data published nowadays is already quality controlled (homogenised), which makes it quite hard to find new properties within a dataset. Model data in particular is often quite smoothed, so one has to look at rather old data to find something really interesting.
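To give an idea of what such a check does, here is a minimal sketch of a very simple breakpoint test in Python. This is just an illustration, not the actual method from my thesis: it scans a series for the split point where the means of the two segments differ most, which is roughly the idea behind classical homogeneity tests such as the SNHT.

```python
import numpy as np

def find_breakpoint(series):
    """Return the split index where the difference between the means
    of the two segments is largest, relative to its standard error
    (a crude stand-in for proper homogeneity tests)."""
    n = len(series)
    best_idx, best_stat = None, 0.0
    for i in range(2, n - 2):          # keep both segments non-trivial
        left, right = series[:i], series[i:]
        # pooled standard error of the difference of the two means
        se = np.sqrt(left.var(ddof=1) / i + right.var(ddof=1) / (n - i))
        stat = abs(left.mean() - right.mean()) / se
        if stat > best_stat:
            best_idx, best_stat = i, stat
    return best_idx, best_stat

# synthetic "temperature anomalies" with an artificial jump at index 60
rng = np.random.default_rng(0)
x = rng.normal(0.0, 0.5, 120)
x[60:] += 1.0                          # the inhomogeneity
print(find_breakpoint(x))              # should report a break near 60
```

On well-homogenised data this statistic rarely flags anything noteworthy, which is exactly the boredom described above.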

The sampling issue

Observations are generally a tricky thing. Not only are they a special kind of model, one that tries to capture a sometimes very complicated laboratory experiment; they also represent the truth, as far as we are able to measure it. As a consequence they play a really important part in science, yet in some fields they are hard to generate.

During the PALSEA2 meeting, a question came up in the context of generating paleo-climatic sea-level observations.

Assuming your resources allow only two measurements, is it better for them to be close to each other, or should they be far apart?

In the heat of the discussion both sides were argued, but in the end the conclusion was the typical answer to this kind of question: “it depends on what you want to measure”.
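To make the “it depends” concrete, here is a small simulated sketch; the field, noise level, and sample positions are all invented for illustration. Two nearby samples pin down the local value better, while two distant samples are far better for estimating the overall gradient.

```python
import numpy as np

rng = np.random.default_rng(1)

def field(x):
    """Hypothetical smooth signal along a transect (made up)."""
    return x ** 2

def measure(x, noise=0.05):
    """One observation: the true field plus measurement noise."""
    return field(x) + rng.normal(0.0, noise)

trials = 10_000
err = {"local near": [], "local far": [], "slope near": [], "slope far": []}

for _ in range(trials):
    near = measure(0.45), measure(0.55)    # two samples close together
    far = measure(0.0), measure(1.0)       # two samples far apart

    # target 1: the local value at x = 0.5 (true value 0.25)
    err["local near"].append(abs(np.mean(near) - field(0.5)))
    err["local far"].append(abs(np.mean(far) - field(0.5)))

    # target 2: the mean slope across the transect (true value 1.0)
    err["slope near"].append(abs((near[1] - near[0]) / 0.1 - 1.0))
    err["slope far"].append(abs((far[1] - far[0]) / 1.0 - 1.0))

for name, values in err.items():
    print(f"{name:11s} mean abs. error: {np.mean(values):.3f}")
```

Close samples average out noise around one location but tell you nothing about how the field changes; distant samples stabilise the slope estimate but smear out local detail. Which placement is “better” follows directly from which of the two targets you care about.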