For everyone working on data analysis in climatological science, references are essential. These references, representing some form of truth, are often the target that models have to reach. Verification (or, outside meteorology, validation) methodologies evaluate the results against the references and, depending on the methodology, deliver good scores when the model is close to the reference, matches its variability, or agrees in other statistical parameters. The power of these references in such analyses, and in defining our knowledge about the world, is immense, and so it is essential that they really have something to do with the things we see in front of our windows.
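To make the idea concrete, here is a minimal sketch of the kind of verification scores meant above: closeness, systematic offset, and shared variability. The function name, the choice of scores, and the toy data are my own illustration, not taken from the post.

```python
import numpy as np

def verify(model, reference):
    """Compare a model series against a reference series with three
    common verification scores (an illustrative selection)."""
    model = np.asarray(model, dtype=float)
    reference = np.asarray(reference, dtype=float)
    bias = np.mean(model - reference)                   # systematic offset
    rmse = np.sqrt(np.mean((model - reference) ** 2))   # overall closeness
    corr = np.corrcoef(model, reference)[0, 1]          # shared variability
    return {"bias": bias, "rmse": rmse, "corr": corr}

# Toy example: a "model" that equals the reference plus a constant offset.
reference = np.array([10.0, 12.0, 11.0, 13.0, 12.5])
model = reference + 0.5
scores = verify(model, reference)
```

In this toy case the model tracks the reference's variability perfectly (correlation 1) while carrying a constant bias of 0.5, which illustrates why a single score never tells the whole story.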
Last month Wendy Parker published a paper named “Reanalyses and Observations: What’s the Difference?” and looked at references from a more philosophical point of view. She listed four points that critically examine the connection between references and observations, and in this post I would like to take a look at them.
There is an elephant in the room at every conference in nearly every discipline. The elephant is so extraordinary that everyone seems to want to watch and hype it. In all this commotion a lot of common sense seems to get lost, and especially the little mice creeping around the corners are overlooked.
The big topic is Big Data, the next big thing that will revolutionise society, at least if you believe the advertisements. The topic has grown over the past few years into something really big, especially as the opportunities behind the term are regularly demonstrated by social media companies. Funding agencies and governments have seen this and put Big Data at the top of their science agendas. A consequence is masses of scientists sitting in conference sessions about Big Data, with discussions ranging from what it is to how it can be used. Nevertheless, there are a lot of traps in this field, which might have serious consequences for science in general.
Observations are generally a tricky thing. Not only are they a special kind of model, one that tries to capture a sometimes very complicated laboratory experiment; they also represent the truth, as far as we are able to measure it. As a consequence they play a really important part in science, but in some fields they are hard to generate.
During the PALSEA2 meeting a question came up in the context of generating paleo-climatic sea-level observations.
Assuming your resources allow only two measurements, is it better for them to be near each other or far apart?
In the heat of the discussion both sides were argued, but in the end the conclusion was the typical answer to this kind of question: “it depends on what you want to measure”.
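One way to see why “it depends” is a small toy calculation, entirely my own construction and not from the meeting: assume the measured field has an exponential spatial correlation rho(d) = exp(-d / L) with unit variance, and ask how well the average of two measurements estimates the large-scale mean.

```python
import numpy as np

L = 100.0  # assumed correlation length of the field (illustrative)

def var_of_mean(d):
    """Variance of the average of two unit-variance measurements a
    distance d apart, given correlation rho(d) = exp(-d / L)."""
    rho = np.exp(-d / L)
    return 0.5 * (1.0 + rho)

near = var_of_mean(1.0)     # almost-duplicate measurements: variance near 1
far = var_of_mean(1000.0)   # nearly independent measurements: variance near 0.5
```

For estimating a regional mean, far-apart measurements are better because they are closer to independent; for checking the reproducibility of a single local value, near-duplicate measurements are what you need. Hence: it depends on what you want to measure.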
Within science it is not unusual that great findings consisting of reduced uncertainties get congratulated. “The next big thing” is sometimes rushed into publication, often in the form of a nice number, and sometimes the significance of the result is the selling argument. Unfortunately, a sole number is in most cases worth nothing as long as it is not backed up by information about how sure the author is about it. This information is usually called the uncertainty and comes in many different forms: sometimes significance levels, sometimes a simple sigma level. Often these uncertainty-quantifying numbers, and especially the methods by which they are obtained, are the really important part of a publication.
Sure, someone like me, whose main job is uncertainty quantification, sees this especially critically. But there are also good arguments that the statistical part of a publication is as important as the result itself. Any given quantification of uncertainty makes assumptions, and these assumptions decide in the end whether the big result is really big or just nice to have. That this has become critical in science has been discussed for years in many fields, but this year in particular the discussions got heated and led to consequences.
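A minimal sketch of the point that a number is worth little without its uncertainty: the same mean, reported from data with different spreads, carries very different confidence. The data, the one-sigma standard error, and the seed are illustrative assumptions of mine.

```python
import numpy as np

rng = np.random.default_rng(42)

def report(sample):
    """Return the mean together with its standard error (one sigma of
    the mean), the minimal uncertainty information for a single number."""
    sample = np.asarray(sample, dtype=float)
    mean = sample.mean()
    sem = sample.std(ddof=1) / np.sqrt(sample.size)
    return mean, sem

tight = report(rng.normal(1.0, 0.1, size=100))  # small spread in the data
loose = report(rng.normal(1.0, 2.0, size=100))  # large spread in the data
# Both means are close to 1.0, but their standard errors differ by
# roughly a factor of twenty: the bare number alone hides this.
```

Reporting “1.0 ± 0.01” versus “1.0 ± 0.2” are very different scientific claims, even though the headline number is the same.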