Doing statistics between the two worlds of observations and model results often leads to the assumption that they are completely different things. There are the observations, where real people go into the field, drill, dig and measure, and deliver the pure truth of the world we want to describe. In contrast stands the clean laboratory of a computer, which takes all our knowledge and creates a virtual world. This world need not necessarily have anything to do with its real counterpart, but at least it delivers nice information and visualisations. But this contrast between the dirty observations and the clean models usually exists only in our heads; in reality the two are much more closely connected.
The basic question in this context is: are we really able to measure something without applying a model? The answer, in my opinion, is simple: no.
Of course, I could now argue that the human brain uses models to interpret the information it gets from its senses (or should I say measurement instruments?). But let's be honest: I am definitely not qualified for such arguments, since I have no medical degree. The line I would prefer is the typical physicist's argument, which every student of that science is introduced to in their first years. As part of your studies you have to do a lot of practical courses, and they basically teach you one thing: never trust an observation.
Sure, this sounds quite harsh, but a sole observation, without exact knowledge of the conditions and procedures of the measurement and without some parallel measurements to estimate the statistical uncertainty, is more or less useless. Even when the measurement procedure is very simple, systematic and statistical uncertainties blur the results. To cope with this, model building takes place. Observations are interpreted with the help of very simple models. It could be the simple assumption of a Gaussian uncertainty, which simplifies further calculations. It might be the neglect of small effects, which supposedly have no influence on the measurements, but in fact do. And yes, it might simply be the chosen location, time or whatever outside condition people in the field have to accept when they do the measurement. Observations are uncertain, and tiny things can have a huge effect.
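As a minimal sketch of what "parallel measurements plus a Gaussian assumption" buys you: with a handful of repeated readings (the numbers below are made up for illustration), the scatter of the readings gives a statistical uncertainty, and under the Gaussian model the uncertainty of the mean shrinks with the square root of the number of readings.

```python
import statistics

# Hypothetical repeated measurements of the same quantity (arbitrary units).
readings = [9.8, 10.1, 9.9, 10.3, 9.7, 10.0, 10.2, 9.9]

n = len(readings)
mean = statistics.mean(readings)
# Sample standard deviation: the scatter of individual readings.
stdev = statistics.stdev(readings)
# Under the Gaussian assumption, the standard error of the mean
# shrinks with sqrt(n) -- this is already a modelling step.
stderr = stdev / n ** 0.5

print(f"mean = {mean:.3f} +/- {stderr:.3f} (statistical uncertainty only)")
```

Note that this quantifies only the statistical part; any systematic effect (a miscalibrated instrument, an unrepresentative site) is invisible to this calculation, which is exactly why the modelling assumptions matter.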
Every measurement takes place in a laboratory, however dirty it might look. Instruments usually act on a very localised scale, measure for a defined length of time, or create a clean measurement environment within themselves to prevent too much influence from the outside. These settings are chosen by the observers and depend on their knowledge of the instrument they use. In the end someone, usually the observer, has to decide how representative these observations are. Often this decision is much more important than the measurement itself. The outside effects have to be estimated, their influence on the measurements calculated, and it has to be decided whether the basic theoretical assumptions that justified the observations in the scientific context are still fulfilled. We usually have a simple term for all of this, but do not really connect all these decisions with it: quality control.
The same is true for models. Nowadays they usually take place in a clean laboratory named computer. But here, too, it has to be decided how representative they are for the real world. In both cases assumptions and simplifications have to be made to get to a result. And both are, in principle, of the same kind.
At the end I would like to point to a blog by Tamsin Edwards, with whom I had the pleasure to talk earlier this year. It carries the nice title "All models are wrong". Together with the title of this post, I guess anyone can now do the math and think about observations.