A few months ago I wrote about modularisation and the importance of programming in science. To dig a little deeper into the topic, I will explain some further programming concepts. One of them, which is especially important in modularised programs, is the variable container. The basic idea is that all variables that are exchanged between the modules are stored in a module of their own, which is then included in every other module. Nevertheless, some precautions have to be taken to make this concept a success in larger programs.
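To make the idea concrete, here is a minimal sketch in Python of such a variable container. All names (`shared`, `grid_size`, `setup_module`, `analysis_module`) are my own illustrative assumptions, not from the original post; in a real program the container would live in its own file (say, `variables.py`) that every other module imports.

```python
from types import SimpleNamespace

# Hypothetical "variable container": a single shared namespace that all
# modules import, instead of passing every variable around explicitly.
# In a real project this block would be its own module, e.g. variables.py.
shared = SimpleNamespace(
    grid_size=128,   # assumed example parameter
    results={},      # modules deposit their outputs here
)

def setup_module():
    # One module writes its result into the container ...
    shared.results["n_cells"] = shared.grid_size ** 2

def analysis_module():
    # ... and another module reads it later, without the two modules
    # ever calling each other directly.
    return shared.results["n_cells"]

setup_module()
print(analysis_module())  # prints 16384
```

The convenience comes at a price: because every module can write to the container, it becomes global state, which is exactly why the precautions mentioned above (clear naming conventions, documenting who writes what) matter in larger programs.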
Within science it is not unusual that findings which consist of reduced uncertainties are celebrated as great. "The next big thing" is sometimes rushed into publication, often in the form of a nice number, and sometimes the significance of the result is the selling argument. Unfortunately, a sole number is worth nothing in most cases, as long as it is not backed up by information about how sure the author is about it. This information is usually called the uncertainty and comes in many different forms: sometimes significance levels, sometimes a simple sigma-level. Often these uncertainty-quantifying numbers, and especially the methods by which they are obtained, are the really important thing in a publication.
Sure, for someone like me, whose main job is uncertainty quantification, this is an especially critical point. But there are also good arguments that the statistical part of a publication is as important as the result itself. With any given quantification of uncertainty, assumptions are made, and these assumptions decide in the end whether the big result is really big or just nice to have. That this has become a critical issue in science has been discussed for years in many fields, but especially this year these discussions got heated and led to consequences.