A look at lecturing: Do we talk about the history of science?

History is important: it explains how we got to where we are and influences our future more than many would admit. This is true in life, but also in science. In lecturing we usually teach concepts and methodologies, many of them developed over the last five centuries, and all of them were developed against a background. This background tells us a lot about why these methodologies gained the importance they have nowadays, and only when we understand it do we understand why they are so prominent compared to other methodologies, which we do not necessarily teach any more. Nevertheless, we usually keep the mention of this background quite brief, and if it is covered at all, a few words about it can be found in textbooks. But is that the right way?


Statistics is subjective… always!

From time to time you listen to talks and, at a moment when you do not expect any surprises, you hear the argument that statistics is objective. It is often used to strengthen other arguments and to pre-empt doubts about them. Much too often statistics is given a credibility it does not deserve, and so the occasions when it is usefully applied are devalued. With this in mind, one thing has to be clear: statistics is subjective… always!

Quite often the words on objectivity are used in haste, often by scientists who should know better. Many arguments for the objectivity of statistics come from the past, when frequentist statistics was the norm and its application to nearly every problem was seen as appropriate and therefore objective. But many forget that by using frequentist statistics they make a choice: a choice about assumptions they learnt years ago and have long since forgotten.

Be it an assumption of normal distributions, of stationarity or of ergodicity: let's be honest, these are never fulfilled, but they are always accepted implicitly by choosing a standard methodology. And if you doubt that you have a choice in your statistical methodology, the answer is in nearly every case that you do. You do not have to go all the way to Bayesian statistics; there are many steps in between. The starting point is usually to think about the assumptions you are currently making in your methodology, and then to take the extra step of thinking about what happens when one of them fails.
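To make this concrete, here is a minimal sketch (in Python, with made-up data) of what that extra step can look like: run the standard method alongside an alternative that drops one assumption and see whether the conclusion changes. The samples, parameters and tests here are purely illustrative, not a recommendation for any particular problem.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Two made-up samples, e.g. a variable measured at two stations.
# The second one is skewed, so the normality assumption is doubtful.
a = rng.normal(loc=10.0, scale=2.0, size=50)
b = rng.lognormal(mean=2.3, sigma=0.4, size=50)

# Standard choice: a t-test, which leans on (approximate) normality.
t_stat, t_p = stats.ttest_ind(a, b, equal_var=False)

# The extra step: a rank-based alternative that drops that assumption.
u_stat, u_p = stats.mannwhitneyu(a, b, alternative="two-sided")

print(f"Welch t-test:   p = {t_p:.3f}")
print(f"Mann-Whitney U: p = {u_p:.3f}")
# If the two tests point to different conclusions, the normality
# assumption is doing real work and deserves a closer look.
```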

Most important is that we start to teach students that statistics is based on assumptions. For me it is a game of assumptions, and usually you have a lot of freedom in making them. That does not mean that standard methods should not be taught; for everyone who works in geoscience a foundation in statistical techniques is necessary. But it is equally important that we make clear that each methodology has its disadvantages and that alternatives exist. They are not necessarily easy to calculate, but they are certainly worth at least a few seconds of thought.

A look at lecturing: Just before the start

Nearly two months ago I wrote about how the preparations for the lecture in the new term had started; now it is time to wrap them up, as the term begins next week. So what have I achieved so far? Well, more or less all the lectures are prepared. I have one left to do, but that will be done closer to the actual date, because I need it as a bit of wiggle room in the middle (in case I am too slow, or I see that the students are not getting used to my concepts). I have also managed to come up with ideas for, and prepare, most of the practical sheets the students have to work through. So far I am quite happy with that, but only in the active phase will I see whether it really works out as planned.

A look at lecturing: Preparations months ahead

Part of an academic job is to lecture. I am very lucky that this duty is part of my obligations, as I really like doing it. In the past I have mainly assisted with teaching or tutored in various lectures, but next term I will have my own lecture to plan and give in full. I will get important assistance with one or two sessions, as my schedule requires me to be away on some dates, but apart from that I will have to fill the four hours a week myself. The topic will be in a statistical area and therefore closer to my core expertise than the lecturing I have done up to now, which was mainly in the physical areas of climate science.

In the upcoming months I will write some posts about this topic, my experience of preparing the lectures and my thoughts about concepts. Of course I will not talk about the actual lecture sessions, as students should never fear being put on the spot. Since the subject of the lecture will be the basics of statistics, the posts will be not so much about the topics themselves, but about how to present them and how to make it an interesting learning experience for the students.

With another two months to go, I have started to prepare the first lectures. All in all there will be roughly 15 weeks to fill, partly with predefined content and partly with practicals. The German system sets a fixed number of hours a student should spend on any lecture, and in my case this works out to 12 hours per week. That is a lot, because even after setting aside the four contact hours, eight hours remain. So it will be a balancing act to get enough material into the lectures while explaining it in a way that a generally unloved topic can be understood. For many students statistics feels like maths, which in applied physics courses such as meteorology, oceanography or geophysics is usually not very popular. One to two years of mathematics, mostly not very connected to the rest of the curriculum, usually stand at the beginning of every student's studies, and so the next step, a mostly rather dry topic like statistics, is expected to be more of the same. And unfortunately, therein lies a problem. If you approach statistics too much from the applied side, you give no context to the maths lectures that came before, and it becomes harder for the students to get into statistics properly later on (not only as an auxiliary subject, but as a real tool they are comfortable handling). If, on the other hand, you do it too mathematically, it is just another hated maths subject. Balancing in the middle is certainly the aim, but not really realistic to achieve.

I am looking forward to this experience, but I am also aware that all my planning and thoughts might not work out as intended, and it could end up a struggle for the students and myself. That is a challenge, and I like challenges.

Two new papers out

The new year has started, and in recent weeks two new papers with me in the author list have been published. Both cover a wide spectrum, and my contribution was in both cases something I would classify more as statistical assistance. Therefore, I will keep my comments brief here and just quickly introduce the topics.

Speleothem evidence for MIS 5c and 5a sea level above modern level at Bermuda

This paper is about the sea-level height at Bermuda roughly 70,000 years ago. It is mainly a geological paper and focusses on evidence from speleothems indicating that sea level there was above the modern level at that time. That is important, because at that time many places in the rest of the world show sea level lower than today. A plot in the later part of the paper shows that the difference between locations in the Caribbean can be up to 30-40 m. This can be explained with GIA modelling, and the paper is therefore a good help in better calibrating those models.

Wainer, K. A. I.; Rowe, M. P.; Thomas, A. L.; Mason, A. J.; Williams, B.; Tamisiea, M. E.; Williams, F. H.; Düsterhus, A.; Henderson, G. M. (2017): Speleothem evidence for MIS 5c and 5a sea level above modern level at Bermuda, Earth and Planetary Science Letters, 457, 325-334

Hindcast skill for the Atlantic meridional overturning circulation at 26.5°N within two MPI-ESM decadal climate prediction systems

The second paper focusses on the hindcast skill of two decadal forecasting systems for the Atlantic meridional overturning circulation (AMOC). It shows that both systems have significant hindcast skill in predicting the AMOC up to five years in advance, while an uninitialised model run does not. The time series for evaluating the systems are still quite short, but the extensive statistics in the paper make it possible to follow transparently the argument for why the systems have this capability.
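For readers unfamiliar with the term, hindcast skill is typically quantified by comparing retrospective forecasts against observations as a function of lead time. The paper's actual metrics and significance tests are not reproduced here; the snippet below is only a generic sketch with placeholder numbers, correlating hypothetical initialised hindcasts and an uninitialised run against synthetic observations.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

n_starts, n_leads = 20, 5                 # hypothetical start dates and lead years
obs = rng.normal(17.0, 1.5, n_starts)     # placeholder "observed" AMOC transport (Sv)

# Placeholder hindcasts: the initialised ones track the observations at short
# leads, the uninitialised run does not (by construction, for illustration).
init = obs[:, None] + rng.normal(0.0, 0.5 * (1 + np.arange(n_leads)), (n_starts, n_leads))
uninit = rng.normal(17.0, 1.5, (n_starts, n_leads))

for lead in range(n_leads):
    r_i, p_i = stats.pearsonr(init[:, lead], obs)
    r_u, p_u = stats.pearsonr(uninit[:, lead], obs)
    print(f"lead {lead + 1} yr: r_init = {r_i:+.2f} (p = {p_i:.2f}), "
          f"r_uninit = {r_u:+.2f} (p = {p_u:.2f})")
```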

Müller, V.; Pohlmann, H.; Düsterhus, A.; Matei, D.; Marotzke, J.; Müller, W. A.; Zeller, M.; Baehr, J.: Hindcast skill for the Atlantic meridional overturning circulation at 26.5°N within two MPI-ESM decadal climate prediction systems, Climate Dynamics

Massive ensemble paper background: What will the future bring?

In my final post on the background of the recently published paper, I would like to take a look at the future of this kind of research. Basically it highlights again what I have already written on various occasions, but putting it together in one post might make it clearer.

Palaeo-data on sea level and the datasets associated with it are special in many regards. That is what I wrote in my background post on the last paper, and as a consequence several problems arise when these datasets are analysed. Since I structured the problems into three fields within the paper, I would like to do the same here.

The datasets and their basic interpretation are the most dramatic point, and the one where I expect the greatest steps forward in the coming years. Some papers have come out recently that highlight particular problems, such as the interpretation of coral datasets. We have to make progress in understanding how to combine mixed datasets, and this can only happen when future databases advance. This will be an interdisciplinary effort and therefore challenging for everyone involved.

The next field is the models. The analysis is currently done with simple models, which has its advantages and disadvantages. New developments are not expected immediately, so organising the development and sharing the results of the models will be the major issues in the near future. New ideas about the ice sheets and their simple modelling will also be needed for approaches similar to the one we used in this paper. Statistical modelling is fine up to a point, but it has shortcomings when it comes to the details.

The final field is the statistics. Handling sparse data with multidimensional, probably non-Gaussian uncertainties has proven complicated. New statistical methodologies are needed that are simple enough for every discipline involved to understand, but also powerful enough to solve the problem. In our paper we did our best to develop and use a new methodology to achieve that, but different approaches are certainly possible. So creativity is needed to generate methodologies that deliver not only a value for the various parameters of interest, but also good and honest uncertainty estimates.

Only when these three fields develop further can we really expect to advance our insights into the sea level of the last interglacial. It is not a development that will happen quickly, but I am sure that the possible results are worth the effort.

Massive ensemble paper background: Data assimilation with massive ensembles

In the new paper we developed and modified a data assimilation scheme based on simple models and, up to a point, Bayesian statistics. In the last post I talked about the advantages and purposes of simple models; this time I would like to talk about their application.

As already mentioned, we had a simple GIA model available, driven by a statistical process for creating ice-sheet histories. From the literature we had the guideline that past sea level roughly followed the dO18 curve, but that large deviations from it, in both variability and values, can be expected. As always in statistics, there are several ways to perform a task, based on different assumptions. To provide a contrast to the existing literature, we focussed on an ensemble-based approach. Our main advantage here is that at the end we get individual realisations of the model run and can show individually how they perform compared to the observations.

The first step in designing the experiment was the question of how to compare a model run to the observations. As there were several restrictions on the observational side (limited observations, large two-dimensional uncertainties etc.), we decided to combine Bayesian statistics with a sampling algorithm. The potentially large number of outliers also required us to modify the classical Bayesian approach. As a consequence, we were at that point able to estimate a probability for each realisation of a model run.
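The modified Bayesian scheme itself is laid out in the paper; the sketch below only illustrates the general idea of assigning each realisation a probability, under the simplifying assumption of independent Gaussian observational uncertainties and a flat prior over realisations. All names, array shapes and numbers are hypothetical.

```python
import numpy as np

def log_likelihood(model_rsl, obs_rsl, obs_sigma):
    """Gaussian log-likelihood of one realisation given the observations.

    model_rsl : modelled relative sea level at the observation points
    obs_rsl   : observed relative sea level
    obs_sigma : observational uncertainty (one standard deviation)
    """
    resid = (model_rsl - obs_rsl) / obs_sigma
    return -0.5 * np.sum(resid**2 + np.log(2.0 * np.pi * obs_sigma**2))

rng = np.random.default_rng(1)
obs_rsl = rng.normal(0.0, 2.0, 8)          # placeholder observations (m)
obs_sigma = np.full(8, 1.0)                # placeholder uncertainties (m)
ensemble = rng.normal(0.0, 2.0, (100, 8))  # placeholder output of 100 realisations

logl = np.array([log_likelihood(m, obs_rsl, obs_sigma) for m in ensemble])

# Normalised weights: each realisation gets a probability.
weights = np.exp(logl - logl.max())
weights /= weights.sum()
print("best realisation:", weights.argmax(), "with weight", round(float(weights.max()), 3))
```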

The next part of the experimental design was a general strategy for creating the different ensemble members, so that they are not completely random. Even with the capability to create a lot of runs, ensembles on the order of 10,000 realisations are not sufficient to determine a result without such a strategy. This led us to a modified form of a Sequential Importance Resampling Filter (SIRF). The SIRF uses a round-based approach. In each round a number of model realisations are calculated (in our case 100) and afterwards evaluated. A predefined number of them (we used 10), the best performers of the round, are carried forward to the next round and act as seeds for the new runs. As we wanted to determine the sea level along the time dimension, we defined the rounds along that dimension. Every couple of years (more often in important time phases like the LIG) a new round was started. In each round the new ensemble members branched off from their seeds with anomaly time series for their future development. Our setup required that we always calculate and evaluate full model runs. To prevent very late observations from driving the whole analysis, we restricted the number of observations taken into account in each round. All these procedures led to a system in which, in every round, and therefore at every time step of our analysis, the ensemble had the opportunity to choose new paths for the global ice sheets, deviating from the original dO18 curve.
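As a rough illustration of that round-based structure, here is a minimal sketch using the 100-runs-per-round and 10-seeds numbers from the text; the model, the scoring function and the way new members branch from their seeds are simple stand-ins, not the implementation used in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

N_RUNS_PER_ROUND = 100   # realisations calculated per round (as in the text)
N_SEEDS = 10             # best performers carried forward as seeds
N_ROUNDS = 5             # stands in for the sequence of rounds / time steps

def run_model(history):
    """Stand-in for a full simple-model run driven by an ice-sheet history."""
    return history  # a real model would return e.g. relative sea level

def score(run):
    """Stand-in for the evaluation of a run against the observations of a round."""
    return -np.sum((run - 1.0) ** 2)  # higher is better; the target is arbitrary

# Start from a single initial guess of the history (e.g. based on the dO18 curve).
seeds = [np.zeros(20)]

for rnd in range(N_ROUNDS):
    # Branch new ensemble members off the current seeds with random anomalies.
    parents = rng.integers(0, len(seeds), N_RUNS_PER_ROUND)
    ensemble = [seeds[i] + rng.normal(0.0, 0.3, 20) for i in parents]

    # Always calculate and evaluate full model runs.
    scores = np.array([score(run_model(h)) for h in ensemble])

    # The best performers of the round become the seeds of the next one.
    best = np.argsort(scores)[::-1][:N_SEEDS]
    seeds = [ensemble[i] for i in best]

print("best score after final round:", max(score(run_model(h)) for h in seeds))
```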

As you can see, there were many steps involved, which made the scheme quite complicated. It also demonstrates that standard statistics reaches its limits here. Many assumptions, some simple and some tough, are required to generate a result. We tried to make these assumptions and our process as transparent as possible. Our individual realisations, based on different model parameters and assumptions about the dO18 curve, show that it is hard to constrain sea level for the LIG with the underlying datasets. Of course we get a best ice-sheet history under our conditions, that is how our scheme is designed, but it is always important to evaluate whether the results of a statistical analysis make sense (basically, whether the assumptions hold). In our case we could say that there is a problem. It is hard to say whether the model, the observations or the statistics itself contributes the largest part of it, but the observations are the prime candidate. The reasons are given in the paper, together with much more information and discussion of the procedure and assumptions.