Massive ensemble paper background: What can we say now about the LIG sea-level?

Now that the new paper is out, it is a good time to think about the current status of the main question it covered: the sea level during the LIG. I am usually reluctant to generalise too much in this field, as there is currently a lot going on, many papers are in preparation or have just been published, and the paper we have just published was originally submitted one and a half years ago. Nevertheless, some comments on the current status might be of interest.

So the main question most papers on this topic address is: how high was the global mean sea level during the Last Interglacial? There have been various estimates in the past, but if you ask most people who work on this topic, they will answer more than six metres higher than today. That is of course an estimate with some uncertainty attached to it, and currently most expect that it was not much higher than about nine metres above today's level. There are several reasons for this estimate, but at least we can say that we are quite sure it was higher than present. From my understanding, geologists are quite certain that this is true at least for some regions, and even though the data are sparse, meaning the number of data points is low, it is very likely that this was also the case for the global mean. Whether it was 5, 6 or 10 metres higher is a more complicated question, and it will take more evaluation before we can make more certain statements.

Another set of questions concerns the start, end and duration of the highstand. These questions are very complex, as they depend on definitions and on the problem that in many places only the highest sea level reached over the course of the LIG can be measured. That makes it very hard to say something definitive, especially about the starting point. As such, our paper did not really make a statement on this; it just shows that data from boreholes and data from corals currently do not give the same answer.

The last question everybody asks concerns the variability of the sea level during the LIG. Was it just one big rise and fall, or were there several phases, with a glaciation phase in the middle? Or were there even more than two phases? Hard questions. The most reliable statements say that there were at least two phases, while from my perspective our paper shows that it is currently hard to make any statement based on the data we used. But here too, new data might give us the chance to make better statements.

So there are still many questions to answer in this field, and I hope the future, about which I will write in my last post on this topic, will bring many more insights.

Massive ensemble paper background: Data assimilation with massive ensembles

In the new paper we developed and modified a data assimilation scheme based on simple models and, up to a point, Bayesian statistics. In the last post I talked about the advantages and purposes of simple models, and this time I would like to talk about their application.

As already mentioned, we had a simple GIA model available, which was driven by a statistical ice sheet history creation process. From the literature, we had the guideline that past sea level roughly followed the dO18 curve, but that large deviations from it, both in variability and in values, can be expected. As always in statistics, there are several ways to perform a task, based on different assumptions. To provide a contrast to the existing literature, we focused on an ensemble-based approach. Our main advantage here is that at the end we get individual realisations of the model run and can show individually how each performs compared to the observations.
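To give an idea of what such a statistical ice sheet history creation could look like, here is a minimal sketch in Python. It is not the code of the paper; the function names, the scaling to ice volume and the random-walk anomaly are my own simplified assumptions, just to illustrate how a dO18-based first guess can be turned into many perturbed realisations.

```python
import numpy as np

rng = np.random.default_rng(42)

def first_guess_ice_volume(d18o_norm, v_max):
    # Scale a normalised dO18 curve (0 = interglacial, 1 = full glacial)
    # to a hypothetical global ice volume (arbitrary units).
    return d18o_norm * v_max

def perturbed_history(first_guess, sigma):
    # Add a smooth random anomaly (a random walk) to the first guess to
    # create one ensemble realisation of the ice volume history.
    anomaly = np.cumsum(rng.normal(0.0, sigma, first_guess.size))
    return first_guess + anomaly

# Hypothetical setup: 214 kyr at 500-year resolution -> 428 time steps.
n_steps = 428
d18o_norm = np.linspace(1.0, 0.0, n_steps)          # placeholder for a real dO18 record
guess = first_guess_ice_volume(d18o_norm, v_max=50.0)
ensemble = np.stack([perturbed_history(guess, sigma=0.5) for _ in range(100)])
```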

The first step in this design process of the experiment is the question how to compare a model run to the observations. As there were several restrictions from the observational side (limited observations, large two-dimensional uncertainties etc.), we decided to combine Bayesian statistics with a sampling algorithm. The potential large number of outliers also required us to modify the classical Bayesian approach. As a consequence, we were able at that point to estimate for each realisation of a model run a probability.
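As an illustration of how an outlier-tolerant evaluation of a single realisation could look, here is a hedged sketch. It is not the exact formulation of the paper; the Gaussian mixture and the parameters p_outlier and outlier_scale are assumptions made purely for the example.

```python
import numpy as np

def log_likelihood(model_values, obs_values, obs_sigma,
                   p_outlier=0.1, outlier_scale=10.0):
    # Outlier-tolerant likelihood: each observation is treated as either a
    # "good" point (Gaussian with its stated uncertainty) or an outlier
    # (a much wider Gaussian). The mixture keeps single bad points from
    # dominating the score of a model realisation.
    resid = np.asarray(model_values) - np.asarray(obs_values)
    norm = np.sqrt(2.0 * np.pi) * obs_sigma
    good = (1.0 - p_outlier) * np.exp(-0.5 * (resid / obs_sigma) ** 2) / norm
    bad = p_outlier * np.exp(-0.5 * (resid / (outlier_scale * obs_sigma)) ** 2) \
          / (norm * outlier_scale)
    return np.sum(np.log(good + bad))
```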

The next part of the experimental design was a general strategy for how to create the different ensemble members so that they are not completely random. Even with the capability to create a lot of runs, ensembles on the order of 10,000 runs are not sufficient to determine a result without such a strategy. This led us to a modified form of a Sequential Importance Resampling Filter (SIRF). The SIRF uses a round-based approach. In each round a number of model realisations are calculated (in our case 100) and afterwards evaluated. A predefined number of them (we used 10), the best performers of the round, are carried forward to the next round and act as seeds for the new runs. As we wanted a time-wise determination of the sea level, we chose the rounds along this dimension. At regular intervals in time (more frequently in important phases like the LIG) a new round was started. In each round the new ensemble members branched off from their seeds with anomaly time series for their future development. Our setup required that we always calculate and evaluate full model runs. To prevent very late observations from driving the whole analysis, we restricted the number of observations taken into account in each round. All these procedures led to a system where in every round, and thus at every time step of our analysis, the ensemble had the opportunity to choose new paths for the global ice sheets, deviating from the original dO18 curve.
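A stripped-down sketch of such a round-based loop might look like this (using the numbers 100 and 10 mentioned above; the branching and scoring functions are placeholders, not the actual model and likelihood of the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def sirf_round(seeds, n_members, branch, evaluate, n_keep):
    # One round of a (simplified) Sequential Importance Resampling Filter:
    # branch new realisations from the current seeds, score them against the
    # observations available for this round, and keep the best performers
    # as seeds for the next round.
    members = [branch(seeds[i % len(seeds)]) for i in range(n_members)]
    scores = np.array([evaluate(m) for m in members])
    best = np.argsort(scores)[-n_keep:]            # highest scores win
    return [members[i] for i in best]

# Hypothetical usage: 100 members per round, 10 seeds carried forward.
def branch(seed):
    return seed + np.cumsum(rng.normal(0.0, 0.5, seed.size))  # anomaly time series

def evaluate(member):
    return -np.sum(member ** 2)                    # placeholder score, not the real likelihood

seeds = [np.zeros(428)] * 10
for _ in range(5):                                 # a few rounds, one per time interval
    seeds = sirf_round(seeds, n_members=100, branch=branch, evaluate=evaluate, n_keep=10)
```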

As you can see above, there were many steps involved, which made the scheme quite complicated. It also demonstrates that standard statistics reaches its limits here. Many assumptions, some simple and some tough, are required to generate a result. We tried to make these assumptions and our process as transparent as possible. Our individual realisations, based on different model parameters and assumptions about the dO18 curve, show that it is hard to constrain the sea level for the LIG with the underlying datasets. Of course we get a best ice-sheet history under our conditions, that is how our scheme is designed, but it is always important to evaluate whether the results of our statistical analysis make sense (basically, whether the assumptions hold). In our case we could say that there is a problem. It is hard to say whether the model, the observations or the statistics itself contributes the largest part of it, but the observations are the prime candidate. The reasons are laid out in the paper, together with much more information and discussion of the procedure and assumptions.

Massive ensemble paper background: Massive ensembles: How to make use of simple models?

The new paper, investigating the LIG sea level with massive ensembles, relies on simple models. In this post I want to talk a bit about their importance and how they can be used in scientific research.

Simple models are models with reduced complexity. In contrast to complex models, their physics is simplified, they are tailored to a specific problem, and their results are not necessarily directly comparable to the real world. They can have a smaller, easier-to-maintain code base, but a simple model can also grow quickly in lines of code. What defines a simple model is the processes it includes, not the number of lines of code.

IMSC2016: Final day

The fifth and last day of the 13th International Meeting on Statistical Climatology (IMSC) has ended, and with it a great week here in the Rocky Mountains. The day started with the first homogenisation session, and the talks covered a wide range of subjects: the worldwide organisation of climate data generation, the proposal of a new homogenisation methodology and finally an overview of future challenges for homogenisation. As I worked on quality control of data during my PhD, this topic is of special interest to me, and I was happy to see this variety of talks in the field.

Low clouds

It was followed by a session on nonlinear methods. As it was the final day, the talks within the sessions covered a wider area, which was good. Finally, the day ended for me with another homogenisation session, and as before, the talks were of high quality.

As it was the last day, I would like to look back on the week. The weather was fantastic, apart from the last day, when the clouds and rain moved in. The conference and many of the talks were really interesting. The mixture of so many different topics gave a great overview of the many flavours of statistical application in climate science. Many scientists with different backgrounds, at various stages of their careers, made for a great exchange of knowledge and new views on the topics. It was really well organised, which made it easy to concentrate on the good parts of a conference. The meeting was really worth the visit, so perhaps I will be back in three years at the next IMSC.

Massive ensemble paper background: Sea-level in the LIG: What are the problems?

In the new paper investigating the LIG sea level with massive ensembles, I try to demonstrate how complicated it is to actually model the LIG sea level. There are many reasons for this, and they are certainly not unique to this specific problem but apply to palaeoclimatology in general. So I would like to highlight a few specifics which I encountered in the preparation of this paper.

I have written before about the special nature of palaeo-sea-level data in general. From a statistical point of view the available data are inhomogeneous, coming from different origins and based on different measurement principles (e.g. analyses of data from corals or boreholes). Handling their two-dimensional uncertainties (time and value), which are usually also quite large, makes it complicated to apply standard statistical procedures. Far too many approaches assume that at least one of the two dimensions is negligible, and when problems with non-normal uncertainty distributions are added, it poses a real challenge. It is also near certain that the dataset contains outliers, and there is no clear way to identify them, so whether any value of a data point is valid remains unclear. Finally, we have to accept that there is hardly any real check to find out whether the outliers identified by a statistical method are just false measurements or reflect special features of the physical system. And of course, a huge problem is that the number of available data points is very low, which makes it even harder to constrain the sea level at a specific point in time.
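To make the two-dimensional uncertainty a bit more concrete, here is a small sketch of how an observation with uncertain age and uncertain value could be compared to a model curve, by averaging the value likelihood over the possible ages. It is only an illustration under Gaussian assumptions, not the procedure used in the paper.

```python
import numpy as np

def likelihood_2d(model_time, model_sl, obs_time, obs_sl, sigma_time, sigma_sl):
    # Treat the age of the observation as uncertain too: average the value
    # likelihood over a Gaussian distribution of possible ages instead of
    # assuming the dating is exact. model_time must be increasing.
    t_grid = np.linspace(obs_time - 4 * sigma_time, obs_time + 4 * sigma_time, 201)
    dt = t_grid[1] - t_grid[0]
    w = np.exp(-0.5 * ((t_grid - obs_time) / sigma_time) ** 2)
    w /= w.sum() * dt                               # normalise the age distribution
    sl_at_t = np.interp(t_grid, model_time, model_sl)
    value_like = np.exp(-0.5 * ((sl_at_t - obs_sl) / sigma_sl) ** 2) \
                 / (np.sqrt(2.0 * np.pi) * sigma_sl)
    return np.sum(w * value_like) * dt
```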

Another point of concern is the combination of two complex systems which only together give a result comparable to the observations. On the one side there are the ice sheets. It is hard to put physical constraints on their spatial and temporal development (especially in simple models). We tried it with assumptions about their connection to the current ice sheets (or better, those of the past few thousand years), but it is hard to tell what the ice sheets at that time really looked like. Of course there are complex model studies on this, but what studies like ours need are consistent ice sheets at a relatively high temporal resolution (e.g. 500 years) over a very long time (more than 200,000 years). In addition, we would like to have several possible realisations of them (I used 39,000 runs, the majority of them with unique ice sheet histories). That is quite a challenge, so statistical ice sheet creation becomes a necessity.

The other complex model is the Earth. It reacts to the ice sheets and to their whole history. So the combination of these two models (the statistical ice sheet model and the physical GIA model) is key to a successful experiment. We work here with simple models, which always have their benefits but also their big disadvantages; I will talk about that more in the next post on this topic. But in this case these models are special. At least in theory they are non-Markovian, which means that not only the last state and the changes to the system since then play a role, but also system states further back in time. Furthermore, future states also play a role, although with a much smaller influence. This has a lot to do with the experimental setup, but it puts constraints on what you can do with your analysis procedures. It also means you have to analyse a very long period of development, in our case the last 214,000 years, even when you are only interested in what happened around 130,000 years before present.
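To illustrate the non-Markovian character, here is a toy example in which the relative sea level depends on the whole loading history through a memory kernel. It is of course not the GIA model of the paper; the exponential response function and its time constant are arbitrary assumptions for the sketch.

```python
import numpy as np

def relative_sea_level(ice_load, dt, relax_time):
    # Toy GIA-like response: the solid Earth "remembers" the whole loading
    # history through an exponentially decaying response function, so the
    # present state depends on all past states, not just the last one.
    n = ice_load.size
    t = np.arange(n) * dt
    response = np.exp(-t / relax_time)             # hypothetical viscoelastic memory kernel
    # full convolution over the entire load history (non-Markovian)
    return np.convolve(ice_load, response)[:n] * dt
```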

Another factor here are the so-called delta 18O curves. We use them to create a first guess of the ice sheets, which we afterwards vary. Nevertheless, their connection to ice volume is complicated. It is still open whether this connection is stable over time or changes during interglacials compared to glacials. Simple assumptions that it is constant make the first guess difficult to handle, as it can be quite far off.

All of this poses challenges for the methodological and experimental design. Of course there are other constraints, like available computing time and storage, which require you to make choices. I will certainly talk about some of them in the post about massive ensemble data assimilation.

So what makes the LIG sea level so complicated? It is the complexity of the problem and the small number of constraints, due to the sparsity and uncertainty of the data. This combination poses a huge challenge to everyone trying to shed light on this interesting research field. From a statistical point of view, it is an interesting problem and a real test for any statistical data assimilation procedure available.

IMSC2016: Day four

Day four of the IMSC in Canmore, and once again many good talks on a wide range of topics. The day started with a downscaling session and covered emulators, the handling of natural variability and perfect model frameworks. For me the imitation of complex model results by simple models (emulators) has its interesting sides, but it also frightens me a bit. I am a big fan of simple models and love to apply them, but I have also learned that their results need a lot of statistical handling to deliver acceptable results within their range of definition. Using them at the border of this definition, or even outside of it, usually leads to unacceptable results. Sure, the simple models designed as emulators are defined for this task, but it is still a very challenging topic and I am happy that some are giving it a try.

Conference hotel

Next came the session on nonlinear methods, with talks about coincidence and network analysis. This was followed by a talk on the responsibility of climate science (and its consumers). Apart from some common misuses of statistical methods, it also covered the call to communicate more completely the methods and assumptions used in studies. This very important challenge is rather complicated in modern science. Reviewers ask for the methods section to be reduced to the bare essentials, and so in many journals readers really only see the results, without understanding the many assumptions that went into creating them. Statistics is a game of assumptions, so it is essential that they, and the exact application of the methodologies, are added to papers in detail. It is the task of the reviewers to ask for this and of the authors to press to get it in. “Open methods, open data, open models” are required to replicate a scientific result, and that should always be the aim of a publication (and yes, sometimes this is complicated, but trying is what counts here).

After the lunch break I visited two more sessions, both covering extreme events. They included many interesting talks on a wide range of topics. Tomorrow will be the final day of the meeting and it will include homogenisation, a topic I really look forward to.

IMSC2016: Own Presentation

It is halftime here in Canmore, and the IMSC had its highlights today. It started with a panel discussion on one of the WCRP grand challenges, the one on climate extremes. The aim is that over the next years scientists will collaboratively try to move forward on this and six other topics. Prediction and projection of extremes, be it droughts or heavy precipitation, is complicated, and so the agenda for this topic is long. In the discussion many points were highlighted which would help to bring the field forward. The most important of them, at least from my viewpoint, is the problem of data availability. Many countries still do not share their climate data, and much information can still be found in archives but has not yet been digitised. But data is everything (ok, at least a lot) in climate science. Without it, new developments and proper projections are not possible, and without political incentives it is doubtful whether there will be a decisive step forward in the next years. Another topic of interest for me is the problem of uncertainties and the different understanding of them between fields. What is uncertainty? What is included? How exactly is it defined? The many questions around this topic highlight the problem of missing standardisation and the lack of an appropriate format in the community for sharing uncertainty information. Uncertainty is more than just a standard deviation value, much more.

Next followed two talks on extreme value theory, which touched more on the theoretical side of the field. The lunch break was filled with an award ceremony.

The next session on my list was on climate model evaluation and covered a wide range of topics. Among the talks was my own, on the application of the EMD to highlight differences between initialisation procedures in decadal hindcasts. It is always nice to present one's own work, even though this contribution was originally planned just as a poster due to its early stage. A final session on downscaling ended the day for me.