Computers are the basis of most modern science. Especially for those who, like me, work mainly on theoretical topics, they are the centre of daily work. Workflows develop over time to make one's research possible and effective, and they of course require the right equipment. As this blog is about making the work of a scientist a bit more transparent, I want to explain what my personal setup looks like.
For the past couple of years the scheme I use has been more or less unchanged. It has received some additions, but these happened mainly in the attached workflows (e.g. reading a paper digitally instead of printing it out).
The workflow is basically divided into three levels, each represented by one or several computers. Each level has its own tasks and requirements, and all are essential to keep the workflow running. To understand the basics of the idea I describe here, my blog post about my general schematic for work might be of interest. Several work phases are active at the same time, so the different levels of the computer infrastructure have to handle different projects simultaneously.
The first level is responsible for all the typical work that requires active user interaction. This includes programming, writing up the performed work (papers and documentation), and everything else involved in presenting and organising the projects I am working on. A laptop is a useful tool on this level, as work does not always take place in the office (conferences, train journeys etc.). That is also the main downfall of this level: it is not continuously connected to the network.
This disadvantage is covered by the second level: a machine that sits permanently within the network and acts as a distribution centre for tasks and jobs. Optionally it can run simple calculations itself, but the main point is that it is connected to the storage facilities and to the third level. These machines do not necessarily have to be fast; it is more important that they are robust. I use a simple Linux desktop computer for this.
The last level consists of external servers, clusters and supercomputers. Their job is to run the given programs and write the results into storage facilities that the second level can access. Over time I have had different machines in this category, ranging from co-workers' desktop machines borrowed overnight to the really big ones offering several hundred cores. The art is usually to coordinate one or several external computers at the same time, which is again the task of the second level.
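To give a rough idea of what this coordination can look like in practice, here is a minimal sketch of a level-two dispatch script. Everything in it is an assumption for illustration: the hostname, directory names, the program name `simulate`, and the use of plain `ssh`/`rsync` instead of a real cluster scheduler. It runs in dry-run mode by default, only printing the commands it would execute.

```shell
#!/bin/sh
# Sketch of a second-level "distribution centre" script: push code to an
# external machine, start the computation, and later pull results back.
# Hostname, paths and program name are hypothetical placeholders.

REMOTE="user@cluster.example.org"   # hypothetical third-level machine
JOBDIR="run_$(date +%Y%m%d)"        # one working directory per run
DRYRUN=1                            # set to 0 to actually execute

run() {
    # In dry-run mode, only print the command; otherwise execute it.
    if [ "$DRYRUN" -eq 1 ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}

# 1. Copy code and input data to the external machine.
run rsync -az ./code/ "$REMOTE:$JOBDIR/"

# 2. Start the computation remotely. Here a plain background process;
#    on a real cluster this would be a scheduler call such as sbatch.
run ssh "$REMOTE" "cd $JOBDIR && nohup ./simulate > log.txt 2>&1 &"

# 3. Later, fetch the results into the storage the second level manages.
run rsync -az "$REMOTE:$JOBDIR/results/" "./storage/$JOBDIR/"
```

On a machine with several such external targets, one would simply loop over a list of `REMOTE` entries; the point is that all bookkeeping stays on the always-connected second level, not on the laptop.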
As you can see, every level has its purpose and is important to get work done. Especially in crunch times, when projects have to be finished, one hopes that each level simply works and that the backup workflows, which come into play when one level is unavailable, are not needed. Computers are essential in modern research and scientists depend heavily on them. Everybody has their own idea of how computers should be used and, especially, of how backups of the data should be handled. In effect, everyone hopes that their system will work when the backups are actually required, but most will only know once it happens.