Question of the Week

Jul 24, 2016

If you ever wander the barren lands of southern Africa, like the sparsely vegetated Namib Desert in Namibia, you will almost certainly stumble across a fascinating feature of the soil called fairy circles. These are circular bare patches of dry ground, several meters in diameter, enclosed by a fringe of grass that grows taller than in the surrounding steppe.

Fairy circles in the Namib Naukluft Park, Namibia. (© Heike Daenzer)

Their origin has long been a subject of intense discussion. The earliest interpretation may come from the Himba people, who share the legend that the circles are footprints left behind by their ancestor Mukuru. Other stories tell of aliens, dragons, or fairies.[1] Science, on the other hand, has suggested toxic gases, residues from dead plants, radioactive elements, or insects as the origin of the features.[2] Much investigation has been carried out in recent decades to test each theory, but none has reached a substantial and indisputable conclusion. Since no toxic or radioactive substances were found in the soil of the fairy circles, they must arise from something else.[2]

Supported by satellite images, Dr. W. Tschinkel of Florida State University was able to show that the circles are not permanent: they grow and develop, and after an average lifespan of 41 years, they “die”.[3]

Cramer et al. used an empirical model considering various biological, chemical, and weather factors to predict the appearance of fairy circles. They conclude that circle formation must be the result of plant self-organization and competition for nutrients, since the plants at the periphery of the circles are more lush than those farther away.[4]

A very vivid explanation comes from N. Juergens, who examined the termite population of fairy circles. The sand termite Psammotermes allocerus, together with its nests and tunnels, was the only feature found in 100 % of the investigated circles, even in young ones. Apparently the termites feed on plant roots and keep large areas free of water-consuming vegetation, which also causes a higher water content in the ground beneath the center of the circle.[5]

Only a few years ago, fairy circles similar to those in Africa were found in Pilbara, Australia. Getzin et al. doubt that the pattern formation depends on termites or ants, since many circles did not host any of these insects. Instead, they hold self-organizing plants in water-limited environments, such as deserts, responsible.[6]

– Tatjana Daenzer

Read more:
[2] van Rooyen, J. Arid Environ., 2004, 57, 467–485
[3] Tschinkel, PLOS ONE, 2012, 7, 1–17
[4] Cramer, PLOS ONE, 2013, 8, 1–12
[5] Juergens, Science, 2013, 339, 1618–1622
[6] Getzin, PNAS, 2016, 113, 3551–3556

Jul 03, 2016

We all think we know what the color ‘black’ is. Yet if I ask around, I will get different responses: from the familiar blackboard in the classroom to the ubiquitous asphalt of the roads. Some might fondly recall it as the color of the little dress on their high-school prom date. Others might be more correct and remind me that “true” black is the absence of any reflected light, then point me toward the nearest black hole (at the center of the Milky Way or on its Sagittarius arm, depending on what one believes [1]).

Is it Black or is it Gray?

Even then, when I show the above graphic, all of us (including me) will be unequivocal in declaring the colors to be shades of black, although they are in fact hues of gray. Such befuddlement ails us all. As Dr. Stephen Westland, professor of color science and technology at Leeds University, puts it, “Unless you are looking at a black hole, nobody has actually seen something which has no light.” [2]

Given our feeble attempts at defining and rendering ‘black’, it becomes quite a challenge to explain Vantablack, the blackest material known [3, 4], where Vanta is an acronym for Vertically Aligned Nano Tube Arrays. (NASA might argue that their super-black deserves that title [5].) It is easy to visualize Vantablack as a forest of carbon nanotubes: the tubes are stacked in a vertical orientation, with the length of the individual tubes much larger than their diameter.

Vantablack

Yet that still doesn’t explain why it is the ‘blackest’ of blacks, able to rewrite and replace all previous conceptions of black [6]. When light hits the Vantablack surface, it gets trapped between the carbon nanotubes. The photons undergo many collisions with the walls of these tubes, losing their energy as heat to the walls, and only the tiniest fraction, all of 0.035 %, is reflected back as light [2, 7].
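The effect of repeated collisions can be played with in a toy model: suppose each wall collision returns only a fraction r of the incoming light, so after n collisions a fraction r^n survives. Inverting this tells us how many bounces are needed to fall below Vantablack's reflectance. The per-bounce reflectivities below are made-up illustrative values, not measured properties of the real material:

```python
import math

def bounces_to_reach(per_bounce_reflectivity: float, target_fraction: float) -> int:
    """Number of wall collisions needed before the surviving light
    fraction drops below target_fraction, in a toy model where each
    bounce keeps only per_bounce_reflectivity of the light."""
    return math.ceil(math.log(target_fraction) / math.log(per_bounce_reflectivity))

VANTABLACK_REFLECTANCE = 0.035 / 100  # 0.035 % of incident light escapes

# Hypothetical per-bounce reflectivities; the real values depend on
# the nanotube geometry and are not given in the text.
for r in (0.5, 0.3, 0.1):
    n = bounces_to_reach(r, VANTABLACK_REFLECTANCE)
    print(f"per-bounce reflectivity {r:.0%}: ~{n} bounces")
```

Even with a generous 50 % reflectivity per bounce, only about a dozen collisions suffice, which gives a feel for why a deep nanotube forest absorbs so completely.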

Such properties open up exciting future prospects: from telescope coatings, where even the tiniest speck of scattered light can seriously degrade contrast and resolving power, to the defense and stealth sectors, which find the material extremely fascinating [7].

Yet it is still hard to describe what it feels like to see the blackest material known. We perceive a surface through its depth and its topological features, which change its reflectance. Vantablack, however, defies that perception even when it is crumpled up. “You expect to see the hills and all you can see … it’s like black, like a hole, like there’s nothing there. It just looks so strange”, as Surrey NanoSystems CTO Ben Jensen puts it [2].

Vantablack is the darkest material we have that is as close to perceiving what a black hole would look like. This might be a bit disconcerting for us in the future, expecting to see textures but being greeted with an abyss. “And if you gaze long into an abyss, the abyss also gazes into you.”

-Soham Roy

[4] E. Theocharous et al., Optics Express 2014, 22, 7290-7307.

Jun 12, 2016

Modern aviation is one of the most important, and possibly also the safest, modes of transportation and travel. As a result of the increasing need for fast and reliable transfer of resources, airplanes have become increasingly complex, and nowadays only a relatively small number of people know how they are operated.

Figure 1: Landing of a modern aircraft.

In this Question of the Week, we want to focus on one particular detail of aviation: The landing. A typical airplane approaches the airstrip with a speed of around 270 km/h and has to decelerate within a very short time to guarantee a safe landing. So how do you brake an airplane?

To answer this question, we first have to think about how braking works for any wheel-based vehicle. In a nutshell, the braking process exerts a torque on the wheels, which then use friction with the ground to dissipate kinetic energy. Friction, however, depends massively on the weight resting on the wheels. When an airplane lands, the aerodynamic lift largely cancels the plane’s weight and therefore makes braking with the wheels alone extremely inefficient. As a result, the plane needs other ways to slow down until lift and speed are sufficiently reduced. In modern aviation, this is done by two additional braking systems, the Spoilers and the Reversers, both usually operated by a computer that aims for a constant deceleration of convenient magnitude (about 0.17 – 0.3 g).
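A back-of-the-envelope check of these numbers: taking the approach speed of 270 km/h from above and the quoted comfort range of deceleration, and assuming for simplicity that the deceleration is constant from touchdown to standstill, the required runway length and braking time follow from basic kinematics:

```python
G = 9.81  # standard gravity, m/s^2

def stopping_distance(v0_kmh: float, decel_g: float) -> tuple[float, float]:
    """Distance (m) and time (s) to brake from v0_kmh to rest
    at a constant deceleration of decel_g * g."""
    v0 = v0_kmh / 3.6          # km/h -> m/s
    a = decel_g * G
    return v0**2 / (2 * a), v0 / a

# Approach speed from the text, deceleration at both ends of the
# quoted range (0.17 - 0.3 g).
for decel in (0.17, 0.30):
    d, t = stopping_distance(270, decel)
    print(f"{decel} g: {d:.0f} m in {t:.0f} s")
```

The gentler end of the range needs roughly one and a half kilometers of runway, which matches the intuition that an airliner cannot simply stand on its wheel brakes.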


Figure 2: Spoilers on an aircraft.

As soon as the wheels make contact with the ground, the Spoilers (Figure 2) are fully deployed. These flaps, located on the back end of the wings, significantly reduce aerodynamic lift and increase drag. They are extremely important for the braking process because without them the friction of the wheels is not sufficient for efficient braking. Wheel brakes and Spoilers together can already be sufficient for slowing down an airplane.


Figure 3: Reversers on an aircraft.

However, to reduce the amount of stress the wheel brakes have to withstand, there is an additional system: the Reversers (Figure 3). These are mechanisms at the engines that can be activated to redirect the engine’s exhaust forward rather than backward (commonly referred to as thrust reversal). All three systems together can be used by a computer to achieve an extremely smooth braking process without putting too much stress on any single component.

As a result, the landing process by itself is extremely complex and depends on many factors. Most of them can be controlled by a computer, however, in case of any unforeseen circumstances, the pilots have to be prepared to take over and land the airplane manually. This (and many other factors) makes the training of pilots one of the most demanding educational processes of our time.

– Kai Litzius

Further reading:

May 01, 2016

Have you ever wondered whether you are smarter than your parents or grandparents? Actually, that might not be completely unlikely! At least according to the so-called Flynn effect, first described in 1984 by the political scientist James Robert Flynn [1]. It refers to the observation that each generation scores on average slightly higher on an IQ test* than the generation before. This effect has been investigated for more than 20 industrialized countries and for different types of intelligence tests, specialized either in problem solving (fluid intelligence) or in knowledge- and experience-based questions (crystallized intelligence).

Many people do not believe in the IQ test as a benchmark for intelligence and therefore seek a different explanation than increasing intelligence for Flynn’s observation. They argue that the measured IQ might just be related to something else, for example a training effect.

Anyway, according to Flynn, the statistics seemed sound. But if we really are getting smarter, the central question that arises is, of course: Why? The discovery heated up the old debate about genes versus social influences. Concerning the latter, different theories have been developed over the last decades [2]:

  • Social environment: As the world is getting more and more complex due to modernization and new technologies, people are more often confronted with abstract concepts.
  • Education: There is probably a connection between intelligence and learning. Education in general has improved over the last century: schools are better equipped and school attendance is compulsory.
  • Dedicated parents: In general, parents now seek out a more inspiring environment for their children than they had themselves.
  • Nutrition: Nowadays, people are better nourished than earlier generations were.

What we obviously have learned from Flynn’s discovery is that IQ tests and other standardized tests need regular re-norming in order to reset the average of the normal distribution back to 100.
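Re-norming can be sketched as a simple linear rescaling: the raw-score mean of a norming sample is mapped to IQ 100 and one standard deviation to 15 IQ points. This is an illustrative model with made-up raw scores, not the actual procedure test publishers use (which relies on large representative samples):

```python
import statistics

def renorm_iq(raw_scores: list[float]) -> list[float]:
    """Rescale raw test scores so the sample mean maps to IQ 100
    and one (population) standard deviation maps to 15 IQ points."""
    mu = statistics.mean(raw_scores)
    sigma = statistics.pstdev(raw_scores)
    return [100 + 15 * (x - mu) / sigma for x in raw_scores]

# Toy cohort: under the Flynn effect the raw mean drifts upward over
# the generations, but after re-norming the average IQ is pinned back
# to 100 by construction.
cohort = [38, 42, 45, 47, 50, 53, 55, 58, 62]
print([round(iq) for iq in renorm_iq(cohort)])
```

This is also why a fixed average of 100 can hide rising raw performance: the yardstick itself is moved each time the test is re-normed.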

Is the Flynn effect ongoing, or does it only describe IQ test results from the first three quarters of the 20th century? More recent studies indicate that test results in Norway have been more or less stable since the nineties [3]. Another publication even claimed a recent reversal of the Flynn effect [4]. In 2012, on the other hand, Flynn himself pointed to new statistics showing a still-increasing IQ [5].

At least we can agree that the Flynn effect is a controversial field in psychology and will keep scientists busy for many more years.

*The informative value of an IQ test is widely discussed and a topic for another Question of the Week.

-Jennifer Heidrich

[1] J.R. Flynn. The mean IQ of Americans: Massive gains 1932–1978. Psychological Bulletin. 1984; 95(1): 29–51.
[2] A. Furnham: 50 Psychology Ideas You Really Need to Know, Quercus Publishing Plc, 2009.
[3] J.M. Sundet, D.G. Barlaug, T.M. Torjussen. The end of the Flynn effect?: A study of secular trends in mean intelligence test scores of Norwegian conscripts during half a century, Intelligence, Volume 32, Issue 4, July–August 2004, 349-362.
[4] T.W. Teasdale, D.R. Owen. Secular declines in cognitive test scores: A reversal of the Flynn Effect, Intelligence, Volume 36, Issue 2, March–April 2008, 121-126.
[5] J.R. Flynn. Are We Getting Smarter? Rising IQ in the Twenty-First Century, Cambridge University Press, 2012.

Apr 24, 2016

Since time immemorial, humankind has looked up into the night sky, observing the celestial orbs and wondering about the origin and appearance of the cosmos. Is the universe really expanding, as we all learn in school? What does the border of the universe (if it exists) look like? Admittedly, a modern scientific approach to this problem is very abstract and not easily explained in layman’s terms. The following explanation therefore spares any detailed mathematical considerations for the sake of simplicity.

Different possibilities can be derived from Einstein’s theory of general relativity. In simplified terms, mass warps space and thus determines its shape. Complex mathematical considerations yield a critical density of the universe. A structure can be assigned from the density parameter omega, the quotient of the average density of the universe and the critical density. Three border cases emerge, whose abstract values can be translated into two-dimensional images for a more vivid explanation (Fig. 1):[1,2]

a) The density is bigger than the critical density (omega > 1). The density is high enough to stop the expansion at some point, after which the universe will shrink again. This is called a “closed universe”.

b) The density is smaller than the critical density (omega < 1). The universe expands forever and its shape is saddle-like. This is called an “open universe”.

c) The density has exactly the critical value (omega = 1). The expansion rate decelerates over an infinite time span and the shape is flat and endless.
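The three border cases can be sketched numerically. The critical density follows from the Hubble constant as rho_c = 3H²/(8πG); the value of 70 km/s/Mpc used below is an assumed, commonly quoted figure, not one taken from the text:

```python
import math

G = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
MPC = 3.0857e22  # one megaparsec in meters

def critical_density(h0_km_s_mpc: float) -> float:
    """Critical density rho_c = 3 H^2 / (8 pi G) for a Hubble
    constant given in km/s/Mpc."""
    h0 = h0_km_s_mpc * 1000 / MPC  # convert to 1/s
    return 3 * h0**2 / (8 * math.pi * G)

def classify(omega: float) -> str:
    """Map the density parameter omega onto the three border cases."""
    if omega > 1:
        return "closed (spherical)"
    if omega < 1:
        return "open (saddle-like)"
    return "flat"

# With H0 ~ 70 km/s/Mpc the critical density works out to roughly
# 9e-27 kg/m^3, only a few hydrogen atoms per cubic meter.
rho_c = critical_density(70)
print(f"critical density: {rho_c:.2e} kg/m^3")
for omega in (1.1, 0.9, 1.0):
    print(f"omega = {omega}: {classify(omega)}")
```

The striking part is how tiny the critical density is; it is the enormous volume of the universe, not any great local density, that decides between the three geometries.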

Fig. 1: Two-dimensional illustrations of the universe’s possible shapes: spherical or “closed” universe, saddle-like or “open” universe, and flat universe [3].

Another model under discussion is the “Picard topology”, which describes the universe as a horn that is closed at one end. Here, very surreal phenomena would occur depending on whether one is situated at the peak or at the broad end [4].

Measurements from the Wilkinson Microwave Anisotropy Probe (WMAP) hint that the density of the universe equals the critical value. Accordingly, the shape would be flat. Still, with our limited technical possibilities we can only observe a very small part of the universe. No one yet knows (and maybe no one ever will) with absolute certainty what the universe looks like [2].

-Tatjana Daenzer

Read more:

Mar 20, 2016

All of us are esurient creatures when it comes to being happy. Everyone wants to be happy, and there are myriad paths to happiness as well: religious, spiritual, and even rational. The Dalai Lama once remarked, “Happiness is not something ready made. It comes from your own actions.”

Yet it seems, year after year, that a group of people sharing a small genetic pool ends up at the top of the “World Happiness Index” [1]. The Danish, it seems, are genetically endowed when it comes to being happy [2]. A gene variant called 5-HTTLPR seems to be behind it: it influences the metabolism of serotonin, the neurotransmitter that affects our moods.

Does it then mean that you cannot be happy if you have not inherited Danish genes? No, there’s more to this story. And that’s where science opens a new door towards happiness.

Whether we are Danish or not, we all produce a neurotransmitter called anandamide [3]. The name of this molecule itself exudes joy, deriving from the Sanskrit word ananda, or bliss. But then why aren’t we all equally happy? That depends on the extent to which this “bliss molecule” is metabolized. People who produce less of the enzyme that breaks it down are more prone to be calm and at peace [4].

Prof. Friedman, from the Weill Cornell Medical College, puts it elegantly when he says, “What we really need is a drug that can boost anandamide—our bliss molecule—for those who are genetically disadvantaged.”[5]

Now it seems such a future, in which we can engineer happiness, is not that far off. Two things are needed: understanding the genetic factors behind the different neurotransmitters, and manipulating them with nanoscale precision. Once we have that knowledge, it will be possible to ingest a pill that carries predesigned nanobots to specific regions of the brain and turns genes on or off at will. This would then change the perception of an environment that would otherwise have strained our ability to be happy. Such a future was envisioned a decade ago by author James Hughes in his book “Citizen Cyborg”.

So yes, it seems highly likely that the next generation will be able to buy over-the-counter pharmaceuticals that generate feelings of satisfaction, joy, or bliss. But to be truly happy and have a satisfying life would take more than a drug; after all, happiness “comes from your own actions”.

– Soham Roy

[3] W.A. Devane et al., Science 1992, 258, 1946-1949.
[4] I. Dincheva et al., Nature Communications 2015, 6, 1-9.

Mar 13, 2016

A recent study performed by a French team examined the long-term effects of sugar intake in rats [1]. The purpose of this experiment was to investigate whether its excessive consumption during adolescence alters the brain reward system.

It is known that, during the development of the mammalian brain, there are specific time windows in which its proper functions are established [2]. These time windows extend beyond prenatal development and last until early adolescence. In particular, the brain reward system could be sensitive during adolescence: if it is over- or underactive in this period, this may cause disorders such as addiction or depression [3].

The French researchers exposed adolescent male rats to sucrose solutions. The rats were free to choose between these and a supplemental bottle containing water. Similar to some of us humans, the sugar-exposed teen rats developed a sweet tooth and consumed more sugar solution than water. The sucrose bottle was removed from their home cages after 16 days. Later, when they were adult animals, their reactions toward sugar intake and their reward circuitry were examined. The rats responded behaviorally less to sugar consumption than animals of a control group that had not consumed sugar in adolescence. To put it in more anthropomorphic terms, they were not as excited about the renewed availability of sugar. In addition, the researchers found that a key area of the brain reward pathway, the nucleus accumbens, was not as active as in the control group. These results suggest that overconsumption of sugar during adolescence alters the development of the brain reward circuitry; consequently, sweet water no longer seems appealing in adulthood.

So, what could these findings mean for humans? Should we give more sweets to teenagers and hope that they lose interest in them later? Does that work with all “bad” substances, for example alcohol and other drugs? For sure, the answer to all of these questions is: “No!” Firstly, the study showed that excessive sugar consumption led to a deficit in the reward system, and such deficits could manifest themselves in other, more severe behavioral deficiencies: psychiatric disorders that have been linked to a dysfunctional reward system include depression, schizophrenia, and substance abuse. Secondly, other studies have shown that adolescent alcohol consumption in rats causes severe damage to the brain, ranging from altered network function to cell death [4]. However, this study neither fully explains whether excessive sugar intake during adolescence causes severe reward-related disorders, nor whether the findings apply to humans as well. What these experiments do tell us is that teenagers and adults should consume sweets only in moderation, not only because they are unhealthy, but also in order to protect our mental health.

-Theresa Weidner

[1] F. Naneix, F. Darlot, E. Coutureau, M. Cador, EJN 2016, 46, 671-680.
[2] C. Rovee-Collier, Dev. Psychol. 1995, 31(2), 147-169.
[3] T. Paus, M. Keshavan, J. N. Giedd, Nat. Rev. Neurosci. 2008, 9, 947-957.
[4] C. Guerri, M. Pascual, Alcohol 2010, 44(1), 15-26.

Feb 29, 2016

Within the last years, cloud computing has become more and more important for industry as well as for the private sector. But what exactly is cloud computing and where could it lead our future IT progress?

Firstly, the term itself refers to the nowadays common practice of “outsourcing IT activities to one or more third parties that have rich pools of resources to meet organization needs easily and efficiently” [1, 2]. In other words, one buys permission to use hardware, network connectivity, storage, and software located in a computing center anywhere in the world. It is more or less comparable to other public utilities such as electricity, water, and natural gas [1] and follows the same rule: you pay for what you need, no more.

The private sector is also increasingly part of the system. Cloud storage saves personal data and makes it available from any place with an internet connection, and file-sharing websites have gained a lot of popularity in recent years. Another kind of cloud computing is especially interesting for research: fields with high computational needs, e.g. astrophysics, medicine, and large-scale facilities like CERN, can save a lot of resources by outsourcing computational power to volunteers. While the volunteers’ PCs are idle, a program starts in the background and performs calculations for the project [3].

The current state of cloud computing is already very impressive; however, there is one major goal the IT industry is starting to tackle now, namely the so-called Internet of Things (IoT). An example is Near Field Communication (NFC), a set of hardware and software protocols that enable two devices to communicate wirelessly with each other [4]. It is already part of most modern smartphones and is also widely used for contactless payment cards. More and more devices in our daily life will be included in this IoT, resulting in increased connectivity and data flow around us. The idea is to take the cloud and place it everywhere around us, basically creating a fog [5]. This so-called “fog computing” could span a wide range of applications in daily life, from smart houses that adjust the temperature to refrigerators that tell their users when they are getting empty. An even more spectacular application could be connected to the trend toward self-driving cars. Large IT companies have already started to develop cars that do not need a driver anymore [6]. What sounds like science fiction could become commonly available within the next decades and open the path to some great applications of fog computing. How about a traffic light that counts arriving cars and adjusts its phases according to the traffic volume, or tries to prevent accidents by detecting obstacles and pedestrians much faster than any human could? The possibilities are endless and incredible.

However, one also needs to consider possible disadvantages such as data safety and the problem of the totally transparent citizen. Moreover, the judicial system will require a lot of adjustments and new laws, especially when the computer hardware that processes cloud data is located in another country with different data protection laws. There are a lot of changes to be made; however, so far technological progress has never been stoppable. Within the next 10 years, we will most likely observe some of the biggest changes in IT and connectivity since the invention of the internet itself.

– Kai Litzius

[1] Q. Hassan, “Demystifying Cloud Computing”, The Journal of Defense Software Engineering (CrossTalk), Jan/Feb 2011, 16–21.
[2] M. Armbrust, A. Fox, R. Griffith, A. D. Joseph, R. Katz, A. Konwinski, G. Lee, D. Patterson, A. Rabkin, I. Stoica, M. Zaharia, “Above the Clouds: A Berkeley View of Cloud Computing”, University of California, Berkeley, Feb 2009.
[4] What is NFC? Everything you need to know.
[5] J. Bar-Magen Numhauser, “Fog Computing: Introduction to a New Cloud Evolution”, in Escrituras silenciadas: paisaje como historiografía, University of Alcalá, Spain, 2013, 111–126.
[6] Google Self-Driving Car Project Monthly Report, September 2015.

Feb 21, 2016

The first satellite in space was Sputnik 1, launched by the Soviet Union in 1957. Since then, more than 6000 satellites have been launched. Of the estimated 3600 satellites still in orbit, only about 1000 are operational.[1-3] The rest are more or less useless and part of the space debris, which is becoming an ever more important problem.

But what are they doing all the time?

Satellites can be distinguished by their usage into various categories. News, science, earth observation, navigation and military satellites are only a few examples of the broad range of applications.

Just imagine: your day starts with your alarm clock. It is an ordinary one, not a radio-controlled one, of course. After the first coffee you want to look up the weather forecast on your smartphone. No chance. Without weather satellites, a forecast is a possible but quite vague endeavor, and without adequate satellites, a smartphone is an absolutely useless device.

On your way to work you notice that your satnav is not working, either. Of course not, how should it, without GPS?! GPS is the magic word for our modern world. ATMs are reliant upon GPS, as well as airports, telephone, stock exchange and so on.

Without satellites we would at least be able to survive, but our lives would change in many ways. Scenarios in which confused people walk around, fingers on a map, looking for an old-fashioned phone booth, are Hollywood material and very improbable.[4]

Back to space debris: what happens to all the hundreds and thousands of tons of scrap? After 3-8 years, a satellite retires. Modern satellites have special engines that transport them into a graveyard orbit, where they travel forevermore. Older ones vaporize upon re-entry into the atmosphere.[5]

View of our planet. Can you spot it? [6]

So, without satellites our lives would be totally different, but at least the view of our blue planet would not be blocked by thousands of tons of terrestrial garbage.

– Katharina Stockhofe

[1] D. Rising, “Satellite hits Atlantic — but what about next one?”, Seattle Times, 11 November 2013.
[2] Global Experts Agree Action Needed on Space Debris.
[3] UCS Satellite Database.

Feb 15, 2016

Tinnitus, the never-ending auditory experience, is a well-known malady. Patients with tinnitus hear sounds even though no source of this acoustic impression is present; at least not outside of the brain.[1] That the source of the sound is in fact inside the brain is proven by several observations. Firstly, patients whose acoustic nerves have been severed still “hear” the sound. Secondly, the acoustic sensation is independent of the position of the ears. Neither fact is compatible with regular sounds. Furthermore, EEG analysis has shown that neuronal activity is altered in tinnitus patients.[2]

In the current Question of the Week, however, I do not want to focus on tinnitus but on a similar phenomenon: the Hum. First mentioned in the 1960s, the Hum has been detected around the world.[3] But what is it? People who complain about it “hear” a low-frequency humming sound, similar to a diesel engine or a turbine, without any physical source.[4] And how does it differ from regular tinnitus? It displays some dissimilar properties, such as a volume that varies with the location of the patient, and modulation: it is not perceived as a single steady tone but more as a vibrato-like sound.[5]

So if it is not tinnitus, what is the reason for the Hum? There is a variety of speculations. Most of them attribute the Hum to electromagnetic fields emitted by modern technology, such as mobile telephones, transmission masts, and Wi-Fi networks. But this cannot be the (only) cause, since the Hum was already described before these technologies existed. Until now, no unambiguous explanation for the Hum exists, but it is mainly reported in high-technology societies such as Europe or North America.[6] This, however, might just be due to limited data from other countries. In fact, the Hum remains an unsolved question, and it is unclear whether it has a physical origin still waiting to be discovered or is just the imagination of the patients.

– Andreas Neidlinger

[2] I. Adamchic, B. Langguth, C. Hauptmann, P. A. Tass, Front. Neurosci. 2014, 8, 284.