Question of the Week

Mar 20, 2016

All of us are esurient creatures when it comes to being happy. Everyone wants to be happy, and there are myriad paths to happiness – religious, spiritual and even rational. The Dalai Lama once remarked, “Happiness is not something ready made. It comes from your own actions.”

Yet it seems, year after year, that a group of people sharing a small gene pool ends up at the top of the World Happiness Index [1]. The Danes, it seems, are genetically endowed when it comes to being happy [2]. A gene variant called 5-HTTLPR appears to be behind it: it influences the metabolism of serotonin, the neurotransmitter that affects our moods.

Does it then mean that you cannot be happy if you have not inherited Danish genes? No, there’s more to this story. And that’s where science opens a new door towards happiness.

Whether we are Danish or not, we all produce a neurotransmitter called anandamide [3]. The name of the molecule itself exudes joy, deriving from the Sanskrit word ananda, or bliss. But then why aren’t we all equally happy? That depends on the extent to which this “bliss molecule” is broken down: people who produce less of the enzyme that metabolizes it are more prone to be calm and at peace [4].

Prof. Friedman, from the Weill Cornell Medical College, puts it elegantly when he says, “What we really need is a drug that can boost anandamide—our bliss molecule—for those who are genetically disadvantaged.”[5]

Now it seems that a future in which we can engineer happiness is not that far off. Two things are needed: an understanding of the genetic factors behind the different neurotransmitters, and the ability to manipulate them with nanoscale precision. With that information, it would be possible to swallow a pill that carries predesigned nanobots to specific regions of the brain and turns genes on or off at will. This, in turn, would change how we perceive an environment that would otherwise strain our ability to be happy. Such a future was envisioned a decade ago by author James Hughes in his book “Citizen Cyborg”.

So yes, it seems quite possible that the next generation will be able to buy over-the-counter pharmaceuticals that generate feelings of satisfaction, joy or bliss. Still, being truly happy and leading a satisfying life will take more than a drug; after all, happiness “comes from your own actions”.

– Soham Roy

[3] W.A. Devane et al., Science 1992, 258, 1946-1949.
[4] I. Dincheva et al., Nature Communications 2015, 6, 1-9.

Mar 13, 2016

A recent study by a French team examined the long-term effects of sugar intake in rats [1]. The purpose of the experiment was to investigate whether excessive sugar consumption during adolescence alters the brain’s reward system.

It is known that, during the development of the mammalian brain, there are specific time windows in which its proper functions are established [2]. These windows extend beyond prenatal development and last into early adolescence. The brain’s reward system, in particular, may be sensitive during adolescence: if it is over- or underactive in this period, it may give rise to disorders such as addiction or depression [3].

The French researchers exposed adolescent male rats to a sucrose solution, leaving them free to choose between it and a supplemental bottle of water. Much like some of us humans, the sugar-exposed teenage rats developed a sweet tooth and consumed more sugar solution than water. After 16 days, the sucrose bottle was removed from their home cages. Later, as adult animals, their reactions to sugar intake and their reward circuitry were examined. These rats responded behaviorally less to sugar consumption than animals of a control group that had not consumed sugar in adolescence; to put it in more anthropomorphic terms, they were not as excited when sugar was offered again. In addition, the researchers found that a key area of the brain’s reward pathway, the nucleus accumbens, was less active than in the control group. These results suggest that overconsumption of sugar during adolescence alters the development of the reward circuitry, so that sweet water no longer seems appealing in adulthood.

So, what could these findings mean for humans? Should we give more sweets to teenagers and hope that they lose interest in them later? Does that work with all “bad” substances, for example alcohol and other drugs? The answer to all of these questions is a clear “No!”. Firstly, the study showed that excessive sugar consumption led to a deficit in the reward system, and such deficits could manifest themselves in other, more severe behavioral problems: psychiatric disorders that have been linked to a dysfunctional reward system include depression, schizophrenia and substance abuse. Secondly, other studies have shown that adolescent alcohol consumption in rats causes severe brain damage, ranging from altered network function to cell death [4]. However, this study neither fully explains whether excessive sugar intake during adolescence causes severe reward-related disorders, nor whether the findings apply to humans. What these experiments do tell us is that teenagers and adults should consume sweets only in moderation – not just because they are unhealthy, but also to protect our mental health.

-Theresa Weidner

[1] F. Naneix, F. Darlot, E. Coutureau, M. Cador, Eur. J. Neurosci. 2016, 46, 671-680.
[2] C. Rovee-Collier, Dev. Psychol. 1995, 31(2), 147-169.
[3] T. Paus, M. Keshavan, J. N. Giedd, Nat. Rev. Neurosci. 2008, 9, 947-957.
[4] C. Guerri, M. Pascual, Alcohol 2010, 44(1), 15-26.

Feb 29, 2016

In recent years, cloud computing has become more and more important for industry as well as for the private sector. But what exactly is cloud computing, and where could it take IT in the future?

Firstly, the term itself refers to what is nowadays a common practice: to “outsource IT activities to one or more third parties that have rich pools of resources to meet organization needs easily and efficiently” [1, 2]. In other words, one buys permission to use hardware, network connectivity, storage, and software located in a computing center anywhere in the world. It is roughly comparable to familiar public utilities such as electricity, water and natural gas [1] and follows the same rule: you pay for what you need, and no more.

Private users are also increasingly part of the system. Cloud storage saves personal data and makes it available from any place with an internet connection, and file-sharing websites have gained a lot of popularity in recent years. Another kind of cloud computing is especially interesting for research: fields with high computational needs, e.g. astrophysics, medicine, and large-scale facilities like CERN, can save a lot of resources by outsourcing computational work to volunteers. While the volunteers’ PCs are idle, a program runs in the background and performs calculations for the project [3].

The current state of cloud computing is already very impressive; however, there is one major goal the IT industry is now starting to tackle, namely the so-called Internet of Things (IoT). An example is Near Field Communication (NFC), a set of hardware and software protocols that enables two devices to communicate wirelessly with each other [4]. It is already part of most modern smartphones and widely used in contactless payment cards. More and more devices in our daily life will be included in this IoT, resulting in increased connectivity and data flow around us. The idea is to take the cloud and place it everywhere around us, basically creating a fog [5]. This aptly named “fog computing” could span a wide range of applications in daily life, from smart houses that adjust the temperature to refrigerators that tell their users when they are getting empty. An even more spectacular application is connected to the trend towards self-driving cars. Large IT companies have already started to develop cars that no longer need a driver [6]. What sounds like science fiction could become commonly available within the next few decades and open the path to some great applications of fog computing. How about a traffic light that counts the arriving cars and adjusts its phases to the traffic volume, or tries to prevent accidents by detecting obstacles and pedestrians much faster than any human could? The possibilities are endless and incredible.
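
To make the traffic-light idea a bit more concrete, here is a minimal Python sketch of the kind of logic such a roadside “fog” node might run; the sensor count, the per-car clearance time and the phase limits are invented for illustration and do not come from any real traffic system.

```python
# Hypothetical fog-computing node: adapt a traffic light's green phase
# to the number of cars counted by a local sensor. All numbers below are
# illustrative assumptions, not values from a real deployment.

MIN_GREEN_S = 10.0       # shortest allowed green phase in seconds
MAX_GREEN_S = 60.0       # longest allowed green phase in seconds
SECONDS_PER_CAR = 2.5    # assumed time for one queued car to clear the junction

def green_phase_duration(cars_waiting: int) -> float:
    """Scale the green phase with the queue length, clamped to sensible limits."""
    desired = cars_waiting * SECONDS_PER_CAR
    return max(MIN_GREEN_S, min(MAX_GREEN_S, desired))

if __name__ == "__main__":
    for queue in (0, 4, 12, 40):
        print(f"{queue:2d} cars waiting -> green for {green_phase_duration(queue):4.1f} s")
```

A real system would of course fuse data from many sensors and coordinate with neighboring lights – which is exactly where the low-latency, local processing promised by fog computing would come in.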

However, one also needs to consider possible disadvantages such as data security and the problem of the totally transparent citizen. Moreover, the judicial system will require a lot of adjustments and new laws, especially when the computer hardware that processes cloud data is located in a country with different data protection laws. There are many changes to be made, but so far technological progress has never been stopped. Within the next 10 years we will most likely witness some of the biggest changes in IT and connectivity since the invention of the internet itself.

– Kai Litzius

[1] Hassan, Qusay (2011). Demystifying Cloud Computing. The Journal of Defense Software Engineering (CrossTalk) 2011 (Jan/Feb): 16–21.
[2] M. Armbrust, A. Fox, R. Griffith, A. D. Joseph, R. Katz, A. Konwinski, G. Lee, D. Patterson, A. Rabkin, I. Stoica, M. Zaharia, “Above the Clouds: A Berkeley View of Cloud Computing”. University of California, Berkeley, Feb 2009.
[4] What is NFC? Everything you need to know.
[5] Bar-Magen Numhauser, Jonathan (2013). Fog Computing introduction to a New Cloud Evolution. Escrituras silenciadas: paisaje como historiografia. Spain: University of Alcala. pp. 111–126.
[6] Google Self-Driving Car Project Monthly Report – September 2015.

Feb 21, 2016

The first satellite in space was Sputnik 1, launched by the Soviet Union in 1957. Since then, more than 6,000 satellites have been launched. Of the estimated 3,600 satellites still in orbit, about 1,000 are operational.[1-3] The rest are more or less useless and form part of the space debris that is becoming an ever more pressing problem.

But what are they doing all the time?

Satellites can be grouped into various categories according to their use: news, science, Earth observation, navigation and military satellites are only a few examples of the broad range of applications.

Just imagine. Your day starts with your alarm clock – an ordinary one, not a radio-controlled one, of course. After the first coffee you want to look up the weather forecast on your smartphone. No chance. Without weather satellites, forecasting is possible but rather vague, and without adequate satellites, a smartphone is a largely useless device.

On your way to work you notice that your satnav is not working either. Of course not – how could it, without GPS? GPS is the magic word of our modern world: ATMs rely on it, as do airports, telephone networks, stock exchanges and so on.

Without satellites we would at least be able to survive, but our lives would change in many ways. Scenarios in which confused people walk around, fingers on a map, looking for an old-fashioned phone booth are Hollywood-like and very improbable.[4]

Back to space debris. What happens to all the hundreds and thousands of tons of scrap? After 3-8 years, a satellite retires. Modern satellites have special engines that transport them into a graveyard orbit, where they travel forevermore. Older ones burn up upon re-entry into the atmosphere.[5]

View of our planet. Can you spot it? [6]

So, without satellites our lives would be totally different – and the view of our blue planet would not be blocked by thousands of tons of terrestrial garbage.

– Katharina Stockhofe

[1] Rising, David (11 November 2013). “Satellite hits Atlantic — but what about next one?”. Seattle Times
[2] Global Experts Agree Action Needed on Space Debris
[3] UCS Satellite Database

Feb 15, 2016

Tinnitus – the never-ending auditory experience – is a well-known malady. Patients with tinnitus hear sounds even though no source of this acoustic impression is present, at least not outside the brain.[1] The source of the sound is in fact inside the brain, as several observations show. Firstly, patients whose acoustic nerves have been severed still “hear” the sound, and secondly, the acoustic sensation is independent of the position of the ears. Neither fact is compatible with an ordinary external sound. Furthermore, EEG analyses have shown that neuronal activity is altered in tinnitus patients.[2]

In the current Question of the Week, however, I do not want to focus on tinnitus but on a similar phenomenon: the Hum. First reported in the 1960s, the hum has since been detected around the world.[3] But what is this hum? People who complain about it “hear” a low-frequency humming sound, similar to a diesel engine or a turbine, without any physical source.[4] And how does it differ from regular tinnitus? It displays some distinct properties, such as a volume that varies with the patient’s location and a modulation: it is not perceived as a single steady tone but rather as a vibrato-like sound.[5]

So if it is not tinnitus, what causes the hum? There is a variety of speculation. Most of it attributes the hum to electromagnetic fields emitted by modern technology such as mobile phones, transmission masts and Wi-Fi networks. But this cannot be the (only) explanation, since the hum was described before these technologies existed. To date, no unambiguous explanation exists, but the hum is mainly reported in high-technology societies such as Europe and North America.[6] This, however, might simply reflect limited data from other parts of the world. In fact, the hum remains an open question, and it is unclear whether it has a physical origin still waiting to be detected or is merely the imagination of those who perceive it.

– Andreas Neidlinger

[2] I. Adamchic, B. Langguth, C. Hauptmann, P. A. Tass, Front. Neurosci. 2014, 8, 284.

Feb 8, 2016

The Voynich manuscript is probably one of the most prominent and mysterious examples of a document whose content remains undeciphered. It is named after Wilfrid Michael Voynich, who discovered and acquired it in 1912. The manuscript most likely originated in the 15th century, as suggested by radiocarbon dating, but neither its authorship nor its complete ownership history can be reconstructed. The first known possessor was Jakub Horcicky de Tepenec, a 17th-century Bohemian chemist, pharmacist and physician at the court of Emperor Rudolf II. After several intermediate owners – among others, the Jesuit College and the aforementioned Wilfrid Michael Voynich – the manuscript today resides in Yale University’s Beinecke Rare Book and Manuscript Library.

The Voynich manuscript is written in no familiar language; it uses an unidentified writing system with unknown letters and contains a huge number of mysterious illustrations, such as drawings of obscure plants or bathing women. Based on the arrangement of these illustrations, it is usually divided into a herbal, an astronomical, a biological, a cosmological, a pharmaceutical and a recipe section.

Fig. 1: An illustration from the herbal section of the Voynich manuscript.[4]

Despite many attempts, the Voynich manuscript has never been deciphered, and its content is still left to speculation. Nevertheless, many theories about its origin and meaning have been proposed. Some suggest that the artificial language is based on actual Latin or German, disguised by several encryption steps; others point out that the variation of letters shows similarities to Semitic languages. Hypothesized authors include Roger Bacon, a 13th-century Franciscan friar and polymath; Antonio Averlino, a 15th-century northern Italian architect; Raphael Sobiehrd-Mnishovsky, a 17th-century Bohemian writer; and many more – even Voynich himself, Leonardo da Vinci or aliens!

Over the last decades of Voynich research, some scientists have suggested that the whole manuscript is an elaborate hoax without any real meaning. For example, in a study published in Cryptologia, the Austrian physicist Andreas Schinner argued that the order of words within the manuscript is unnaturally regular.[1] Yet an obvious argument against the hoax theory is that the manuscript is too complex and required too much sophisticated work to be a mere fraud.

A more recent study by the physicists Marcelo Montemurro and Damian Zanette, published in PLoS One, also argues against the hoax hypothesis.[2] It analyzes the long-range word distribution in the manuscript using methods from information theory. In contrast to the earlier study by Andreas Schinner, Montemurro and Zanette found that the word distribution is not homogeneous but, like that of natural languages, shows certain patterns and clustering; for instance, specific clusters of words occur only in specific sections of the text. Moreover, the word frequencies obey Zipf’s law – the frequency of a word is roughly inversely proportional to its rank – another hint that the writing system is based on a natural language.
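
For readers curious what a Zipf-style check looks like in practice, the toy Python sketch below counts word frequencies in an arbitrary plain-text file and compares them with the simple 1/rank prediction; the file name is a placeholder, and this is of course a far cry from the information-theoretic analysis Montemurro and Zanette actually performed.

```python
# Toy Zipf's-law check: the frequency of the r-th most common word should
# fall off roughly as f1 / r, where f1 is the frequency of the top word.
from collections import Counter
import re

def zipf_table(text: str, top: int = 10):
    """Rank the most common words and pair each count with the ~f1/rank estimate."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(words).most_common(top)
    f1 = counts[0][1]  # frequency of the top-ranked word
    return [(rank, word, freq, f1 / rank)
            for rank, (word, freq) in enumerate(counts, start=1)]

if __name__ == "__main__":
    # 'corpus.txt' is a placeholder for any plain-text corpus or transliteration.
    with open("corpus.txt", encoding="utf-8") as handle:
        text = handle.read()
    for rank, word, freq, predicted in zipf_table(text):
        print(f"{rank:2d}  {word:<15} observed={freq:6d}  ~f1/rank={predicted:8.1f}")
```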

In 2014, Stephen Bax, professor of applied linguistics at the University of Bedfordshire, proposed a translation of 10 words of the manuscript by applying a “bottom-up” approach like the one used to decode Egyptian hieroglyphs.[3] More specifically, he compared several of the manuscript’s illustrations of plants and stars with drawings in other European and Middle Eastern medieval manuscripts, identified them by name, and associated these names with proper nouns within the text. In this way he was, for instance, able to find the alleged word for the constellation Taurus.

Still, it remains to be seen whether Stephen Bax’s approach will eventually lead to a meaningful translation of the Voynich manuscript and whether its secrets will ever be revealed.

– Philipp Heller

[1] A. Schinner, “The Voynich Manuscript: Evidence of the Hoax Hypothesis”, Cryptologia, vol. 31, no. 2, pp. 95–107, Mar. 2007.
[2] M. A. Montemurro and D. H. Zanette, “Keywords and Co-Occurrence Patterns in the Voynich Manuscript: An Information-Theoretic Analysis.”, PLoS One, vol. 8, no. 6, p. e66344, Jan. 2013.
[3] S. Bax, “A proposed partial decoding of the Voynich script”, Version 1, Jan. 2014

Jan 31, 2016

As we all learned in childhood, solid rocks belong to the abiotic environment and cannot move on their own. They have no will of their own and, besides, no means of locomotion.

The rocks in Racetrack Playa, located in Death Valley National Park in the south-western USA – a hostile place of record heat (the hottest air temperature on Earth since records began, 56.7 °C, was measured in Death Valley in July 1913)[1] – nevertheless seem to overrule this fundamental rule of biology.

The name Racetrack Playa is no accident: over the decades, tens to hundreds of rocks have been found with tracks behind them, as if they had slowly slid along and left grooves in the dusty soil (left picture). The tracks are often parallel and run in the same direction, as if the rocks were taking part in a slow-motion race (right picture).


Rock with a distinct track (left)[2] and aerial image of rocks moving in the same direction (right)[3].

This phenomenon was first described in 1948 and sparked wide-ranging speculation about its origin. Some of the rocks weigh more than a hundred kilograms, so human help would only be possible with heavy equipment – yet no such traces can be found around them. Mud, slime-producing algae and the weather have all been considered.[4]

Wind in conjunction with ice floes had long been proposed as the most plausible driver of the rock movement, but no direct observation had been made, since studying the site in person is discouraged by the temperatures and the restricted access in Death Valley. During the winter of 2013/2014, however, the group of Richard D. Norris and James M. Norris was able to monitor the motion using GPS in combination with data from weather stations.[5] Several rocks were fitted with GPS transmitters and the area was observed by time-lapse photography. Between November and February, most of the playa was covered by a shallow pool of rainwater that froze at night. On sunny and windy days, the ice partly melted and the rocks were driven on their ice sheets by the wind and running water. In doing so, they pushed the mud beneath them aside, forming long, flat furrows. Some rocks glided only a few meters, some travelled up to 66 m, and some shared an ice sheet, which produced parallel tracks. Under some rocks the ice had already broken up, so they did not move at all. At the end of February the temperatures rose, the water evaporated and the tracks were exposed. Norris’ results show that freezing temperatures to form ice sheets and wind speeds of 3-5 m/s are necessary for rock movement of 2-5 m/min, while the velocity also depends on the individual stone’s surface texture and weight.[5]
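
A quick back-of-the-envelope calculation, sketched below in Python using only the velocities and the 66 m maximum track length quoted above, shows that even the longest track needs well under an hour of sustained motion.

```python
# Rough estimate: how long must a rock keep moving to leave a 66 m track,
# given the reported velocities of 2-5 m/min?
track_length_m = 66.0
for velocity_m_per_min in (2.0, 5.0):
    minutes = track_length_m / velocity_m_per_min
    print(f"At {velocity_m_per_min:.0f} m/min: about {minutes:.0f} minutes of motion")
# -> roughly 13 to 33 minutes, i.e. a single windy morning is enough.
```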

This is an excellent example of a long-unexplained phenomenon that was finally elucidated by rigorous research. Do such allegedly mysterious occurrences lose their charm through an objective, scientific clarification like this? No! On the contrary, they show how complex and versatile nature’s mechanisms are, even in such a peculiar phenomenon as the wind-driven “wandering” rocks of the desert.

– Tatjana Daenzer

Read more:
[2] “Runningrock2” by Tahoenathan – Own work. Licensed under CC BY-SA 3.0 via Wikimedia Commons –
[3] “Racetrack playa 2013-12-20” by Richard D. Norris, James M. Norris, Ralph D. Lorenz, Jib Ray, Brian Jackson. Licensed under CC BY 1.0 via Wikimedia Commons –
[4] R. P. Sharp, D. L. Carey, J. B. Reid, P. J. Polissar, M. L. Williams, Geol. Soc. Am. 1996, 765–767.
[5] R. D. Norris, J. M. Norris, R. D. Lorenz, J. Ray, B. Jackson, PLoS One 2014, 9, 1–11.

Jan 24, 2016

“Many will swoon when they do look on blood.” (Shakespeare, As You Like It, Act IV, Scene III)

Some people know this phenomenon only from movies, TV shows or books; others from relatives, friends or even themselves: the terrible, weak feeling of fainting triggered by the sight of a large amount (or sometimes just a few drops) of blood. Such people are, in most cases, not suited to donating blood, let alone working in hospital emergency rooms.

But where does this strong reaction come from? Is it even good for anything?

First of all, we are talking about so-called blood phobia, also known as hemophobia. It belongs to the group of blood-injection-injury (BII) phobias, as categorized by the Diagnostic and Statistical Manual of Mental Disorders (DSM) [1].

The general consensus is that exaggerated blood phobia, which results in vasovagal responses, originates from an individual’s psychological traits rather than from their genetic heritage. It sometimes seems to be caused, for example, by childhood trauma [2]. On the other hand, twin studies suggest that there may also be certain genetic predispositions common to phobias in general [3].

Anyway, are there any explanations? Indeed, there are three more or less fascinating ideas that could hold the key:

(1) The danger theory: Seeing blood is an alarm signal, so when we start feeling weak, we automatically seek a safe place to rest and/or hide. This would, of course, only make sense if the process of fainting takes some time, allowing us to act.
(2) The “play dead” theory: In the Stone Age, some predators were not interested in motionless prey; they would rather wait for a person to flee and then give chase. Good for the people with hemophobia during those ancient hunts!
(3) The self-healing theory: Blood pressure decreases during fainting. An injured person could thereby slow down blood loss and support blood coagulation.

Whatever its true origin, nowadays the fear of blood is mostly just annoying. Luckily, as with other phobias, blood phobia can be treated [4].

-Jennifer Heidrich

[1] Lipsitz et al. (2002), The Journal of Nervous and Mental Disease 190(7): 471-478.
[2] Thyer et al. (1985), J. Clin. Psychol., 41: 451–459
[3] Kendler et al. (1992), Arch Gen Psychiatry; 49(4):273-281.
[4] Sanford, J. (2013), Stanford Medicine, Spring 13.

Jan 12, 2016

This might seem to be a very odd question at first, because we practically know everything about particles, atoms, molecules, and their sizes, right?
When we are in school, we learn that an atom is composed of a nucleus, which is very small in comparison to the atom itself and is surrounded by a “cloud of electrons”. This description already implies that we cannot be sure where the electrons actually are; we describe this uncertainty in terms of electron densities, which entails that an atom does not have clearly defined edges. In theory, an electron can be found at any distance from the nucleus, but the probability decreases substantially the farther away you go. This is a consequence of what we call wave-particle duality.


Figure 1: Visualization of a helium atom.

An electron behaves like a particle as we know it from classical physics, but because it is so small, it can also be described as a wave following the laws of quantum mechanics. Among other methods, researchers have used a type of scanning probe microscopy called atomic force microscopy (AFM) to determine effective radii of atoms. AFM relies on detecting the interaction between a sample and a very sharp tip – a little bit as if a finger were tracing an atomic surface. In contrast to optical microscopy, the resolution of AFM is not constrained by the optical diffraction limit, which makes it possible to visualize single atoms. But since the interaction between the tip and the sample atom depends on the respective electron clouds, each described by a wave function, it would not be fair to say that we know the definitive size of an atom.
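
One way to make this fuzziness concrete is to ask how much of the electron’s probability lies within a given radius. The short Python sketch below does this for the hydrogen 1s orbital, whose radial distribution is known analytically (used here as a simple stand-in for the helium atom in the figure): it finds the radius enclosing 90% of the electron density, which comes out near 2.7 Bohr radii, i.e. roughly 1.4 Å, while a small probability still remains far beyond that.

```python
# How "big" is an atom with no sharp edge? Ask instead: within which radius
# do we find 90% of the electron's probability? For the hydrogen 1s orbital
# the enclosed probability is analytic:
#   F(r) = 1 - exp(-2r/a0) * (1 + 2r/a0 + 2(r/a0)^2)
import math

A0_ANGSTROM = 0.529  # Bohr radius in angstrom

def enclosed_probability(r_over_a0: float) -> float:
    """Probability of finding the 1s electron within radius r (in units of a0)."""
    x = r_over_a0
    return 1.0 - math.exp(-2.0 * x) * (1.0 + 2.0 * x + 2.0 * x * x)

def radius_enclosing(fraction: float) -> float:
    """Bisect for the radius (in units of a0) that encloses the given probability."""
    lo, hi = 0.0, 20.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if enclosed_probability(mid) < fraction:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

if __name__ == "__main__":
    r90 = radius_enclosing(0.90)
    print(f"90% of the electron density lies within {r90:.2f} a0 "
          f"({r90 * A0_ANGSTROM:.2f} angstrom)")
    print(f"Probability within twice that radius: {enclosed_probability(2 * r90):.4f}")
```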

– Kristina Klinker

Read more:
[1] F. J. Giessibl, Mater Today 2005, 8, 32–41.
[2] (last access 10.01.16).

Jan 4, 2016

Throughout the history of mankind, the sky above us has fascinated and inspired. Investigations driven by many different scientific questions have led to great progress in understanding the universe and our Solar System. Yet many questions are still waiting to be answered – not only in the distant universe, but also in our immediate neighborhood. One of them concerns the origin of the Moon.

Astronomers have presented several hypotheses for how Earth’s satellite could have formed. Most likely, the Moon was neither captured nor the result of a fission process [1]. Nowadays, most scientists agree on the giant impact hypothesis: another celestial object, named Theia, collided with the proto-Earth about 4.5 billion years ago [2]. After the impact, matter in orbit around our planet could have accumulated to form the Moon. Compared to other planet-satellite pairs, the Moon is peculiarly large, and to explain the corresponding angular momentum, Theia must have been about as large as Mars [3]. But this hypothesis does not explain all of the Moon’s characteristics. Whereas the densities of the Earth and the Moon differ, their chemical compositions – mainly investigated in terms of isotope ratios of certain elements (e.g. oxygen, titanium or tungsten) – are rather similar. This is odd, because most other bodies show significant differences that reflect where in the Solar System they originated. The Moon’s chemical composition should therefore resemble that of Theia – at least for the assumed impact angle, velocity and mass ratios [3].

One possible solution: coincidence! The compositions of the proto-Earth and its collision partner Theia may simply have been similar. This was long thought to be too unlikely, but new investigations and simulations indicate a probability of about 20% for such a match [1]. Subtle differences in the isotope ratios may be the result of late accretion following the impact [4,5]. But why this accretion led to exactly the isotope ratios astronomers observe today still remains a riddle.

-Nicola Reusch

[1] A. Mastrobuono-Battisti, H. B. Perets, S. N. Raymond, A primordial origin for the compositional similarity between the Earth and the Moon, Nature 520 (2015), 212–215.
[2] R. M. Canup, E. Asphaug, Origin of the Moon in a giant impact near the end of the Earth’s formation, Nature 412 (2001), 708–712.
[3] R. M. Canup, Simulations of a late lunar-forming impact, Icarus 168 (2004), 433–456.
[4] M. Touboul, I. S. Puchtel, R. J. Walker, Tungsten isotopic evidence for disproportional late accretion to the Earth and Moon, Nature 520 (2015), 530–533.
[5] T. S. Kruijer, T. Kleine, M. Fischer-Goedde, P. Sprung, Lunar tungsten isotopic evidence for the late veneer, Nature, 520 (2015), 534–537.