Absolute nothingness (Śūnyatā) is one of the most exciting notions in Buddhism. Strictly speaking, it resists any interpretation, but it can be thought of as Ultimate Reality. In the Mediterranean tradition, ancient cosmologists introduced another term that sounds more familiar – Chaos. It was associated with the infinite ocean and expressed the initial state of the cosmos in potentia. So as not to be numbed by the immensity of this notion, we can consider chaos as noise with an infinite spectrum of all conceivable frequencies. Through interaction with external conditions, certain modes manage to become more pronounced, as, for example, during the build-up of stimulated emission in a laser or during natural selection in the theory of evolution.
In the context of road traffic, we can regard the situation in ancient times as the initial chaotic state. As there were no roads as such, traffic was chaotic. With the evolution of horse-drawn transport, the road network developed. However, the roads were still only directions along which one could get from one place to another.
The situation changed when motor cars jolted the slow and stagnant horse traffic. There was no longer an intermediary between the man and the road that could choose a better path within the given direction on its own. Nonetheless, motorized transport had the obvious advantage of higher achievable speed. In turn, the desire to move faster and faster required less scattering off the surface roughness, which inevitably resulted in roads getting smoother, i.e., less chaotic. In the meantime, the assembly line was progressing drastically, and both factors led to a dense cloud of potentially fast cars. Yet people were still scratching their heads over why the average speed of road traffic was not increasing. After a while, they figured out what was to blame for the residual scattering: the interaction of the drivers with each other. In the absence of any predefined rules, everyone had to slow down and likely change direction to avoid physical interaction with other road users. Thus, the necessity of traffic regulations was obvious.
The first “Convention with respect to the international circulation of motor vehicles” was signed in Paris in 1909. Among other things, it contained the sign depicted in Fig. 1, which indicated a road intersection. Naturally, originating from maritime traffic, the habitual priority-to-the-right rule was established to regulate the right-of-way for two vehicles with intersecting directions. Later, the set of traffic regulations was complemented with priority signs and traffic lights.
In 1930 Kurt Gödel presented two theorems reflecting insuperable limitations of formal arithmetic. These theorems had a direct relation to the second problem on Hilbert’s list, which asked for a proof that arithmetic is consistent. Gödel’s first theorem (in Rosser’s form) states that within any consistent formal system S capable of expressing arithmetic, one can construct an expression A that can be neither proved nor disproved. In other words, the axiomatic system S is incomplete. Hao Wang published in his book A Logical Journey the full text that Gödel had written about his discovery of the incompleteness theorems:
“In the summer of 1930 I began to study the consistency problem of classical analysis. It is mysterious why Hilbert wanted to prove directly the consistency of analysis by finitary methods. I saw two distinguishable problems: to prove the consistency of number theory by finitary number theory and to prove the consistency of analysis by number theory <…> Since the domain of finitary number theory was not well-defined, I began by tackling the second half <…> I represented real numbers by predicates in number theory <…> and found that I had to use the concept of truth (for number theory) to verify the axioms of analysis. By an enumeration of symbols, sentences and proofs within the given system, I quickly discovered that the concept of arithmetic truth cannot be defined in arithmetic. If it were possible to define truth in the system itself, we would have something like the liar paradox, showing the system to be inconsistent <…> Note that this argument can be formalized to show the existence of undecidable propositions without giving any individual instances. (If there were no undecidable propositions, all (and only) true propositions would be provable within the system. But then we would have a contradiction.) <…> In contrast to truth, provability in a given formal system is an explicit combinatorial property of certain sentences of the system, which is formally specifiable by suitable elementary means…”
Traffic regulations in the context of Gödel’s first theorem
We can consider any set of interrelated rules, including traffic regulations, as a formal axiomatic system in which each axiom is not subject to proof and serves as a basis for deriving further formulas and theorems (or behavior in a traffic situation). Clearly, the traffic regulations are consistent, because otherwise the number of car crashes would be much higher. Hence, according to Gödel’s first theorem, the system is incomplete. This means that there will always exist a situation that cannot be resolved, regardless of the number of regulations (axioms) contained in the system.
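Schematically (in modern notation, not the author’s), the undecidable expression is a sentence G that asserts its own unprovability in S:

```latex
G \;\leftrightarrow\; \neg\,\mathrm{Prov}_S\!\left(\ulcorner G \urcorner\right)
```

If S is consistent, it proves neither G nor its negation. In the traffic analogy, G plays the role of a situation that the rulebook can neither resolve nor rule out.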
An example of such a situation can be observed at a road intersection regulated by the priority-to-the-right rule, as depicted in Fig. 2. Here four vehicles, one coming from each direction, want to pass the intersection, each going straight. There is no way to resolve this situation (to derive the formula) within the system of traffic regulations, and in every such situation the drivers themselves have to decide who has priority.
We can incrementally enhance our axiomatic system by introducing another rule to resolve such a dead-end situation – a rule that gives priority, say, to a red car. But then four red cars at the same crossing end up in the same confusion. As long as we add rules (axioms) to the system one by one (enumerably), which is the case for traffic regulations, such situations will keep appearing. Introducing priority signs, constant or variable in time like traffic lights, or topological road junctions (see Fig. 3) can only decrease the probability of such a situation emerging.
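As a toy illustration (the article’s argument recast in code, with a hypothetical intersection layout), the four-car situation is a circular wait: under pure priority-to-the-right, every car must yield to the car on its right, and following that obligation leads back to where one started:

```python
# Toy sketch: the priority-to-the-right rule as a "must yield to" relation.
# A cycle in this relation means that no car is allowed to move first.
right_of = {"north": "west", "west": "south",   # hypothetical layout: which car
            "south": "east", "east": "north"}   # stands to each car's right

def deadlocked(start):
    seen, car = set(), start
    while car not in seen:
        seen.add(car)
        car = right_of[car]   # follow the chain of obligatory yielding
    return car == start       # looped back around: a circular wait

print(deadlocked("north"))    # True: the rule alone cannot break the symmetry
```

Adding a “red cars first” axiom merely replaces this cycle with an identical one among four red cars.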
Nowadays, most intersections are controlled (or topologically resolved). Let’s assume that the preposterous situation of four red cars trying to figure out the right-of-way at an uncontrolled intersection hasn’t happened so far in our complex but finite system of road traffic. Hence, the drivers’ behavior seems to be fully governed by the traffic regulations. However, there is still the possibility of an unresolvable situation, namely, if someone comes up with the expression: “I’m not going to obey the rules.” For the axiomatic system of traffic regulations, this expression serves as a “liar paradox” and cannot be resolved. Thus, people had to come up with a penalty system to enforce acceptable compliance with the traffic regulations. But again, it is impossible to nullify the probability of such a situation emerging.
Instead of a conclusion
The aim of this text was not to establish a solid theory in either mathematics or law, and the presented examples may not be in strict compliance with the stated theorems. However, the author finds it entertaining that there are bridges between the different islands of knowledge accumulated by mankind over the infinite ocean of the unknown.
— Sergei Sobolev
Read more:
D. Mathers, M. Miller, O. Ando. Self and No-Self: Continuing the Dialogue Between Buddhism and Psychotherapy. 2013, Routledge.
http://www.plato.spbu.ru/TEXTS/lebedev/1/ferekid.htm
W. Koechner. Solid-State Laser Engineering. 2006, Springer.
C. Darwin. The Origin of Species by Means of Natural Selection; or, the Preservation of Favoured Races in the Struggle for Life. 1859, London.
Convention with Respect to the International Circulation of Motor Vehicles. The American Journal of International Law, Vol. 4, No. 4, Supplement: Official Documents (Oct. 1910), pp. 316–328.
https://upload.wikimedia.org/wikipedia/commons/f/f2/1909_Paris_Convention_road_signs.jpg
D. Hilbert. “Mathematical Problems”. Bulletin of the American Mathematical Society, 8 (10): 437–479, 1902.
S. Kleene. Introduction to Metamathematics. 1952, D. Van Nostrand Company, Inc.
H. Wang. A Logical Journey: From Gödel to Philosophy. 1996, The MIT Press.
https://www.archdaily.com/64354/pearl-river-necklace-nl-architects/
Just the thought of getting in touch with or even ingesting urine repels many people. Yet medical treatment with urine – also called urotherapy – has been a valued approach in the traditional medicine of many cultures over the last centuries. Usually, endogenous urine is used, but animals are also popular sources. The utilization of urine in conventional medicine is not uncommon either. Urokinase, for example, can be isolated from (human) urine and is an important thrombolytic agent. The drug Premarin®, which is used for hormone treatment, contains estrogens that are extracted from the urine of pregnant mares.
Besides milk, camel (i.e., Camelus dromedarius) urine plays a special role for desert-dwelling peoples like the Bedouin. Its use was advised by the Prophet Mohammed, and thus it found its way into Islamic prophetic medicine. This body fluid is said to cure diseases like tuberculosis, hepatitis, digestion problems, impotence, hemorrhoids, and flatulence, just to name a few. In 2013, one liter of urine from a virgin camel was worth about 15 € (ca. 20 USD) in Yemen, where it is used not only for universal medical treatment but also as a cosmetic product for skin and hair care.
Conventional medicine offers plenty of pharmaceutical cancer treatments, which are a blessing and a curse for patients at the same time. Besides the tedious and exhausting treatment, patients are confronted with severe side effects including nausea, fatigue, hair loss, inflammation, and temporary immunodeficiency. The demand for alternatives that are highly effective, easy to use, mild, and ideally based on renewable resources is therefore very high.
Camel urine has long been claimed to be an efficient cancer treatment, but detailed research on its actual potency and effect on human health is scarce. The soothing effect of pure camel urine on digestive problems can sufficiently be explained by its relatively high content of electrolytes like sodium and zinc, as found by Al-Attas in 2009 – a result that might just as well be achieved by drinking a bouillon. Khorshid et al. were the first to show an inhibiting effect of lyophilized camel urine on carcinoma cells in animals. In 2011, Alhaider et al. found that treatment of murine hepatoma cells (Hepa 1c1c7, i.e., liver cells) with camel urine inhibited the induction of Cyp 1a1 (a well-known cancer activator) gene expression by TCDD, a potent Cyp 1a1 inducer and a known carcinogen. Among virgin, pregnant, and lactating camels, the virgin’s urine was found to be most potent, while the urine of pregnant camels showed the least potency. One year later, Khorshid et al. showed that the potency of camel urine to reduce a specific type of lung cancer cells (A549) depends somewhat on the breed (Majaheem urine was found to be more effective than Magateer urine) and the age of the camels. The depletion of the cancer cells ranged from 85% to 93% of the starting cell number.[9,10] The bioactive subfraction PMF, which is believed to be responsible for these effects, is obtained from lyophilized camel urine (frequently called PM701 in the literature). Clinical trials on the oral uptake of PM701 fractions have shown no negative effects on human health so far. Apparently, the urine contains a high amount of antibodies of such small size that they can easily be absorbed through the patient’s digestive system. Other experiments also show antimicrobial effects of camel urine on bacteria and fungi. Aiming at an environmentally friendly substitution for synthetic agents, which are usually obtained via complex multistep reactions, this approach is most honorable. It is exciting to see that a waste product has the potential to cure severe diseases, although much more research must be done on this subject to clearly verify the efficacy. After all, urine is an excretion that contains various less beneficial digestive metabolites, and even toxins that the body wants to get rid of, so indisputable evidence for the efficacy and safety of the PM701 fractions is vital.
For those who are curious enough to try camel urine for whatever reason but are too disgusted by the idea of drinking it pure, a solution might be on the way: there are capsules of PM701, or PMF respectively, but they are not yet available on the market. Another alternative might be camel milk, which sounds much more enjoyable and is supposed to be a medicine just as magical as camel urine. It is said to “reduce blood sugar […] solve the problems of autism in children, enhance the immunity of the body…” and much more. Alas, some bad news comes from the World Health Organization (WHO) concerning the use of camel milk and urine: shortly after the Middle East respiratory syndrome coronavirus (MERS-CoV) outbreak in Saudi Arabia in 2012, dromedary camels were found to be zoonotic transmitters, meaning that the virus can be transferred from animals to humans – just as we are experiencing right now with the latest outbreak of a coronavirus (COVID-19). As a consequence, the WHO advises avoiding contact with camels and the consumption of raw camel milk and urine. This surely dampens the enthusiasm to utilize camel urine, and we might have to wait a few more years for some groundbreaking results in cancer research.
‒ Tatjana Dänzer
“Abstracts of Papers Read”. American Journal of Physiology. Legacy Content, 1952, 171, 704–781.
D. Brügger, “Hormone aus Stutenharn”, pharma-kritik, 2019, Nr. 5/6/1997.
Alhaidar, A., Gader, A. G. M. A., Mousa, S. A., The Journal of Alternative and Complementary Medicine, 2011, 17, 803–808.
https://www.vice.com/en_us/article/4w7gvn/drinking-camel-urine-in-yemen-fob-000300-v20n8
https://upload.wikimedia.org/wikipedia/commons/4/40/Dromadaire4478.jpg
Al-Attas, A. S., Arab J. Nucl. Sci. Appl., 2009, 42, 59–67.
Khorshid, F., International Journal of Pharmacology, 2008, 4, 443–451.
Alhaidar, A. A.; El Gendy, M. A. M.; Korashy, H. M.; El-Kadi, A. O. S., Journal of Ethnopharmacology, 2011, 133, 184–190.
Alghamdi, Z.; Khorshid, F., Journal of Natural Sciences Research, 2012, 2, 9–16.
Khorshid, F. A., 2009, US 20090297622.
Khorshid, F. A., Alshazly, H., Al Jefery, A., Osman, M. A.-M., Journal of Pharmacology and Toxicology, 2010, 5, 91–97.
Hamers-Casterman, C.; Atarhouch, T.; Muyldermans, S.; Robinson, G.; Hammers, C.; Songa, E. B.; Bendahman, N.; Hammers, R., Nature, 1993, 363, 446–448.
Mostafa, M. S.; Dwedar, R. A., British Journal of Pharmaceutical Research, 2016, 13, 1–6.
Hammam, A. R. A., Emirates Journal of Food and Agriculture, 2019, 31, 148–152.
https://www.eurosurveillance.org/content/10.2807/1560-7917.ES2014.19.16.20781
https://www.who.int/csr/don/08-january-2020-mers-uae/en/
Due to technical improvements in recent years, machines outcompete humans in a number of specialized tasks: whereas it can take a person a very long time to calculate the square root of a (non-square) number, a computer finishes this calculation at high precision within a fraction of a second. However, there are some areas in which machines cannot (yet) compete with nature. One of them is olfaction: currently, no device is available that could replace police dogs, with their ability to detect trace amounts of molecules. Similarly, farmers sometimes even train pigs to search for truffles hidden in the soil. Of course, the ability to detect relevant molecules in low amounts offers an enormous advantage and is thus subject to extensive optimization by evolution.
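For illustration (a minimal sketch of my own, not part of the article), the square-root computation mentioned above costs a computer only a handful of arithmetic steps, e.g. via Newton’s method:

```python
# Newton's method for the square root of a positive number a:
# repeatedly average the guess x with a/x until the result stabilizes.
def sqrt_newton(a, tol=1e-12):
    x = a if a > 1 else 1.0          # any positive starting guess works
    while abs(x * x - a) > tol * a:
        x = 0.5 * (x + a / x)        # converges quadratically to sqrt(a)
    return x

print(sqrt_newton(2.0))  # 1.4142135623730951, after about five iterations
```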
For a long time, it was not known how exactly olfaction works in higher organisms. Nonetheless, it had been intuitively clear that there must be specific receptors interacting with the corresponding odours. This simple assumption has a remarkable consequence: since mammals can distinguish a high number of odours, there must also be a high number of different receptors encoded in the genome. Indeed, the two scientists Linda Buck and Richard Axel discovered a comparatively large family of genes encoding odorant receptors [1]. For this discovery, they were awarded the Nobel Prize in Physiology or Medicine in 2004. The activation of these receptors on the cell surface always results in similar intracellular reactions. If a cell had receptors for different odour molecules on its surface, it could therefore not distinguish these odours. In accordance with this consideration, it turned out that each olfactory cell carries only one of the many different odorant receptor types encoded in the genome. Why exactly this is the case is still not known in detail to date. Even more surprisingly, it turned out that the axons of cells carrying the same type of odorant receptor on their surface end on the same set of cells.
An odour can of course
consist of several kinds of molecules. The activation of different combinations
of olfactory sensory neurons further increases the number of differentiable
odours. A phenomenon seemingly
similar to the exclusive expression of a single odorant receptor by an
olfactory sensory neuron is the generation of only one type of antigen receptor
by immune cells. They achieve this by a complicated recombination of genes,
which is clearly not observed in olfactory neurons.
Investigating how a biological structure develops is often very helpful: in a later work, Linda Buck was able to show that, in contrast to mature olfactory neurons, immature neurons contain multiple mRNAs for different odorant receptors [2]. Why cells of our body can have entirely different morphologies and properties even though they all carry a copy of the same genome is a fundamental question which keeps many biologists busy. It is the differential expression of the genes in a cell which causes these differences. This gives muscle cells the ability to contract and enables neurons to generate action potentials.
However, all olfactory neurons express a very similar pattern of genes except for their odorant receptor. One of the reasons for the transcription of different amounts of RNA from different genes is the spatial arrangement of the DNA in the nucleus. If it were not tightly packed into the nucleus, the DNA in each cell would have a total length of about 1.8 m, and highly condensed sections of DNA are usually not accessible for transcription into RNA. Stavros Lomvardas, a former member of the group of Richard Axel, was able to show that DNA segments encoding odorant receptors on different chromosomes get pulled close to each other in a small spatial region of the nucleus. Interactions between the different DNA segments encoding odorant receptors could contribute to the exclusive transcription of one specific odorant receptor gene [3,4].
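The quoted length follows from a quick back-of-the-envelope estimate (standard textbook values, my calculation rather than the article’s): a diploid human cell contains roughly 6 × 10^9 base pairs, each contributing about 0.34 nm of helix length, so

```latex
L \approx 6 \times 10^{9}\,\mathrm{bp} \times 0.34\,\mathrm{nm/bp} \approx 2\,\mathrm{m},
```

in the same range as the ~1.8 m figure above.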
The relevance of the spatial arrangement of the DNA within the nucleus for gene expression is an open question of major interest beyond olfaction. To what degree there is a specific nuclear arrangement of DNA, and how it is re-established after cell division, are further important unsolved questions in biology.
— Tobias Ruff
[1] Buck, L.; Axel, R. A novel multigene family may encode odorant receptors: a molecular basis for odor recognition. Cell 1991, 65(1), 175–187. DOI: 10.1016/0092-8674(91)90418-X.
[2] Hanchate, N. K.; Kondoh, K.; Lu, Z.; Kuang, D.; Ye, X.; Qiu, X.; Pachter, L.; Trapnell, C.; Buck, L. B. Science 2015, 350(6265), 1251–1255.
[3] Clowney, E. J.; LeGros, M. A.; Mosley, C. P.; Clowney, F. G.; Markenskoff-Papadimitriou, E. C.; Myllys, M.; Barnea, G.; Larabell, C. A.; Lomvardas, S. Cell 2012, 151(4), 724–737.
[4] Markenscoff-Papadimitriou, E.; Allen, W. E.; Colquitt, B. M.; Goh, T.; Murphy, K. K.; Monahan, K.; Mosley, C. P.; Ahituv, N.; Lomvardas, S. Cell 2014, 159(3), 543–557.
Alex Steffen makes enterprises future-proof. He is an expert in business strategy and innovation, as well as a no. 1 best-selling author and speaker. His mission is to empower 150,000 business leaders to future-proof their enterprises with ease by 2025. How? Alex turns business leaders into entrepreneurs. Alex Steffen was named Management Thought Leader 2019 by Change X, and his book “Die Orbit Organisation” was nominated for the getAbstract International Book Award. His keynotes “The Atlas of Innovation” and “Unstoppable Human” are international hits. Learn about Alex at https://alextsteffen.com.
JUnQ: What is digital citizenship? Should there be a
basic education in responsible handling of digital tools in (early) schools?
Alex T. Steffen: Let’s pick a narrow definition. I
understand digital citizenship as a human’s ability to be a more rounded part
of society thanks to information technology. The truth is: technology often
simply emphasizes the existing design.
Digital schooling isn’t better schooling as long as schools fail to teach us the central skill required in the modern world: thinking for ourselves. In my opinion, that’s what the society and workplace of the future need. We’re trying to stitch digital onto an outdated paradigm, which tells us that memorizing facts is fundamental to a successful career. And then we’re surprised to find that machines take away jobs.
The truth: a rounded human, well-equipped to play his or her part in society, combines a unique blend of complex skills. Uniqueness is an advantage, not a disadvantage. I see micro degrees, potent mentoring, and real exposure to the world as essential ingredients of education towards digital citizenship. We don’t need any more homogeneous machine workers. The new standard for humans and businesses is hyper-customization. A smart country isn’t merely a country that has advanced digital citizen services.
A smart country is a society where its citizens can create a career and a life on their own terms using highly customizable (education) resources. That will make them uniquely trained and attractive according to their strengths and inclinations. Look around: the top talents are already living this very design. Now it’s our responsibility to take it from niche to mainstream.
JUnQ: What are the general problems and dangers that
arise with (global) digitalization and what are possible solutions?
Alex T. Steffen: This begs the exploration of the new
relationship between digital processes and human habits. Let’s first crush a
myth: our problem isn’t the technology disrupting our lives. Humans will create
what’s possible. They always have. The problem lies in our own reluctance to reconsider what we see as “normal”, “customary” and “acceptable”. Our problem
is: we think that most of what we look at is permanent when in fact, the world
is in constant change.
We underestimate our need for validation and our inability
to accept outside perspectives. Those are the real causes of resistance. I am
convinced that if we could measure the real damage of business as usual, it
would vastly outweigh the so-called threats of digitization. I would like to
see an approach where anything new is met with a cool-headed evaluation.
Reactive resistance to change, based on individual discomfort, stands in the way of realizing beneficial trends.
These trends often end up as part of our lives anyway, built
by others, who were open-minded in the first place. And, equally important, a
lack of engagement with trends prevents us from making them safe and aligned
with our values. I suggest training leaders on emotional intelligence and on
staying curious. As soft as this sounds to our logical minds, it’s the vastly
underestimated skill that nourishes our ability to be competitive. Innovation
starts with the very subject in question: rethinking (innovating) the way we
train our leaders, so that change can be embraced.
JUnQ: Data processing, communication, and research
have become impossible without digital tools, especially in the field of
technology and science. A regression has become unthinkable. Are there
limitations to further digital progress?
Alex T. Steffen: Every society comfortable enough to
explore this philosophical question faces a dilemma between two seemingly contradictory ideas:
Idea 1: we’ve arrived at the pinnacle of innovation. Further
innovation seems unthinkable or unethical; further innovation causes more harm than good.
Idea 2: awe-inspiring science fiction scenarios that look
completely absurd but encapsulate even more human optimization potential.
The two ideas are not exclusive. Rather, they lie on
opposite poles of a scale. I’m always curious where a person or society sits on
that scale. In other words, how much of each idea do they express? My take is
that we often ignore the bigger picture. History can provide data for a more
realistic standpoint, namely that humans will continue innovating indefinitely.
It’s like that because with new capabilities come ever new desires. These
trigger our ingenuity anew.
This begs the question: will we be able to find a healthy
balance between a paralyzing public debate about the implications of change on
the one hand and co-creating the inevitable changes, so that they end up in
favor of future generations? Let’s look at an example: In Sweden the question
of female equality at work was largely resolved years ago. “We focus
on doing rather than talking” an executive at Volvo shared with me. In Germany,
after years of debate this is still a hot topic.
JUnQ: What will the future digital workplace look like?
Alex T. Steffen: I love this question and yet I’ll
keep my answer deliberately vague. Nobody can predict the future with 100%
accuracy. I sincerely hope that for most people the future workplace will be
driven by vitality, intuition, and self-actualization. This will mean better
health and quality of life for the individual as well as higher competitiveness
for business. 
JUnQ: In Germany, digitalization appears to proceed
more slowly than in other industrial countries. What are possible troubles and
how can we overcome this gap?
Alex T. Steffen: All innovation starts in the mind.
History is full of examples where German ingenuity put us in the pole position,
only to be halted by doubt and cumbersome processes. We wake up and find
ourselves late in the game. No question, the intention is good. But after
some time of business as usual, further resistance to creative destruction
creates more harm than good. In 2019 German car giant Volkswagen came out with
its car for the future. Unfortunately the car is not an exponential innovation
at all. It’s a traditional car with an electric motor. Major improvements still
require a garage.
Tesla Motors on the other hand, has shown us what a
disruption of the automotive industry really looks like. Tesla has built a
digital platform on which major improvements are performed over the internet
via digital upgrades. The result: the need for a garage drops drastically. So
does the dependency on a complex web of stakeholders, turning Tesla Motors into
the more flexible player. This example shows that Germany’s industry still
loves its traditions. They feel safe. Planning and due diligence are our fetish.
But safe does not make our designs future-proof. The key competitive edge for
the future is flexibility. Sooner or later we need to start killing our legacy
darlings and commit to real change.
JUnQ: How important do you consider 5G in general?
Alex T. Steffen: Humans have great difficulty
perceiving change that is happening right now. Change is always seen from the
understanding of the past. For example, the first movies were recorded in the
style of plays. Only after some time directors developed the unique movie style
we know today. I see 5G as an essential building block of the future, both for
business and private. The debate about the why is holding up the potential to
work on the how.
JUnQ: What could be the next big step in
digitalization after smart devices, AI and augmented reality?
Alex T. Steffen: I heard a fascinating statement the
other day: In the last two years we have undergone more change than the
previous ten. The discomfort of uncertainty makes us ask questions like this.
Like a cigarette drag, they are dangerous quick fixes that ignore the root
problem: anxiety. We cannot trust any so-called futurists because nobody
actually knows the future. Many experts’ predictions have been dramatic errors
costing businesses large sums of money. Other predictions have never reached
the mainstream, leaving everyone unprepared. Instead, I suggest we all take
on a calm and confident attitude towards the future:
1. Being optimistic. Not all of the future is great but
there’s more good than bad.
2. Embracing uncertainty. Accepting the fact that for the
rest of our lives we’ll be newbies.
3. Building our very own ability to separate what’s important from
the noise, based on concrete data points. Then decide for ourselves without
taking dangerous shortcuts. To help with this I recommend three books: “The
Inevitable” by Kevin Kelly, “Factfulness” by Hans Rosling, “The Rise of The
Creative Class” by Richard Florida.3-5
JUnQ: The data flood is growing ever larger, and interrelations seem to become impenetrable with every new discovery. How
applicable is “fail fast, fail often” for the digital learning
processes in terms of time and resources?
Alex T. Steffen: In the late 1800s, as economic
activity grew, people were debating solutions for the drastic increase of horse
dung in the streets. It was becoming a huge issue, with no solution in sight.
The advent of the combustion engine solved that pressing issue within one
decade. As humans evolve, they design capabilities to meet pressing challenges.
These days we’re addressing the issues caused by the combustion engine and
other contributors to global heating.
In the same fashion, we’ll come up with technology that can
manage and interpret existing and new data for our needs. Because of the
increase of speed and complexity, prototyping in a fail fast, fail often
fashion as we know it from startups remains highly relevant in my view.
JUnQ: Can you give future leaders a piece of advice
to take along?
Alex T. Steffen: There’s only one, but it means
everything: embrace discomfort. In order to go further we often need to
tolerate some discomfort. A trampoline requires a downward strain in order to
gain the force that can shoot a person up in the air. Without the down there’s
no up. In most cases the internal resistance is much greater than the external
struggle. In other words: it’s easier than we think. If we have a good reason
to act we’ll do it. So here’s mine: if we want to leave a better world for our
kids, we have to get better at embracing change.
JUnQ: Inspiring words, thank you very much for the
interview, Mr. Steffen!
— Tatjana Dänzer
You can find some perspectives on how to design a future-proof workplace in Alex’s book “Die Orbit-Organisation” and on his blog (http://www.alextsteffen.com/blog).[1,2]
A.M. Schüller, A.T. Steffen, Die Orbit-Organisation, 2019, Gabal
Haydn Belfield is a Research Associate and Academic Project Manager at the University of Cambridge’s Centre for the Study of Existential Risk. He is also an Associate Fellow at the Leverhulme Centre for the Future of Intelligence. He works on the international security applications of emerging technologies, especially artificial intelligence. He has a background in policy and politics, including as a Senior Parliamentary Researcher to a British Shadow Cabinet Minister and as a Policy Associate to the University of Oxford’s Global Priorities Project, and holds a degree in Philosophy, Politics and Economics from Oriel College, University of Oxford.
Artificial intelligence (AI) is beginning to change our world – for better and for worse. Like any other powerful and useful technology, it can be used both to help and to harm. We explored this in a major February 2018 report, The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation. We co-authored this report with 26 international experts from academia and industry to assess how criminals, terrorists and rogue states could maliciously use AI over the next five years, and how these misuses might be prevented and mitigated. In this piece I will cover recent advances in artificial intelligence, some of the new threats these pose, and what can be done about it.
AI, according to Nilsson, “is that activity devoted to making machines intelligent, and intelligence is that quality that enables an entity to function appropriately and with foresight in its environment”. It has been a field of study since at least Alan Turing in the 1940s, and perhaps since Ada Lovelace in the 1840s. Most of the interest in recent years has come from the subfield of ‘machine learning’, in which, instead of writing lots of explicit rules, one trains a system (or ‘model’) on data and the system ‘learns’ to carry out a particular task. Over the last few years there has been a notable increase in the capabilities of AI systems, and an increase in access to those capabilities.
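As a minimal sketch of this “learning from data” idea (my illustration, not from the report), the snippet below fits a simple rule from examples instead of hand-coding it:

```python
import numpy as np

# Toy "training set": inputs X and noisy outputs y generated by the rule y = 3x.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=200)
y = 3.0 * X + rng.normal(0, 0.1, size=200)

# The "model" is a single weight w; training is gradient descent on squared error.
w = 0.0
for _ in range(500):
    grad = 2 * np.mean((w * X - y) * X)  # derivative of the mean squared error
    w -= 0.1 * grad

print(w)  # ~3.0: the system has "learned" the rule from the data alone
```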
The increase in AI capabilities is often dated from the seminal 2012 AlexNet paper. This system achieved a big jump in capabilities on an image recognition task. This task has now been so comprehensively beaten that it has become a benchmark for new systems – “this method achieves the state of the art in less time, or at a lower cost”. Advances in natural language processing (NLP) have led to systems capable of advanced translation, comprehension and analysis of text and audio – and indeed the creation of synthetic text (OpenAI’s GPT-2) and audio (Google’s Duplex). Generative Adversarial Networks (GANs) are capable of creating incredibly convincing synthetic images and videos. The UK company DeepMind achieved fame within the AI field with systems capable of beating classic Atari games such as Pong. But they broke into the popular imagination with their AlphaGo system’s defeat of Lee Sedol at Go. AlphaZero, the successor program, was also superhuman at chess and shogi. AI systems have continued to match or surpass human performance at more, and more complicated, games: fast-paced, complex, ‘real-time strategy’ games such as Dota 2 and StarCraft II.
This increase has been driven by key conceptual breakthroughs, the application of lots of money and talented people, and an increase in computing power (or ‘compute’). For example, training AlphaGo Zero used 300,000 times as much compute as AlexNet.
Access to AI systems has also increased. Most ML papers are freely and openly published by default on the online repository arXiv. Often the code or trained AI system can be freely downloaded from code-sharing platforms like GitHub, frequently built on open-source libraries like TensorFlow, which also tend to standardise programming methods. People new to the field can get up to speed through online courses on platforms such as Coursera, or the many tutorials available on YouTube. Instead of training their systems on their own computers, people can easily and cheaply train them on cloud computing providers such as Amazon Web Services or Microsoft Azure. Indeed, the computer chips best suited to machine learning (GPUs and TPUs) are so expensive that it normally makes more sense to use a cloud provider and only rent the time one needs. Overall, then, it has become much easier, quicker and cheaper for someone to get up to speed and create a working system of their own.
These two processes have had many benefits: new scientific
advances, better and cheaper goods and services, and access to advanced
capabilities from around the world. However they have also uncovered new vulnerabilities.
One is the discovery of ‘adversarial examples’ – adjustments to input data so minor as to be imperceptible to humans, but that cause a system to misclassify an input. For example, misclassifying a picture of a stop sign as a 45 mph speed limit sign.
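To make this concrete, here is a hedged sketch of the fast gradient sign method (Goodfellow et al., 2014), one standard way such adversarial examples are produced – my illustration, not a technique from the report:

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, label, eps=0.03):
    """Nudge each input value by +/- eps in the direction that increases
    the classification loss; eps is small enough to be imperceptible."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

# adv = fgsm(classifier, images, true_labels)  # often confidently misclassified
```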
These vulnerabilities have prompted some important work on ‘AI safety’, that is, reducing the risk of accidents involving AI systems in the short term [6,7] and long term. Our report focussed, however, on AI security: reducing the risk of malicious use of AI by humans. We looked at the short term: systems either currently in use or likely to be in use in the next five years.
AI is a ‘dual-use’ technology – it can be used for good or
ill. Indeed it has been described as an ‘omni-use’ technology as it can be used
in so many settings. Across many different areas however, common threat factors
emerge. Existing threats are expanding, as automation allows a greater scale of
attacks. The skill transfer and diffusion of capabilities described above will
allow a wider range of people to carry out attacks that are currently the preserve
of experts. Novel threats are emerging, using the superhuman performance and
speed of AI systems, or attacking the unique vulnerabilities of AI systems. The
character of threats is being altered as attacks become more customised to
particular targets, and the distance between target and attacker makes attacks
harder to attribute.
These common factors will affect security in different ways
– we split them into three domains.
In ‘digital security’, for example, current ‘spear phishing’
emails are tailor-made for a particular victim. An attacker trawls through all
the information they can find on a target, and drafts a message aimed at that
target. This process could be automated through the use of AI. An AI could
trawl social media profiles for information, and draft tailored synthetic text.
Attacks shift from being handcrafted to mass-produced.
In ‘physical security’, for example, civilian drones are
likely to be repurposed for attacks. The Venezuelan regime claims to have been
targeted by a drone assassination attempt. Even if, as is most likely, this is
propaganda, it gives an indication of threats to come. The failure of British
police for several days to deal with a remote-controlled drone over Gatwick airport
does not bode well.
In ‘political security’ or ‘epistemic security’, the concern
is both that in repressive societies governments are using advanced data
analytics to better surveil their populations and profile dissidents; and that
in democratic societies polities are being polarised and manipulated through
synthetic media and targeted political advertising.
We made several recommendations for policy-makers, technical
researchers and engineers, company executives, and a wide range of other
stakeholders. Since we published the report, it has received global media
coverage and was welcomed by experts in different domains, such as AI policy,
cybersecurity, and machine learning. We have subsequently consulted several
governments, companies and civil society groups on the recommendations of this
report. It was featured in the House of Lords Select Committee on AI’s Report.
We have run a workshop series on Epistemic Security with the Alan Turing
Institute. The topic has received a great deal of coverage, due in part to the
Cambridge Analytica scandal and Zuckerberg’s testimony to Congress. The
Association for Computing Machinery (ACM) has called for impact assessment in
the peer review process. OpenAI decided not to publish the full details of
their GPT-2 system due to concerns about synthetic media. On physical security,
the topic of Lethal Autonomous Weapons Systems has burst into the mainstream
with the controversy around Google’s Project MAVEN.
Despite these promising developments, there is a lot still
more to be done to research and develop policy around the malicious use of
artificial intelligence, so that we can reap the benefits and avoid the misuse
of this transformative technology. The technology is developing rapidly, and
malicious actors are quickly adapting it to malicious ends. There is no time to lose.
Brundage, M., Avin, S., et al. (2018). The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation.
Nilsson, N. J. (2009). The Quest for Artificial Intelligence. Cambridge University Press.
Krizhevsky, A., Sutskever, I., Hinton, G. E. (2012). ImageNet classification with deep convolutional neural networks. Advances in Neural Information Processing Systems (pp. 1097–1105).
Amodei, D., Hernandez, D. (2018). AI and Compute. OpenAI.
Karpathy, A. (2015). Breaking Convnets.
Amodei, D., Olah, C., et al. (2016). Concrete Problems in AI Safety.
Leike, J., et al. (2017). AI Safety Gridworlds. DeepMind.
Bostrom, N. (2014). Superintelligence. Oxford University Press.
House of Lords Select Committee on Artificial Intelligence (2018). AI in the UK: ready, willing and able? Report of Session 2017–19, HL Paper.
Anton Bogomolov is a data scientist with a PhD in Physics, currently working in the IoT branch. He is passionate about artificial intelligence and has ten years of experience in automated data analysis and machine learning.
JUnQ: The everlasting technological progress aims to fulfill many human needs: most of them are physical, informational, and commercial. In particular, robots were created to perform tasks that were too dangerous for humans or that humans could not or did not want to do. But what do we need intelligent machines for, and what is implied by “Artificial Intelligence”?
Anton Bogomolov: The answer is already contained in the question – we need AI to make our life simpler, i.e., to take over some of the routine work that humans have to do. Generally, we are heading towards automation, and in the ideal case, we want to automate everything, every kind of work. So far, we have prioritized the processes we are capable of automating.
Now, what is understood by the term “AI”? Over the course of
this interview we will go deeper into the discussion, so let’s start with a
fairly broad definition: AI is something that is able to accomplish certain
tasks with the help of self-learning.
JUnQ: Does it imply that AI is not meant to create
anything, like art or music?
Anton Bogomolov: There are a number of definitions of AI. Indeed, the term “intelligence” implies that it can do creative work as
well. It is not a simple calculator. You don’t just tell it what you want it to
calculate, and then it does exactly what has been asked. It does something more
complicated and, thus, it also involves some learning experience. In this
context, the creative work does not necessarily mean being an artist or a
musician, or a composer. A chatbot, as an example of an AI feature, is also a kind of creative work, because it is required to react appropriately or ask appropriate questions – in other words, to engage in a conversation as a human would, i.e., to express creativity.
Generally, yes, AI can generate art. For example, “Deep
Dream”1 was popular a few years
back. This algorithm uses AI to generate the dream-like appearance of the
uploaded images. Another one is “Neural style transfer”2 which allows one to compose an image in the
style of another image. Should one ever want to paint like Van Gogh or Picasso,
this can easily be done using this algorithm. There is also AI-composed music
already creeping into the background of games, film, and media. With AI it is
now possible to create music in different genres just at the push of a button.
JUnQ: In the news or podcasts, the term “machine
learning” often seems to come together with AI. What is, simply put, machine
learning and how does it relate to AI?
Anton Bogomolov: As I mentioned before, there are many definitions of AI. In simple words, AI is a broader term than machine learning (ML); AI includes ML. Being a sort of advanced algorithm, AI achieves specific goals by means of ML, while at the same time being able to adapt to its environment, just like humans. ML is also an algorithm, but a simpler one, with one key feature – the ability to learn (thus the name). It is not meant to achieve a global goal; its goal is to eventually enable programs to improve automatically through experience, without the programmer having to change the code. ML relies on working with data sets that one needs to input first. It then examines and analyses the data to find common patterns, so that eventually it becomes possible to make experience-driven predictions or decisions.
JUnQ: So what this means is that AI does not exist without machine learning?
Anton Bogomolov: Right. Machine learning is a subset of AI, more like a tool to achieve AI. One example might be the first chatbots from the 90s. They had hardcoded “intelligence”, i.e., hardcoded answers to possible questions. If such a bot saw certain keywords, it output the correspondingly relevant answers. These did not have machine learning, and their intelligence was doubtful, since the algorithm did not adapt. And as we discussed previously, the key asset of AI is the ability to adapt.
JUnQ: While we are on the subject, how can one tell the difference between an AI system and a more “conventional” program?
Anton Bogomolov: There are “intelligence” tests for AI, among which the most renowned is the Turing Test.3 But this is more a test of whether or not a system is capable of thinking like a human being. No AI technology today has passed the Turing test, i.e., none has shown itself to be convincingly intelligent and able to think. So, this is the main goal of this AI branch – we want to create a machine that will be indistinguishable from a human, in particular, one that will be self-aware and act somewhat mindfully. In the end, such a machine will be able to pass the Turing test. Once again, so far, such machines do not exist. Self-awareness turned out to be tough to realize.
Now, back to what was asked. I believe no one is interested in differentiating AI from a mindless linear algorithm, because as long as the desired goal is achieved, no one cares what type of algorithm was used.
JUnQ: AI is no longer a futuristic concept, as some
may naively think. Can you name some examples where AI is already being used?
Are there any AI applications used in the everyday life of ordinary people?
Anton Bogomolov: The most straightforward example is
our smartphones. The more recent ones can recognize the owner’s face. This is
known to use neural networks. Also, in smartphones, there is the Google Assistant. Spoken inquiries are transferred to a server, where neural-network-based algorithms convert them to text, which is then processed to deliver the relevant information. These are the simplest examples. We all watch YouTube, where the system suggests, based on one’s watch history, what else one might be interested in. These AI-based recommendation engines now seem to know us to an impressive degree.
If we now go further from everyday life, I would say AI is
used pretty much in every field. In finance – there are already automatic
trading robots. Some use AI for analysing financial markets to generate
profitable trading strategies or make market predictions.
Autonomous driving has become very popular recently. There
are even toys for children that make use of a variety of AI and ML
technologies, including voice and image recognition, to identify the child and
other people around, based on their voices and appearance. This is all owing to the computational power we now have, which has advanced greatly in recent years.
AI has found its application in medicine as well. As AI
has demonstrated remarkable progress in image-recognition tasks, it is now widely used in medical radiology and computed tomography. One example is that there
are neural networks that are trained to analyze tumours and do it as well as
the top-class specialists in the field. Just as radiologists are trained to
identify abnormalities based on changes in imaging intensities or the
appearance of unusual patterns, AI can automatically find these features, and
many others, based on its experience from the previous radiographic images,
coupled with data on clinical outcomes. This also yields a more quantitative outcome, whereas radiologists perform only a qualitative assessment.4
JUnQ: As AI develops further, is it going to make human jobs obsolete? And what will people be doing if there is nothing else to do?
Anton Bogomolov: Ideally, this is what we aim for – to have everything automated. But this can be achieved, in my opinion, only when so-called artificial general intelligence is realized. This will be a machine capable of experiencing consciousness and of thinking autonomously, and thus it will be able to accomplish any intellectual task that a human being can.
What will happen to humans after all? There is a concept of
universal basic income. The idea is that the robot replacing you is working on
your behalf and you are given an income sufficient to meet basic needs, with
zero conditions on that income. Because in the end the job is being done and
the resources are being produced while you are free for other pursuits.
There has been a lot of research interest in this regard. Back in the 1960s, a researcher named John Bumpass Calhoun reported on an experiment with mice, also known as “Universe 25”. The researchers provided the mice with unlimited resources, such as water and food. Besides, they eliminated the dangers otherwise coming from nature, like predators, climate, etc. Thus, the mice were said to be in a “mouse utopia”. At first, the population peaked, but shortly after it started to exhibit a variety of abnormal, often destructive behaviours. After some time, the mice became too lazy to reproduce and the population was on its way to extinction. There is, of course, controversy over the implications of the experiment, but it can be perceived as one of the possible scenarios of the future.
JUnQ: What about programming jobs?
Anton Bogomolov: Well, first we automate what we can – so far, the simplest work. AI is now partly replacing the jobs of translators and customer service workers. Next in line are self-driving cars, which will automate the entire transportation industry – bus and taxi drivers and so on. But programming jobs are of a different kind; they are creative. Programs that develop other programs exist already, but they are rather limited in what they can do.
Eventually, all jobs will be replaced. Programming jobs, like other creative jobs, will be among the last ones, though.
One day we will have a super-intelligent machine that develops further programs similar to itself at less expense and much faster than when supervised by humans. At some point we might not be able to follow its advances anymore, and here the term “technological singularity” comes in. This is believed to occur when AI starts discovering new science at an enormous rate, constantly learning and evolving on top of it, beyond human control.
JUnQ: Is the “singularity” inevitable?
Anton Bogomolov: There is an everlasting argument about whether it is possible at all to realize a self-aware AI that will act mindfully, much like a human. Depending on the answer, there will or will not be a technological singularity. It could also occur for other reasons; it is just that, among them, AI is the most likely to bring us to the technological singularity.
On the other hand, it is not proven that such AI can ever be
created – one able to run autonomously and replace all of us. In this case,
there will be no AI-induced singularity.
So, this is now a really hot topic in the community.
JUnQ: Does this mean that self-awareness is a prerequisite for a possible singularity to occur, and that we have not yet passed the point of no return?
Anton Bogomolov: Right. The algorithms that exist now
and are known to beat the world-class champions in chess and Go are harmless.
They are just trained extraordinarily well on one particular subject, to achieve
a well-defined goal. They are not able to think outside of the box, like “what
else is there that I could do”.
Once we create a machine that will be able to think this
way, to exhibit human-level consciousness, it is expected to bring us to the
singularity. Because it will be able to operate and develop without any
supervision. All existing AI technologies do develop themselves but only to a
certain degree, they do not have this freedom yet.
JUnQ: Speaking about self-awareness. For example,
Sophia – the social humanoid robot developed by Hanson Robotics – recognizes itself (herself) as a programmed female robot. Does this mean that she is
self-aware? How did they manage to program “her” self-realization?
Anton Bogomolov: As far as I understand she is programmed
to answer this way. If there comes a question about what she thinks she is, her
answer will be according to what has been built in her program. Most likely she
was trained on thousands of real dialogs among people about their
self-awareness. Like other AI systems, she also has machine learning that, if
you feed it with enough data, will enable her to learn how to answer and how to
behave, as people would.
Sophia communicates very well on topics known in advance, because in that case she can be trained beforehand: they provide her with enough information about the given topic. Then she is able to have a sensible conversation, because she has the statistics on what is typically answered when. Nevertheless, it is not as simple as you say X, she replies Y. Thanks to machine learning, what she says is the result of rather complicated processing.
I have not had a chance to speak with her personally, but I think she is certainly not self-aware. Otherwise, the singularity would
have been just around the corner by now. If she had a human-level
consciousness, there would be nothing that she would need people for. She would
be able to program herself to increase her memory. In just a few days she would
reach the level of intelligence of all the people on Earth. In a few more days
we would not be able to comprehend what level of intelligence she would have –
again the exponential progress.
So, there is nothing we should worry about. She is still
just a robot – more about illusion than intelligence. The shocking effect is
also due to the fact that she looks like a human, has emotions and facial
expressions. This unique combination of her features might make us a bit alert.
And for sure Sophia is a great representation of the advances of AI technology.
In fact, to be able to realize human-level AI, we essentially need to model a human brain. The human brain contains around 10^11 neurons. On the other hand, functional neural networks have on the order of tens of millions (10^7) of neurons. This difference of four orders of magnitude is sizeable. Moreover, it also takes quite some time to train a system with a large number of neurons. At the end of the day, we do not yet have the capacity to realize a human-level AI.
JUnQ: In case something goes wrong, will we be able to “unplug” the machine? Do autonomous AI systems exist yet?
Anton Bogomolov: Autonomous systems do exist. Think of the toy dog that we have discussed already, or a vacuum cleaner: they are programmed to charge themselves when needed. These are completely autonomous as long as a power source is available. The military surely has some as well. I can imagine an armed flying drone that recharges itself.
But the existing autonomous AI systems are not a threat to
humans. Despite having all the advantages of machine learning, they follow a defined program to accomplish a specific task. Such a system can be the best at recognizing people’s faces, shooting targets, or avoiding bullets. But it is still a mindless machine that we can destroy, fool, or at least hide all the power stations from.
As long as any of these do not have human-level
intelligence, as long as they are not smarter than us, they should not be considered a potential threat.
JUnQ: So reaching human-level intelligence would be
the point from which AI could potentially live without us.
Anton Bogomolov: Correct. There is an opinion that biological life is just a means of creating electronic life. In other words, some believe that this is our mission: to give birth to an electronic conscious creature surpassing our capacity, one that will develop much faster than humans. In some sense, it is similar to the early times of our planet. Life on Earth began relatively early, but the first living creatures – unicellular organisms – progressed very slowly, until multicellular organisms appeared, which boosted progress tremendously. And progress always seems to be exponential. Thus, the idea of this theory is that we create something to keep up with this exponential progress. And if we look at it globally, on the scale of the Universe, it would make sense if AI were ever to take over the world, because AI would go on exploring the Universe much faster than we would. Thus, from the point of view of global progress, it would be more advantageous.
JUnQ: Now, when you put it this way, the technological singularity does not sound so frightening anymore. Are you optimistic overall? Will we make it to the end of the 21st century?
Anton Bogomolov: To me, it feels great to witness the progress and to be a part of it. But we will see how it goes. We live within a self-organizing system, where everything has a direction to go. Even though humans are all independent creatures, we still obey the same laws of synergy: we self-organize as well, we cluster into cities, etc. And surely we also have something to move towards, and thus we develop and evolve. So, this progress is only natural.
In fact, experts expect the technological singularity to occur already in the 21st century, though it is not trivial to give a correct estimate. On the other hand, and not related to AI, there is research going on in the field of so-called negligible senescence. The idea is that by engineering the reversal of all the major molecular and cellular changes that occur with age, we could constantly rejuvenate ourselves. The researchers believe that negligible aging for humans will be achieved in this century. There even exists a provocative opinion that the first human beings who will live to be 1,000 years old are already alive.
At the end of the day, there has been tremendous progress in many fields, not only AI. Along with AI, we may succeed in developing other technologies, which will help us to prolong our lives as well as humanity’s in general.
Curious things happen around us all the time – and sometimes we are so familiar with them that we do not even notice them. If you read the title, you might now think that this article is about the Leidenfrost effect, that is, the funny little dance water droplets perform on a hot surface such as a frying pan. It is not, though.
though. The Leidenfrost effect occurs when a material – usually a liquid – meets
a surface far above its boiling temperature. A thin layer of the droplet’s
surface will then evaporate rapidly, causing a protective gas coating to appear
that effectively insulates the droplet and lets it last longer on the hot
surface. Similar effects can also be seen with liquid nitrogen on a material at room temperature; these droplets appear to travel around due to ejected gases.
But does a similar phenomenon also occur without the necessity of a hot surface?
There is in fact a location where such an
effect occurs regularly without us usually noticing: The bathroom. Under
certain conditions water droplets can be seen moving on a surface of water as
if they had hydrophobic properties. The easiest way to see them is in the
shower, when the shower floor is already covered in a thin layer of water. If new
water droplets now impact on this surface at certain angles and speeds, they
can be seen rushing around for a while before disappearing. It turns out that
in recent years a few scientific publications were dedicated to investigating
this effect more closely. [2,3] With a high-speed camera, the bouncing effect
can be visualized rather easily, as shown in Fig. 1: The droplet appears to
cause a dent in the water surface and then bounce off without merging with the
rest of the liquid. Of course, the first idea that comes to mind now is the Leidenfrost effect, where a similar behavior is caused by a layer of vapor. However, here no high temperatures are involved, and thus the generation of water vapor is negligible.
The first intuition of an air coating to
protect the water droplet is still standing, though, and thus the scientists
tried to model the behavior. It turns out that there is indeed a protective
coating of air, which can get compressed when the droplet approaches the surface
of the liquid underneath. The air simply cannot escape quickly enough and therefore cushions the droplet on impact, pushing it away from the water surface. This phenomenon gives rise to what is called the residence time of a droplet, that is, the time a droplet can sit on top of a pool of the same liquid before
coalescing (see Fig. 2). The theory was confirmed by lowering the ambient air
pressure around the experiment, which caused the residence time to decrease.
However, one would expect that this thin layer of gas should not withstand the heavy impact of a droplet coming from, e.g., the shower head with a lot of speed and thus kinetic energy.
An explanation can be found using a simple speaker membrane: when droplets are put in contact with an oscillating surface, like water on an oscillating speaker, the bouncing is facilitated, and the
droplets can remain intact for much longer. Moreover, the droplets now travel
around just like they do in a shower! High-speed camera footage can show the
reason for this change in behavior: The surface of the water pool, excited into
periodic up- and down-movement patterns, gently catches the droplet if the
surface is moving downwards at the moment of impact and therefore prevents the
impact from destroying the protective gas layer. It is just like gently catching
a water balloon with your hand by grabbing it in motion and then slowing it
down. Additionally, the continuous movement of the surface seems to stabilize
the gas layer and therefore massively increases the residence time, all while allowing
the droplet to travel from minimum to minimum, thus creating the “walking
water” effect. In a shower, the impact of many, many droplets causes the
surface of the water pool on the ground to oscillate in a similar manner,
creating landing spots for some droplets that then move around the surface. The
phenomenon can thus be explained by the residence time of a droplet together
with an oscillating surface.
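As an illustration of this catching mechanism, here is a toy sketch in Python – not the model of Refs. [2,3] – in which we simply assume that the protective air layer survives an impact only while the pool surface is moving downward:

    import math
    import random

    # Toy model: a droplet bounces on a pool whose surface oscillates as
    # y(t) = A*sin(2*pi*f*t). We assume the protective air layer survives an
    # impact only if the surface moves downward (y'(t) < 0) at the moment of
    # contact; otherwise the droplet coalesces. All parameters are illustrative.
    A, f = 1.0e-3, 30.0      # amplitude (m) and frequency (Hz), assumed values
    period = 1.0 / f

    def surface_moving_down(t):
        # sign of y'(t) = 2*pi*f*A*cos(2*pi*f*t) is the sign of the cosine
        return math.cos(2 * math.pi * f * t) < 0

    random.seed(1)
    t = 0.5 * period         # first impact at a phase where the surface moves down
    bounces = 0
    while surface_moving_down(t):
        bounces += 1
        # the droplet roughly locks to the oscillation, with a little phase jitter
        t += period * (1 + random.gauss(0, 0.05))

    print(f"Droplet survived {bounces} bounce(s) before coalescing")

In this caricature, the slow phase drift eventually makes the droplet land on an upward-moving surface, which destroys the air layer – mirroring how real droplets rush around for a while before disappearing.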
Finally, one can reproduce a similar behavior in space, where microgravity does not pull the droplets down. An air bubble inside a water bubble can thus act like an isolated system where droplets can form and move… excited by the sound of a cello! If you are curious, please check out the beautiful footage in Ref. , from which much of the inspiration for this article came.
As stated initially, the most curious
things happen around us and we simply have to notice them.
Superstitions are having a hard time in our modern, ever-progressing world. It is no longer easy to fool
someone with a myth or a beautiful legend from childhood. But how about this
one: have you ever heard that a thunderstorm could curdle milk?
A correlation between
thunderstorms and the souring or curdling of milk has been observed for
centuries. As early as 1685, the first clue was written down in the book “The
Paradoxal Discourses of F. M. Van Helmont: Concerning the Macrocosm and
Microcosm, Or the Greater and Lesser World, and Their Union” :
“Now that the Thunder hath its
peculiar working, may be partly perceived from hence, that at the time when it
thunders, Beer, Milk, &c. turn sower in the Cellars … the Thunder doth
everywhere introduce corruption and putrefaction”.
By the beginning of the 19th
century there had been numerous attempts to find theories of a causal
relationship. [2-7] None of them were plausible, and many even contradicted one another. Later, after refrigeration and pasteurization became widespread, suppressing bacterial growth, interest in this phenomenon almost disappeared. While the most popular explanation remains that these occasions are a mere correlation, we would like to draw the reader’s attention to some of the suggested explanations.
In order to understand what actually happens with milk during a thunderstorm, we would need to know (i) what processes are behind milk souring and (ii) what accompanies a thunderstorm, e.g. lightning. While the latter is not yet entirely clear to scientists, we will cover a simplified picture of the first point in the next few paragraphs.
Fresh milk is a textbook example of a colloid – a mixture of fat and protein molecules, mainly casein, floating in a water-based fluid. The structure of milk is schematically
illustrated in Fig. 1. Fat globules are coated with protein and charged
phospholipids. Such a formation protects the fat from being quickly digested by
bacteria, which also exist in milk. Casein proteins under normal conditions are
negatively charged and repel each other so that these formations naturally
distribute evenly through the liquid. Normally, milk is slightly acidic (pH ca. 6.4–6.8), while being sweet at the same time due to lactose, the main carbohydrate in milk. When the acidity increases to a pH lower
than 4, proteins denature and are no longer charged. Thus, they bind to each
other or coagulate into the clumps known as curds. The watery
liquid that remains is called whey.
The acidity of milk is determined
by the bacteria which produce lactic acid. The acids lower the pH of milk so
the proteins can clump together. The bacteria living in milk naturally produce
lactic acid as they digest lactose so they can grow and reproduce. This occurs
for raw milk as well as for pasteurized milk. Refrigerating milk slows the growth of bacteria. Conversely, warm milk helps bacteria thrive and also increases the rate of the clumping reaction.
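This souring story can be caricatured in a few lines of Python; all numbers below (growth rates, the acid–pH coupling) are invented for illustration, with curdling assumed below pH 4, as mentioned above, and the bacteria assumed to shut down near 50 °C:

    import math

    def growth_rate_per_hour(temp_c):
        # crude assumption: no growth in the fridge, faster growth when warm,
        # and a shutdown once it gets too hot for the bacteria (around 50 C)
        if temp_c >= 50:
            return 0.0
        return 0.02 * max(temp_c - 4, 0)

    def hours_until_curdled(temp_c, max_hours=10_000):
        bacteria = 1.0
        ph = 6.6                                      # fresh milk, slightly acidic
        for hour in range(1, max_hours + 1):
            bacteria *= 1 + growth_rate_per_hour(temp_c)
            ph = 6.6 - 0.25 * math.log10(bacteria)    # assumed acid-pH coupling
            if ph <= 4.0:                             # curdling threshold (see above)
                return hour
        return None                                   # effectively never in this toy

    for t in (4, 20, 30):   # fridge, room temperature, sultry pre-thunderstorm air
        h = hours_until_curdled(t)
        print(f"{t:2d} C ->", f"curdles after ~{h} h" if h else "stays fresh")

Even this toy model reproduces the qualitative picture: milk in the fridge stays fresh, while milk in warm, sultry air curdles noticeably sooner than at room temperature.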
Now, we can think of a few things
that may speed up the souring process. The first one could be ozone that is
formed during a thunderstorm. In one of the works it was shown that a
sufficient amount of ozone is generated at such times to coagulate milk by
direct oxidation and the consequent production of lactic acid. However, if this were the case, a similar effect would occur for sterilized milk. The corresponding studies were carried out by A. L. Treadwell, who reported that, indeed, the action of oxygen, or of oxygen and ozone, coagulated milk faster (Ref. ). But the effect was not observed if the milk had been sterilized. The
conclusion drawn from this study was that the souring was produced by an unusually rapid growth of bacteria in an oxygen-rich environment.
In the meantime, a number of
other investigations suggested that the rapid souring of milk was most likely due to the atmosphere, which is well known to become sultry or hot just prior to a thunderstorm. This warm condition of the air is very favourable for the
development of lactic acid in the milk. [3, 4] Thus, these studies were also in
favour of thunderstorms affecting the bacteria.
A fundamentally different explanation was tested by, e.g., A. Chizhevsky in Ref. . It was suggested that electric fields with particular characteristics produced during thunderstorms could stimulate the souring process. To check this hypothesis, the coagulation of milk was studied under the influence of electric discharges of different strengths. Importantly, in these experiments the electric pulses were kept short to eliminate any thermal effects. Eventually, the coagulation observed for certain parameter ranges was explained by the breaking of the protein–colloid system in milk under the influence of the electric field.
Other experiments investigating the effect of electricity on the coagulation process in milk turned out to be astonishing. When an electric current was passed directly through milk in a container, in all the test variations the level of acidity rose less quickly in the ‘electrified’ milk samples than in the ‘control’ sample – which contradicted all the explanations above.
To conclude, while there is no established theory explaining why milk turns sour during thunderstorms, we cannot disregard the numerous occurrences of this curious phenomenon. What scientists definitely know is that milk goes sour due to bacteria – bacilli acidi lactici – which produce lactic acid. These bacteria are known to be fairly inactive at low temperatures, which is why having a fridge is very convenient for milk-lovers. However, when the temperature rises, the bacteria multiply with increasing rapidity, until at ca. 50 °C it becomes too hot for them to survive. Thus, in pre-refrigerator days, when this phenomenon was most commonly reported, in thundery weather, with its anomalous conditions, the milk would often go off within a short time after being opened. Independently of the exact mechanism, i.e. increased bacterial activity or the breaking of the protein–colloid system, the result is the same – curdled milk.
Should you ever witness this phenomenon yourself, do not be sad right away: try adding a bit of brown sugar to your fresh milk curds…
— Mariia Filianina
 F. M. van Helmont, “The Paradoxal Discourses of F. M. Van Helmont, Concerning the Macrocosm and Microcosm, Or the Greater and Lesser World, and Their Union”, set down in writing by J.B. and now published, London, 1685.
Once, thunderstorms with thunder and lightning were interpreted as signs of the gods’ wrath; nowadays, we are taught
the mechanics behind a thunderstorm in school. You are probably already
thinking about ice crystals that are smashed together by strong winds inside
clouds, creating static charges in the process. How does a lightning bolt,
though, find its way from the cloud to the ground? This question still keeps
scientists awake at night – and there is still not a clear answer to how
exactly the formation and movement of a lightning bolt work. This Question of
the Month will give a brief summary on how a lightning bolt selects its target.
Lightning [1,2] always occurs when a large thunderstorm cloud with strong winds generates so much electrostatic charge that it must discharge towards the ground. The discharge itself occurs (simplified) in a two-step process, consisting of a preflash and a main lightning bolt: the preflash travels as a comparably weak (but still dangerous!) current downwards from the cloud. This usually happens in little jumps, which have been investigated with high-speed cameras. They show that the current path is apparently selected randomly: the tip slows down at a given position and then randomly selects the next position to jump to. This random selection appears to happen within a sphere of a few tens of meters in diameter around the tip of the growing lightning bolt. The process also involves growing many tendrils with individual tips and thus covers a large area (see also Fig. 1). With this procedure, the lightning bolt eventually “feels” its way down until it reaches the ground either directly or via a structure connected to it.
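This jumping behavior can be mimicked with a simple random-walk sketch in Python; the step length, start height, and angular spread are illustrative assumptions, not measured parameters:

    import math
    import random

    # Sketch of the stepped preflash described above: the tip jumps ~30 m at a
    # time in a randomly chosen, downward-biased direction until it reaches
    # the ground. All values below are illustrative.
    random.seed(42)
    x, height = 0.0, 3000.0    # horizontal position and altitude (m)
    step = 30.0                # "a few tens of meters" per jump
    jumps = 0

    while height > 0:
        angle = random.uniform(-1.2, 1.2)   # radians away from straight down
        x += step * math.sin(angle)
        height -= step * math.cos(angle)    # cos > 0 here, so every jump descends
        jumps += 1

    print(f"Ground reached after {jumps} jumps, {abs(x):.0f} m from the start")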
Therefore, if a conductive object reaches into such a sphere, the bolt will immediately jump to it and use it as a low-resistance shortcut to the ground – if possible shortening the path of the discharge. This behavior leads to the curious effect of exclusion areas around structures that are protected with lightning rods, in which practically no ground strike will occur and a person will not be hit directly. Unfortunately, this does not protect the person completely, as the currents spreading through the ground can still be dangerous.
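This picture is formalized in lightning-protection engineering as the rolling-sphere method: a sphere with the radius of the striking distance is rolled over the site, and whatever it cannot touch is considered protected. A minimal sketch of the resulting geometry (the 45 m striking distance is a commonly used value, treated here as an assumption):

    import math

    # Rolling-sphere geometry: a rod of height h (with h <= R) shields a ground
    # circle of radius sqrt(2*R*h - h*h), where R is the striking distance.
    def protected_radius(rod_height_m, sphere_radius_m=45.0):
        # 45 m is a commonly used striking-distance value; an assumption here
        h = min(rod_height_m, sphere_radius_m)   # beyond R the rod gains nothing
        return math.sqrt(2 * sphere_radius_m * h - h * h)

    for h in (2, 10, 20):
        print(f"rod height {h:2d} m -> protected radius ~{protected_radius(h):.0f} m")

The formula follows directly from the sphere resting on both the ground and the rod tip, which is why a modest rod already shields a surprisingly wide circle.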
Now that the preflash has found a path to
the ground, the second phase starts, and the majority of the charge starts to
flow with up to 20,000 A along the path found by the preflash. This is also the portion of the discharge that is visible to the naked eye. It can consist of several
distinct discharges that all follow the path of ionized air of the previous one,
creating the characteristic flickering of a lightning bolt.
How the entire process from preflash to main discharge works is still not completely understood today, and much of the insight presented here was gathered phenomenologically via camera imaging. Additionally, there are many more types of, and effects related to, lightning bolts, which are relevant for our understanding of a variety of weather
phenomena. All in all, thunderstorms are still something magical today, even if