
Mar 05, 2019
 

Genetic information is encoded in deoxyribonucleic acid (DNA). In the form of a long double-helix molecule located in living cells, it governs most of an organism's traits. Specifically, information from genes is used to form functional gene products such as proteins. This process of gene expression is used by all known forms of life on earth to generate the macromolecular machinery of life. It is thus the fundamental level at which the genotype gives rise to the phenotype, i.e. the composite of an organism's observable characteristics. Genomic modification is a powerful tool to amend those characteristics. Reproductive and environmentally caused changes to the DNA are a substrate for evolution. In nature, such changes occur and may cause favourable or unfavourable changes to the phenotype, which improve or reduce the cell's or organism's ability to survive and reproduce.

In the first half of the 20th century, several methods to alter the genetic structure of cells were discovered, including exposure to heat, X-rays, UV light, and chemicals.1-4 A significant number of crops cultivated today were developed using these methods of traditional mutagenesis; an example is durum wheat, the most prevalent wheat for pasta production. With traditional mutagenesis, thousands of mutations are introduced at random within the DNA of the plant. A subsequent screening identifies and separates cells with favourable mutations in their DNA, followed by attempts to remove or reduce possible unfavourable mutations in those cells by further mutagenesis or cross-breeding.

As those methods are unspecific and laborious, researchers have developed site-directed gene-editing techniques, the most successful of which is the so-called CRISPR/Cas9 method (clustered regularly interspaced short palindromic repeats). This method borrows from the way bacteria defend themselves against viral invasion.6 When the bacterium detects invading virus DNA, it forms two strands of RNA (single-helix molecules), one of which contains a sequence that matches that of the invading virus DNA and is hence called guide RNA. These two RNAs form a complex with a Cas9 protein, which, as a nuclease enzyme, can cleave DNA. When the guide RNA finds its target in the viral genome, the RNA-Cas9 complex locks onto a short sequence known as the PAM (protospacer adjacent motif); Cas9 then unzips the viral DNA so that the guide RNA can match against it. Cas9 then cleaves the viral DNA, forcing the cell to repair it.6 As this repair process is error-prone, it may lead to mutations that disable certain genes, changing the phenotype. In 2012 and 2013, respectively, it was discovered that the guide RNA can be freely designed so that the system targets a chosen site,5 and that, with modifications to the enzyme, the system works not only in bacteria and archaea but also in eukaryotes (plants and animals).7

Figure 1: CRISPR/Cas9 working principle.8
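To make the target search concrete, here is a minimal, illustrative Python sketch of the lookup that the guide sequence performs, assuming the canonical 5'-NGG-3' PAM of the commonly used S. pyogenes Cas9 and a 20-nucleotide guide. The sequences are made up, and real genome-editing tools additionally handle the reverse strand, mismatches, and off-target scoring:

```python
# Minimal sketch: find candidate Cas9 target sites in a DNA string.
# Assumes the canonical S. pyogenes 5'-NGG-3' PAM and a 20-nt guide.

def find_target_sites(genome: str, guide: str) -> list:
    """Return 0-based positions where `guide` is immediately followed by an NGG PAM."""
    hits = []
    glen = len(guide)
    for i in range(len(genome) - glen - 2):
        protospacer = genome[i:i + glen]
        pam_gg = genome[i + glen + 1:i + glen + 3]  # the N of NGG may be any base
        if protospacer == guide and pam_gg == "GG":
            hits.append(i)
    return hits

guide = "ACGTGATCCGGATTACAGGT"                    # hypothetical 20-nt protospacer
genome = "ATGC" + guide + "TGG" + "CCATG"         # toy sequence with one site + PAM
print(find_target_sites(genome, guide))           # -> [4]
```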

Research published since has demonstrated the method's potential for RNA-programmable genome editing. Modifications can be made such that, during the repair, an artificially designed DNA sequence pairs with the cleaved ends, recombines, and replaces the original sequence, introducing new genes into the genome.11,12 The advantages of this technique over traditional gene-editing methods are manifold. It is highly targeted, i.e. site- and therefore gene-specific, in any known form of life. It is comparatively inexpensive, simple enough to be conducted in basic labs, effective, and fast in preparation and execution. The production of multiplex genetically modified mice, for instance, was reduced from up to two years to a few weeks,9 as CRISPR/Cas9 has the unique advantage over earlier genome-editing methods that multiplexable targeting is easily achieved by co-expressing Cas9 with multiple single-guide RNAs simultaneously. Consequently, within a few years of its discovery, it became the routine procedure for genome modification of virtually all model plants and animals.

The availability of such a method evokes medical and botanical development interests. A plethora of possible medical applications are being discussed and researched, among them treating cancer and genetic disorders. For cancer research, it is conceivable to induce a multitude of deliberate mutations to artificially create cells similar to cancerous cells, study the resulting modifications, and thus learn to inhibit their reproduction or the original mutation. The current clinical research focus is on blood diseases and those related to haematopoietic cells, such as leukaemia, HBV, HIV, or haemophilia.13,14 This is because, for the treatment of those diseases, the cells (blood cells or bone marrow) can be extracted from the body in an established way, their genome can be edited in vitro by the CRISPR/Cas9 method, and the cells can finally be reintroduced into the body. The advantage of the extraction is that no additional vector (an agent to help find the right cells in vivo) is required, and the genomic modification can be controlled ex vivo. While the editing efficiency with CRISPR/Cas9 can be extremely high, the resulting cell population will be inherently heterogeneous, both in the percentage of cells that were edited and in the specific genotype of the edited cells. Potentially problematic for in vivo application is the bacterial origin of the endonuclease Cas9: a large portion of humans show humoral and cell-mediated immune responses to the Cas9 protein,10 most likely because of prior infection with related bacteria.

Although clinical applications of CRISPR/Cas9 grab a lot of media attention, agricultural applications draw even more commercial interest. The prospect is the faster, cheaper, and more targeted development of crops than with traditional methods of mutagenesis, which are far more aggressive in comparison. The main aim is unchanged, though: to improve plants regarding yield, resistance to diseases or vermin, and resilience to aridity, heat, cold, humidity, or acidity.15,16 CRISPR/Cas9 is therefore considered an important method for improving agricultural food production to feed the earth's ever-growing human population.

Regulation of plants modified in this way varies widely between countries. While Canada considers such plants equivalent to non-genetically-modified ones if no transgene was inserted, the USA assesses CRISPR plants on a case-by-case basis, gauging whether the modification could have arisen by natural mutation. On this basis it chose not to regulate mushrooms that do not turn brown and maize with an altered starch content. Last year the European Court of Justice ruled that all CRISPR/Cas9-modified plants are genetically modified organisms, reasoning that the risks of such a novel method are unknown, in contrast to traditional mutagenesis as an established method of plant breeding.

Instigated by genome editing in human embryonic cells in 2015,18 a group of scientists called for a moratorium to discuss the possible risks and impact of the wide usage of the CRISPR/Cas9 technology, especially when it comes to mutations in humans.19 At the 2015 International Summit on Human Gene Editing, leading international scientists considered the scientific and societal implications of genome editing. The issues discussed span clinical, agricultural, and environmental applications, with most attention focused on human germline editing, owing to the potential of this application to eradicate genetic diseases and, ultimately, to alter the course of evolution. Some scientists advise banning CRISPR/Cas9-based human genome-editing research for the foreseeable future, whereas others favour rapid progress in developing it.20 One argument of supporters of the latter viewpoint is that the majority of ethical concerns are effectively based on methodological uncertainties of the CRISPR/Cas9 method at its current state, which can be overcome only with extensive research. Those methodological uncertainties include possible cleavage at undesired sites of the DNA, or insertion of wrong sequences at the cleavage site, resulting in the disabling of the wrong genes or even the creation of new genetic diseases.

Whilst a total ban is considered impractical because of the widespread accessibility and ease of use of this technology,21 the summit statement says that “It would be irresponsible to proceed with any clinical use of germline editing unless and until (i) the relevant safety and efficacy issues have been resolved . . . and (ii) there is broad societal consensus about the appropriateness of the proposed application.” The moral concerns about embryonic or germline treatment rest on the fact that CRISPR/Cas9 would not only allow the elimination of genetic diseases but also enable genetic human enhancement, from simple tweaks like eye colour or non-balding to severe modifications of bone density, muscular strength, or sensory and mental capabilities.

Although most scientists echo the summit statement, in 2018 a biochemist claimed to have created the first genetically edited human babies, twin sisters. After in vitro fertilization, he targeted a gene that codes for a protein that one HIV variant uses to enter cells, conferring a kind of HIV immunity, which is a very rare trait among humans.22 His conduct was harshly criticised and widely condemned in the scientific community, and, after enormous public pressure, the responsible regulatory offices forbade any repetition.

Ultimately, the CRISPR/Cas9 technology is a paramount example of the real-world societal implications of basic research and demonstrates researchers' responsibilities. This also raises the question of whether basic ethical schooling should be part of every researcher's education.

— Alexander Kronenberg

Read more:

[1] K. M. Gleason (2017) “Hermann Joseph Muller’s Study of X-rays as a Mutagen”

[2] Muller, H. J. (1927). Science. 66 (1699): 84–87.

[3] Stadler, L. J.; G. F. Sprague (1936). Proc. Natl. Acad. Sci. U.S.A. US Department of Agriculture and Missouri Agricultural Experiment Station. 22 (10): 572–8.
[4] Auerbach, C.; Robson, J.M.; Carr, J.G. (March 1947). Science. 105 (2723): 243–7.

[5] M. Jinek, K. Chylinski, I. Fonfara, M. Hauer, J. A. Doudna, E. Charpentier. Science, 337, 2012, p. 816–821.
[6] R. Sorek, V. Kunin, P. Hugenholtz. Nature Reviews Microbiology. 6, 3, (2008), p. 181–186.

[7] Cong, L., et al., (2013). Science. 339 (6121) p. 819–823.

[8] https://commons.wikimedia.org/wiki/File:GRNA-Cas9.png

[9] H. Wang, et al., Cell. 153, 4, (2013), p. 910–918.

[10] D. L. Wagner, et al., Nature medicine. (2018).

[11] O. Shalem, N. E. Sanjana, F. Zhang; Nature Reviews Genetics 16, 5, (2015), p. 299–311.

[12] T. R. Sampson, D. S. Weiss; BioEssays 36, 1, (2014), p. 34–38.

[13] G. Lin, K. Zhang, J. Li; International journal of molecular sciences 16, 11, (2015), p. 26077–26086.

Mar 05, 2019
 

Dr. Roman Stilling

Disclaimer: The opinions, views, and claims expressed in this essay are those of the author and do not necessarily reflect any opinion whatsoever of the members of the editorial board. The editorial board further reserves the right not to be responsible for the correctness of the information provided. Liability claims regarding damage caused by the use of any information provided will therefore be rejected.

Roman Stilling graduated with a B.Sc. in Biosciences from the University of Münster in 2008 and received a Ph.D. degree from the International Max Planck Research School for Neurosciences / University of Göttingen in 2013. Afterwards he joined APC Microbiome Ireland in Cork, Ireland, as a postdoctoral researcher. Since 2016 he has been the scientific officer for the information initiative “Tierversuche verstehen”1, coordinated by the Alliance of Science Organisations in Germany.


Ethical concerns about using animals in biomedical research have been raised since the 19th century. For example, in England the “Cruelty to Animals Act” was passed in 1876 as a result of a debate especially about the use of dogs under inhumane conditions, such as invasive physiological experiments or demonstrations without general anaesthesia. Interestingly, it was Charles Darwin who put all his scientific and political gravitas into pushing for regulation by law, while at the same time providing a highly differentiated argument for using animals to advance knowledge, especially in the quickly developing field of physiology.1,2 In an 1881 letter to a Swedish colleague he wrote:

“[. . . ] I fear that in some parts of Europe little regard is paid to the sufferings of animals, and if this be the case I should be glad to hear of legislation against inhumanity in any such country. On the other hand, I know that physiology cannot possibly progress except by means of experiments on living animals, and I feel the deepest conviction that he who retards the progress of physiology commits a crime against mankind.”3

Animal research as a moral dilemma

In this letter Darwin succinctly summarized the ethical dilemma at the core of the debate on using animals for research: whether we may cause harm to animals if it is necessary to advance science and medicine.

In fact, the ability to suffer is generally accepted as the single most morally relevant criterion when animals are considered as subjects of moral worth. This reasoning is based on the philosophy of Jeremy Bentham, whose thoughts on this matter culminated in the aphorism: “The question is not, Can they reason? nor, Can they talk? but, Can they suffer?”4

Today, animal welfare legislation in most countries is based on this notion, which has fundamental consequences for how different species of animals are protected by these regulations. For example, in the EU, only the use of animals within the taxonomic subphylum Vertebrata (i.e. vertebrates) is covered by the respective EU directive.5 More recently, the use of Decapoda (e.g. crayfish, crabs, lobsters) and Cephalopoda (e.g. squids, octopuses) has also come to fall within this regulation, since it is assumed that these animals have a nervous system complex enough to perceive pain and experience suffering.

Most current legislation in industrialized countries acknowledges that animals (not exclusively, but especially those able to suffer) have intrinsic value and a moral status different from that of other biological forms of life, such as plants, fungi, or bacteria, and from inanimate matter. At the same time, no country has established legislation that considers the moral status of any animal the same as the moral status of a human being, irrespective of the developmental state or state of health of that human being.

Together, this reasoning has led to the recognition that legislation cannot follow a general rule of “one size fits all”; instead, a compromise needs to be implemented in which ethical and scientific judgment for each individual experiment or study is made on a case-by-case basis.

Adherence to the 3R principle is necessary but not sufficient for ethical justification of laboratory animal use

The moral dilemma of inflicting harm on animals to advance knowledge and medical progress was addressed in more detail in 1959, when William Russell and Rex Burch published “The principles of humane experimental technique”, in which they formulated the now famous 3R principle for the first time: replace, reduce, refine.6 This principle acknowledges human benefit from animal experiments but provides a guideline to minimize suffering in animals: an experiment can be considered potentially ethically justifiable only if there is no alternative method to achieve the scientific goal, all measures to reduce the necessary number of animals in a given study have been taken, and the best possible conditions to confine suffering to the necessary minimum have been established. Meeting the 3R criteria is, however, a necessary but not sufficient requirement for ethical justification of a particular experiment.

Today the 3R-principle is well accepted worldwide7 as a formula to minimize animal suffering and has become an integral part of EU animal welfare regulations, which have been translated to national law in all EU member states.

Responsibility towards human life and safety – lessons from history

Another key aspect of research involving the use of animals is human safety, especially in the context of medical research on humans. The atrocities of medical experiments on humans in Nazi Germany led the international community to implement strong protections for human subjects and patients. In addition, drug scandals like the thalidomide birth-defect crisis of the 1950s and 1960s led to profound changes in drug regulation. The results of this process have been condensed in the “Declaration of Helsinki” adopted by the World Medical Association (WMA) in 1964. Importantly, this declaration states that medical research on human subjects is only justified if all other possible sources have been utilised for gaining information about the efficacy and potential adverse effects of any new experimental therapy, prevention, or treatment. This explicitly includes information gained from experiments with animals,8 which has additionally been addressed in a dedicated statement by the WMA on animal use in biomedical research.9

In analogy to the Helsinki Declaration, which has effectively altered the ethical landscape of human clinical research, members of the international research community have adopted the Basel Declaration to acknowledge their responsibility towards research animals by further advancing the implementation of ethical principles whenever animals are used in research.10 Further goals of this initiative are to foster trust, transparency, and communication on animal research.

Fostering an evidence-based public debate on the ethics of animal research

Transparency and public dialogue are critical prerequisites for a thoughtful and balanced debate on the ethical implications of using animals in potentially harmful experiments.

However, a meaningful public debate about ethical considerations is only worthwhile if we agree on the facts regarding the usefulness of research on animals for scientific and medical progress.

Yet the contribution of animal models and toxicology testing to scientific and medical progress, as well as to subject and patient safety, is sometimes doubted by animal rights activists. Certainly, in most biomedical research areas, including those that involve animal experimentation, there is room for improvement, e.g. in reproducibility or in the translation of results from bench to bedside. However, there is widespread agreement among researchers and medical professionals, together with a large body of published evidence, on the principal usefulness of animal models in general. As for all science, constant improvement of models, and careful consideration of whether any model used is still state of the scientific art at any given time, is crucial for scientific advancement. The responsibility to avoid animal suffering as much as possible also dictates that new scientific methods and models free of animal suffering be developed with both vigour and rigour.

A fruitful debate needs to be based on these insights, and evidence-based common ground needs to be established when discussing ethical considerations and stimulating new ideas. Finally, we need to acknowledge that we are always in the middle of a continuing thought process, in which we must democratically and carefully negotiate the importance of different views, values, and arguments.

Read more:

[1] Johnson, E. M. Charles Darwin and the Vivisection Outrage. The Primate Diaries (2011).

[2] Feller, D. Dog fight: Darwin as animal advocate in the anti-vivisection controversy of 1875. Stud. Hist. Philos. Sci. Part C Stud. Hist. Philos. Biol. Biomed. Sci. 40, 265-271 (2009).

[3] Darwin, C. R. (1881). Mr. Darwin on Vivisection. The Times (18 April): 10. Available at: http://darwin-online.org.uk/content/frameset?pageseq=1&itemID=F1352&viewtype=text. (Accessed: 25th October 2017)

[4] Bentham, J. An Introduction to the Principles of Morals and Legislation. (W. Pickering, 1823).

[5] Directive 2010/63/EU of the European Parliament and of the Council on the protection of animals used for scientific purposes. 2010/63/EU, (2010).

[6] Russell, W. M. S. & Burch, R. L. The principles of humane experimental technique. (Methuen, 1959).

[7] Guidelines for Researchers. ICLAS. Available at: http://iclas.org/guidelines-for-researchers. (Accessed: 29th November 2018)

[8] WMA – The World Medical Association. WMA Declaration of Helsinki – Ethical Principles for Medical Research Involving Human Subjects. Available at: https://www.wma.net/policies-post/wma-declaration-of-helsinki-ethical-principles-for-medical-research-involving-human-subjects/. (Accessed: 29th November 2018)
[9] WMA – The World Medical Association. WMA Statement on Animal Use in Biomedical Research. Available at: https://www.wma.net/policies-post/wma-statement-on-animal-use-in-biomedical-research/. (Accessed: 29th November 2018)

[10] Basel Declaration. Available at: https://www.basel-declaration.org/. (Accessed: 30th November 2018)

Feb 05, 2019
 

Imagine you are on an airplane, ten thousand meters up in the sky. If you close your eyes, you still know exactly which way the airplane has started moving, whether it has begun to manoeuvre to the right or to descend. This ability we owe to our inner ear as part of the human vestibular system.

The vestibular system is designed to send information about the position of the head to the brain's movement control centre, the cerebellum. It is made up of three semicircular canals and two pockets called the otolith organs (Fig. 1), which together provide constant feedback to the cerebellum about head movement. Each of the semicircular canals is orthogonal to the two others, so that they detect movements in three independent directions: rotation around the neck (horizontal canal), nodding (superior canal), and tilting to the sides (posterior canal). Movement of fluid inside these canals caused by head movement stimulates tiny hairs that send signals via the vestibular nerve to the cerebellum. The two otolith organs (called the saccule and utricle) signal to the brain about linear movements (backwards/forwards or upwards/downwards) and also about where the head is in relation to gravity. These organs contain small crystals that are displaced during linear movements and stimulate tiny hairs communicating via the vestibular, or balance, nerve with the cerebellum.
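As a toy illustration of this geometry, the following Python sketch idealizes the three canal axes as exactly orthogonal unit vectors (real canal axes are only approximately orthogonal) and decomposes an arbitrary head rotation into the three component signals the canals would report:

```python
# Toy model: three orthogonal semicircular canals decompose a head
# rotation into components. The unit vectors are an idealization.
import numpy as np

canal_axes = {
    "horizontal": np.array([0.0, 0.0, 1.0]),  # yaw: rotation around the neck
    "superior":   np.array([0.0, 1.0, 0.0]),  # pitch: nodding
    "posterior":  np.array([1.0, 0.0, 0.0]),  # roll: tilting to the sides
}

omega = np.array([0.2, 1.0, 0.1])  # some head angular velocity, rad/s

for name, axis in canal_axes.items():
    # each canal reports the projection of the rotation onto its axis
    print(f"{name} canal signal: {np.dot(omega, axis):+.2f} rad/s")
```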

So why is it that, even equipped with such a tool, we sometimes get the feeling on an airplane that it is falling when in fact it is not? Why is it that some people, particularly underwater divers, may lose their sense of direction and no longer know which way is up?[1] Surely a typical diver still has an inner ear, unless a shark has bitten their head off. Is it all caused by stress? Actually, there is much more to it!

Humans have evolved to maintain spatial orientation on the ground, whereas the three-dimensional environments of flight and diving are unfamiliar to the human body, creating sensory conflicts and illusions that make spatial orientation difficult. Normally, changes in linear and angular acceleration and gravity, detected by the vestibular system, and the relative position of parts of our own bodies, reported by muscles and joints to the proprioceptive system, are compared in the brain with visual information. In unusual conditions, these sensory stimuli vary in magnitude, direction, and frequency. Any discrepancy between visual, vestibular, and proprioceptive sensory inputs results in a sensory mismatch that can produce illusions. Often the result of these various visual and nonvisual illusions is spatial disorientation.

For example, fighter pilots who turn and climb at the same time (they call it “bank and yank”) feel a strong sensation of heaviness. That feeling, caused by their acceleration, surpasses the pull of gravity. Now, if you asked them, while blindfolded, to tell which way is down using only their vestibular organs, they would point to the cues provided by the turn, not to the cues provided by the earth's gravity.[2]

Furthermore, the vestibular system detects only changes in acceleration; thus a prolonged rotation of 15–20 seconds [3] results in a cessation of semicircular-canal output. As a result, the brain adapts and no longer perceives the rotation, which can even result in the perception of motion in the opposite direction. In other words, it is possible to gradually climb or descend without a noticeable change in pressure against the seat. Moreover, in some airplanes it is even possible to execute a loop without exerting negative G-forces, so that, without visual reference, the pilot could be upside down without being aware of it.
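A toy model of this fading: if the perception of a sustained, constant spin decays roughly exponentially, we can tabulate how quickly the sensation vanishes. The first-order form and the time constant (taken loosely from the 15–20 s figure above) are simplifying assumptions:

```python
# Sketch: the canals respond to *changes* in rotation, so a sustained
# constant spin fades from perception roughly exponentially.
import math

tau = 16.0     # s, assumed perceptual decay constant (real values vary)
omega = 60.0   # deg/s, constant actual rotation rate

for t in [0, 5, 10, 15, 20, 30]:
    perceived = omega * math.exp(-t / tau)
    print(f"t = {t:2d} s: real {omega:.0f} deg/s, perceived ~{perceived:4.1f} deg/s")
```

After about 20-30 seconds the perceived rate is a small fraction of the real one, matching the "cessation of output" described above.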

Another interesting example is the phenomenon of loopy walking. When lost in a desert or in thick forest without landmarks, people tend to walk in circles. Studies performed by researchers at the Max Planck Institute for Biological Cybernetics in Germany revealed that blindfolded people show the same tendency: lacking external reference points, they curve around in loops as tight as 20 meters in diameter while believing they are walking in straight lines.[4]

Seemingly, the vestibular system is quite easy to trick by eliminating other sensory inputs. However, even when visual information is available, e.g. underwater, spatial disorientation can still occur [any scuba-diving forum will serve as a reference]. The obvious fact that water changes visual and proprioceptive perception is crucial here: humans move more slowly, see differently, and, let's not forget, experience buoyancy according to Archimedes' principle. It has happened many times that a confused diver thought the surface was down, especially when the bottom seemed brighter because of reflections; under such unusual apparent gravity this can be a dangerous mirage. On top of that, water can affect the vestibular system directly through the outer ear. When cold water penetrates and reaches the vestibular system, it can cause thermal effects on the walls of the semicircular canals, leading to slight movements of the fluid inside, which are enough to be detected by the brain.[5] Just like the situations described before, this causes symptoms of spatial disorientation and dizziness.



Fig. 1. Schematic structure of the human inner ear [6].

The vestibular system is indeed frightfully complicated. We can trick it for fun riding roller coasters in an adventure park, but when incorrect interpretation of vestibular signals occurs at the wrong moment, it can lead to serious consequences. Luckily, airplanes and even divers are nowadays equipped with precise instruments that complement situational awareness and thus help avert dangerous situations.

P.S. If you are interested, try riding an elevator while seated on a bike.

— Mariia Filianina

References:

  1. The Editors of Encyclopaedia Britannica (2012). Spatial disorientation. Encyclopædia Britannica, Inc.
  2. L. King (2017). The science of psychology: An appreciative view (4th ed.). McGraw-Hill, New York.
  3. Previc, F. H. & Ercoline, W. R. (2004). Spatial disorientation in aviation. Reston, VA: American Institute of Aeronautics and Astronautics.
  4. J. L. Souman, I. Frissen, M. N. Sreenivasa and M. O. Ernst, Walking straight into circles, Current Biology 19, 1538 (2009).
  5. http://www.videodive.ru/diving/vizov5.shtml
  6. http://www.nidcd.nih.gov/health/balance/balance_disorders.asp
Dec 04, 2018
 

 

When Francis Guthrie took on the task of colouring a map of England in 1852, he needed four colours to ensure that no neighbouring shires had the same colour. Is this the case for any map imaginable, he wondered.

As it turns out, five colours do suffice, as mathematically proven in 1890 in the five-colour theorem [1]. That four colours are indeed enough to colour a map in which every country is a connected region took until 1976 to prove [2] and required computer assistance. The proof abstracted the problem into graph theory, where regions are represented by vertices that are connected by an edge if they share a border (see fig. 1).

Fig 1: Illustration of the abstraction of the map colouring problem to graph theory.

The four-colour theorem was then proven by demonstrating the non-existence of a map with the smallest number of regions requiring at least five colours. Over its long history the theorem attracted numerous false proofs and disproofs. The simplest supposed counterexamples focus on painting extensive regions that border many others, thereby forcing the remaining regions to be painted with only three colours. The focus on the large region may explain people's inability to see that colouring the remaining regions with three colours is actually possible.

Even before the four-colour theorem was proven, the abstraction to graph theory evoked the question of how many colours are needed to colour a plane so that no two points at distance 1 have the same colour. This is known as the Hadwiger–Nelson problem. Note that in this case we are not colouring continuous areas but each individual point of the plane, which makes the problem far harder. By the 1950s it was known that this sought number, the chromatic number of the plane, had to be between four and seven.

The upper bound comes from the tessellation of the plane by regular hexagons, which can be seven-coloured [4] (fig. 2). The maximal distance within one hexagon, its diameter, needs to be smaller than one to comply with the requirement. Additionally, one needs to ensure that the distance to the next hexagon of the same colour is larger than one. Together these constraints imply that the hexagon edge length $a$ has to lie between $1/\sqrt{7}$ and $1/2$ for an allowed colouring of the plane, in which no two points at distance one share a colour.

Fig. 2: Colouring of a plane in a seven colour tessellation pattern of regular hexagons.
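Spelling out the two constraints gives the admissible range quoted above: a regular hexagon with edge length $a$ has diameter $2a$, while in the seven-colour pattern the closest approach between two hexagons of the same colour works out to $\sqrt{7}\,a$. Avoiding a monochromatic pair at distance exactly one therefore requires

$$2a < 1 \quad\text{and}\quad \sqrt{7}\,a > 1, \qquad \text{i.e.} \qquad \frac{1}{\sqrt{7}} < a < \frac{1}{2}.$$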

As for the lower bound on the chromatic number of the plane, it is obvious that two colours do not suffice to colour even the simple unit-distance graph of an equilateral triangle (see fig. 3 a). To demonstrate that three colours do not suffice either, and that therefore at least four colours are needed, consider the Moser spindle shown in fig. 3 b. Its seven vertices (all eleven edges have unit distance) cannot be coloured with three colours, say green, blue, and yellow. Assigning green to vertex A, its neighbours B and C must be blue and yellow, respectively, or vice versa, forcing D to be green again. A's other neighbours E and F are analogously assigned blue and yellow, or vice versa, forcing G in turn to be green. But G's neighbour D is also green, a contradiction, demonstrating that this unit-distance graph, and hence the plane, requires at least four colours.

Fig 3: a) An equilateral triangle as a simple example of a unit-distance graph. b) The Moser spindle, a unit-distance graph with chromatic number four [3].
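The case analysis above is small enough to verify exhaustively. A minimal Python sketch, with vertex names as in the text and the edge list encoding the two rhombi plus the D-G edge:

```python
# Brute-force check that the Moser spindle has no proper 3-colouring.
from itertools import product

vertices = "ABCDEFG"
edges = [("A", "B"), ("A", "C"), ("B", "C"), ("B", "D"), ("C", "D"),
         ("A", "E"), ("A", "F"), ("E", "F"), ("E", "G"), ("F", "G"),
         ("D", "G")]

def colourable(k: int) -> bool:
    """True if some assignment of k colours makes every edge bichromatic."""
    for colours in product(range(k), repeat=len(vertices)):
        c = dict(zip(vertices, colours))
        if all(c[u] != c[v] for u, v in edges):
            return True
    return False

print("3-colourable:", colourable(3))  # False: three colours never suffice
print("4-colourable:", colourable(4))  # True
```

It prints False for three colours and True for four, confirming that the spindle's chromatic number is exactly four.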

After many years of intractability, this year finally brought significant progress on the Hadwiger–Nelson problem. It was demonstrated that “the chromatic number of the plane is at least 5” [5], by finding two non-four-colourable unit-distance graphs (with 20425 and 1581 vertices). The smallest unit-distance graph with chromatic number five found this year has 553 vertices [6] and is shown in fig. 4. Whether the chromatic number of the plane is five, six, or seven remains to be shown.

Fig 4: Unit-distance graph with chromatic number five and 553 vertices. The fifth colour (white) is used only in the centre. [6]

 

— Alexander Kronenberg

[1] Heawood, (1890), “Map-Colour Theorems”, Quarterly Journal of Mathematics 24, pp. 332–338

[2] Appel, Haken, (1989), “Every Planar Map is Four-Colorable”, Contemporary Mathematics 98, With the collaboration of J. Koch., doi:10.1090/conm/098

[3] Soifer, (2009) “The Mathematical Coloring Book”, Springer

[4] Hadwiger, (1945), “Überdeckung des euklidischen Raumes durch kongruente Mengen”, Portugal. Math. 4, pp. 238–242

[5] de Grey, (2018), “The chromatic number of the plane is at least 5”, arXiv:1804.02385

[6] Heule, (2018), “Computing Small Unit-Distance Graphs with Chromatic Number 5”, arXiv:1805.12181

Nov 22, 2018
 

It is one of the most common educational experiments in school and straight from the textbooks: the reaction of an alkali metal with water. During this reaction, significant amounts of hydrogen gas are produced, which can ignite and thus explode because the reaction is strongly exothermic; at least, that is the explanation one finds pretty much everywhere. However, there is something odd about this reasoning. On the one hand, complete immersion of the metal in water should then prevent the explosion, as no oxygen is present to ignite the hydrogen gas. On the other hand, it is surprising that the solid-liquid interface of this heterogeneous reaction creates enough physical contact to drive the reaction; the produced gas should in fact tend to separate the reactants and therefore stop the reaction. Overall, there are quite a few unclear details in this proposed reaction mechanism.
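For reference, the textbook net reaction behind the hydrogen production, written here for sodium (potassium reacts analogously, only more violently):

$$2\,\mathrm{Na} + 2\,\mathrm{H_2O} \longrightarrow 2\,\mathrm{NaOH} + \mathrm{H_2}\uparrow$$

The puzzle discussed below is not this stoichiometry, but what makes the reaction fast enough to explode.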

A study by the Czech Academy of Sciences in Prague and the Technical University of Braunschweig, however, showed that even presumably clear textbook reactions can hold surprises. [1,2] The scientists used drops of a sodium-potassium alloy that is liquid at room temperature and filmed the reaction with high-speed cameras. They could show that the explosive reaction also happens under water when the metal is completely immersed, ruling out the ignition of the hydrogen gas as the main driving mechanism of the explosion.

Supported by molecular dynamics simulations, they instead showed which mechanism actually drives the reaction: a Coulomb explosion! During the reaction of a clean metal surface with the adjacent water molecules, electrons move quickly from the metal atoms into the water. This also explains why a solid piece of an alkali metal does not always explode in water: it needs a clean interface without significant oxidation. After the electrons have left the metal surface and moved into the water, a strongly charged surface remains, on which the ionized atoms strongly repel each other and thus open up a path to inner atoms that have not yet taken part in the reaction. On a time scale of about 0.1 ms, metal dendrites shoot into the water (see figure) and suddenly increase the surface area of the metal. [1-3] This happens extremely fast, with giant charge currents flowing in the interface region. The surface tension is practically nullified in this case, [2,3] and the expanding surface provides more reactive area. As a result, large amounts of hydrogen gas are suddenly produced. Together, these effects drive the explosion, while ignition of the gas is not strictly necessary for the explosion to occur; the hydrogen can also burn off later. [2]

Further results of the study could lead to approaches for avoiding metal-water explosions and may thus become relevant for industrial applications. What is most unusual about this study, however, is that parts of it were funded by the YouTube science channel of the lead author of the paper, which he explicitly acknowledges. In this exciting case, science and media are in a truly close relationship.

As soon as a drop of NaK alloy gets in contact with water (top left), fine metal fingers protrude into the water (middle). These are driven by the Coulomb explosion, which massively increases the surface area and therefore the reactive interface. As a result, fast production of hydrogen becomes possible, which further drives the explosion (bottom left). The right column depicts the impact of a water droplet for reference. [1,3]

— Kai Litzius

References:

[1] P. E. Mason et al., Nature Chemistry 7, 250–254 (2015).

[2] https://youtu.be/LmlAYnFF_s8

[3] https://youtu.be/xMfQSV4ygHE

 

Sep 28, 2018
 

Certainly, most of us enjoy an occasional nice bowl of spaghetti. Some of us use a spoon along with the fork, some don’t. It doesn’t matter, as long as you enjoy it and don’t make a mess :-)

But have you ever wondered if there is a preferred direction to turn the screw? And is it related to where you live? We did!

Please take a minute of your time and participate in our survey to enlighten the world.

Note:

If you are ambidextrous, please choose your preferred direction for both the right- and the left-handed option.

It is irrelevant whether you use a spoon in addition or not.

Please don’t shovel. That’s rude.

The results will be published on Spaghetti Day (Jan 4th, 2019) on Junq.info

The Spaghetti Turn

Jul 17, 2018
 

We are all familiar with the appearance of a candle flame. Warm, bright yellow, and shaped like a teardrop, it nestles up against the wick and reaches far out into the empty space above it. This behavior is easily explained by the rise, i.e. the convection, of the less dense air that is heated by the combustion around the wick. While colder, denser air flows inward, the buoyancy of the warm air lets it move upward and away from the combustion zone. However, this process requires buoyancy, which only exists in an environment with gravity. What, then, would happen to a flame in zero gravity?

In so-called microgravity, that is, an environment with very little gravity such as in Earth orbit, there is no convection, since there is no classical “up and down”. The flame therefore looks significantly different and forms a light blue, spherical shape instead of the familiar teardrop. To understand this behavior, one has to consider the chemistry of the combustion as well as the physics of the gas exchange.

In the case of the “normal” candle flame, the bright yellow color stems from soot particles that originate in the (imperfect) combustion. They rise with the hot air and glow yellow in the upper region of the flame. The lower, blueish region, on the other hand, is fed by the stream of fresh, oxygen-rich air from below. The flame in microgravity has no preference for up and down and therefore assumes a spherical shape. Due to the lack of convection, the combustion is fed only by the (slow) diffusion of oxygen into, and fuel out of, the central combustion zone. This means that the zero-gravity flame burns much more slowly and does not produce equally distributed soot particles. Thus it is blue and spherical, and it produces much more CO and formaldehyde than CO2, soot, and water.
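A rough order-of-magnitude comparison shows why the diffusion-fed flame is so much slower; the input values are illustrative assumptions (flame size $L \approx 1\,\mathrm{cm}$, oxygen diffusivity in air $D \approx 2\times10^{-5}\,\mathrm{m^2/s}$, buoyant flow speed $v \approx 0.3\,\mathrm{m/s}$), not measured flame parameters:

$$\tau_{\mathrm{diff}} \sim \frac{L^2}{D} = \frac{(10^{-2}\,\mathrm{m})^2}{2\times10^{-5}\,\mathrm{m^2/s}} \approx 5\,\mathrm{s}, \qquad \tau_{\mathrm{conv}} \sim \frac{L}{v} = \frac{10^{-2}\,\mathrm{m}}{0.3\,\mathrm{m/s}} \approx 0.03\,\mathrm{s}$$

On these numbers, the oxygen supply in microgravity is slower by roughly two orders of magnitude.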

This behavior, and how to extinguish a flame in microgravity, is under investigation aboard the International Space Station (ISS) in the FLame Extinguishment Experiment (FLEX). It is carried out on small heptane droplets that are ignited in a controlled atmosphere. The experiment found that such small spherical flames are not just exotic to look at but can also pose a threat to space exploration, since they can be much more difficult to extinguish. Research on these small, bubble-like flames can thus help make space exploration a bit safer.

A candle on Earth (left) and in microgravity (right): The different combustion patterns are clearly visible. [3, NASA]

 

— Kai Litzius

References:

[1] www.nasa.gov/mission_pages/station/research/experiments/666.html

[2] medium.com/@philipbouchard/why-is-a-candle-flame-in-zero-gravity-so-different-than-one-on-earth-1775194cf21a

[3] https://www.youtube.com/watch?v=DmrOzeXWxdw

 

May 29, 2018
 

Sonoluminescence is a fascinating, mysterious physical phenomenon that combines the principles of light and sound.

In 1934, H. Frenzel and H. Schultes discovered a luminous effect upon ultrasonication of water.[1] The defining moment that leads to sonoluminescence is the emergence of cavitation in the liquid (figure 1). The high-frequency ultrasound leads to the formation of gas-filled bubbles, which expand and then collapse rapidly, like a shock wave. At the collapse, energy is released in the form of sound and a short flash of light, which is barely observable with the naked eye; temperatures reach up to 10,000 K.[2,3]

Figure 1. Schematic illustration of sonoluminescence (from left to right): growth of a gas bubble in a liquid, collapse or implosion of the bubble, and emission of light.[4]
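To get a feeling for the time scale of the collapse, the classical Rayleigh collapse time $t_c \approx 0.915\,R_0\sqrt{\rho/\Delta p}$ gives a rough estimate. The values in this Python sketch (a 50 µm bubble in water driven by about one atmosphere) are illustrative assumptions, not parameters from the cited experiments:

```python
# Rough scale of a cavitation collapse, using the classical Rayleigh
# collapse time t_c = 0.915 * R0 * sqrt(rho / dp).
import math

R0 = 50e-6      # m, assumed initial bubble radius
rho = 1000.0    # kg/m^3, water
dp = 101325.0   # Pa, driving pressure difference (~1 atm)

t_c = 0.915 * R0 * math.sqrt(rho / dp)
print(f"collapse time ~ {t_c * 1e6:.1f} microseconds")  # ~4.5 us
```

The collapse thus happens within a few microseconds, which is why high-speed instrumentation is needed to study it.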

In the 1990s, the causes and conditions that lead to sonoluminescence were intensively investigated, but the real cause of this phenomenon remains unresolved even nearly 85 years after its discovery.[5,6] There are various quantum mechanical approaches, but they are highly controversial.[7,8]

Sonoluminescence is not only a physical curiosity; it has indeed shown potential for application, at least in chemistry: in 1991, Grinstaff et al. were able to generate nearly pure amorphous iron by ultrasonication of a solution of iron pentacarbonyl in decane. Compared to crystalline iron, this material shows enhanced catalytic activity when used in the Fischer-Tropsch process.[3]

Sonoluminescence also occurs in wildlife: by snapping their claws, pistol shrimp create a sharp jet of water that not only kills prey but also generates a cavitation bubble and thus a short flash of light. Scientists call this special phenomenon “shrimpoluminescence”.[9]

 

— Tatjana Daenzer

 

Bibliography

[1] H. Frenzel, H. Schultes, Z. Phys. Chem. 1934, 27, 421–424.

[2] B. P. Barber, S. J. Putterman, Nature, 1991, 352, 318–320.

[3] K. Suslick, S.-B. Choe, A. A. Cichowias, M. Grinstaff, Nature, 1991, 353, 414–416.

[4] Image by Dake, Wikimedia Commons, CC BY-SA 3.0 (https://commons.wikimedia.org/wiki/File:Sonoluminescence.png#/media/File:Sonoluminescence.png), last access: 15.05.2018.

[5] B. P. Barber, C.-C. Wu, R. Löfstedt, P. H. Roberts, S. J. Putterman, Phys. Rev. Lett. 1994, 72, 1380–1383.

[6] R. Hiller, K. Weninger, S. J. Putterman, Science, 1994, 266, 248–250.

[7] C. Eberlein, Phys. Rev. Lett. 1996, 76, 3842–3845.

[8] R. P. Taleyerkhan, C. D. West, J. S. Cho, R. T. Lahey Jr., R. I. Nigmatulin, R. C. Block, Science, 2002, 295, 1868–1873.

[9] D. Lohse, B Schmitz, M. Versluis, Nature, 2001, 413, 477–478.

Apr 01, 2018
 

“Dr.” Martin Luther plagiarized in his dissertation

LutherPlag checks

Theology professor Kim Lee-jung of Luther University in Giheung-gu, Yongin, South Korea, reports that he found the doctoral thesis of Martin Luther. The title: Iocorum Encomium (In Praise of Jokes). This discovery is in itself an epochal event. The sensation beyond that: up to 80 percent of the work is plagiarized.

Martin Luther’s is one of the best-researched lives in German history. So far it has been assumed that the reformer never submitted a dissertation, since he never mentioned such an endeavor in his writings, his letters or his diaries.

According to the trilingual press release of South Korean Luther University (see below), theology professor Kim has discovered and examined the dissertation of Martin Luther. The amazing thing is that Martin Luther apparently plagiarized massively in his dissertation. Whole passages are believed to come from a text by his humanist colleague, the Dutch theologian Erasmus of Rotterdam, says Kim.

On his spectacular find and on the content of Luther’s dissertation professor Kim will publish an article in the American Journal of Protestant Theology. In his article he will also address the question: How could such an upright man as Martin Luther do such a thing?

The Korean professor of theology has noticed that countless monuments in Germany refer to the reformer as “Dr. Martin Luther”, whereas in America the academic title is completely absent in his naming. As a reason for this, Kim suspects a cultural preference that arose in Germany during Luther’s lifetime.

“A doctor’s degree seems to be very important to Germans,” he supposes. Even Martin Luther, perhaps the most German of all Germans, may not have resisted this temptation. His example was later followed, among others, by Doktor Faustus, Doktor Allwissend, Dr. h. c. Erich Honecker, Dr. Karl-Theodor zu Guttenberg.

The news has attracted a lot of attention worldwide. Internet activists have set up LutherPlag and run the text through the plagiarism software. Already, it has been said, up to 80 percent of the text consists of plagiarism.

Meanwhile, at Martin Luther University in Halle-Wittenberg, there are unofficial debates going on whether or not to strip Luther of his academic title. This university is the successor of the University of Wittenberg, where Luther submitted his doctoral thesis on 19 October 1512. What would the divestiture mean? Should the title at the dozens of Luther statues in Germany be removed and all the publications on “Dr. Martin Luther” have an erratum attached?

 

Professor Kim Lee-jung had no idea what consequences his discovery would have. In a telephone conversation with JUnQ, he said: “It is about time, however, that thinking about Martin Luther enters into a postheroic and postmonumental, even into a postdoctoral phase. That’s what I stand for as a scientist, I can do no other.”

Dr. Antje Käßmann for Journal of Unsolved Questions

Mainz, April 1st, 2018

 


Mar 15, 2018
 

Probably never, since a Dyson sphere is not a vacuum cleaner from the famous brand of the same name. In fact, until now it is just a thought experiment:

In 1960, Freeman J. Dyson published his theory about “the long-scale conversion of starlight into far infrared radiation” in Science.[1] He states that aliens with technology more advanced than ours must have found a way like this to harvest stellar energy.

Such a device could be a shell around the system's star at a distance of about two Earth orbits, with a thickness of 2–3 m and nearly the mass of Jupiter. All the energy emitted by the star could thus be absorbed and harnessed on the inner surface. Of course, one would first have to exploit an entire planet to obtain the mass needed for this device, a huge technical challenge.
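A quick back-of-the-envelope check of those figures in Python; the assumed average material density of 700 kg/m³ is an illustrative choice, roughly that of light solids:

```python
# Spread one Jupiter mass over a shell of radius 2 AU and see what
# thickness follows. Density is an illustrative assumption.
import math

AU = 1.496e11          # m
M_jupiter = 1.898e27   # kg
r = 2 * AU
density = 700.0        # kg/m^3, assumed average material density

area = 4 * math.pi * r**2
surface_mass = M_jupiter / area        # kg per square metre of shell
thickness = surface_mass / density

print(f"area ~ {area:.2e} m^2, column mass ~ {surface_mass:.0f} kg/m^2")
print(f"implied thickness ~ {thickness:.1f} m")   # ~2-3 m, as quoted
```

With these assumptions the shell comes out at roughly 1700 kg per square metre, or about 2.4 m of thickness, consistent with the 2–3 m quoted above.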

But with his hypothesis, Dyson also proposed a then-new way to trace intelligent life in far-away solar systems. Until the 1960s, the search for aliens was based on the search for extraterrestrial radio signals. A Dyson sphere, however, would appear as a dark object emitting radiation in the far infrared (around 10 µm).[1] So, instead of only listening for strange radio noise, scanning the sky for anomalies in the infrared spectrum also became important.

Some years ago, mankind seemed to be one step closer to discovering a Dyson sphere (or something similar): the light of the star KIC 8462852 shows immense changes in intensity, as if a huge object were regularly passing by. An orbiting planet would be too small to cause such an eclipse. This evoked speculation about space factories or cities and even whole Dyson-like structures. But the shadow could just as well be cast by natural causes, such as the remains of a shattered asteroid or an interstellar cloud.[2]
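A rough comparison shows why a planet is ruled out: the fractional dimming during a transit is $(R_{\mathrm{planet}}/R_{\mathrm{star}})^2$. The sketch below takes a Sun-like star for simplicity (KIC 8462852 is in fact somewhat larger, which only strengthens the argument):

```python
# Transit depth = (R_planet / R_star)^2: even a Jupiter-sized planet in
# front of a Sun-like star blocks only about 1% of the light, while the
# reported dips of KIC 8462852 reach roughly 20%.
R_sun = 6.957e8      # m
R_jupiter = 7.149e7  # m

depth = (R_jupiter / R_sun) ** 2
print(f"Jupiter-sized transit depth: {depth:.1%}")  # ~1.1%
```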

Millions of years could pass until we are able to construct a Dyson sphere. We would first have to develop advanced methods of space travel and the technology to dismantle a whole planet, not to speak of the energy we would already have consumed on the way.

But then, of course, we might be able to drive our hoovers (or anything else) with energy from a Dyson sphere ;)

 

— Tatjana Daenzer

 

Read more:

[1] Dyson F. J., Science 1960, 131, 1667–1668.

[2] https://www.seti.org/seti-institute/mysterious-star-kic-8462852 (last access 16.02.2018).