Dig through the JUnQ

Here you find all contributions made by external authors to JUnQ, including peer-reviewed articles and open questions reviewed by the editorial board.

Mar 05, 2019

Genetic information is encoded in deoxyribonucleic acid (DNA). In the form of a long double-helix molecule located in living cells, it governs most of an organism’s traits. Specifically, information from genes is used to form functional gene products such as proteins. This process of gene expression is used by all known forms of life on earth to generate the macromolecular machinery of life. It is thus the fundamental level at which the genotype causes the phenotype, i.e. the composite of an organism’s observable characteristics. Genomic modification is a powerful tool to amend those characteristics. Reproductive and environmentally induced changes to the DNA are the substrate of evolution. In nature, such changes occur spontaneously and may alter the phenotype favourably or unfavourably, improving or reducing the cell’s or organism’s ability to survive and reproduce.

In the first half of the 20th century, several methods to alter the genetic structure of cells were discovered, including exposure to heat, X-rays, UV light, and chemicals.1-4 A significant number of the crops cultivated today were developed using these methods of traditional mutagenesis; an example is durum wheat, the most prevalent wheat for pasta production. With traditional mutagenesis, thousands of mutations are introduced at random within the DNA of the plant. A subsequent screening identifies and separates cells with favourable mutations in their DNA, followed by attempts to remove or reduce possible unfavourable mutations by further mutagenesis or cross-breeding.

As those methods are unspecific and laborious, researchers have developed site-directed gene editing techniques, the most successful of which is the so-called CRISPR/Cas9 method (clustered regularly interspaced short palindromic repeats). The method borrows from how bacteria defend themselves against viral invasion.6 When a bacterium detects invading virus DNA, it forms two strands of RNA (single-helix molecules), one of which contains a sequence that matches that of the invading virus DNA and is hence called guide RNA. These two RNAs form a complex with a Cas9 protein, which, as a nuclease enzyme, can cleave DNA. When the guide RNA finds its target in the viral genome, the RNA-Cas9 complex locks onto a short sequence known as the PAM (protospacer adjacent motif), and Cas9 unzips the viral DNA so the guide RNA can pair with the matching strand. Cas9 then cleaves the viral DNA, forcing the cell to repair it.6 As this repair process is error-prone, it may lead to mutations that disable certain genes, changing the phenotype. In 2012 and 2013 it was discovered that the guide RNA can be modified considerably so that the system works in a site-directed manner5, and that by modifying the enzyme it works not only in bacteria and archaea but also in eukaryotes (plants and animals).7
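To make the targeting step concrete, here is a minimal sketch (in Python) of the search Cas9 performs: scan a DNA sequence for a 20-nucleotide stretch matching the guide, immediately followed by the canonical SpCas9 “NGG” PAM. The function name and sequences are invented for illustration; real guide design additionally screens the whole genome for off-target matches.

```python
import re

def find_cas9_sites(genome: str, guide: str) -> list:
    """Return the 0-based positions where the 20-nt sequence matching
    the guide RNA is immediately followed by an NGG PAM (SpCas9)."""
    pattern = "(?=(" + re.escape(guide) + "[ACGT]GG))"
    return [m.start() for m in re.finditer(pattern, genome)]

# Toy example with an invented sequence; a real guide would be chosen
# against a reference genome.
guide = "GATTACAGATTACAGATTAC"            # hypothetical 20-nt guide
genome = "TTT" + guide + "TGG" + "AAAC"   # target site plus 'TGG' PAM
print(find_cas9_sites(genome, guide))     # -> [3]
```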

Figure 1: CRISPR/Cas9 working principle.8

Research published since has demonstrated the method’s potential for RNA-programmable genome editing. Modifications can be made so that, during repair, an artificially designed DNA sequence pairs with the cleaved ends, recombines, and replaces the original sequence, introducing new genes into the genome.11,12 The advantages of this technique over traditional gene editing methods are manifold. It is highly targeted, i.e. site- and therefore gene-specific, in any known form of life. It is comparatively inexpensive, simple enough to be conducted in basic labs, effective, and fast in preparation and execution. The production of multiplex genetically modified mice, for instance, was shortened from up to two years to a few weeks,9 as CRISPR/Cas9 has the unique advantage over earlier genome editing methods that multiplexable targeting is easily achieved by co-expressing Cas9 with multiple single-guide RNAs simultaneously. Consequently, within a few years of its discovery, it became the routine procedure for genome modification of virtually all model plants and animals.

The availability of such a method evokes medical and botanical development interests. A plethora of possible medical applications are being discussed and researched, among them treating cancer and genetic disorders. For cancer research it is conceivable to induce a multitude of deliberate mutations to artificially create cells similar to cancerous cells, study the resulting modifications, and thus learn to inhibit their reproduction or the original mutation. Clinical research currently focuses on blood diseases and diseases related to haematopoietic cells, such as leukaemia, HBV, HIV, or haemophilia.13,14 This is because, for the treatment of those diseases, the cells (blood cells or bone marrow) can be extracted from the body by established procedures, their genome can be edited in vitro by the CRISPR/Cas9 method, and the cells can finally be reintroduced into the body. The advantage of the extraction is that no additional vector (an agent to help find the right cells in vivo) is required, and the genomic modification can be controlled ex vivo. While the editing efficiency with CRISPR/Cas9 can be extremely high, the resulting cell population will be inherently heterogeneous, both in the percentage of cells that were edited and in the specific genotype of the edited cells. Potentially problematic for in vivo application is the bacterial origin of the endonuclease Cas9: a large portion of humans show humoral and cell-mediated immune responses to the Cas9 protein complex,10 most likely because of prior infection with related bacteria.

Although clinical applications of CRISPR/Cas9 grab a lot of media attention, agricultural applications draw even more commercial interest. The prospect here is the faster, cheaper, and more targeted development of crops than by traditional methods of mutagenesis, which are far more aggressive by comparison. The main aim is unchanged, though: to improve plants regarding yield, resistance to diseases or vermin, and resilience to aridity, heat, cold, humidity, or acidity.15,16 CRISPR/Cas9 is therefore considered an important method to improve agricultural food production to feed the earth’s ever-growing human population.

Regulation of such modified products varies widely between countries. While Canada considers such plants equivalent to non-genetically-modified ones if no transgene was inserted, the USA assesses CRISPR plants on a case-by-case basis, gauging whether the modification could have arisen by natural mutation. On this basis it chose not to regulate mushrooms that do not turn brown and maize with an altered starch content. Last year the European Court of Justice classified all CRISPR/Cas9-modified plants as genetically modified organisms, reasoning that the risks of such a novel method are unknown compared to traditional mutagenesis, an established method of plant breeding.

Instigated by genome editing in human embryonic cells in 2015,18 a group of scientists called for a moratorium to discuss the possible risks and impact of wide usage of the CRISPR/Cas9 technology, especially when it comes to mutations in humans.19 At the 2015 International Summit on Human Gene Editing, leading international scientists considered the scientific and societal implications of genome editing. The issues discussed span clinical, agricultural, and environmental applications, with most attention focused on human germline editing, owing to the potential of this application to eradicate genetic diseases and, ultimately, to alter the course of evolution. Some scientists advise banning CRISPR/Cas9-based human genome editing research for the foreseeable future, whereas others favour rapid progress in developing it.20 One line of argument of the latter viewpoint’s supporters is that the majority of ethical concerns are effectively based on methodological uncertainties of the CRISPR/Cas9 method at its current state, which can be overcome only with extensive research. Those methodological uncertainties include possible cleavage at undesired sites of the DNA, or insertion of wrong sequences at the cleavage site, resulting in the disabling of the wrong genes or even the creation of new genetic diseases.

Whilst a total ban is considered impractical because of the widespread accessibility and ease of use of this technology,21 the summit statement says that “It would be irresponsible to proceed with any clinical use of germline editing unless and until (i) the relevant safety and efficacy issues have been resolved . . . and (ii) there is broad societal consensus about the appropriateness of the proposed application.” The moral concerns about embryonic or germline treatment are based on the fact that CRISPR/Cas9 would not only allow the elimination of genetic diseases, but also enable genetic human enhancement, from simple tweaks like eye colour or non-balding to severe modifications concerning bone density, muscular strength, or sensory and mental capabilities.

Although most scientists echo the summit statement, in 2018 a biochemist claimed to have created the first genetically edited human babies, twin sisters. After in vitro fertilization, he targeted a gene that codes for a protein that one HIV variant uses to enter cells, conferring a kind of HIV immunity, which is a very rare trait among humans.22 His conduct was harshly criticised and widely condemned in the scientific community, and, after enormous public pressure, the responsible regulatory offices forbade any repetition.

Ultimately, the CRISPR/Cas9 technology is a paramount example of the real-world societal implications of basic research and demonstrates researchers’ responsibilities. This also raises the question whether basic ethical schooling should be part of every researcher’s education.

— Alexander Kronenberg

Read more:

[1] K. M. Gleason (2017) “Hermann Joseph Muller’s Study of X-rays as a Mutagen”

[2] Muller, H. J. (1927). Science. 66 (1699): 84–87.

[3] Stadler, L. J.; G. F. Sprague (1936). Proc. Natl. Acad. Sci. U.S.A. US Department of Agriculture and Missouri Agricultural Experiment Station. 22 (10): 572–8.
[4] Auerbach, C.; Robson, J.M.; Carr, J.G. (March 1947). Science. 105 (2723): 243–7.

[5] M. Jinek, K. Chylinski, I. Fonfara, M. Hauer, J. A. Doudna, E. Charpentier. Science, 337, 2012, p. 816–821.
[6] R. Sorek, V. Kunin, P. Hugenholtz. Nature Reviews Microbiology. 6, 3, (2008), p. 181–186.

[7] Cong, L., et al., (2013). Science. 339 (6121) p. 819–823.

[8] https://commons.wikimedia.org/wiki/File:GRNA-Cas9.png

[9] H. Wang, et al., Cell, 153, 4, (2013), p. 910–918.

[10] D. L. Wagner, et al., Nature medicine. (2018).

[11] O. Shalem, N. E. Sanjana, F. Zhang; Nature Reviews Genetics 16, 5, (2015), p. 299–311.

[12] T. R. Sampson, D. S. Weiss; BioEssays 36, 1, (2014), p. 34–38.

[13] G. Lin, K. Zhang, J. Li; International journal of molecular sciences 16, 11, (2015), p. 26077–26086.

Mar 05, 2019

Dr. Roman Stilling

Disclaimer: The opinions, views, and claims expressed in this essay are those of the author and do not necessarily reflect any opinion whatsoever of the members of the editorial board. The editorial board further reserves the right not to be responsible for the correctness of the information provided. Liability claims regarding damage caused by the use of any information provided will therefore be rejected.

Roman Stilling graduated with a B.Sc. in Biosciences from the University of Münster in 2008 and received a Ph.D. degree from the International Max Planck Research School for Neurosciences / University of Göttingen in 2013. Afterwards he joined APC Microbiome Ireland in Cork, Ireland, as a postdoctoral researcher. Since 2016 he has been the scientific officer for the information initiative “Tierversuche verstehen”1, coordinated by the Alliance of Science Organisations in Germany.


Ethical concerns about using animals in biomedical research have been raised since the 19th century. For example, in England the “Cruelty to Animals Act” was passed in 1876 as a result of a debate especially on the use of dogs under inhumane conditions, such as invasive physiological experiments or demonstrations without general anaesthesia. Interestingly, it was Charles Darwin who put all his scientific and political gravitas into pushing for regulation by law, while at the same time providing a highly differentiated argument for using animals to advance knowledge, especially in the quickly developing field of physiology.1,2 In an 1881 letter to a Swedish colleague he wrote:

“[…] I fear that in some parts of Europe little regard is paid to the sufferings of animals, and if this be the case I should be glad to hear of legislation against inhumanity in any such country. On the other hand, I know that physiology cannot possibly progress except by means of experiments on living animals, and I feel the deepest conviction that he who retards the progress of physiology commits a crime against mankind.”3

Animal research as a moral dilemma

In this letter Darwin succinctly summarized the ethical dilemma at the core of the debate on using animals for research: whether we may cause harm to animals if it is necessary to advance science and medicine.

In fact, the ability to suffer is generally accepted as the single most morally relevant criterion when animals are considered as subjects of moral worth. This reasoning is based on the philosophy of Jeremy Bentham, whose thoughts on this matter culminated in the aphorism: “The question is not, Can they reason? nor, Can they talk? but, Can they suffer?”4

Today, animal welfare legislation in most countries is based on this notion, which has fundamental consequences for how different species of animals are protected by these regulations. For example, in the EU, only the use of animals within the taxonomic subphylum Vertebrata (i.e. vertebrates) is covered by the respective EU directive.5 More recently, the use of Decapoda (e.g. crayfish, crabs, lobsters) and Cephalopoda (e.g. squids, octopuses) also falls within this regulation, since it is assumed that these animals have a nervous system complex enough to perceive pain and experience suffering.

Most current legislation in industrialized countries acknowledges that animals (not exclusively, but especially those able to suffer) have intrinsic value and a moral status different from other biological forms of life such as plants, fungi, or bacteria, and from inanimate matter. At the same time, no country has established legislation that considers the moral status of any animal equal to that of a human being, irrespective of the developmental state or state of health of that human being.

Together, this reasoning has led to the recognition that legislation cannot follow a general rule of “one size fits all”; instead, a compromise needs to be implemented in which an ethical and scientific judgment is made for each individual experiment or study on a case-by-case basis.

Adherence to the 3R-principle is necessary but not sufficient for ethical justification of laboratory animal use

The moral dilemma of inflicting harm on animals to advance knowledge and medical progress was addressed in more detail in 1959, when William Russell and Rex Burch published “The principles of humane experimental technique”, in which they formulated the now famous 3R-principle for the first time: replace, reduce, refine.6 This principle acknowledges the human benefit of animal experiments but provides a guideline to minimize suffering in animals: an experiment can be considered potentially ethically justifiable only if there is no alternative method to achieve the scientific goal, all measures to reduce the necessary number of animals in a given study have been taken, and the best possible conditions to confine suffering to the necessary minimum have been established. Meeting the 3R criteria is, however, a necessary but not sufficient requirement for the ethical justification of a particular experiment.

Today the 3R-principle is well accepted worldwide7 as a formula to minimize animal suffering and has become an integral part of EU animal welfare regulations, which have been translated into national law in all EU member states.

Responsibility towards human life and safety – lessons from history

Another key aspect of research involving the use of animals is human safety, especially in the context of medical research on humans. The atrocities of medical experiments on humans in Nazi Germany led the international community to implement strong protections for human subjects and patients. In addition, drug scandals like the thalidomide birth-defect crisis of the 1950s and 1960s have led to profound changes in drug regulation. The results of this process have been condensed in the “Declaration of Helsinki”, adopted by the World Medical Association (WMA) in 1964. Importantly, this declaration states that medical research on human subjects is only justified if all other possible sources have been utilised for gaining information about the efficacy and potential adverse effects of any new experimental therapy, prevention, or treatment. This explicitly includes information gained from experiments with animals,8 which has additionally been addressed in a dedicated statement by the WMA on animal use in biomedical research.9

In analogy to the Helsinki Declaration, which has effectively altered the ethical landscape of human clinical research, members of the international research community have adopted the Basel Declaration to acknowledge their responsibility towards research animals by further advancing the implementation of ethical principles whenever animals are used in research.10 Further goals of this initiative are to foster trust, transparency, and communication on animal research.

Fostering an evidence-based public debate on the ethics of animal research

Transparency and public dialogue are critical prerequisites for a thoughtful and balanced debate on the ethical implications of using animals in potentially harmful experiments.

However, a meaningful public debate about ethical considerations is only worthwhile if we agree on the facts regarding the usefulness of research on animals for scientific and medical progress.

Yet the contribution of animal models and toxicology testing to scientific and medical progress, as well as to subject and patient safety, is sometimes doubted by animal rights activists. Certainly, in most biomedical research areas, including those that involve animal experimentation, there is room for improvement, e.g. in reproducibility or in the translation of results from bench to bedside. However, there is widespread agreement among researchers and medical professionals, together with a large body of published evidence, on the principal usefulness of animal models in general. As in all science, constant improvement of models, and careful consideration of whether any model used is still the state of the scientific art at a given point in time, is crucial for scientific advancement. The responsibility to avoid animal suffering as much as possible also dictates that new scientific methods and models free of animal suffering be developed with both vigour and rigour.

A fruitful debate needs to be based on these insights, and evidence-based common ground needs to be established when discussing ethical considerations and stimulating new ideas. Finally, we need to acknowledge that we are always in the middle of a continuing thought process, in which we must democratically and carefully negotiate the importance of different views, values, and arguments.

Read more:

[1] Johnson, E. M. Charles Darwin and the Vivisection Outrage. The Primate Diaries (2011).

[2] Feller, D. Dog fight: Darwin as animal advocate in the anti-vivisection controversy of 1875. Stud. Hist. Philos. Sci. Part C Stud. Hist. Philos. Biol. Biomed. Sci. 40, 265-271 (2009).

[3] Darwin, C. R. (1881). Mr. Darwin on Vivisection. The Times (18 April): 10. Available at: http://darwin-online.org.uk/content/frameset?pageseq=1&itemID=F1352&viewtype=text. (Accessed: 25th October 2017)

[4] Bentham, J. An Introduction to the Principles of Morals and Legislation. (W. Pickering, 1823).

[5] Directive 2010/63/EU of the European Parliament and of the Council on the protection of animals used for scientific purposes. 2010/63/EU, (2010).

[6] Russell, W. M. S. & Burch, R. L. The principles of humane experimental technique. (Methuen, 1959).

[7] Guidelines for Researchers. ICLAS. Available at: http://iclas.org/guidelines-for-researchers. (Accessed: 29th November 2018)

[8] WMA Declaration of Helsinki – Ethical Principles for Medical Research Involving Human Subjects. World Medical Association. Available at: https://www.wma.net/policies-post/wma-declaration-of-helsinki-ethical-principles-for-medical-research-involving-human-subjects/. (Accessed: 29th November 2018)
[9] WMA Statement on Animal Use in Biomedical Research. World Medical Association. Available at: https://www.wma.net/policies-post/wma-statement-on-animal-use-in-biomedical-research/. (Accessed: 29th November 2018)

[10] Basel Declaration. Available at: https://www.basel-declaration.org/. (Accessed: 30th November 2018)

Mar 01, 2019

Just a few years before Dolly was born in 1996 as the first surviving clone of a sheep, the movie Jurassic Park was released, based on the novel of the same name by Michael Crichton.[1,2] In this story, scientists insert genetic material derived from fossils into amphibian eggs to bring all sorts of dinosaurs back to life. The actual cloning of animals follows a quite similar approach called somatic cell nuclear transfer, or SCNT (fig. 1): a nucleus with the desired DNA is isolated from a somatic (body) cell and introduced into an enucleated ovum of the same species. Several electrical impulses excite the cell and stimulate proliferation in a nutritional medium. The most stable cell clusters, called blastocysts, can then be transferred to a host mother and grow into an embryo.[1] Dolly fully developed into a lamb and lived for six and a half years until she was put down because of a lung disease. She even gave birth to lambs, proving the viability of cloned creatures.[3] Blastocysts that are dissected instead of implanted can be used to treat diseases or might enable the growth of tissue. Maybe in the future we will even be able to grow a whole surrogate organ ‒ an approach that is highly controversial since the required human stem cells are mostly derived from embryonic tissue.[4]

Fig 1: Schematic depiction of the SCNT process: the nucleus with the desired genetic material is inserted into an enucleated egg cell, which then grows into a blastocyst.[5]

According to a report by the Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services (IPBES), about one million of an estimated eight million species on earth (only counting eukaryotes) are currently endangered or threatened by habitat loss.[6,7] In the history of Earth, extinction was mostly a consequence of natural disasters like climate change, volcanic eruptions, or meteorite impacts, until the human population started to expand.[8,9] The IPBES report demonstrates the present impact of human behaviour on biodiversity, and it seems we are facing many more extinctions of anthropogenic origin in the coming decades. There is thus growing interest not only in preserving existing species but also in reviving those that have already died out.

One attempt currently being made is to revive the quagga, a subspecies of the still-living plains zebra that died out in the 1880s (fig. 2), by selective breeding. Owing to the close genetic relationship, plains zebras that resemble the characteristic coat pattern of the quagga have been selected, in the hope that they will one day produce a zebra that looks just like the quagga and carries similar genetic information.[10,11,12]

Fig 2: Taxidermied Quagga foal in the Museum of Natural History in Mainz, Germany. (© Tatjana Dänzer)

More demanding is the CRISPR/Cas9 method: DNA extracted from most fossils, like that of the woolly mammoth, is likely much too degraded to produce a healthy individual. But the genome might be partially recovered by replacing some sequences in the DNA of the closest living relative, the elephant, with extracted mammoth DNA. The resulting genome will not be the same as the original from thousands of years ago, and no one really knows how this will influence the viability of the animals.[13]

But most extinct species no longer have such close relatives. Interspecies nuclear transfer, as in Jurassic Park, is another possible route to de-extinction, i.e. reviving species that have gone extinct or are on the verge of extinction. The San Diego Zoo Institute for Conservation Research maintains a large collection of cells and embryos called the Frozen Zoo®.[14] Using reproductive technologies, they develop methods to protect endangered species like the northern white rhino or the Przewalski horse from extinction or inbreeding.[15] The first animal of an endangered species to be successfully cloned was a gaur (Bos gaurus), an Asian ox, in 2001 by Advanced Cell Technology, using genetic material from the San Diego Zoo. Nuclei from the skin cells of a male gaur were implanted into enucleated cow egg cells and grown into blastocysts that were then transferred into the wombs of domestic cows. One of eight embryos developed into a full-term calf. Unfortunately, the gaur did not live for more than two days after birth. However, the cause of death is considered to be an infection and not the fact that it was a trans-species clone.[16] The second clone created with the very same method lived longer: it was a banteng (Bos javanicus), another endangered Asian cattle species. Remarkably, the fibroblasts used had been taken and frozen 25 years earlier, in 1978.[17] An attempt to clone a species that had already gone extinct, the Pyrenean ibex (Capra pyrenaica pyrenaica), failed because the kid was born with a deformed lung.[18]

The fact that cloned cells can in principle develop into embryos and even prolific adult animals (like Dolly) gives hope that one day species that have recently been wiped out could come back to life. But besides the challenging and time-consuming scientific research, these plans also evoke critical questions in society:

How is it decided which species will be revived and which stay extinct?

It is clearly difficult to revive every species that we know has ever lived on this planet. There would simply not be enough space and food, and we might soon experience another wave of mass extinction. Since DNA from fossils may be too old, mammoths and dinosaurs are still out of the question. This shifts the focus to species of the recent past. But how can we select which species get to live again and which won’t? We must surely consider the preservation of still existing species a priority.

Where should they live?

If it becomes possible to clone many animals of one kind that can even mate, there must be a safe and nourishing environment for them, most likely in captivity. Who knows how an entire species created in captivity will develop? And the knowledge about the behaviour and needs of most of those animals is very limited.[13]

Who is going to pay?

The scientists’ motivation may well be idealistic, but somehow all the research and maintenance must be financed. Innovations will always attract opportunists who try to exploit them financially. Zoos and wildlife parks that exhibit animals are the lesser problem. Some worry that wealthy poachers and “gourmets” who do not refrain from hunting and eating endangered species now will be just as attracted by the thought of getting hold of a cloned specimen. Paying to hunt an endangered species in order to support its protection financially is already practised in southern Africa and raises a lot of ethical issues.[19,20]

Seeing living “fossils” like dinosaurs, mammoths, dodos, and all the others is surely an exciting thought. But if mankind proceeds like this, in just a few decades there might be far fewer animals on earth than there are now. Let’s hope that a combination of common sense, technical progress, and less vanity will lead to a preserved and healthy nature in our future.

‒Tatjana Dänzer

Read more:

[1] I. Wilmut, A. E. Schnieke, J. McWhir, A. J. Kind, K. H. S. Campbell, Nature 1997, 385, 810–813.

[2] M. Crichton, Jurassic Park, Alfred A. Knopf, Inc., 1990.

[3] http://www.roslin.ac.uk/publicInterest/DollyFinalIilness.php.

[4] S. Lü, Y. Li, S. Gao, S. Liu, H. Wang, W. He, J. Zhou, Z. Liu, Y. Zhang, Q. Lin, C. Duan, X. Yang, C. Wang, J. Cell. Mol. Med. 2010, 14, 2771‒2779.

[5] By en: converted to SVG by Belkorin, modified and translated by Wikibob – derived from image drawn by / de: Quelle: Zeichner: Schorschski / Dr. Jürgen Groth, with text translated, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=3080344.

[6] https://www.ipbes.net/news/Media-Release-Global-Assessment.

[7] C. Mora, D. P. Tittensor, S. Adl, A. g. B. Simpson, B. Worm, PLoS Biology, 2011, 9, 1‒8.

[8] D. B. Weishampel, P. Dodson, H. Osmólksa, The Dinosauria, 2nd ed., University of California, 2004.

[9] D. P. G. Bond, P. B. Wignall, Geological Society of America Special Papers, 2014, 505, 29–55.

[10] https://www.quaggaproject.org/.

[11] https://blog.nature.org/science/2014/10/13/quagga-can-an-extinct-animal-be-bred-back-into-existence/.

[12] J. A. Leonard, N. Rohland, S. Glaberman, R. C. Fleischer, A. Caccone, M. Hofreiter, Biol. Lett., 2005, 1, 291‒295.

[13] B. Shapiro, Genome Biology, 2015, 16, 1‒3.

[14] https://institute.sandiegozoo.org/resources/frozen-zoo%C2%AE.
[15] https://institute.sandiegozoo.org/conservation-genetics.

[16] https://web.archive.org/web/20080531142827/http://www.advancedcell.com/press-release/advanced-cell-technology-inc-announced-that-the-first-cloned-endangered-animal-was-born-at-730-pm-on-monday-january-8-2001.

[17] D. L. Janssen, A. L. Edwards, J. A. Koster, R. P. Lanza, O. A. Ryder, Reproduction, Fertility and Development, 2004, 16, 224‒224.

[18] https://faculty.mtsac.edu/cbriggs/Bringing%20them%20back%20to%20life%202013.pdf.

[19] http://www.bbc.com/future/story/20180328-the-increasingly-realistic-prospect-of-extinct-animal-zoos.

[20] https://www.pri.org/stories/2012-02-29/hunters-shoot-and-pay-save-rhino.

Feb 05, 2019

Imagine you are on an airplane, ten thousand meters up in the sky. If you close your eyes, you still know exactly which way the airplane has started moving, whether it has begun to manoeuvre to the right or to descend. This ability we owe to our inner ear, part of the human vestibular system.

The vestibular system is designed to send information about the position of the head to the brain’s movement control centre, the cerebellum. It is made up of three semicircular canals and two pockets called the otolith organs (Fig. 1), which together provide constant feedback to the cerebellum about head movement. Each of the semicircular canals is orthogonal to the two others, so that they detect movements in three independent directions: rotation around the neck (horizontal canal), nodding (superior canal), and tilting to the sides (posterior canal). Movement of the fluid inside these canals due to head movement stimulates tiny hairs that send signals via the vestibular nerve to the cerebellum. The two otolith organs (the saccule and utricle) signal to the brain about linear movements (backwards/forwards or upwards/downwards) and also about where the head is in relation to gravity. These organs contain small crystals that are displaced during linear movements and stimulate tiny hairs communicating with the cerebellum via the vestibular, or balance, nerve.

So why is it that, even equipped with such a tool, we sometimes get the feeling on an airplane that it is falling when in fact it is not? Why is it that some people, particularly underwater divers, may lose their sense of direction and no longer know which way is up?[1] Surely, typical divers should still have their inner ear, unless a shark has bitten their heads off. Is it all caused by stress? Actually, there is much more to it!

Humans have evolved to maintain spatial orientation on the ground, whereas the three-dimensional environment of flight or underwater is unfamiliar to the human body, creating sensory conflicts and illusions that make spatial orientation difficult. Normally, changes in linear and angular acceleration and in gravity, detected by the vestibular system, and the relative positions of parts of our own bodies, provided by the muscles and joints to the proprioceptive system, are compared in the brain with visual information. In unusual conditions, these sensory stimuli vary in magnitude, direction, and frequency. Any discrepancy between visual, vestibular, and proprioceptive sensory inputs results in a sensory mismatch that can produce illusions, and the frequent result of these various visual and nonvisual illusions is spatial disorientation.

For example, fighter pilots who turn and climb at the same time (they call it “bank and yank”) feel a strong sensation of heaviness. That feeling, caused by their acceleration, surpasses the pull of gravity. If you asked them, blindfolded, to tell which way was down using only their vestibular organs, they would point to the cues provided by the turn, not to the cues provided by the earth’s gravity.[2]

Furthermore, the vestibular system detects only changes in acceleration, so a prolonged rotation of 15-20 seconds [3] results in a cessation of semicircular canal output. As a result, the brain adapts and no longer perceives the acceleration, which can even result in the perception of motion in the opposite direction. In other words, it is possible to gradually climb or descend without a noticeable change in pressure against the seat. Moreover, in some airplanes it is even possible to execute a loop without exerting negative G-forces, so that, without visual reference, the pilot could be upside down without being aware of it.
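This adaptation can be illustrated with a toy model. The sketch below (Python) treats the canal output as a first-order high-pass filter of the head’s angular velocity, a common simplification; the 15-second time constant and the flight profile are assumptions chosen to match the range quoted above.

```python
import numpy as np

tau = 15.0   # s, assumed canal time constant (within the 15-20 s above)
dt = 0.01    # s, integration step
t = np.arange(0.0, 60.0, dt)
# Actual angular velocity: at rest, then a constant 30 deg/s turn
# between t = 1 s and t = 40 s, then at rest again.
omega = np.where((t >= 1.0) & (t < 40.0), 30.0, 0.0)

# High-pass filter: dp/dt = domega/dt - p/tau, so the perceived
# rotation p responds to changes in omega and decays away during a
# sustained turn.
p = np.zeros_like(t)
for i in range(1, len(t)):
    domega = (omega[i] - omega[i - 1]) / dt
    p[i] = p[i - 1] + dt * (domega - p[i - 1] / tau)

print(p[int(2 / dt)])    # shortly after onset: close to 30 deg/s
print(p[int(39 / dt)])   # after ~38 s of steady turning: a few deg/s
print(p[int(41 / dt)])   # just after stopping: strongly negative, i.e.
                         # a felt turn in the opposite direction
```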

Another interesting example is the phenomenon of loopy walking. When lost in a desert or thick forest without landmarks, people tend to walk in circles. Studies performed by researchers at the Max Planck Institute for Biological Cybernetics in Germany revealed that blindfolded people show the same tendency: lacking external reference points, they curve around in loops as tight as 20 meters in diameter while believing they are walking in straight lines.[4]

Seemingly, the vestibular system is quite easy to trick by eliminating other sensory inputs. However, even when visual information is accessible, e.g. underwater, spatial disorientation can still occur (as any scuba diving forum will confirm). The fact that water changes visual and proprioceptive perception is crucial here: humans move more slowly, see differently, and let’s not forget Archimedes’ principle. It has happened many times that a confused diver thought the surface was down, especially when the bottom seemed brighter because of reflections; this can be a dangerous illusion in such unusual, seemingly gravity-free conditions. On top of that, water can affect the vestibular system directly through the outer ear. When cold water penetrates and reaches the vestibular system, it can cause thermal effects on the walls of the semicircular canals, leading to slight movements of the fluid inside, which are enough to be detected by the brain.[5] Just as in the situations described before, this causes symptoms of spatial disorientation and dizziness.



Fig. 1. Schematic structure of the human inner ear [6].

The vestibular system is indeed frightfully complicated. We can trick it for fun riding roller coasters in an amusement park, but when an incorrect interpretation of the signals coming from the vestibular system occurs at the wrong moment, it can lead to serious consequences. Luckily, nowadays airplanes and even divers are equipped with precise instruments that complement situational awareness and thus help avert dangerous situations.

P.S. If you are interested, try riding an elevator while seated on a bike.

— Mariia Filianina

References:

  1. The Editors of Encyclopaedia Britannica (2012). Spatial disorientation. Encyclopædia Britannica, Inc.
  2. L. King (2017). The science of psychology: An appreciative view (4th ed.). McGraw-Hill, New York.
  3. Previc, F. H., & Ercoline, W. R. (2004). Spatial disorientation in aviation. Reston, VA: American Institute of Aeronautics and Astronautics.
  4. J. L. Souman, I. Frissen, M. N. Sreenivasa and M. O. Ernst, Walking straight into circles, Current Biology 19, 1538 (2009).
  5. http://www.videodive.ru/diving/vizov5.shtml
  6. http://www.nidcd.nih.gov/health/balance/balance_disorders.asp
Jan 05, 2019

Certainly, most of us enjoy an occasional nice bowl of spaghetti. Some of us use a spoon along with the fork, some don’t. It doesn’t matter, as long as you enjoy it and don’t make a mess. But have you ever wondered whether there is a preferred direction to turn the fork? And whether it is related to where you live? We did!

In our last issue (Vol 2, 2018), we launched a survey asking our readers exactly this question (Figure 1).

Figure 1: The Spaghetti Turn survey as it appeared on the webpage.

Our survey was advertised on social media (Facebook, LinkedIn, Twitter, ResearchGate) and via QR codes on flyers. The survey reached a total of n=160 readers, 132 of whom found their way directly to our website. The results are shown in Table 1 and Figure 2.

Table 1: Results of the survey “The Spaghetti Turn”.

                                 Northern hemisphere    Southern hemisphere    worldwide
                                 n        %             n        %             n        %
right-handed clockwise           117      75.5          3        60            120      75.0
right-handed counter clockwise   12       7.7           1        20            13       8.1
left-handed clockwise            10       6.5           0        0             10       6.3
left-handed counter clockwise    10       6.5           0        0             10       6.3
both-handed clockwise            0        0             0        0             0        0
both-handed counter clockwise    1        0.6           0        0             1        0.6
shovel                           4        2.6           1        20            5        3.1
other                            1        0.6           0        0             1        0.6
sum                              155      96.9          5        3.1           160      100

Figure 2: Worldwide percentages of the preferred direction to turn the fork when eating spaghetti, in relation to handedness (values in %).

The option “no preferred direction” remained unchosen. A single participant selected both “I am right-handed and turn clockwise” and “I am right-handed and turn counter clockwise”, counted as “other”. Assuming this was not a mis-click, one out of a total of 160 participants has no preferred direction when using the fork with the right hand. This underlines that most people on earth indeed have a favourite direction in which to turn the fork.

Although there is no clear-cut definition of handedness, some publications claim that 70–95% of the human population worldwide are right-handed, 5–30% are left-handed, and a small minority is ambidextrous.[1] This is consistent with our findings: the survey was answered by 133 right-handed and 20 left-handed participants (86.9% and 13.1%, respectively, of the 153 participants who revealed an exclusive hand preference). One participant (<1%) is ambidextrous and turns the fork counter clockwise with both hands.

75.0% of all participants are right-handed and turn the fork in the clockwise direction; only 8.1% turn it counter clockwise. Surprisingly, there seems to be no preference regarding the turning direction among left-handed people: their numbers are equal (ten each, or 6.3%), while 90.2% of all right-handed people turn clockwise. Fortunately (or shockingly?), 3.1% of spaghetti eaters worldwide shovel.
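For transparency, the worldwide shares can be recomputed directly from the raw counts in Table 1. The short Python sketch below reproduces the percentages (up to rounding) and the 90.2% figure for right-handed clockwise turners:

```python
# Worldwide raw counts from Table 1 of the survey.
counts = {
    "right-handed clockwise": 120,
    "right-handed counter clockwise": 13,
    "left-handed clockwise": 10,
    "left-handed counter clockwise": 10,
    "both-handed clockwise": 0,
    "both-handed counter clockwise": 1,
    "shovel": 5,
    "other": 1,
}
total = sum(counts.values())                     # 160 participants
for option, n in counts.items():
    # Matches the worldwide column of Table 1 up to rounding.
    print(f"{option}: {100 * n / total:.1f} %")

right_handed = counts["right-handed clockwise"] + \
               counts["right-handed counter clockwise"]          # 133
share = 100 * counts["right-handed clockwise"] / right_handed
print(f"right-handed people turning clockwise: {share:.1f} %")   # 90.2 %
```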

Unfortunately, we did not reach a significant number of readers from the southern hemisphere. Four participants out of five are right-handed, one shovels. 60% of the right-handed southerners turn the fork clockwise, 20% turn it counter clockwise. Considering that a mere five participants (3.1% of the total) cannot represent the roughly 10% of the human population living in the southern hemisphere,[2] the preference for turning clockwise nevertheless shows the same tendency in both hemispheres. There is therefore presumably no relation to where you live on this planet.

But why is the clockwise direction so obviously favoured?

Time, and therefore clocks, have a powerful influence on our daily lives. Also, in many cultures texts are written from left to right (as the clock hand moves). Moving and looking to the right is very often linked to the future and openness. An experiment by Sascha Topolinski and Peggy Sparenberg from 2012 suggests that the preferred direction to turn objects could be determined by one’s conservative or open personality.[3] Or is it just a matter of handling, because it is a little easier to apply force to the edge of the fork while turning it clockwise?

With a simple survey like ours it is impossible to determine whether the habit of turning the fork left or right is a matter of education, subconscious preference, or technique.

Throughout the active survey period it was possible to answer the poll via the Facebook “Surveys for Pages” feature and via our webpage. Hence, we cannot entirely guarantee the integrity of the results. Also, we hope our readers understand humour but still answered the survey genuinely; we simply trust in the scientific spirit of our readers. We also did not account for the fact that in certain cultures spaghetti dishes might not be common or forks might not be part of the traditional cutlery. And although it is very often a cause of heavy crossfire during meals, the use of a spoon along with the fork is disregarded in the evaluation of the results, too. With this survey we simply aim to give a picture of the general turning behaviour of spaghetti eaters. To the best of our knowledge there has not been a similar survey until now.

We are now smarter than before but still missing the details of the big picture. Let’s see what the new year brings…

Tatjana Dänzer, Mariia Filianina, Alexander Kronenberg, Kai Litzius, Adrien Thurotte

The editorial team of the Journal of Unsolved Questions thanks all 160 participants of the survey and wishes Bon Appetit and a very happy start into the year 2019!

Read more:

[1] https://www.scientificamerican.com/article/why-are-more-people-right/ (last access 31.12.18, 15:20).

[2] https://bigthink.com/strange-maps/563-pop-by-lat-and-pop-by-long?page=all (last access 31.12.18, 15:40).

[3] Sascha Topolinski, Peggy Sparenberg, Social Psychological and Personality Science, 2012, 3, 308–314.

Dec 04, 2018

 

When Francis Guthrie took on the task of colouring a map of England in 1852, he needed four colours to ensure that no two neighbouring counties had the same colour. Is this the case for any map imaginable, he wondered?

As it turns out, five colours always suffice, as proven mathematically in 1890 in the five-colour theorem [1]. That four colours are indeed enough to colour any map in which every country is a connected region took until 1976 to prove [2] and required computer assistance. The proof abstracted the problem into graph theory, where regions are represented by vertices, connected by an edge if they share a border (see fig. 1).

Fig 1: Illustration of the abstraction of the map colouring problem to graph theory.

The four-colour theorem was then proven by demonstrating that no smallest map requiring at least five colours can exist. Over its long history the theorem attracted numerous false proofs and disproofs. The simplest supposed counterexamples focus on painting one extensive region that borders many others, thereby forcing the other regions to be painted with only three colours. The focus on the large region tends to hide the fact that colouring the remaining regions with three colours is actually possible.

Even before the four-colour theorem was proven, the abstraction to graph theory evoked the question of how many colours are needed to colour the plane so that no two points at distance 1 from each other have the same colour. This is known as the Hadwiger–Nelson problem. Note that in this case we are not colouring continuous regions but every individual point of the plane, rendering the problem far more complex. By the 1950s it was known that this sought-after number, the chromatic number of the plane, had to be between four and seven.

The upper bound comes from the tessellation of the plane by regular hexagons, which can be seven-coloured [4] (fig. 2). The maximal distance within one hexagon, its diameter, needs to be smaller than one to comply with the requirement. Additionally, one needs to ensure that the distance to the next hexagon of the same colour is larger than one. Together these constraints confine the hexagon edge length $a$ to a narrow window, roughly $0.39 < a < 0.5$ in units of the forbidden distance, for an allowed colouring of the plane in which no two points at distance one share a colour.
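This window can be checked numerically. The sketch below (Python) assumes that in the seven-colouring the nearest same-coloured hexagons sit at centre distance $\sqrt{21}\,a$, so their gap is at least $(\sqrt{21}-2)\,a$ after subtracting both circumradii; the bound and the sampled edge lengths are illustrative.

```python
from math import sqrt

def valid_edge_length(a: float) -> bool:
    """Check the two colouring constraints for hexagon edge length a."""
    diameter = 2 * a                  # largest distance inside a hexagon
    gap = (sqrt(21) - 2) * a          # worst-case gap between nearest
    return diameter < 1 and gap > 1   # same-coloured hexagons

print(1 / (sqrt(21) - 2))   # lower limit for a, approximately 0.387
print([a for a in (0.35, 0.40, 0.45, 0.50) if valid_edge_length(a)])
# -> [0.4, 0.45]; 0.35 violates the gap condition, 0.5 the diameter one
```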

Fig. 2: Colouring of a plane in a seven colour tessellation pattern of regular hexagons.

As for the lower bound on the chromatic number of the plane, it is obvious that two colours do not suffice to colour even the simple unit-distance graph of an equilateral triangle (see fig. 3a). To demonstrate that three colours do not suffice either, and that therefore at least four colours are needed, we take a look at the Moser spindle shown in fig. 3b. Its seven vertices (all eleven edges have unit length) cannot be coloured with three colours, say green, blue, and yellow: assigning green to vertex A, its neighbours B and C must be blue and yellow, respectively, or vice versa, forcing D to be green again. A’s other neighbours E and F are analogously assigned blue and yellow, or vice versa, forcing G in turn to be green. This conflicts with G’s neighbour D also being green, demonstrating that unit-distance graphs in general require at least four colours.
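The same argument can be verified by exhaustive search. The Python sketch below encodes the spindle’s eleven edges (vertex labels as in the argument above, i.e. the two rhombi ABCD and AEFG plus the edge D-G) and brute-forces all colourings:

```python
from itertools import product

vertices = "ABCDEFG"
# The eleven unit-distance edges of the Moser spindle: two rhombi
# (each two equilateral triangles) sharing vertex A, apexes D and G joined.
edges = [("A","B"), ("A","C"), ("B","C"), ("B","D"), ("C","D"),
         ("A","E"), ("A","F"), ("E","F"), ("E","G"), ("F","G"),
         ("D","G")]

def colourable(k: int) -> bool:
    """Try all k^7 colourings; accept one with no monochromatic edge."""
    for colouring in product(range(k), repeat=len(vertices)):
        c = dict(zip(vertices, colouring))
        if all(c[u] != c[v] for u, v in edges):
            return True
    return False

print(colourable(3))   # False: three colours never suffice
print(colourable(4))   # True: four colours do
```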

Fig 3: a) An equilateral triangle as a simple example for a unit-distance graph. b) The Moser spindle is a four-colourable unit distance graph [3].

After many years without progress, 2018 finally brought a significant advance on the Hadwiger–Nelson problem. It was demonstrated that “the chromatic number of the plane is at least 5” [5], by finding two non-four-colourable unit-distance graphs (with 20425 and 1581 vertices). The smallest unit-distance graph with chromatic number five found so far has 553 vertices [6] and is shown in fig. 4. Whether the chromatic number of the plane is five, six, or seven remains to be shown.

Fig 4: Five-colourable unit-distance graph with 553 vertices. The fifth colour (white) is only used in the centre. [6]

 

— Alexander Kronenberg

[1] Heawood, (1890), “Map-Colour Theorems”, Quarterly Journal of Mathematics 24, pp. 332–338

[2] Appel, Haken, (1989), “Every Planar Map is Four-Colorable”, Contemporary Mathematics 98, With the collaboration of J. Koch., doi:10.1090/conm/098

[3] Soifer, (2009) “The Mathematical Coloring Book”, Springer

[4] Hadwiger, (1945), “Überdeckung des euklidischen Raumes durch kongruente Mengen”, Portugal. Math. 4, pp. 238–242

[5] de Grey, (2018), “The chromatic number of the plane is at least 5”, arXiv:1804.02385

[6] Heule, (2018), “Computing Small Unit-Distance Graphs with Chromatic Number 5”, arXiv:1805.12181

Nov 22, 2018

It is one of the most common educational experiments in school and straight from the books: the reaction of an alkali metal with water. During this reaction, significant amounts of hydrogen gas are produced, which can ignite and thus explode due to the strongly exothermic reaction – at least that is the explanation one finds pretty much everywhere. However, there is something odd about this reasoning. On the one hand, complete immersion of the metal in water should prevent the explosion, as no oxygen is present to ignite the hydrogen gas. On the other hand, it is surprising that the solid-liquid interface of this heterogeneous reaction creates enough physical contact to drive the reaction; in addition, the produced gas tends to separate the reactants and therefore stop the reaction. Overall, there are quite a few unclear details in this proposed reaction mechanism.

A study by the Czech Academy of Sciences in Prague and the Technical University of Braunschweig, however, showed that even presumably clear textbook reactions can hold surprises. [1,2] The scientists used drops of sodium-potassium alloy, which is liquid at room temperature, and filmed the reaction with high-speed cameras. They could show that the explosive reaction also happens under water when the metal is completely immersed, thus ruling out the ignition of the hydrogen gas as the main driving mechanism of the explosion. Supported by molecular dynamics simulations, they instead showed which mechanism actually drives the reaction: a Coulomb explosion! During the reaction of a clean metal surface with the adjacent water molecules, electrons move quickly from the metal atoms into the water. This also explains why a solid piece of an alkali metal does not always explode in water: it needs a clean interface without significant oxidation.

After the electrons have left the metal surface and moved into the water, a strongly charged surface remains. On this surface, the ionized atoms strongly repel each other and thus open up a path to inner atoms that have not yet taken part in the reaction. On a time scale of about 0.1 ms, metal dendrites shoot into the water (see figure) and suddenly increase the surface area of the metal. [1-3] This happens extremely fast, with giant charge currents flowing in the interface region. The surface tension is practically nullified in this case [2,3], and the expanding surface provides more reactive area. As a result, large amounts of hydrogen gas are suddenly produced. Together, these effects drive the explosion, while ignition of the gas is not strictly necessary for the explosion to occur; the hydrogen can also burn off later. [2]

Further results of the study could lead to approaches for avoiding metal-water explosions and thus be of practical relevance in industry. What is most unusual about this study, however, is that parts of it were funded by the YouTube science channel of the paper’s lead author, which he explicitly acknowledges. In this exciting case, science and the media are in a truly close relationship.

As soon as a drop of NaK alloy gets in contact with water (top left), fine metal fingers protrude into the water (middle). These are driven by the Coulomb explosion, which massively increases the surface area and therefore the reactive interface. As a result, rapid production of hydrogen becomes possible, which further drives the explosion (bottom left). The right column depicts the impact of a water droplet for reference. [1,3]

— Kai Litzius

References:

[1] P. E. Mason et al., Nature Chemistry 7, 250–254 (2015).

[2] https://youtu.be/LmlAYnFF_s8

[3] https://youtu.be/xMfQSV4ygHE

 

Sep 28, 2018

Certainly, most of us enjoy an occasional nice bowl of spaghetti. Some of us use a spoon along with the fork, some don’t. It doesn’t matter, as long as you enjoy it and don’t make a mess.

But have you ever wondered whether there is a preferred direction to turn the fork? And whether it is related to where you live? We did!

Please take a minute of your time and participate in our survey to enlighten the world.

Note:

If you are both-handed, please choose your preferred direction for right- and left-handed.

It is irrelevant whether you use a spoon in addition or not.

Please don’t shovel. That’s rude.

The results will be published on Spaghetti Day (Jan 4th, 2019) on Junq.info

The Spaghetti Turn

Jul 17, 2018

We are all familiar with the appearance of a candle flame. Warm, bright yellow, and shaped like a teardrop, it nestles around the wick and reaches far into the empty space above it. This behavior is easily explained by the rise – the convection – of the less dense air that is heated by the combustion around the wick. While colder, denser air flows inward, the buoyancy of the warm air lets it move upward and away from the combustion zone. However, this process requires buoyancy, which only exists in an environment with gravity. What, then, would happen to a flame in zero gravity?

In so-called microgravity, i.e. an environment with very little gravity such as in Earth’s orbit, there is no convection, since there is no classical “up and down”. The flame therefore looks significantly different and takes on a light blue, spherical shape instead of the familiar teardrop. To understand this behavior, one has to consider the chemistry of the combustion as well as the physics of the gas exchange.

In the case of a “normal” candle flame, the bright yellow color stems from soot particles that originate in the (imperfect) combustion. They rise with the hot air and glow yellow in the upper region of the flame. The lower, blueish region, on the other hand, is fed by the stream of fresh, oxygen-rich air from below. The flame in microgravity has no preference for up and down and therefore assumes a spherical shape. Due to the lack of convection, the combustion is fed only by the (slow) diffusion of oxygen into, and fuel out of, the central combustion zone. This means that the zero-gravity flame burns much more slowly and does not distribute soot particles in the same way. Thus it is blue and spherical, and it produces much more CO and formaldehyde rather than CO2, soot, and water.
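A rough order-of-magnitude comparison shows why the diffusion-fed flame is so much slower. The Python sketch below uses textbook-scale values (oxygen diffusivity in air of about 2e-5 m²/s, a 1 cm supply distance, and a buoyant flow speed of about 0.3 m/s); all numbers are illustrative assumptions, not measurements from the FLEX experiment.

```python
# Order-of-magnitude comparison: how fast oxygen reaches the reaction
# zone by convection (on Earth) versus by diffusion (in microgravity).
# All values are rough, illustrative assumptions.
D = 2e-5    # m^2/s, diffusivity of O2 in air (textbook-scale value)
L = 1e-2    # m, assumed distance the oxygen has to travel (~1 cm)
v = 0.3     # m/s, assumed buoyant flow speed near a candle on Earth

t_convection = L / v       # time for fresh air carried by the plume
t_diffusion = L**2 / D     # characteristic diffusion time over L

print(f"convection: {t_convection:.3f} s")   # ~0.03 s
print(f"diffusion:  {t_diffusion:.1f} s")    # ~5 s, two orders slower
```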

This behavior, and how to extinguish a flame in microgravity, is under investigation aboard the International Space Station (ISS) in the so-called FLame Extinguishment Experiment (FLEX). It is carried out on small heptane droplets that are ignited in a controlled atmosphere. The experiment found that such small flame bubbles are not just exotic to look at but can also pose a threat to space exploration, since they can be much more difficult to extinguish. Research on small bubbly flames can thus help make space exploration a bit safer.

A candle on Earth (left) and in microgravity (right): The different combustion patterns are clearly visible. [3, NASA]

 

— Kai Litzius

References:

[1] www.nasa.gov/mission_pages/station/research/experiments/666.html

[2] medium.com/@philipbouchard/why-is-a-candle-flame-in-zero-gravity-so-different-than-one-on-earth-1775194cf21a

[3] https://www.youtube.com/watch?v=DmrOzeXWxdw

 

May 29, 2018

Sonoluminescence is a fascinating, mysterious physical phenomenon that combines the principles of light and sound.

In 1934, H. Frenzel and H. Schultes discovered a luminous effect upon the ultrasonication of water.[1] The defining moment that leads to sonoluminescence is the emergence of cavitation in the liquid (figure 1). The high-frequency ultrasound leads to the formation of gas-filled bubbles that expand and collapse rapidly, like a shock wave. Shortly after the collapse, the energy is released in the form of sound and a short flash of light, which is barely observable with the naked eye; the collapsing bubble reaches temperatures of up to 10,000 K.[2,3]

Figure 1. Schematic illustration of the formation of sonoluminescence (f.l.t.r.): Growth of a gas bubble in a liquid, collapse or implosion of the bubble and emission of light.[4]

In the 1990s, the causes and conditions that lead to sonoluminescence were intensively investigated, but the true cause of this phenomenon remains unresolved, nearly 85 years after its discovery.[5,6] There are various quantum mechanical approaches, but they are highly controversial.[7,8]

Sonoluminescence is not only a physical curiosity; it has indeed shown potential for academic application, at least in chemistry: in 1991 Grinstaff et al. were able to generate nearly pure amorphous iron by ultrasonication of a solution of iron pentacarbonyl in decane. Compared to crystalline iron, this material shows enhanced catalytic activity when used in the Fischer-Tropsch process.[3]

Sonoluminescence also occurs in wildlife: by snapping their claws, pistol shrimp create a sharp jet of water that not only kills prey but also generates a cavitation bubble and thus a short flash of light. Scientists call this special phenomenon “shrimpoluminescence”.[9]

 

— Tatjana Daenzer

 

Bibliography

[1] H. Frenzel, H. Schultes, Z. Phys. Chem. 1934, 27, 421–424.

[2] B. P. Barber, S. J. Putterman, Nature, 1991, 352, 318–320.

[3] K. Suslick, S.-B. Choe, A. A. Cichowias, M. Grinstaff, Nature, 1991, 353, 414–416.

[4] „Creative Commons“ from Dake CC BY-SA 3.0. (https://commons.wikimedia.org/wiki/File:Sonoluminescence.png#/media/File:Sonoluminescence.png) last access: 15.05.2018.

[5] B. P. Barber, C.-C. Wu, R. Löfstedt, P. H. Roberts, S. J. Putterman, Phys. Rev. Lett. 1994, 72, 1380–1383.

[6] R. Hiller, K. Weninger, S. J. Putterman, Science, 1994, 266, 248–250.

[7] C. Eberlein, Phys. Rev. Lett. 1996, 76, 3842–3845.

[8] R. P. Taleyarkhan, C. D. West, J. S. Cho, R. T. Lahey Jr., R. I. Nigmatulin, R. C. Block, Science, 2002, 295, 1868–1873.

[9] D. Lohse, B. Schmitz, M. Versluis, Nature, 2001, 413, 477–478.