Adrien Thurotte

Dec 17, 2019
 

Thanks to technical improvements over the last years, machines outcompete humans in a number of specialized tasks: Whereas it can take a person a very long time to calculate the square root of a (non-square) number, a computer finishes this calculation to high precision within a fraction of a second. However, there are areas in which machines cannot compete with nature yet. One of them is olfaction: Currently, no device is available that could replace police dogs, with their ability to detect trace amounts of molecules. Similarly, farmers sometimes even train pigs to search for truffles hidden in the soil. Of course, the ability to detect relevant molecules in low amounts offers an enormous advantage and has thus been subject to extensive optimization by evolution.
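To make the contrast concrete, here is a small illustrative sketch (not part of the original article) of how a computer can approximate a square root in a handful of arithmetic steps, using the classic Newton (Heron) iteration; the function name and iteration count are arbitrary choices for the example.

```python
# Newton's (Heron's) iteration for the square root of n:
# repeatedly average the current guess x with n / x.
def newton_sqrt(n: float, iterations: int = 8) -> float:
    x = n if n > 1 else 1.0          # crude starting guess
    for _ in range(iterations):
        x = 0.5 * (x + n / x)        # one step roughly doubles the number of correct digits
    return x

print(newton_sqrt(2.0))              # ~1.4142135623730951, computed in microseconds
```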

L’odorat, Honoré Daumier (circa 1839, public domain – wikimedia: https://upload.wikimedia.org/wikipedia/commons/7/7d/Brooklyn_Museum_-_L%27Odorat_-_Honor%C3%A9_Daumier.jpg)

How exactly olfaction works in higher organisms remained unknown for a long time. Nonetheless, it was intuitively clear that there must be specific receptors interacting with the corresponding odours. This simple assumption has a remarkable consequence: Since mammals can distinguish a large number of odours, there must also be a large number of different receptors encoded in the genome. Indeed, the two scientists Linda Buck and Richard Axel discovered a comparatively large family of genes encoding odorant receptors [1]. For this discovery, they were awarded the Nobel Prize in Physiology or Medicine in 2004. The activation of these receptors on the cell surface always results in similar intracellular reactions. If a cell carried receptors for different odour molecules on its surface, it could therefore not distinguish these odours. In accordance with this consideration, it turned out that each olfactory cell carries only one type of the many odorant receptors encoded in its genome. Why exactly this is the case is still not known in detail. Even more surprisingly, it turned out that the axons of cells carrying the same type of odorant receptor on their surface end on the same set of target cells.

An odour can of course consist of several kinds of molecules. The activation of different combinations of olfactory sensory neurons further increases the number of distinguishable odours. A phenomenon seemingly similar to the exclusive expression of a single odorant receptor by an olfactory sensory neuron is the generation of only one type of antigen receptor by immune cells. Immune cells achieve this by a complicated recombination of genes, a mechanism that is clearly not observed in olfactory neurons.

Investigating how a biological structure develops is often very helpful: In a later work, Linda Buck was able to show that, in contrast to mature olfactory neurons, immature neurons contain mRNAs for multiple different odorant receptors [2]. Why cells of our body can have entirely different morphologies and properties even though they all carry a copy of the same genome is a fundamental question which keeps many biologists busy. It is the differential expression of genes in a cell that causes these differences. This gives muscle cells the ability to contract and enables neurons to generate action potentials.

However, all olfactory neurons express a very similar pattern of genes except for their odorant receptor. One of the reasons why different genes are transcribed into different amounts of RNA is the spatial arrangement of the DNA in the nucleus. If it were not tightly packed into the nucleus, the DNA in each cell would stretch to a total length of 1.8 m, and highly condensed sections of DNA are usually not accessible for transcription into RNA. Stavros Lomvardas, a former member of the group of Richard Axel, was able to show that DNA segments encoding odorant receptors on different chromosomes are pulled close together into a small spatial region of the nucleus. Interactions between the different DNA segments encoding odorant receptors could contribute to the exclusive transcription of one specific odorant receptor gene [3,4].

The relevance of the spatial arrangement of the DNA within the nucleus for gene expression is an open question of major interest beyond olfaction. To what degree there is a specific nuclear arrangement of DNA, and how it is re-established after cell division, are further questions that bear on other unsolved problems in biology.

— Tobias Ruff

References

  • [1] Buck, L. and Axel, R., A novel multigene family may encode odorant receptors: a molecular basis for odor recognition. Cell 1991, 65(1), 175–187. DOI: 10.1016/0092-8674(91)90418-x
  • [2] Hanchate, N. K., Kondoh, K., Lu, Z., Kuang, D., Ye, X., Qiu, X., Pachter, L., Trapnell, C. and Buck, L. B., Science 2015, 350(6265), 1251–1255.
  • [3] Clowney, E. J., LeGros, M. A., Mosley, C. P., Clowney, F. G., Markenskoff-Papadimitriou, E. C., Myllys, M., Barnea, G., Larabell, C. A. and Lomvardas, S., Cell 2012, 151(4), 724–737.
  • [4] Markenscoff-Papadimitriou, E., Allen, W. E., Colquitt, B. M., Goh, T., Murphy, K. K., Monahan, K., Mosley, C. P., Ahituv, N. and Lomvardas, S., Cell 2014, 159(3), 543–557.

Sep 18, 2019
 

Alex Steffen[1] makes enterprises future-proof. He is an expert in business strategy and innovation. He is also a no. 1 best-selling author and speaker. His mission is to empower 150,000 business leaders to future-proof their enterprises with ease by 2025. How? Alex turns business leaders into entrepreneurs. Alex Steffen was named Management Thought Leader 2019 by Change X, and his book “Die Orbit Organisation” was nominated for the getAbstract International Book Award. His keynotes “The Atlas of Innovation” and “Unstoppable Human” are international hits. Learn more about Alex at https://alextsteffen.com.

[1] info@alextsteffen.com

Alex Steffen

JUnQ: What is digital citizenship? Should there be a basic education in responsible handling of digital tools in (early) schools?

Alex T. Steffen: Let’s pick a narrow definition. I understand digital citizenship as a human’s ability to be a more rounded part of society thanks to information technology. The truth is: technology often simply emphasizes the existing design.

Digital schooling isn’t better schooling as long as schools fail to teach us the central skill required in the modern world: thinking for ourselves. In my opinion, that’s what the society and workplace of the future need. We’re trying to stitch digital onto an outdated paradigm, which tells us that memorizing facts is fundamental to a successful career. And then we’re surprised to find that machines take away jobs.

The truth: a rounded human, well-equipped to play his or her part in society, combines a unique blend of complex skills. Uniqueness is an advantage, not a disadvantage. I see micro degrees, potent mentoring, and real exposure to the world as essential ingredients of education towards digital citizenship. We don’t need any more homogenous machine workers. The new standard for humans and businesses is hyper-customization. A smart country isn’t simply a country that has rolled out digital citizen services.

A smart country is a society where its citizens can create a career and life on their own terms using highly customizable (education) resources. That will make them uniquely trained and attractive according to their strengths and inclinations. Look around, the top talents are already living this very design. Now it’s our responsibility to take it from niche to commonplace.

JUnQ: What are the general problems and dangers that arise with (global) digitalization and what are possible solutions?

Alex T. Steffen: This calls for an exploration of the new relationship between digital processes and human habits. Let’s first crush a myth: our problem isn’t the technology disrupting our lives. Humans will create what’s possible. They always have. The problem lies in our own reluctance to reconsider what we see as “normal”, “customary” and “acceptable”. Our problem is: we think that most of what we look at is permanent when, in fact, the world is in constant change.

We underestimate our need for validation and our inability to accept outside perspectives. Those are the real causes of resistance. I am convinced that if we could measure the real damage of business as usual, it would vastly outweigh the so-called threats of digitization. I would like to see an approach where anything new is met with a cool-headed evaluation. Reactive resistance to change based on individual discomfort stands in the way of realizing beneficial trends.

These trends often end up as part of our lives anyway, built by others who were open-minded in the first place. And, equally important, a lack of engagement with trends prevents us from making them safe and aligned with our values. I suggest training leaders in emotional intelligence and in staying curious. As soft as this sounds to our logical minds, it’s the vastly underestimated skill that nourishes our ability to be competitive. Innovation starts with the very subject in question: rethinking (innovating) the way we train our leaders, so that change can be embraced.

JUnQ: Data processing, communication, and research have become impossible without digital tools, especially in the field of technology and science. A regression has become unthinkable. Are there limitations to further digital progress?

Alex T. Steffen: Every society comfortable enough to explore this philosophical question faces a dilemma between two seemingly exclusive ideas.

Idea 1: we’ve arrived at the pinnacle of innovation. Further innovation seems unthinkable or unethical. Further innovation causes more harm than good.

Idea 2: awe-inspiring science fiction scenarios that look completely absurd but encapsulate even more human optimization potential.

The two ideas are not exclusive. Rather, they lie on opposite poles of a scale. I’m always curious where a person or society sits on that scale. In other words, how much of each idea do they express? My take is that we often ignore the bigger picture. History can provide data for a more realistic standpoint, namely that humans will continue innovating indefinitely. This is because with new capabilities come ever new desires. These trigger our ingenuity anew.

This begs the question: will we be able to find a healthy balance between a paralyzing public debate about the implications of change on the one hand, and co-creating the inevitable changes so that they end up in favor of future generations on the other? Let’s look at an example: In Sweden the question of female equality at work has largely been resolved for a few years now. “We focus on doing rather than talking,” an executive at Volvo shared with me. In Germany, after years of debate, this is still a hot topic.

JUnQ: What will the future digital workplace look like?

Alex T. Steffen: I love this question and yet I’ll keep my answer deliberately vague. Nobody can predict the future with 100% accuracy. I sincerely hope that for most people the future workplace will be driven by vitality, intuition, and self-actualization. This will mean better health and quality of life for the individual as well as higher competitiveness for business. [1]

JUnQ: In Germany, digitalization appears to proceed more slowly than in other industrialized countries. What are the possible obstacles and how can we close this gap?

Alex T. Steffen: All innovation starts in the mind. History is full of examples where German ingenuity put us in pole position, only to be halted by doubt and cumbersome processes. We wake up and find ourselves late in the game. No question, the intention is good. But after some time of business as usual, further resistance to creative destruction creates more harm than good. In 2019 the German car giant Volkswagen came out with its car for the future. Unfortunately, the car is not an exponential innovation at all. It’s a traditional car with an electric motor. Major improvements still require a garage.

Tesla Motors, on the other hand, has shown us what a disruption of the automotive industry really looks like. Tesla has built a digital platform on which major improvements are performed over the internet via digital upgrades. The result: the need for a garage drops drastically. So does the dependency on a complex web of stakeholders, turning Tesla Motors into the more flexible player. This example shows that Germany’s industry still loves its traditions. They feel safe. Planning and due diligence are our fetish. But safe does not make our designs future-proof. The key competitive edge for the future is flexibility. Sooner or later we need to start killing our legacy darlings and commit to real change.

JUnQ: How important do you consider 5G in general?

Alex T. Steffen: Humans have great difficulty perceiving change that is happening right now. Change is always seen through the lens of the past. For example, the first movies were recorded in the style of stage plays. Only after some time did directors develop the unique movie style we know today. I see 5G as an essential building block of the future, both for business and for private life. The debate about the why is holding up the potential to work on the how.

JUnQ: What could be the next big step in digitalization after smart devices, AI and augmented reality?

Alex T. Steffen: I heard a fascinating statement the other day: in the last two years we have undergone more change than in the previous ten. The discomfort of uncertainty makes us ask questions like this. Just like a cigarette drag, they are dangerous quick fixes that ignore the root problem: anxiety. We cannot trust any so-called futurists, because nobody actually knows the future. Many experts’ predictions have been dramatic errors costing businesses large sums of money. Other predictions have never reached the mainstream, leaving everyone unprepared. Instead I suggest we all take on a calm and confident attitude towards the future:

1. Being optimistic. Not all of the future is great but there’s more good than bad.

2. Embracing uncertainty. Accepting the fact that for the rest of our lives we’ll be newbies.

3. Building our very own ability to separate what’s important from the noise, based on concrete data points. Then deciding for ourselves without taking dangerous shortcuts. To help with this I recommend three books: “The Inevitable” by Kevin Kelly, “Factfulness” by Hans Rosling, and “The Rise of The Creative Class” by Richard Florida.[3-5]

JUnQ: The data flood keeps growing, and the interconnections seem to become more impenetrable with every new discovery. How applicable is “fail fast, fail often” to digital learning processes in terms of time and resources?

Alex T. Steffen: In the late 1800s, as economic activity grew, people were debating solutions for the drastic increase of horse dung in the streets. It was becoming a huge issue, with no solution in sight. The advent of the combustion engine solved that pressing issue within one decade. As humans evolve, they design capabilities for pressing challenges. These days we’re addressing the issues caused by the combustion engine and other contributors to global heating.

In the same fashion, we’ll come up with technology that can manage and interpret existing and new data for our needs. Because of the increase of speed and complexity, prototyping in a fail fast, fail often fashion as we know it from startups remains highly relevant in my view.

JUnQ: Can you give future leaders a piece of advice to take along?

Alex T. Steffen: There’s only one, but it means everything: embrace discomfort. In order to go further we often need to tolerate some discomfort. A trampoline requires a downward strain in order to gain the force that can shoot a person up in the air. Without the down there’s no up. In most cases the internal resistance is much greater than the external struggle. In other words: it’s easier than we think. If we have a good reason to act we’ll do it. So here’s mine: if we want to leave a better world for our kids, we have to get better at embracing change.

JUnQ: Inspiring words, thank you very much for the interview, Mr. Steffen!

— Tatjana Daenzer


You can find some perspectives on how to design a future-proof workplace in Alex’s book “Die Orbit-Organisation” and on Alex’s blog (http://www.alextsteffen.com/blog).[1,2]

Read more:

[1] A.M. Schüller, A.T. Steffen, Die Orbit-Organisation, 2019, Gabal
[2] http://www.alextsteffen.com/blog.
[3] K. Kelly, The Inevitable, 2017, Penguin Books
[4] H. Rosling, O. Rosling, et al., Factfulness, 2018, Sceptre
[5] R. Florida, The Rise of The Creative Class, 2014, Basic Books
Sep 18, 2019
 

Haydn Belfield [1] is a Research Associate and Academic Project Manager at the University of Cambridge’s Centre for the Study of Existential Risk. He is also an Associate Fellow at the Leverhulme Centre for the Future of Intelligence. He works on the international security applications of emerging technologies, especially artificial intelligence. He has a background in policy and politics, including as a Senior Parliamentary Researcher to a British Shadow Cabinet Minister, as a Policy Associate to the University of Oxford’s Global Priorities Project, and a degree in Philosophy, Politics and Economics from Oriel College, University of Oxford.
[1] hb492@cam.ac.uk

Haydn Belfield

Artificial intelligence (AI) is beginning to change our world – for better and for worse. Like any other powerful and useful technology, it can be used both to help and to harm. We explored this in a major February 2018 report, The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation.[1] We co-authored this report with 26 international experts from academia and industry to assess how criminals, terrorists and rogue states could maliciously use AI over the next five years, and how these misuses might be prevented and mitigated. In this piece I will cover recent advances in artificial intelligence, some of the new threats these pose, and what can be done about it.


AI, according to Nilsson, “is that activity devoted to making machines intelligent, and intelligence is that quality that enables an entity to function appropriately and with foresight in its environment”.[2] It has been a field of study since at least Alan Turing in the 1940s, and perhaps since Ada Lovelace in the 1840s. Most of the interest in recent years has come from the subfield of ‘machine learning’, in which, instead of writing lots of explicit rules, one trains a system (or ‘model’) on data and the system ‘learns’ to carry out a particular task. Over the last few years there has been a notable increase in the capabilities of AI systems, and an increase in access to those capabilities.
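To make the “train on data instead of writing rules” idea concrete, here is a minimal illustrative sketch (not taken from the report) that fits a small neural network to scikit-learn’s bundled handwritten-digit images; the model and parameter choices are arbitrary assumptions for the example.

```python
# The model is never told what a digit looks like; it infers that from labelled examples.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)          # 8x8 pixel images of handwritten digits, with labels
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
model.fit(X_train, y_train)                  # "learning" = adjusting weights to fit the examples
print("accuracy on unseen digits:", model.score(X_test, y_test))
```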

The increase in AI capabilities is often dated from 2012’s seminal AlexNet paper.[3] This system achieved a big jump in capabilities on an image recognition task. This task has now been so comprehensively beaten that it has become a benchmark for new systems – “this method achieves state-of-the-art performance in less time, or at a lower cost”. Advances in natural language processing (NLP) have led to systems capable of advanced translation, comprehension and analysis of text and audio – and indeed the creation of synthetic text (OpenAI’s GPT-2) and audio (Google’s Duplex). Generative Adversarial Networks (GANs) are capable of creating incredibly convincing synthetic images and videos. The UK company DeepMind achieved fame within the AI field with its systems capable of beating Atari games from the 1980s such as Pong. But it broke into the popular imagination with its AlphaGo system’s defeat of Lee Sedol at Go. AlphaZero, a successor program, was also superhuman at chess and shogi. AI systems have continued to match or surpass human performance at more games, and more complicated games: fast-paced, complex, ‘real-time strategy’ games such as Dota 2 and StarCraft II.

This increase has been driven by key conceptual breakthroughs, the application of lots of money and talented people, and an increase in computing power (or ‘compute’). For example, training AlphaGo Zero used 300,000 times as much compute as AlexNet.[4]

Access to AI systems has also increased. Most ML papers are freely and openly published by default on the online repository arXiv. Often the code or trained AI system can be freely downloaded from code-sharing platforms like GitHub, or built with open-source libraries like TensorFlow, which also tend to standardise programming methods. People new to the field can get up to speed through online courses on platforms such as Coursera, or the many tutorials available on YouTube. Instead of training their systems on their own computers, people can easily and cheaply train them on cloud computing providers such as Amazon Web Services or Microsoft Azure. Indeed, the computer chips best suited to machine learning (GPUs and TPUs) are so expensive that it normally makes more sense to use a cloud provider and rent only the time one needs. Overall then, it has become much easier, quicker and cheaper for someone to get up to speed and create a working system of their own.

These two processes have had many benefits: new scientific advances, better and cheaper goods and services, and access to advanced capabilities from around the world. However, they have also uncovered new vulnerabilities. One is the discovery of ‘adversarial examples’ – adjustments to input data so minor as to be imperceptible to humans, but that cause a system to misclassify an input.[5] For example, misclassifying a picture of a stop sign as a 45 mph speed limit sign.
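As a rough illustration of the underlying idea (a toy sketch, not the attacks discussed in the report), the snippet below trains a tiny logistic-regression classifier and then shifts every input a small step in the direction that increases its loss – the “fast gradient sign” trick. In real attacks on high-dimensional images such steps can be imperceptible; in this two-dimensional toy example the drop in accuracy is only meant to show the mechanism, and all data and parameters are made up.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# Toy data: two Gaussian blobs, labelled 0 and 1.
X = np.vstack([rng.normal(-1, 1, (200, 2)), rng.normal(+1, 1, (200, 2))])
y = np.array([0] * 200 + [1] * 200)

# Fit a logistic-regression classifier by plain gradient descent.
w, b = np.zeros(2), 0.0
for _ in range(2000):
    p = sigmoid(X @ w + b)
    w -= 0.1 * X.T @ (p - y) / len(y)
    b -= 0.1 * np.mean(p - y)

# Adversarial step: move each input along the sign of d(loss)/d(input).
epsilon = 1.0                                    # perturbation budget (arbitrary here)
grad_x = (sigmoid(X @ w + b) - y)[:, None] * w   # gradient of the loss w.r.t. each input
X_adv = X + epsilon * np.sign(grad_x)

acc_clean = np.mean((sigmoid(X @ w + b) > 0.5) == y)
acc_adv = np.mean((sigmoid(X_adv @ w + b) > 0.5) == y)
print(f"accuracy on clean inputs: {acc_clean:.2f}, on perturbed inputs: {acc_adv:.2f}")
```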

These vulnerabilities have prompted some important work on ‘AI safety’, that is, reducing the risk of accidents involving AI systems in the short term [6,7] and the long term.[8] Our report focussed, however, on AI security: reducing the risk of malicious use of AI by humans. We looked at the short term: systems either currently in use or likely to be in use within the next five years.

AI is a ‘dual-use’ technology – it can be used for good or ill. Indeed, it has been described as an ‘omni-use’ technology, as it can be used in so many settings. Across many different areas, however, common threat factors emerge. Existing threats are expanding, as automation allows a greater scale of attacks. The skill transfer and diffusion of capabilities described above will allow a wider range of people to carry out attacks that are currently the preserve of experts. Novel threats are emerging, using the superhuman performance and speed of AI systems, or attacking the unique vulnerabilities of AI systems. The character of threats is being altered as attacks become more customised to particular targets, and the distance between target and attacker makes attacks harder to attribute.

These common factors will affect security in different ways – we split them into three domains.

In ‘digital security’, for example, current ‘spear phishing’ emails are tailor-made for a particular victim. An attacker trawls through all the information they can find on a target, and drafts a message aimed at that target. This process could be automated through the use of AI. An AI could trawl social media profiles for information, and draft tailored synthetic text. Attacks shift from being handcrafted to mass-produced.

In ‘physical security’, for example, civilian drones are likely to be repurposed for attacks. The Venezuelan regime claims to have been targeted in a drone assassination attempt. Even if, as is most likely, this is propaganda, it gives an indication of threats to come. The failure of British police for several days to deal with a remote-controlled drone over Gatwick airport does not bode well.

In ‘political security’ or ‘epistemic security’, the concern is both that in repressive societies governments are using advanced data analytics to better surveil their populations and profile dissidents; and that in democratic societies polities are being polarised and manipulated through synthetic media and targeted political advertising.

We made several recommendations for policy-makers, technical researchers and engineers, company executives, and a wide range of other stakeholders. Since we published the report, it has received global media coverage and was welcomed by experts in different domains, such as AI policy, cybersecurity, and machine learning. We have subsequently consulted several governments, companies and civil society groups on the recommendations of this report. It was featured in the House of Lords Select Committee on AI’s report.[9] We have run a workshop series on epistemic security with the Alan Turing Institute. The topic has received a great deal of coverage, due in part to the Cambridge Analytica scandal and Zuckerberg’s testimony to Congress. The Association for Computing Machinery (ACM) has called for impact assessment in the peer review process. OpenAI decided not to publish the full details of their GPT-2 system due to concerns about synthetic media. On physical security, the topic of lethal autonomous weapons systems has burst into the mainstream with the controversy around Google’s Project Maven.

Despite these promising developments, there is still a lot more to be done to research and develop policy around the malicious use of artificial intelligence, so that we can reap the benefits and avoid the misuse of this transformative technology. The technology is developing rapidly, and malicious actors are quickly adapting it to malicious ends. There is no time to wait.

Read more:

[1] Brundage, M., Avin, S., et al. (2018). The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation, arXiv:1802.07228.
[2] Nilsson, N. J. (2009). The quest for artificial intelligence. Cambridge University Press.
[3] Krizhevsky, A., Sutskever, I., Hinton, G. E. (2012). Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems (pp. 1097-1105).
[4] Amodei, D. Hernandez, D. (2018). AI and Compute. OpenAI: https://blog.openai.com/ai-and-compute/.
[5] Karpathy, A. (2015) Breaking Convnets. http://karpathy.github.io/2015/03/30/breaking-convnets.
[6] Amodei, D., Olah, C. et al. (2016) Concrete Problems in AI Safety.
[7] Leike, J. et al. (2017) AI Safety Gridworlds. DeepMind.
[8] Bostrom, N. (2014) Superintelligence. Oxford University Press.
[9] House of Lords Select Committee on Artificial Intelligence (2018). AI in the UK: ready, willing and able? Report of Session 2017–19, HL Paper 100.
Sep 18, 2019
 

Anton Bogomolov[1] is a data scientist with a PhD in physics, currently working in the IoT sector. He is passionate about artificial intelligence and has ten years of experience in automated data analysis and machine learning.

[1] abogomolov86@gmail.com

Anton Bogomolov

JUnQ: Everlasting technological progress aims to fulfill many human needs: most of them physical, informational, and commercial. In particular, robots were created to perform tasks that were too dangerous for humans or that humans could not or did not want to do. But what do we need intelligent machines for, and what is implied by “Artificial Intelligence” (AI)?

Anton Bogomolov: The answer has already been given – we need AI to make our lives simpler, i.e. to take over some of the routine work that humans have to do. Generally, we are heading towards automation, and in the ideal case we want to automate everything, every kind of work. So far, the processes we are capable of automating have been prioritized.

Now, what is understood by the term “AI”? Over the course of this interview we will go deeper into the discussion, so let’s start with a fairly broad definition: AI is something that is able to accomplish certain tasks with the help of self-learning.

JUnQ: Does it imply that AI is not meant to create anything, like art or music?

Anton Bogomolov: There are a number of definitions of AI. Indeed, the term “intelligence” implies that it can do creative work as well. It is not a simple calculator. You don’t just tell it what you want it to calculate, and then it does exactly what has been asked. It does something more complicated and, thus, it also involves some learning experience. In this context, creative work does not necessarily mean being an artist or a musician, or a composer. A chatbot, as an example of an AI feature, is also a kind of creative work, because it is required to react appropriately or ask suitable questions, in other words to be engaged in a conversation as a human would be, i.e. to express creativity.

Generally, yes, AI can generate art. For example, “Deep Dream”[1] was popular a few years back. This algorithm uses AI to give uploaded images a dream-like appearance. Another one is “neural style transfer”,[2] which allows one to compose an image in the style of another image. Should one ever want to paint like Van Gogh or Picasso, this can easily be done using this algorithm. There is also AI-composed music already creeping into the background of games, film, and media. With AI it is now possible to create music in different genres just at the push of a button.

JUnQ: In the news or podcasts, the term “machine learning” often seems to come together with AI. What is, simply put, machine learning and how does it relate to AI?

Anton Bogomolov: As I mentioned before, there are many definitions of AI. In simple words, AI is a broader term than machine learning (ML), i.e. AI includes ML. Being a sort of advanced algorithm, AI achieves specific goals by means of ML; at the same time it is able to adapt to its environment, just like humans. ML is also an algorithm, but a simpler one, with one key feature – the ability to learn (hence the name). It is not meant to achieve a global goal; its goal is to eventually enable programs to automatically improve through experience, without the programmer having to change the code. ML relies on data sets that one needs to input first. It then examines and analyses the data to find common patterns, so that eventually it becomes possible to make experience-driven predictions or decisions.
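A tiny sketch (my own illustration, not the interviewee’s, with made-up numbers) of such an experience-driven prediction: a model is fitted to example input–outcome pairs and then asked about an input it has never seen.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Made-up "experience": hours studied vs. exam score.
hours = np.array([[1], [2], [3], [4], [5], [6]])
score = np.array([52, 57, 64, 70, 78, 85])

model = LinearRegression().fit(hours, score)   # the pattern is learned from data, not hand-coded
print(model.predict([[4.5]]))                  # prediction for an unseen input
```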

JUnQ: So does that mean that AI does not exist without ML?

Anton Bogomolov: Right. Machine learning is a subset of AI, more like a tool to achieve AI. One example might be the first chatbots from the 90s. They had hardcoded “intelligence”, i.e. hardcoded answers to possible questions. If such a bot saw certain keywords, it output the correspondingly relevant answers. These did not have machine learning. But their intelligence was doubtful, since the algorithm did not adapt. And as we discussed previously, the key asset of AI is the ability to adapt.

JUnQ: While we are on this topic, how can one tell the difference between an AI system and a more “conventional” program?

Anton Bogomolov: There are “intelligence” tests for AI, the most renowned of which is the Turing test.[3] But this is more a test of whether or not a system is capable of thinking like a human being. However, no AI technology today has passed the Turing test, i.e. none has shown itself to be convincingly intelligent and able to think. So, this is the main goal of this branch of AI – we want to create a machine that will be indistinguishable from a human, in particular one that will be self-aware and act somewhat mindfully. In the end, such a machine will be able to pass the Turing test. Once again, so far, such machines do not exist. Self-awareness turned out to be tough to realize.

Now, back to what was asked. I believe, no one is interested in differentiating AI from a mindless linear algorithm. Because as long as the desired goal is achieved no one cares what type of algorithm was used for it.

JUnQ: AI is no longer a futuristic concept, as some may naively think. Can you name some examples where AI is already being used? Are there any AI applications used in the everyday life of ordinary people?

Anton Bogomolov: The most straightforward example is our smartphones. The more recent ones can recognize the owner’s face. This is known to use neural networks. Also, smartphones have the Google Assistant. Spoken inquiries are transferred to a server where neural-network-based algorithms convert them to text, which is then processed to deliver the relevant information. These are the simplest examples. We all watch YouTube, where, based on one’s watch history, the system suggests what else one might be interested in. These AI-based recommendation engines now seem to know us to an uncanny degree.

If we now go further from everyday life, I would say AI is used pretty much in every field. In finance – there are already automatic trading robots. Some use AI for analysing financial markets to generate profitable trading strategies or make market predictions.

Autonomous driving has become very popular recently. There are even toys for children that make use of a variety of AI and ML technologies, including voice and image recognition, to identify the child and other people around based on their voices and appearance. All of this is owing to the computational power we currently have, which has advanced greatly in recent years.

AI has found its application in medicine as well. As AI has demonstrated remarkable progress in image-recognition tasks, it is now widely used in medical radiology and computed tomography. One example is that there are neural networks trained to analyze tumours and to do it as well as the top-class specialists in the field. Just as radiologists are trained to identify abnormalities based on changes in imaging intensities or the appearance of unusual patterns, AI can automatically find these features, and many others, based on its experience from previous radiographic images, coupled with data on clinical outcomes. This also yields a more quantitative outcome, whereas radiologists perform only a qualitative assessment.[4]

JUnQ: As AI develops further is it going to make human jobs obsolete? And what will people be doing if there is nothing else to do?

Anton Bogomolov: Ideally, this is what we aim for – to have everything automated. But this can be achieved, in my opinion, only when so-called artificial general intelligence is realized. This will be a machine capable of experiencing consciousness and thinking autonomously, and it will thus be able to accomplish any intellectual task that a human being can.

What will happen to humans after all? There is a concept of universal basic income. The idea is that the robot replacing you is working on your behalf and you are given an income sufficient to meet basic needs, with zero conditions on that income. Because in the end the job is being done and the resources are being produced while you are free for other pursuits.

There has been a lot of research interest in this regard. Back in the 60s, a researcher named John Bumpass Calhoun reported on an experiment with mice, also known as “Universe 25”. The researchers provided the mice with unlimited resources, such as water and food. Besides, they eliminated the dangers otherwise coming from nature, like predators, climate, etc. Thus, the mice were said to live in a “mouse utopia”. At first, the population grew, but shortly after it peaked it started to exhibit a variety of abnormal, often destructive behaviours. After some time, the mice became too lazy to reproduce and the population was on its way to extinction. There is, of course, controversy over the implications of the experiment, but it can be perceived as one possible scenario for the future.

JUnQ: What about the programming jobs? And scientists?

Anton Bogomolov: Well, first we automate what we can – so far, the simplest work. AI is now partly replacing the jobs of translators and customer service workers. Next in line are self-driving cars that will automate the entire transportation industry – bus and taxi drivers and so on. But programming jobs are of a different kind; they are creative. Programs that develop other programs exist already, but they are rather limited in what they can do.

Eventually, all jobs will be replaced. Programming jobs will be among the last ones, though, just like other creative jobs, including those of scientists.

One day we will have a super-intelligent machine that develops further programs similar to itself at less expense and much faster than when supervised by humans. At some point we might not be able to follow its advances anymore, and this is where the term “technological singularity” comes in. This is believed to occur when AI starts discovering new science at an enormous rate, continually learning and evolving on top of it, beyond human control.

JUnQ: Is the “singularity” inevitable?

Anton Bogomolov: There is an everlasting argument about whether it is possible at all to realize a self-aware AI that acts mindfully, much like a human. Depending on whether the answer is “yes” or “no”, there will or will not be a technological singularity. It could just as well occur for other reasons; it is simply that, among them, AI is the most likely to bring us to the technological singularity.

On the other hand, it is not proven that such an AI – one able to run autonomously and replace all of us – can ever be created. In that case, there will be no AI-induced singularity.

So, this is now a really hot topic in the community.

JUnQ: Does this mean that self-awareness is a prerequisite for a possible singularity to occur, and that we have not yet passed the point of no return?

Anton Bogomolov: Right. The algorithms that exist now and are known to beat the world-class champions in chess and Go are harmless. They are just trained extraordinarily well on one particular subject, to achieve a well-defined goal. They are not able to think outside of the box, like “what else is there that I could do?”.

Once we create a machine that is able to think this way, to exhibit human-level consciousness, it is expected to bring us to the singularity, because it will be able to operate and develop without any supervision. All existing AI technologies do develop themselves, but only to a certain degree; they do not have this freedom yet.

JUnQ: Speaking about self-awareness. For example, Sophia – the social humanoid robot developed by Hanson Robotics – realizes itself (herself) as being a programmed female robot. Does it mean that she is self-aware? How did they manage to program “her” self-realization?

Anton Bogomolov: As far as I understand, she is programmed to answer this way. If a question comes up about what she thinks she is, her answer will be in accordance with what has been built into her program. Most likely she was trained on thousands of real dialogues among people about their self-awareness. Like other AI systems, she also uses machine learning which, if you feed it with enough data, enables her to learn how to answer and how to behave as people would.

Sophia communicates very well on a topic known in advance, because in this case she can be trained beforehand: they provide her with enough information about a given topic. Then she is able to have a sensible conversation, because she has the statistics on what is typically answered when. Nevertheless, it is not as simple as: you say X, she replies Y. Thanks to machine learning, what she says is the result of rather complicated non-linear connections.

I have not had a chance to speak with her personally, but I think she is certainly not self-aware. Otherwise, the singularity would be just around the corner by now. If she had human-level consciousness, there would be nothing that she would need people for. She would be able to reprogram herself to increase her memory. In just a few days she would reach the level of intelligence of all the people on Earth. In a few more days we would not be able to comprehend what level of intelligence she had – again, exponential progress.

So, there is nothing we should worry about. She is still just a robot – more about illusion than intelligence. The shocking effect is also due to the fact that she looks like a human, has emotions and facial expressions. This unique combination of her features might make us a bit alert. And for sure Sophia is a great representation of all the advances of AI technology.

In fact, to be able to realize human-level AI we essentially need to model a human brain. The human brain contains around 10^11 neurons. Functional artificial neural networks, on the other hand, have on the order of tens of millions of neurons. This difference of four orders of magnitude is sizeable. Moreover, it also takes quite some time to train a system with a large number of neurons. At the end of the day, we do not yet have the capacity to realize a human-level AI.

JUnQ: In case something goes wrong, will we be able to “unplug” the machine? Do autonomous AI systems exist yet?

Anton Bogomolov: Autonomous systems do exist. Think of a toy dog, like the toys we have discussed already, or a robot vacuum cleaner; they are programmed to recharge when needed. These are completely autonomous as long as the power source is available. The military surely has some as well. I can imagine an armed flying drone that is self-recharging.

But the existing autonomous AI systems are not a threat to humans. Despite having all the advantages of machine learning, they follow a defined program to accomplish a specific task. Such a system can be the best at recognizing people’s faces, shooting targets or avoiding bullets. But it is still a mindless machine that we can destroy, fool, or at least hide all the power stations from.

As long as any of these do not have human-level intelligence, as long as they are not smarter than us, they should not be considered as a potential threat.

JUnQ: So reaching human-level intelligence would be the point from which AI could potentially live without us.

Anton Bogomolov: Correct. There is an opinion that biological life is just a means to create electronic life. In other words, some believe that it is our mission to give birth to an electronic conscious creature surpassing our capacity, one that will develop much faster than humans. In some sense, it is similar to the early times of our planet. Life on Earth began relatively early. But the first living creatures – unicellular organisms – progressed very slowly, until multicellular organisms appeared, which boosted progress tremendously. And progress always seems to be exponential. Thus, the idea of this theory is that we create something to keep up with this exponential progress. And if we look at it globally, on the scale of the Universe, should it ever happen that AI takes over the world, it would make sense, because AI would go on exploring the Universe much faster than we would. Thus, from the point of view of global progress, it would be more advantageous.

JUnQ: Now, when you put it this way the technological singularity does not sound so frustrating anymore. Are you optimistic overall? Will we make it to the end of the 21st century?

Anton Bogomolov: To me, it feels great to witness the progress and to be a part of it. But we will see how it goes. We live within a self-organized system, where everything has a direction to move in. Even though humans are all independent creatures, we still obey the same laws of synergy: we self-organize as well, we cluster to form cities, etc. And surely we also have something to move towards; thus we develop and evolve. So, this progress is quite natural.

In fact, some experts expect the technological singularity to occur as early as the 21st century. But it is not trivial to give a correct estimate. On the other hand, not related to AI, there is research going on in the field of so-called negligible senescence. The idea is that by engineering the reversal of all the major molecular and cellular changes that occur with age, we would be able to constantly rejuvenate ourselves. The researchers believe that negligible aging for humans will be achieved in this century. There even exists the provocative opinion that the first human beings who will live to 1,000 years old are already alive.[5]

At the end of the day, there has been tremendous progress in many fields, not only AI. Along with AI, we may succeed in developing other technologies, which will help us prolong our own lives as well as human life in general.

JUnQ: Thank you very much for the interview!

— Mariia Filianina

Read more:

[1] http://deepdreamgenerator.com
[2] L.A. Gatys, A.S. Ecker and M. Bethge, arXiv:1508.06576 (2015).
[3] https://en.wikipedia.org/wiki/Turing_test
[4] A. Hosny, C. Parmar, J. Quackenbush, L.H. Schwartz and H.J.W.L. Aerts Nature Reviews Cancer 18, 500 (2018).
[5] https://www.ted.com/speakers/aubrey_de_grey

Sep 10, 2019
 

28.11.2019 – Die Chemie des Katers (The Chemistry of the Hangover)

The next seminar will be given by Klaus Roth. It will revolve around hangover chemistry: what ethanol and its reaction products do to our bodies (talk in German).

Die Chemie des Katers

The condition of the afflicted is alarming: nausea, vomiting, trembling limbs, sweating, deathly pallor, a pounding head, and circulatory weakness. Instead of compassion, the eyes of their loved ones flash only with schadenfreude: “Was the 12th beer bad?”, “Serves you right, you just couldn’t get enough down your throat.” How can a molecule as small as ethanol cause so much human suffering? Let us explore the chemical consequences of a boozy, merry evening.

Sep 10, 2019
 

Curious things happen around us all the time – and sometimes we are so familiar with them that we do not even notice them anymore.

If you read the title, you might now think that this article is about the Leidenfrost effect [1], that is, the funny little dance water droplets perform on a hot surface such as a frying pan. It is not, though. The Leidenfrost effect occurs when a material – usually a liquid – meets a surface far above its boiling point. A thin layer at the droplet’s surface then evaporates rapidly, creating a protective gas coating that effectively insulates the droplet and lets it last longer on the hot surface. Similar effects can also be seen with liquid nitrogen on a material at room temperature. These droplets appear to travel around due to the ejected gases. But does a similar phenomenon also occur without the necessity of a hot surface?

There is in fact a location where such an effect occurs regularly without us usually noticing: the bathroom. Under certain conditions, water droplets can be seen moving on a surface of water as if they had hydrophobic properties. The easiest way to see them is in the shower, when the shower floor is already covered in a thin layer of water. If new water droplets now impact this surface at certain angles and speeds, they can be seen rushing around for a while before disappearing. It turns out that in recent years a few scientific publications have been dedicated to investigating this effect more closely. [2,3] With a high-speed camera, the bouncing effect can be visualized rather easily, as shown in Fig. 1: The droplet appears to cause a dent in the water surface and then bounces off without merging with the rest of the liquid. Of course, the first idea that comes to mind now is the Leidenfrost effect, where a similar behavior is caused by a layer of vapor. However, here no high temperatures are involved and thus the generation of water vapor is negligible.

Figure 2: A schematic depiction of the residence time phenomenon. On impact, a thin layer of gas (air) is compressed on the surface, providing protection from immediate coalescence. Eventually, however, the air escapes and the lower periphery of the droplet merges with the rest of the liquid. The surface tension can then rapidly squeeze the edges of the droplet together, causing the upper half of the droplet to be cut off from the rest. It can then repeat the bouncing process if the conditions are right. Reproduced from [4].

The first intuition of an air coating protecting the water droplet still stands, though, and thus the scientists tried to model the behavior. It turns out that there is indeed a protective coating of air, which can get compressed when the droplet approaches the surface of the liquid underneath. The air simply cannot escape quickly enough; it therefore protects the droplet on impact and pushes it away from the water surface. This phenomenon gives rise to what is called the residence time of a droplet, that is, the time a droplet can sit on top of a pool of the same liquid before coalescing (see Fig. 2). The theory was confirmed by lowering the ambient air pressure around the experiment, which caused the residence time to decrease. [4] However, one would expect that this thin layer of gas should not withstand the heavy impact of a droplet coming from, e.g., the shower head with a lot of speed and thus kinetic energy.

An explanation can be found using a simple speaker membrane: When the droplets are put in contact with an oscillating surface, like water on an oscillating speaker, the bouncing is facilitated, and the droplets can remain intact for much longer. Moreover, the droplets now travel around just like they do in a shower! High-speed camera footage reveals the reason for this change in behavior: The surface of the water pool, excited into periodic up-and-down movement patterns, gently catches the droplet if the surface is moving downwards at the moment of impact and therefore prevents the impact from destroying the protective gas layer. It is just like gently catching a water balloon with your hand by grabbing it in motion and then slowing it down. Additionally, the continuous movement of the surface seems to stabilize the gas layer and therefore massively increases the residence time, all while allowing the droplet to travel from minimum to minimum, thus creating the “walking water” effect. [6] In a shower, the impact of many, many droplets causes the surface of the water pool on the ground to oscillate in a similar manner, creating landing spots for some droplets that then move around the surface. The phenomenon can thus be explained by the residence time of a droplet together with an oscillating surface.

Finally, one can reproduce a similar behavior in space, where microgravity does not pull the droplets down. An air bubble inside a water bubble can thus act as an isolated system in which droplets can form and move… excited by the sound of a cello! If you are curious, please check out the beautiful footage in Ref. [6], which provided much of the inspiration for this article.

As stated initially, the most curious things happen around us and we simply have to notice them.

— Kai Litzius

References:

[1] https://www.engineersedge.com/physics/leidenfrost_effect_13089.htm

[2] Y. Couder et al., From Bouncing to Floating: Noncoalescence of Drops on a Fluid Bath, Phys. Rev. Lett. 94, 177801 (2005).

[3] J. Molácek & J. W. M. Bush, Drops bouncing on a vibrating bath, J. Fluid Mech. 727, 582-611 (2013).

[4] I. Klyuzhin et al., Persisting Water Droplets on Water Surfaces, J. Phys. Chem. B 114, 14020-14027 (2010).

[5] https://upload.wikimedia.org/wikipedia/commons/1/1d/Bouncing_droplets.gif

[6] https://www.youtube.com/watch?v=KJDEsAy9RyM (Water bubble in space at time index 8:18).
 

Sep 4, 2019
 

Superstitions are having a hard time in our modern, ever-progressing world. It is no longer easy to fool someone with a myth or a beautiful legend from childhood. But how about this one: have you ever heard that a thunderstorm can curdle milk?

A correlation between thunderstorms and the souring or curdling of milk has been observed for centuries. As early as 1685, a first clue was written down in the book “The Paradoxal Discourses of F. M. Van Helmont: Concerning the Macrocosm and Microcosm, Or the Greater and Lesser World, and Their Union” [1]:

“Now that the Thunder hath its peculiar working, may be partly perceived from hence, that at the time when it thunders, Beer, Milk, &c. turn sower in the Cellars … the Thunder doth everywhere introduce corruption and putrefaction”.

By the beginning of the 20th century there had been numerous attempts to devise theories of a causal relationship. [2-7] None of them were plausible, and many even contradicted each other. Later, after refrigeration and pasteurization became widespread, curbing bacterial growth, interest in this phenomenon almost disappeared. While the most popular explanation remains that these observations are only a correlation, we would like to draw the reader’s attention to some of the suggested theories.

In order to understand what actually happens to milk during a thunderstorm, we would need to know (i) what processes are behind the souring of milk and (ii) what accompanies a thunderstorm, e.g. lightning. While the latter is not yet entirely clear to scientists, [8] we will cover a simplified picture of the first point in the next few paragraphs.


Figure 1: Schematic image of casein micelles covering fat globules within milk as a colloidal solution.

Fresh milk is a textbook example of a colloid – a solution consisting of fat and protein molecules, mainly casein, floating in a water-based fluid. [9] The structure of milk is schematically illustrated in Fig. 1. Fat globules are coated with protein and charged phospholipids. Such a formation protects the fat from being quickly digested by bacteria, which also exist in milk. Casein proteins under normal conditions are negatively charged and repel each other, so that these formations naturally distribute evenly through the liquid. Normally, milk is slightly acidic (pH ca. 6.4-6.8), [10] while being sweet at the same time due to lactose, one of the carbohydrates in milk. When the acidity increases to a pH lower than 4, the proteins denature and are no longer charged. Thus, they bind to each other, or coagulate, into the clumps known as curds. The watery liquid that remains is called whey.

The acidity of milk is determined by bacteria which produce lactic acid. The acid lowers the pH of the milk so that the proteins can clump together. The bacteria living in milk naturally produce lactic acid as they digest lactose, so that they can grow and reproduce. This occurs in raw milk as well as in pasteurized milk. Refrigerating milk slows the growth of bacteria. Conversely, warm milk helps the bacteria thrive and also increases the rate of the clumping reaction.

Now, we can think of a few things that may speed up the souring process. The first one could be ozone, which is formed during a thunderstorm. In one of the works it was shown that a sufficient amount of ozone is generated at such times to coagulate milk by direct oxidation and a consequent production of lactic acid. [2] However, if this were the case, a similar effect would occur for sterilized milk. The corresponding studies were carried out by A. L. Treadwell, who reported that, indeed, the action of oxygen, or of oxygen and ozone, coagulated milk faster [2]. But the effect was not observed if the milk had been sterilized. The conclusion drawn from this study was that the souring was produced by unusually rapid growth of bacteria in an oxygen-rich environment.

In the meantime, a number of other investigations suggested that a rapid souring of milk was most likely due to the atmosphere, which is well known to become sultry or hot just prior to a thunderstorm. This warm condition of the air is very favourable for the development of lactic acid in the milk. [3, 4] Thus, these studies were also in favour of thunderstorms affecting the bacteria.

A fundamentally different explanation was tested by, e.g., A. Chizhevsky in Ref. [5]. It was suggested that electric fields with particular characteristics produced during thunderstorms could stimulate the souring process. To check this hypothesis, the coagulation of milk was studied under the influence of electric discharges of different strengths. Importantly, in these experiments the electric pulses were kept short to eliminate any thermal phenomena. Eventually, the coagulation observed for certain parameter ranges was explained by a breakdown of the protein–colloid system in milk under the influence of the electric field.

Other experiments investigating the effect of electricity on the coagulation process in milk turned out to be astonishing. [6] When an electric current was passed directly through milk in a container, in all the test variations the level of acidity rose less quickly in the ‘electrified’ milk samples than in the ‘control’ sample – which contradicted all the previous reports.

To conclude, while there is no established theory explaining why milk turns sour during thunderstorms, we cannot disregard the numerous reports of this curious phenomenon. [7] What scientists definitely know is that milk goes sour because of bacteria, bacilli acidi lactici, which produce lactic acid. These bacteria are fairly inactive at low temperatures, which is why a fridge is very convenient for milk lovers. When the temperature rises, however, the bacteria multiply ever faster, until at about 50 °C it becomes too hot for them to survive. Thus, in pre-refrigerator days, when reports of this phenomenon were most common, milk would often go off within a short time in the anomalous conditions of thundery weather. Whatever the exact mechanism, increased bacterial activity or a breakdown of the protein colloid system, the result is the same: curdled milk.

Should you ever witness this phenomenon yourself, do not be sad right away: try adding a bit of brown sugar to your fresh milk curds…

— Mariia Filianina

Read more:

[1] F. M. van Helmont, “The Paradoxal Discourses of F. M. Van Helmont, Concerning the Macrocosm And Microcosm, Or The Greater and Lesser World, And their Union”, set down in writing by J.B. and now published, London, 1685.

[2] A. L. Treadwell, “The Souring of Milk During Thunder-Storms”, Science Vol. XVIII, No. 425, 178 (1891).

[3] “Lightning and Milk”, Scientific American 13, 40, 315 (1858). doi:10.1038/scientificamerican06121858-315

[4] H. McClure, “Thunder and Sour Milk.” British Medical Journal vol. 2, 651 (1890).

[5] V. V. Fedynskii (Ed.), “The Earth in the Universe” (orig. “Zemlya vo vselennoi”), Moscow 1964; translated from Russian by the Israel Program for Scientific Translations, 1968.

[6] W. G. Duffield and J. A. Murray, “Milk and Electrical Discharges”, Journal of the Röntgen Society 10(38), 9 (1914). doi:10.1459/jrs.194.0004

[7] “Influence of Thunderstorms on Milk”, The Creamery and Milk Plant Monthly 11, 40 (1922).

[8] K. Litzius, “How does a lightning bolt find its target?” Journal of Unsolved Questions 9(2) (2019).

[9] R. Jost (Ed.), “Milk and Dairy Products.” In Ullmann’s Encyclopedia of Industrial Chemistry (2007). doi: 10.1002/14356007.a16_589.pub3

[10] https://en.wikipedia.org/wiki/Milk

May 22, 2019
 

Once, thunderstorms with their thunder and lightning were interpreted as signs of the gods’ wrath; nowadays, we are taught the mechanics behind a thunderstorm in school. You are probably already thinking of ice crystals that are smashed together by strong winds inside clouds, creating static charges in the process. How does a lightning bolt, though, find its way from the cloud to the ground? This question still keeps scientists awake at night, and there is still no clear answer to how exactly the formation and movement of a lightning bolt work. This Question of the Month gives a brief summary of how a lightning bolt selects its target.

Lightning [1,2] always occurs when a large thunderstorm cloud with strong winds builds up enough electrostatic charge that it must discharge towards the ground. The discharge itself happens (simplified) in a two-step process, consisting of a preflash and a main lightning bolt: The preflash travels as a comparatively weak (but still dangerous!) current downwards from the cloud. It usually does so in little jumps, which have been investigated with high-speed cameras. These show that the current path is apparently selected step by step: the tip pauses at a given position and then randomly picks the next point to jump to. This random selection appears to happen within a sphere of a few tens of meters in diameter around the tip of the growing lightning bolt. The process also grows many tendrils with individual tips and thus covers a large area (see also Fig. 1). In this way the lightning bolt eventually “feels” its way down until it reaches the ground, either directly or via a structure connected to it.

Figure 1: Lightning bolts branching off into many tendrils. [3]
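
The stepping behaviour described above can be caricatured in a few lines of code. The following sketch is a toy model, not a physical simulation: the leader tip jumps in steps of a few tens of meters in a random, downward-biased direction and attaches to a hypothetical grounded rod (its position and height are invented here) as soon as the rod comes within the search sphere around the tip.

import math
import random

STEP_LENGTH = 30.0       # assumed jump length in meters ("a few tens of meters")
SEARCH_RADIUS = 30.0     # assumed radius within which a conductor is "felt"
CLOUD_HEIGHT = 2000.0    # assumed starting height of the leader tip in meters
ROD_X, ROD_HEIGHT = 40.0, 20.0   # hypothetical lightning rod (horizontal offset, height)

def simulate_leader(seed=None):
    rng = random.Random(seed)
    x, z = 0.0, CLOUD_HEIGHT
    path = [(x, z)]
    while z > 0.0:
        # Attach to the rod if its tip lies inside the search sphere.
        if math.hypot(x - ROD_X, z - ROD_HEIGHT) <= SEARCH_RADIUS:
            path.append((ROD_X, ROD_HEIGHT))
            return path, "rod"
        # Otherwise take a random step within a downward-pointing cone.
        angle = rng.uniform(-math.pi / 3, math.pi / 3)
        x += STEP_LENGTH * math.sin(angle)
        z -= STEP_LENGTH * math.cos(angle)
        path.append((x, max(z, 0.0)))
    return path, "ground"

path, target = simulate_leader(seed=1)
print(f"leader attached to the {target} after {len(path)} jumps")

Depending on the random seed, the toy leader either wanders straight to the ground or happens to pass close enough to the rod to attach there, which is all the picture above really says.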

Therefore, if a conductive object reaches into such a sphere, the bolt will immediately jump to it and use it as a low-resistance shortcut to the ground, thereby shortening the path of the discharge wherever possible. This behaviour leads to the curious effect of exclusion areas around structures protected by lightning rods, in which practically no ground strike occurs and a person will not be hit directly. Unfortunately, this does not completely protect the person, as the current spreading through the ground can still be dangerous.
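
The size of such an exclusion area can be estimated with the same search-sphere picture, known in lightning-protection engineering as the rolling-sphere method: a strike can only attach where a sphere of radius R (the final jump distance) can touch. For a single vertical rod of height h no larger than R, elementary geometry gives a protected ground radius of sqrt(h(2R - h)). The sphere radius of 30 m used below is an assumed value in the range of a few tens of meters mentioned above.

import math

def protected_radius(rod_height_m, sphere_radius_m=30.0):
    """Ground-level radius shielded by a single rod in the rolling-sphere picture."""
    h = min(rod_height_m, sphere_radius_m)   # beyond h = R this simple formula saturates
    return math.sqrt(h * (2.0 * sphere_radius_m - h))

for h in (5, 10, 20):
    print(f"{h:>2} m rod -> protected ground radius ≈ {protected_radius(h):.0f} m")

With these assumed numbers, a 10 m rod shields a circle of a little over 20 m radius on the ground, matching the intuition that the exclusion area is comparable to, but somewhat larger than, the rod height.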

Once the preflash has found a path to the ground, the second phase starts, and the majority of the charge flows with currents of up to 20 000 A along the path established by the preflash. This is also the part of the discharge that is visible to the naked eye. It can consist of several distinct discharges that each follow the path of ionized air left by the previous one, creating the characteristic flickering of a lightning bolt.

How the entire process from preflash to main discharge works is still not completely understood today, and many of the insights presented here were gathered phenomenologically by camera imaging. In addition, there are many more types of lightning and related effects, which are relevant for our understanding of a variety of weather phenomena. All in all, thunderstorms are still something magical today, even if only figuratively.

— Kai Litzius

Further reading:

[1] http://stormhighway.com/cgdesc.php#part1

[2] https://what-if.xkcd.com/16/

[3] https://commons.wikimedia.org/wiki/File:Lightning_over_Oradea_Romania_2.jpg

[4] Chem. Unserer Zeit, 2019, 53. DOI: 10.1002/ciuz.201980045

Mar 05, 2019
 

Genetic information is encoded in deoxyribonucleic acid (DNA). In the form of a long double-helix molecule located in living cells, it governs most of an organism’s traits. Specifically, the information in genes is used to form functional gene products such as proteins. This process of gene expression is used by all known forms of life on Earth to generate the macromolecular machinery of life. It thus constitutes the fundamental level at which the genotype causes the phenotype, i.e. the composite of an organism’s observable characteristics. Genomic modification is a powerful tool to amend those characteristics. Reproductive and environmentally caused changes to the DNA are a substrate for evolution. In nature, such changes happen and may cause favourable or unfavourable changes to the phenotype, which respectively improve or reduce the cell’s or organism’s ability to survive and reproduce.

In the first half of the 20th century, several methods to alter the genetic structure of cells were discovered, including exposure to heat, X-rays, UV light, and chemicals [1-4]. A significant number of the crops cultivated today were developed using these methods of traditional mutagenesis, an example of which is Durum wheat, the most prevalent wheat for pasta production. With traditional mutagenesis, thousands of mutations are introduced at random into the DNA of the plant. A subsequent screening identifies and separates cells with favourable mutations in their DNA, followed by attempts to remove or reduce possible unfavourable mutations by further mutagenesis or cross-breeding.

As these methods are usually unspecific and complex, researchers have developed site-directed gene-editing techniques, the most successful of which is the so-called CRISPR/Cas9 method (clustered regularly interspaced short palindromic repeats). This method borrows from how bacteria defend themselves against viral invasion [6]. When the bacterium detects invading virus DNA, it forms two strands of RNA (single-helix molecules), one of which contains a sequence that matches that of the invading virus DNA and is hence called guide RNA. These two RNAs form a complex with a Cas9 protein, which, as a nuclease enzyme, can cleave DNA. When the guide RNA finds its target in the viral genome, the RNA-Cas9 complex locks onto a short sequence known as the PAM (protospacer adjacent motif), and Cas9 unzips the adjacent viral DNA so that the guide RNA can pair with it. Cas9 then cleaves the viral DNA, forcing the cell to repair it [6]. As this repair process is error-prone, it may lead to mutations that disable certain genes and thereby change the phenotype. In 2012 and 2013 it was discovered that the guide RNA can be modified so that the system targets a chosen site [5], and that, with a modified enzyme, it works not only in bacteria and archaea but also in eukaryotes (plants and animals) [7].

Figure 1: CRISPR/Cas9 working principle. [8]
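
How the guide sequence and the PAM requirement together determine where Cas9 can cut can be illustrated with a short sketch. It is a deliberately simplified toy, assuming an exact match of a 20-nucleotide guide followed by an NGG PAM on a single strand; the sequences are invented, and real guide design additionally checks the complementary strand, tolerated mismatches, and off-target sites.

def find_cas9_sites(dna, guide, pam_tail="GG"):
    """Return start positions where `guide` is followed by an N-G-G PAM."""
    dna, guide = dna.upper(), guide.upper()
    hits = []
    for i in range(len(dna) - len(guide) - 2):
        protospacer = dna[i : i + len(guide)]
        pam = dna[i + len(guide) : i + len(guide) + 3]
        if protospacer == guide and pam[1:] == pam_tail:   # PAM = NGG
            hits.append(i)
    return hits

# Invented 20-nt guide and target sequence, for illustration only.
target = "TTACGATCCGAGCTATTGCAACGTAGGATCCTGGAAC"
guide = "GATCCGAGCTATTGCAACGT"
print(find_cas9_sites(target, guide))   # -> [4], the start of a matching protospacer with an NGG PAM

Against this invented target the function reports a single candidate site; change one letter in the guide or remove the GG of the PAM, and the hit disappears.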

Research published since then has demonstrated the method’s potential for RNA-programmable genome editing. Modifications can be made such that, during the repair, an artificially designed DNA sequence pairs with the cleaved ends, recombines, and replaces the original sequence, introducing new genes into the genome [11,12]. The advantages of this technique over traditional gene-editing methods are manifold. It can act in a highly targeted, i.e. site- and therefore gene-specific, manner in any known form of life. It is comparatively inexpensive, simple enough to be conducted in basic labs, effective, and fast in both preparation and realisation. The production of multiplex genetically modified mice, for instance, was reduced from up to two years to a few weeks [9], as CRISPR/Cas9 has the unique advantage over earlier genome-editing methods that multiplexable targeting is easily achieved by co-expressing Cas9 with multiple single-guide RNAs simultaneously. Consequently, within a few years of its discovery, it became the routine procedure for genome modification of virtually all model plants and animals.

The availability of such a method evokes medical and botanical development interests. A plethora of possible medical applications are being discussed and researched, among them healing cancer or treating genetic disorders. For cancer research, it is conceivable to induce a multitude of deliberate mutations to artificially create cells similar to cancerous cells, study the resulting modifications, and thus learn to inhibit their reproduction or the original mutation. The current clinical research focus is on blood diseases and diseases related to haematopoietic cells, such as leukaemia, HBV, HIV, or haemophilia [13,14]. This is because, for the treatment of these diseases, the cells (blood cells or bone marrow) can be extracted from the body in an established way, their genome can be edited in vitro by the CRISPR/Cas9 method, and the cells can finally be reintroduced into the body. The advantage of the extraction is that no additional vector (an agent to help find the right cells in vivo) is required, and the genomic modification can be controlled ex vivo. While the editing efficiency with CRISPR/Cas9 can be extremely high, the resulting cell population will be inherently heterogeneous, both in the percentage of cells that were edited and in the specific genotype of the edited cells. Potentially problematic for in vivo applications is the bacterial origin of the endonuclease Cas9: a large portion of humans show humoral and cell-mediated immune responses to the Cas9 protein complex [10], most likely because of prior infections with related bacteria.

Although clinical applications of CRISPR/Cas9 attract a lot of media attention, agricultural applications draw even more commercial interest. The prospect here is the faster, cheaper and more targeted development of crops than by traditional methods of mutagenesis, which are far more aggressive in comparison. The main aim is unchanged, though: to improve plants with regard to yield, resistance to diseases or vermin, and resilience to aridity, heat, cold, humidity, or acidity [15,16]. CRISPR/Cas9 is therefore considered an important method to improve agricultural food production in order to feed the earth’s ever-growing human population.

Regulation of plants modified in this way varies largely between countries. While Canada considers such plants equivalent to non-genetically-modified ones if no transgene was inserted, the USA assesses CRISPR plants on a case-by-case basis, gauging whether the modification could also have arisen by natural mutation. In this way, they chose not to regulate mushrooms that do not turn brown and maize with an altered starch content. Last year, the European Court of Justice classified all CRISPR/Cas9-modified plants as genetically modified organisms, reasoning that the risks of such a novel method are unknown compared with traditional mutagenesis as an established method of plant breeding.

Instigated by genome editing in human embryonic cells in 2015 [18], a group of scientists called for a moratorium to discuss the possible risks and impact of the wide usage of the CRISPR/Cas9 technology, especially when it comes to mutations in humans [19]. At the 2015 International Summit on Human Gene Editing, leading international scientists considered the scientific and societal implications of genome editing. The discussed issues spanned clinical, agricultural and environmental applications, with most attention focused on human germline editing, owing to the potential of this application to eradicate genetic diseases and, ultimately, to alter the course of evolution. Some scientists advise banning CRISPR/Cas9-based human genome-editing research for the foreseeable future, whereas others favour rapid progress in developing it [20]. A line of argument of supporters of the latter viewpoint is that the majority of ethical concerns are effectively based on methodical uncertainties of the CRISPR/Cas9 method at its current state, which can be overcome only with extensive research. These methodical uncertainties include possible cleavage at undesired sites of the DNA, or insertion of wrong sequences at the cleavage site, resulting in the disabling of the wrong genes or even the creation of new genetic diseases.

Whilst a total ban is considered impractical because of the widespread accessibility and ease of use of this technology [21], the summit statement says that “It would be irresponsible to proceed with any clinical use of germline editing unless and until (i) the relevant safety and efficacy issues have been resolved . . . and (ii) there is broad societal consensus about the appropriateness of the proposed application.” The moral concerns about embryonic or germline treatment are based on the fact that CRISPR/Cas9 would not only allow the elimination of genetic diseases but also enable genetic human enhancement, from simple tweaks like eye colour or non-balding to severe modifications relating to bone density, muscular strength, or sensory and mental capabilities.

Although most scientists echo the summit statement, in 2018 a biochemist claimed to have created the first genetically edited human babies, twin sisters. After in vitro fertilization, he targeted a gene that codes for a protein that one HIV variant uses to enter cells, conferring a kind of HIV immunity, which is a very rare trait among humans [22]. His conduct was harshly criticised in the scientific community, widely condemned, and, after enormous public pressure, further such work was forbidden by the responsible regulatory offices.

Ultimately, the CRISPR/Cas9 technology is a paramount example of the real-world societal implications of basic research and demonstrates researchers’ responsibilities. This also raises the question whether basic ethical schooling should be part of every researcher’s education.

— Alexander Kronenberg

Read more:

[1] K. M. Gleason (2017) “Hermann Joseph Muller’s Study of X-rays as a Mutagen”

[2] Muller, H. J. (1927). Science. 66 (1699): 84–87.

[3] Stadler, L. J.; G. F. Sprague (1936). Proc. Natl. Acad. Sci. U.S.A. US Department of Agriculture and Missouri Agricultural Experiment Station. 22 (10): 572–8.

[4] Auerbach, C.; Robson, J.M.; Carr, J.G. (March 1947). Science. 105 (2723): 243–7.

[5] M. Jinek, K. Chylinski, I. Fonfara, M. Hauer, J. A. Doudna, E. Charpentier. Science, 337, 2012, p. 816–821.

[6] R. Sorek, V. Kunin, P. Hugenholtz. Nature Reviews Microbiology. 6, 3, (2008), p. 181–186.

[7] Cong, L., et al., (2013). Science. 339 (6121) p. 819–823.

[8] https://commons.wikimedia.org/wiki/File:GRNA-Cas9.png

[9] H. Wang, et al., Cell 153, 4, (2013), p. 910–918.

[10] D. L. Wagner, et al., Nature medicine. (2018).

[11] O. Shalem, N. E. Sanjana, F. Zhang; Nature Reviews Genetics 16, 5, (2015), p. 299–311.

[12] T. R. Sampson, D. S. Weiss; BioEssays 36, 1, (2014), p. 34–38.

[13] G. Lin, K. Zhang, J. Li; International journal of molecular sciences 16, 11, (2015), p. 26077–26086.