Issues

Sep 18 2019
 

Alex Steffen[1] makes enterprises future-proof. He is an expert in business strategy and innovation, a no. 1 best-selling author, and a speaker. His mission is to empower 150,000 business leaders to future-proof their enterprises with ease by 2025. How? Alex turns business leaders into entrepreneurs. Alex Steffen was named Management Thought Leader 2019 by Change X, and his book “Die Orbit Organisation” was nominated for the getAbstract International Book Award. His keynotes “The Atlas of Innovation” and “Unstoppable Human” are international hits. Learn more about Alex at https://alextsteffen.com.

[1] info@alextsteffen.com

Alex Steffen

JUnQ: What is digital citizenship? Should there be a basic education in responsible handling of digital tools in (early) schools?

Alex T. Steffen: Let’s pick a narrow definition. I understand digital citizenship as a human’s ability to be a more rounded part of society thanks to information technology. The truth is: technology often simply emphasizes the existing design.

Digital schooling isn’t better schooling as long as schools fail to teach us the central skill required in the modern world: thinking for ourselves. In my opinion, that’s what the society and workplace of the future need. We’re trying to stitch digital onto an outdated paradigm, which tells us that memorizing facts is fundamental to a successful career. And then we’re surprised to find that machines take away jobs.

The truth: a rounded human, well-equipped to play his or her part in society, combines a unique blend of complex skills. Uniqueness is an advantage, not a disadvantage. I see micro-degrees, potent mentoring, and real exposure to the world as essential ingredients of education towards digital citizenship. We don’t need any more homogenous machine workers. The new standard for humans and businesses is hyper-customization. A smart country isn’t merely a country that has advanced to digital citizen services.

A smart country is a society where its citizens can create a career and life on their own terms using highly customizable (education) resources. That will make them uniquely trained and attractive according to their strengths and inclinations. Look around, the top talents are already living this very design. Now it’s our responsibility to take it from niche to commonplace.

JUnQ: What are the general problems and dangers that arise with (global) digitalization and what are possible solutions?

Alex T. Steffen: This begs an exploration of the new relationship between digital processes and human habits. Let’s first crush a myth: our problem isn’t the technology disrupting our lives. Humans will create what’s possible. They always have. The problem lies in our own reluctance to reconsider what we see as “normal”, “customary” and “acceptable”. Our problem is: we think that most of what we look at is permanent when, in fact, the world is in constant change.

We underestimate our need for validation and our inability to accept outside perspectives. Those are the real causes of resistance. I am convinced that if we could measure the real damage of business as usual, it would vastly outweigh the so-called threats of digitization. I would like to see an approach where anything new is met with a cool-headed evaluation. Reactive resistance to change based on individual discomfort stands in the way of realizing beneficial trends.

These trends often end up as part of our lives anyway, built by others who were open-minded in the first place. And, equally important, a lack of engagement with trends prevents us from making them safe and aligned with our values. I suggest training leaders in emotional intelligence and in staying curious. As soft as this sounds to our logical minds, it is the vastly underestimated skill that nourishes our ability to be competitive. Innovation starts with the very subject in question: rethinking (innovating) the way we train our leaders, so that change can be embraced.

JUnQ: Data processing, communication, and research have become impossible without digital tools, especially in the field of technology and science. A regression has become unthinkable. Are there limitations to further digital progress?

Alex T. Steffen: Every society comfortable enough to explore this philosophical question faces a dilemma between two seemingly exclusive ideas.

Idea 1: we’ve arrived at the pinnacle of innovation. Further innovation seems unthinkable or unethical. Further innovation causes more harm than good.

Idea 2: awe-inspiring science fiction scenarios that look completely absurd but encapsulate even more human optimization potential.

The two ideas are not exclusive. Rather, they lie on opposite poles of a scale. I’m always curious where a person or society sits on that scale – in other words, how much of each idea they express. My take is that we often ignore the bigger picture. History can provide data for a more realistic standpoint, namely that humans will continue innovating indefinitely. That is because with new capabilities come ever new desires, and these trigger our ingenuity anew.

This begs the question: will we be able to find a healthy balance between a paralyzing public debate about the implications of change on the one hand, and co-creating the inevitable changes so that they end up in favor of future generations on the other? Let’s look at an example: in Sweden, the question of female equality at work has largely been resolved for some years now. “We focus on doing rather than talking,” an executive at Volvo shared with me. In Germany, after years of debate, this is still a hot topic.

JUnQ: What will the future digital workplace look like?

Alex T. Steffen: I love this question and yet I’ll keep my answer deliberately vague. Nobody can predict the future with 100% accuracy. I sincerely hope that for most people the future workplace will be driven by vitality, intuition, and self-actualization. This will mean better health and quality of life for the individual as well as higher competitiveness for business. [1]

JUnQ: In Germany, digitalization appears to proceed more slowly than in other industrialized countries. What are the possible obstacles and how can we overcome this gap?

Alex T. Steffen: All innovation starts in the mind. History is full of examples where German ingenuity put us in pole position, only to be halted by doubt and cumbersome processes. We wake up and find ourselves late in the game. No question, the intentions are good. But after some time of business as usual, further resistance to creative destruction creates more harm than good. In 2019 the German car giant Volkswagen came out with its car for the future. Unfortunately, the car is not an exponential innovation at all. It’s a traditional car with an electric engine. Major improvements still require a garage.

Tesla Motors, on the other hand, has shown us what a disruption of the automotive industry really looks like. Tesla has built a digital platform on which major improvements are performed over the internet via digital upgrades. The result: the need for a garage drops drastically. So does the dependency on a complex web of stakeholders, turning Tesla Motors into the more flexible player. This example shows that Germany’s industry still loves its traditions. They are safe. Planning and due diligence are our fetish. But safe does not make our designs future-proof. The key competitive edge for the future is flexibility. Sooner or later we need to start killing our legacy darlings and commit to real change.

JUnQ: How important do you consider 5G in general?

Alex T. Steffen: Humans have great difficulty perceiving change that is happening right now. Change is always seen from the understanding of the past. For example, the first movies were recorded in the style of stage plays. Only after some time did directors develop the unique movie style we know today. I see 5G as an essential building block of the future, both for business and for private use. The debate about the why is holding up the potential to work on the how.

JUnQ: What could be the next big step in digitalization after smart devices, AI and augmented reality?

Alex T. Steffen: I heard a fascinating statement the other day: in the last two years we have undergone more change than in the previous ten. The discomfort of uncertainty makes us ask questions like this. Just like a cigarette drag, they are dangerous quick fixes that ignore the root problem: anxiety. We cannot trust any so-called futurists, because nobody actually knows the future. Many experts’ predictions have been dramatic errors costing businesses large sums of money. Other predictions have never reached the mainstream, leaving everyone unprepared. Instead, I suggest we all take on a calm and confident attitude towards the future:

1. Being optimistic. Not all of the future is great but there’s more good than bad.

2. Embracing uncertainty. Accepting the fact that for the rest of our lives we’ll be newbies.

3. Building our very own ability to separate what’s important from the noise, based on concrete data points. Then decide for ourselves without taking dangerous shortcuts. To help with this I recommend three books: “The Inevitable” by Kevin Kelly, “Factfulness” by Hans Rosling, and “The Rise of The Creative Class” by Richard Florida.[3-5]

JUnQ: The data flood is growing ever more, and its interconnections seem to become more impenetrable with every new discovery. How applicable is “fail fast, fail often” for digital learning processes in terms of time and resources?

Alex T. Steffen: In the late 1800s, as economic activity grew, people were debating solutions for the drastic increase of horse dung in the streets. It was becoming a huge issue, and no solution was in sight. The advent of the combustion engine solved that pressing issue within a decade. As humans evolve, they design capabilities for pressing challenges. These days we’re addressing the issues caused by the combustion engine and other contributors to global heating.

In the same fashion, we’ll come up with technology that can manage and interpret existing and new data for our needs. Because of the increase in speed and complexity, prototyping in a fail-fast, fail-often fashion, as we know it from startups, remains highly relevant in my view.

JUnQ: Can you give future leaders a piece of advice to take along?

Alex T. Steffen: There’s only one, but it means everything: embrace discomfort. In order to go further we often need to tolerate some discomfort. A trampoline requires a downward strain in order to gain the force that can shoot a person up in the air. Without the down there’s no up. In most cases the internal resistance is much greater than the external struggle. In other words: it’s easier than we think. If we have a good reason to act we’ll do it. So here’s mine: if we want to leave a better world for our kids, we have to get better at embracing change.

JUnQ: Inspiring words, thank you very much for the interview, Mr. Steffen!

— Tatjana Daenzer


You can find some perspectives on how to design a future-proof workplace in Alex’s book “Die Orbit-Organisation” and on his blog (http://www.alextsteffen.com/blog).[1,2]

Read more:

[1] A.M. Schüller, A.T. Steffen, Die Orbit-Organisation, 2019, Gabal
[2] http://www.alextsteffen.com/blog.
[3] K. Kelly, The Inevitable, 2017, Penguin Books
[4] H. Rosling, O. Rosling, et al., Factfulness, 2018, Sceptre
[5] R. Florida, The Rise of The Creative Class, 2014, Basic Books
Sep 18 2019
 

Haydn Belfield [1] is a Research Associate and Academic Project Manager at the University of Cambridge’s Centre for the Study of Existential Risk. He is also an Associate Fellow at the Leverhulme Centre for the Future of Intelligence. He works on the international security applications of emerging technologies, especially artificial intelligence. He has a background in policy and politics, including as a Senior Parliamentary Researcher to a British Shadow Cabinet Minister, as a Policy Associate to the University of Oxford’s Global Priorities Project, and a degree in Philosophy, Politics and Economics from Oriel College, University of Oxford.
[1] hb492@cam.ac.uk

Haydn Belfield

Artificial intelligence (AI) is beginning to change our world – for better and for worse. Like any other powerful and useful technology, it can be used both to help and to harm. We explored this in a major February 2018 report, The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation.[1] We co-authored this report with 26 international experts from academia and industry to assess how criminals, terrorists and rogue states could maliciously use AI over the next five years, and how these misuses might be prevented and mitigated. In this piece I will cover recent advances in artificial intelligence, some of the new threats these pose, and what can be done about them.


AI, according to Nilsson, “is that activity devoted to making machines intelligent, and intelligence is that quality that enables an entity to function appropriately and with foresight in its environment”.[2] It has been a field of study since at least Alan Turing in the 1940s, and perhaps since Ada Lovelace in the 1840s. Most of the interest in recent years has come from the subfield of ‘machine learning’, in which instead of writing lots of explicit rules, one trains a system (or ‘model’) on data and the system ‘learns’ to carry out a particular task. Over the last few years there has been a notable increase in the capabilities of AI systems, and an increase in access to those capabilities.

The increase in AI capabilities is often dated from 2012’s seminal AlexNet paper.[3] This system achieved a big jump in capabilities on an image recognition task. This task has now been so comprehensively beaten that it has become a benchmark for new systems – “this method achieves the state of the art in less time, or at a lower cost”. Advances in natural language processing (NLP) have led to systems capable of advanced translation, comprehension and analysis of text and audio – and indeed the creation of synthetic text (OpenAI’s GPT-2) and audio (Google’s Duplex). Generative Adversarial Networks (GANs) are capable of creating incredibly convincing synthetic images and videos. The UK company DeepMind achieved fame within the AI field with their systems capable of beating Atari games from the 1980s such as Pong. But they broke into the popular imagination with their AlphaGo system’s defeat of Lee Sedol at Go. AlphaZero, a successor program, was also superhuman at chess and shogi. AI systems have continued to match or surpass human performance at more games, and more complicated games: fast-paced, complex, ‘real-time strategy’ games such as Dota 2 and StarCraft II.

This increase has been driven by key conceptual breakthroughs, the application of lots of money and talented people, and an increase in computing power (or ‘compute’). For example, training AlphaGo Zero used 300,000 times as much compute as AlexNet.[4]
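To get a feel for that factor, here is a back-of-the-envelope sketch using the roughly 3.4-month compute doubling time reported in the OpenAI analysis cited above [4]:

```python
# Back-of-the-envelope: how many compute doublings separate AlexNet (2012)
# from AlphaGo Zero (2017), and how long does that take at the ~3.4-month
# doubling time reported in OpenAI's "AI and Compute" analysis [4]?
import math

ratio = 300_000                       # AlphaGo Zero vs. AlexNet training compute [4]
doublings = math.log2(ratio)          # about 18.2 doublings
months = doublings * 3.4              # one doubling every ~3.4 months
print(f"{doublings:.1f} doublings ≈ {months / 12:.1f} years")  # ≈ 5.2 years
```

That figure is consistent with the roughly five calendar years separating the two systems.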

Access to AI systems has also increased. Most ML papers are freely and openly published by default on the online repository arXiv. Often the code or trained AI system can be freely downloaded from code-hosting platforms like GitHub, typically built on open-source frameworks like TensorFlow, which also tend to standardise programming methods. People new to the field can get up to speed through online courses on platforms such as Coursera, or the many tutorials available on YouTube. Instead of training their systems on their own computers, people can easily and cheaply train them on cloud computing providers such as Amazon Web Services or Microsoft Azure. Indeed, the computer chips best suited to machine learning (GPUs and TPUs) are so expensive that it normally makes more sense to use a cloud provider and only rent the time one needs. Overall then, it has become much easier, quicker and cheaper for someone to get up to speed and create a working system of their own.

These two processes have had many benefits: new scientific advances, better and cheaper goods and services, and access to advanced capabilities from around the world. However, they have also uncovered new vulnerabilities. One is the discovery of ‘adversarial examples’ – adjustments to input data so minor as to be imperceptible to humans, but that cause a system to misclassify an input – for example, misclassifying a picture of a stop sign as a 45 mph speed limit sign.[5]
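To make the idea tangible, here is a minimal sketch of the ‘fast gradient sign method’, one published textbook construction for such perturbations (not code from the report; `model`, `image` and `label` are placeholders for a trained classifier and its input):

```python
# A minimal sketch of the fast gradient sign method (FGSM) for crafting
# adversarial examples, using PyTorch.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.01):
    """Nudge every pixel by +/-epsilon in the direction that increases the
    classification loss: imperceptible to a human, yet often enough to
    flip the predicted class."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()                              # gradient of loss w.r.t. pixels
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()  # keep a valid pixel range
```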

These vulnerabilities have prompted some important work on ‘AI safety’, that is, reducing the risk of accidents involving AI systems in the short term [6,7] and the long term.[8] Our report focussed, however, on AI security: reducing the risk of malicious use of AI by humans. We looked at the short term: systems either currently in use or likely to be in use within the next five years.

AI is a ‘dual-use’ technology – it can be used for good or ill. Indeed, it has been described as an ‘omni-use’ technology, as it can be used in so many settings. Across many different areas, however, common threat factors emerge. Existing threats are expanding, as automation allows attacks at a greater scale. The skill transfer and diffusion of capabilities described above will allow a wider range of people to carry out attacks that are currently the preserve of experts. Novel threats are emerging, using the superhuman performance and speed of AI systems, or attacking the unique vulnerabilities of AI systems. The character of threats is being altered as attacks become more customised to particular targets, and as the distance between target and attacker makes attacks harder to attribute.

These common factors will affect security in different ways – we split them into three domains.

In ‘digital security’, for example, current ‘spear phishing’ emails are tailor-made for a particular victim. An attacker trawls through all the information they can find on a target, and drafts a message aimed at that target. This process could be automated through the use of AI. An AI could trawl social media profiles for information, and draft tailored synthetic text. Attacks shift from being handcrafted to mass-produced.

In ‘physical security’, for example, civilian drones are likely to be repurposed for attacks. The Venezuelan regime claims to have been targeted by a drone assassination. Even if, as is most likely, this is propaganda, it gives an indication of threats to come. The failure of British police for several days to deal with a remote-controlled drone over Gatwick airport does not bode well.

In ‘political security’ or ‘epistemic security’, the concern is both that in repressive societies governments are using advanced data analytics to better surveil their populations and profile dissidents; and that in democratic societies polities are being polarised and manipulated through synthetic media and targeted political advertising.

We made several recommendations for policy-makers, technical researchers and engineers, company executives, and a wide range of other stakeholders. Since we published the report, it has received global media coverage and was welcomed by experts in different domains, such as AI policy, cybersecurity, and machine learning. We have subsequently consulted several governments, companies and civil society groups on the recommendations of this report. It was featured in the report of the House of Lords Select Committee on AI.[9] We have run a workshop series on Epistemic Security with the Alan Turing Institute. The topic has received a great deal of coverage, due in part to the Cambridge Analytica scandal and Zuckerberg’s testimony to Congress. The Association for Computing Machinery (ACM) has called for impact assessment in the peer review process. OpenAI decided not to publish the full details of their GPT-2 system due to concerns about synthetic media. On physical security, the topic of Lethal Autonomous Weapons Systems has burst into the mainstream with the controversy around Google’s Project MAVEN.

Despite these promising developments, there is still a lot more to be done to research and develop policy around the malicious use of artificial intelligence, so that we can reap the benefits and avoid the misuse of this transformative technology. The technology is developing rapidly, and malicious actors are quickly adapting it to malicious ends. There is no time to wait.

Read more:

[1] Brundage, M., Avin, S., et al. (2018). The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation, arXiv:1802.07228.
[2] Nilsson, N. J. (2009). The quest for artificial intelligence. Cambridge University Press.
[3] Krizhevsky, A., Sutskever, I., Hinton, G. E. (2012). Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems (pp. 1097-1105).
[4] Amodei, D., Hernandez, D. (2018). AI and Compute. OpenAI: https://blog.openai.com/ai-and-compute/.
[5] Karpathy, A. (2015) Breaking Convnets. http://karpathy.github.io/2015/03/30/breaking-convnets.
[6] Amodei, D., Olah, C. et al. (2016). Concrete Problems in AI Safety. arXiv:1606.06565.
[7] Leike, J. et al. (2017) AI Safety Gridworlds. DeepMind.
[8] Bostrom, N. (2014) Superintelligence. Oxford University Press.
[9] House of Lords Select Committee on Artificial Intelligence (2018). AI in the UK: ready, willing and able? Report of Session 2017–19, HL Paper 100.
Sep 18 2019
 

Anton Bogomolov[1] is a data scientist with a PhD in physics, currently working in the IoT sector. He is passionate about artificial intelligence and has ten years of experience in automated data analysis and machine learning.

[1] abogomolov86@gmail.com

Anton Bogomolov

JUnQ: Everlasting technological progress aims to fulfill many human needs: most of them are physical, informational, and commercial. In particular, robots were created to perform tasks that were too dangerous for humans or that humans could not or did not want to do. But what do we need intelligent machines for, and what is implied by “Artificial Intelligence” (AI)?

Anton Bogomolov: The answer is already contained in the question – we need AI to make our life simpler, i.e. to simplify some of the routine work that humans have to do. Generally, we are heading towards automation, and in the ideal case we want to automate everything, every kind of work. So far, the processes we are capable of automating have been prioritized.

Now, what is understood by the term “AI”? Over the course of this interview we will go deeper into the discussion, so let’s start with a fairly broad definition: AI is something that is able to accomplish certain tasks with the help of self-learning.

JUnQ: Does it imply that AI is not meant to create anything, like art or music?

Anton Bogomolov: There are a number of definitions of AI. Indeed, the term “intelligence” implies that it can do creative work as well. It is not a simple calculator. You don’t just tell it what you want it to calculate, and then it does exactly what has been asked. It does something more complicated and, thus, it also involves some learning experience. In this context, creative work does not necessarily mean being an artist or a musician or a composer. A chatbot, as an example of an AI feature, also does a kind of creative work, because it is required to react appropriately or ask suitable questions – in other words, to be engaged in a conversation as a human would be, i.e. to express creativity.

Generally, yes, AI can generate art. For example, “Deep Dream”[1] was popular a few years back. This algorithm uses AI to generate a dream-like appearance for uploaded images. Another one is “neural style transfer”,[2] which allows one to compose an image in the style of another image. Should one ever want to paint like Van Gogh or Picasso, this can easily be done using this algorithm. There is also AI-composed music already creeping into the background of games, film, and media. With AI it is now possible to create music in different genres at the push of a button.

JUnQ: In the news or podcasts, the term “machine learning” often seems to come together with AI. What is, simply put, machine learning and how does it relate to AI?

Anton Bogomolov: As I mentioned before, there are many definitions of AI. In simple words, AI is a broader term than machine learning (ML), i.e. AI includes ML. Being a sort of advanced algorithm, AI achieves specific goals by means of ML; at the same time it is able to adapt to its environment, just like humans. ML is also an algorithm, but a simpler one, with one key feature – the ability to learn (thus the name). It is not meant to achieve a global goal; its goal is to eventually enable programs to improve automatically through experience, without the programmer having to change the code. ML relies on working with data sets that one needs to input first. It then examines and analyses the data to find common patterns, so that eventually it becomes possible to make experience-driven predictions or decisions.
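As a concrete illustration of that definition, here is a minimal sketch (using the open-source scikit-learn library and its bundled iris data set, purely as an example) of a program that improves from data instead of hand-written rules:

```python
# A minimal sketch of "learning from experience" with scikit-learn:
# no hand-written rules, only labelled examples to generalize from.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)              # flower measurements and species
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = DecisionTreeClassifier().fit(X_train, y_train)   # the "experience"
print("accuracy on unseen flowers:", model.score(X_test, y_test))
```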

JUnQ: So does that mean that AI does not exist without ML?

Anton Bogomolov: Right. Machine learning is a subset of AI – more like a tool to achieve AI. One example might be the first chatbots from the 90s. They had hardcoded “intelligence”, i.e. hardcoded answers to possible questions. If such a bot saw certain keywords, it output the correspondingly relevant responses. These did not have machine learning, and their intelligence was doubtful, since the algorithm did not adapt. And as we discussed previously, the key asset of AI is the ability to adapt.
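That ‘hardcoded intelligence’ can be caricatured in a few lines – fixed keyword, fixed reply, no learning (a toy sketch, not any actual 90s bot):

```python
# A toy caricature of 90s-style "hardcoded intelligence":
# fixed keyword -> fixed reply, with no learning and no adaptation.
RULES = {
    "hello": "Hi there! How can I help?",
    "price": "Our product costs $10.",
    "bye": "Goodbye!",
}

def reply(message: str) -> str:
    for keyword, answer in RULES.items():
        if keyword in message.lower():
            return answer                      # first matching keyword wins
    return "Sorry, I don't understand."        # everything else falls through

print(reply("Hello, what is the price?"))      # -> "Hi there! How can I help?"
```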

JUnQ: While we are on this topic, how can one tell the difference between an AI system and a more “conventional” program?

Anton Bogomolov: There are “intelligence” tests for AI, among which the most renowned is the Turing test.[3] But this is more to test whether or not a system is capable of thinking like a human being. However, no AI technology today has passed the Turing test, i.e. none has shown itself to be convincingly intelligent and able to think. So, this is the main goal of this branch of AI – we want to create a machine that will be indistinguishable from a human; in particular, one that will be self-aware and act somewhat mindfully. In the end, such a machine will be able to pass the Turing test. Once again, so far, such machines do not exist. Self-awareness turned out to be tough to realize.

Now, back to what was asked. I believe no one is interested in differentiating AI from a mindless linear algorithm, because as long as the desired goal is achieved, no one cares what type of algorithm was used for it.

JUnQ: AI is no longer a futuristic concept, as some may naively think. Can you name some examples where AI is already being used? Are there any AI applications used in the everyday life of ordinary people?

Anton Bogomolov: The most straightforward example is our smartphones. The more recent ones can recognize the owner’s face, which is known to use neural networks. Also, smartphones have the Google Assistant. Spoken inquiries are transferred to a server, where neural network-based algorithms convert them to text, which is then processed to deliver the relevant information. These are the simplest examples. We all watch YouTube, where, based on one’s watch history, the system suggests what else one might be interested in. These AI-based recommendation engines now seem to know us to an uncanny degree.
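The logic behind such recommendation engines can be sketched in toy form – suggest whatever users with a similar watch history have watched (made-up data, not YouTube’s actual system):

```python
# A toy sketch of similarity-based recommendation: recommend the unwatched
# video that users with an overlapping watch history watched most.
import numpy as np

# Rows: users; columns: videos. 1 = watched, 0 = not watched (made-up data).
history = np.array([
    [1, 1, 0, 0],
    [1, 1, 1, 0],
    [0, 0, 1, 1],
])

def recommend_for(user: int) -> int:
    sims = history @ history[user]      # overlap of every user with this user
    scores = sims @ history             # videos weighted by similar users
    scores[history[user] == 1] = -1     # never re-recommend watched videos
    return int(np.argmax(scores))

print(recommend_for(0))  # -> 2: user 1 has similar taste and watched video 2
```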

If we go further from everyday life, I would say AI is used in pretty much every field. In finance there are already automatic trading robots. Some use AI for analysing financial markets to generate profitable trading strategies or make market predictions.

Autonomous driving has become very popular recently. There are even toys for children that make use of a variety of AI and ML technologies, including voice and image recognition, to identify the child and other people around based on their voices and appearance. All this is owing to the computational power we currently have, which has advanced greatly in recent years.

AI has found its application in medicine as well. As AI has demonstrated remarkable progress in image-recognition tasks, it is now widely used in medical radiology and computer tomography. One example is that there are neural networks trained to analyze tumours that do it as well as the top specialists in the field. Just as radiologists are trained to identify abnormalities based on changes in imaging intensities or the appearance of unusual patterns, AI can automatically find these features, and many others, based on its experience from previous radiographic images, coupled with data on clinical outcomes. This also yields a more quantitative outcome, whereas radiologists perform only a qualitative assessment.[4]

JUnQ: As AI develops further, is it going to make human jobs obsolete? And what will people be doing if there is nothing else to do?

Anton Bogomolov: Ideally, this is what we aim for – to have everything automated. But this can be achieved, in my opinion, only when so-called artificial general intelligence is realized. This will be a machine capable of experiencing consciousness and of thinking autonomously, and it will thus be able to accomplish any intellectual task that a human being can.

What will happen to humans after all? There is the concept of a universal basic income. The idea is that the robot replacing you is working on your behalf, and you are given an income sufficient to meet basic needs, with zero conditions attached to that income. Because in the end the job is being done and the resources are being produced, while you are free for other pursuits.

There has been a lot of research interest in this regard. Back in the 1960s, the researcher John Bumpass Calhoun reported on an experiment with rodents, known as “Universe 25”. The researchers provided the animals with unlimited resources, such as water and food. Besides, they eliminated the dangers otherwise coming from nature, like predators, climate, etc. Thus, the animals were said to be in a “rodent utopia”. At first the population peaked, but shortly after it started to exhibit a variety of abnormal, often destructive behaviours. After some time, the animals became too lazy to reproduce, and the population was on its way to extinction. There is, of course, controversy over the implications of the experiment, but it can be perceived as one of the possible scenarios for the future.

JUnQ: What about the programming jobs? And scientists?

Anton Bogomolov: Well, first we automate what we can – so far, the simplest work. AI is now partly replacing the jobs of translators and customer service workers. Next in line are self-driving cars, which will automate the entire transportation industry: bus and taxi drivers and so on. But programming jobs are of a different kind; they are creative. Programs that develop other programs already exist, but they are rather limited in what they can do.

Eventually, all jobs will be replaced. Programming jobs will be among the last ones, though, just like other creative jobs, including those of scientists.

One day we will have a super-intelligent machine that develops further programs similar to itself, at less expense and much faster than when supervised by humans. At some point we might not be able to follow its advances anymore, and here comes the term “technological singularity”. This is believed to occur when AI starts discovering new science at enormous rates, always learning and evolving on top of it, beyond human control.

JUnQ: Is the “singularity” inevitable?

Anton Bogomolov: There is an everlasting argument about whether it is possible at all to realize a self-aware AI that will act mindfully, much like a human. Depending on the “yes” or “no”, there will be a technological singularity or not. It could also occur for other reasons; it is just that, among the candidates, AI is the most likely to bring us to the technological singularity.

On the other hand, it is not proven that such an AI – able to run autonomously and replace all of us – can ever be created. In that case, there will be no AI-induced singularity.

So, this is now a really hot topic in the community.

JUnQ: Does it mean that self-awareness is a prerequisite for a possible singularity to occur, and that we have not yet passed the point of no return?

Anton Bogomolov: Right. The algorithms that exist now and are known to beat the world-class champions in chess and Go are harmless. They are just trained extraordinarily well on one particular subject, to achieve a well-defined goal. They are not able to think outside of the box, like “what else is there that I could do?”.

Once we create a machine that is able to think this way, to exhibit human-level consciousness, it is expected to bring us to the singularity, because it will be able to operate and develop without any supervision. All existing AI technologies do develop themselves, but only to a certain degree; they do not have this freedom yet.

JUnQ: Speaking about self-awareness: for example, Sophia – the social humanoid robot developed by Hanson Robotics – describes herself as a programmed female robot. Does that mean that she is self-aware? How did they manage to program “her” self-realization?

Anton Bogomolov: As far as I understand, she is programmed to answer this way. If a question comes up about what she thinks she is, her answer will be in line with what has been built into her program. Most likely she was trained on thousands of real dialogues between people about their self-awareness. Like other AI systems, she relies on machine learning which, if you feed it enough data, enables her to learn how to answer and how to behave as people would.

Sophia communicates very well on a topic known in advance, because in that case she can be trained beforehand: they provide her with enough information about the given topic. Then she is able to have a sensible conversation, because she has the statistics on what is typically answered when. Nevertheless, it is not as simple as: you say X, she replies Y. Thanks to machine learning, what she says is the result of rather complicated non-linear connections.

I have not had a chance to speak with her personally, but I think she is certainly not self-aware. Otherwise, the singularity would be just around the corner by now. If she had human-level consciousness, there would be nothing she would need people for. She would be able to reprogram herself to increase her memory. In just a few days she would reach the level of intelligence of all the people on Earth. In a few more days we would not be able to comprehend what level of intelligence she would have – again, exponential progress.

So, there is nothing we should worry about. She is still just a robot – more about illusion than intelligence. The shocking effect is also due to the fact that she looks like a human, has emotions and facial expressions. This unique combination of her features might make us a bit alert. And for sure Sophia is a great representation of all the advances of AI technology.

In fact, to be able to realize human-level AI we essentially need to model a human brain. The human brain contains around 10¹¹ neurons. Functional neural networks, on the other hand, have on the order of tens of millions of neurons. This difference of four orders of magnitude is sizeable. Moreover, it also takes quite some time to train a system with a large number of neurons. At the end of the day, we do not yet have the capacity to realize a human-level AI.

JUnQ: In case something goes wrong, will we be able to “unplug” the machine? Do autonomous AI systems exist yet?

Anton Bogomolov: Autonomous systems do exist. Think of the toy dog that we have discussed already, or a vacuum cleaner; they are programmed to charge when needed. These are completely autonomous as long as a power source is available. The military surely has some as well. I can imagine an armed flying drone, self-charging and self-recharging.

But the existing autonomous AI systems are not a threat to humans. Despite having all the advantages of machine learning, they follow a defined program to accomplish a specific task. One may be the best at recognizing people’s faces, shooting targets or avoiding bullets. But it is still a mindless machine that we can destroy, or fool, or at least hide all the power stations from.

As long as any of these do not have human-level intelligence – as long as they are not smarter than us – they should not be considered a potential threat.

JUnQ: So reaching human-level intelligence would be the point from which on AI can potentially live without us.

Anton Bogomolov: Correct. There is an opinion that biological life is just a means to create electronic life. In other words, some believe that this is our mission: to give birth to an electronic conscious creature surpassing our capacity, one that will develop much faster than humans. In some sense, it is similar to the early times of our planet. Life on Earth began relatively early, but the first living creatures – unicellular organisms – progressed very slowly, until multicellular organisms occurred, which boosted progress tremendously. And progress always seems to be exponential. Thus, the idea of this theory is that we create something to keep up with this exponential progress. And if we look at it globally, on the scale of the Universe, it would make sense if AI should ever take over the world, because AI would go on exploring the Universe much faster than we would. Thus, from the point of view of global progress, it would be more advantageous.

JUnQ: Now, when you put it this way, the technological singularity does not sound so frightening anymore. Are you optimistic overall? Will we make it to the end of the 21st century?

Anton Bogomolov: To me, it feels great to witness the progress and to be a part of it. But we will see how it goes. We live within a self-organized system, where everything has a direction to go. Even though humans are all independent creatures, we still obey the same laws of synergy; we self-organize as well, we cluster, forming cities, etc. And surely we also have something to move towards; thus we develop and evolve. So, this progress is quite natural.

In fact, experts expect the technological singularity to occur already in the 21st century, though it is not trivial to give a correct estimate. On the other hand, not related to AI, there is research going on in the field of so-called negligible senescence.[5] The idea is that by engineering the reversal of all the major molecular and cellular changes that occur with age, we would be able to constantly rejuvenate ourselves. The researchers believe that negligible aging for humans will be achieved in this century. There even exists the provocative opinion that the first human beings who will live to 1,000 years old are already alive.

At the end of the day, there has been tremendous progress in many fields, not only AI. Along with AI, we may succeed in developing other technologies that will help us to prolong our lives, and human life in general.

JUnQ: Thank you very much for the interview!

— Mariia Filianina

Read more:

[1] http://deepdreamgenerator.com
[2] L.A. Gatys, A.S. Ecker and M. Bethge, arXiv:1508.06576 (2015).
[3] https://en.wikipedia.org/wiki/Turing_test
[4] A. Hosny, C. Parmar, J. Quackenbush, L.H. Schwartz and H.J.W.L. Aerts Nature Reviews Cancer 18, 500 (2018).
[5] https://www.ted.com/speakers/aubrey_de_grey

Sep 10 2019
 

Curious things happen around us all the time – and sometimes we are so familiar with them that we do not even notice them anymore.

If you read the title, you might now think that this article is about the Leidenfrost effect [1] – that funny little dance water droplets perform on a hot surface such as a frying pan. It is not, though. The Leidenfrost effect occurs when a material – usually a liquid – meets a surface far above its boiling temperature. A thin layer of the droplet’s surface then evaporates rapidly, causing a protective gas coating to appear that effectively insulates the droplet and lets it last longer on the hot surface. Similar effects can also be seen with liquid nitrogen on a material at room temperature. These droplets appear to travel around due to ejected gases. But does a similar phenomenon also occur without the necessity of a hot surface?

There is in fact a location where such an effect occurs regularly without us usually noticing: the bathroom. Under certain conditions, water droplets can be seen moving on a surface of water as if they had hydrophobic properties. The easiest way to see them is in the shower, when the shower floor is already covered in a thin layer of water. If new water droplets now impact this surface at certain angles and speeds, they can be seen rushing around for a while before disappearing. It turns out that in recent years a few scientific publications have been dedicated to investigating this effect more closely. [2,3] With a high-speed camera, the bouncing effect can be visualized rather easily, as shown in Fig. 1: the droplet appears to cause a dent in the water surface and then bounces off without merging with the rest of the liquid. Of course, the first idea that comes to mind now is the Leidenfrost effect, where similar behavior is caused by a layer of vapor. However, here no high temperatures are involved, and thus the generation of water vapor is negligible.

Figure 2: A schematic depiction of the residence time phenomenon. On impact, a thin layer of gas (air) is compressed at the surface, protecting the droplet from immediate coalescence. Eventually, however, the air escapes and the lower periphery of the droplet merges with the rest of the liquid. The surface tension can then rapidly squeeze the edges of the droplet together, causing the upper half of the droplet to be cut off from the rest. It can then repeat the bouncing process if the conditions are right. Reproduced from [4].

The first intuition of an air coating protecting the water droplet still stands, though, and thus the scientists tried to model the behavior. It turns out that there is indeed a protective coating of air, which gets compressed when the droplet approaches the surface of the liquid underneath. The air simply cannot escape quickly enough and therefore protects the droplet on impact and pushes it away from the water surface. This phenomenon gives rise to what is called the residence time of a droplet, that is, the time a droplet can sit on top of a pool of the same liquid before coalescing (see Fig. 2). The theory was confirmed by lowering the ambient air pressure around the experiment, which caused the residence time to decrease. [4] However, one would expect that this thin layer of gas should not withstand the heavy impact of a droplet coming from, e.g., the shower head with a lot of speed and thus kinetic energy.

An explanation can be found using a simple speaker membrane: when droplets are put in contact with an oscillating surface, like water on an oscillating speaker, the bouncing is facilitated, and the droplets can remain intact for much longer. Moreover, the droplets now travel around just like they do in a shower! High-speed camera footage shows the reason for this change in behavior: the surface of the water pool, excited into periodic up-and-down movement patterns, gently catches the droplet if the surface is moving downwards at the moment of impact, and therefore prevents the impact from destroying the protective gas layer. It is just like gently catching a water balloon with your hand by grabbing it in motion and then slowing it down. Additionally, the continuous movement of the surface seems to stabilize the gas layer and therefore massively increases the residence time, all while allowing the droplet to travel from minimum to minimum, thus creating the “walking water” effect. [6] In a shower, the impact of many, many droplets causes the surface of the water pool on the ground to oscillate in a similar manner, creating landing spots for some droplets, which then move around the surface. The phenomenon can thus be explained by the residence time of a droplet together with an oscillating surface.

Finally, a similar behavior can be reproduced in space, where microgravity means the droplets are not pulled down. An air bubble inside a water bubble can thus act as an isolated system where droplets can form and move… excited by the sound of a cello! If you are curious, please check out the beautiful footage in Ref. [6], from which much of the inspiration for this article came.

As stated initially, the most curious things happen around us and we simply have to notice them.

— Kai Litzius

References:

[1] https://www.engineersedge.com/physics/leidenfrost_effect_13089.htm

[2] Y. Couder et al., From Bouncing to Floating: Noncoalescence of Drops on a Fluid Bath, Phys. Rev. Lett. 94, 177801 (2005).

[3] J. Molácek & J. W. M. Bush, Drops bouncing on a vibrating bath, J. Fluid Mech. 727, 582-611 (2013).

[4] I. Klyuzhin et al., Persisting Water Droplets on Water Surfaces, J. Phys. Chem. B 114, 14020-14027 (2010).

[5] https://upload.wikimedia.org/wikipedia/commons/1/1d/Bouncing_droplets.gif

[6] https://www.youtube.com/watch?v=KJDEsAy9RyM (Water bubble in space at time index 8:18).
 

Sep 04 2019
 

Superstitions are having a hard time in our modern, ever-progressing world. It is no longer easy to fool someone with a myth or a beautiful legend from childhood. But how about this one: have you ever heard that a thunderstorm could curdle milk?

A correlation between thunderstorms and the souring or curdling of milk has been observed for centuries. As early as 1685, a first clue was written down in the book “The Paradoxal Discourses of F. M. Van Helmont: Concerning the Macrocosm and Microcosm, Or the Greater and Lesser World, and Their Union” [1]:

“Now that the Thunder hath its peculiar working, may be partly perceived from hence, that at the time when it thunders, Beer, Milk, &c. turn sower in the Cellars … the Thunder doth everywhere introduce corruption and putrefaction”.

By the beginning of the 20th century there had been numerous attempts to find a causal relationship. [2-7] None of the proposed theories was plausible, and many even contradicted each other. Later, after refrigeration and pasteurization became widespread, eliminating bacterial growth, interest in this phenomenon almost disappeared. While the most popular explanation remains that these occasions are mere correlation, we would like to draw the reader’s attention to some of the suggested theories.

In order to understand what actually happens to milk during a thunderstorm, we need to know (i) what processes are behind the souring of milk and (ii) what accompanies a thunderstorm, e.g. lightning. While the latter is not yet entirely clear to scientists, [8] a simplified picture of the first point is covered in the next few paragraphs.


Figure 1: Schematic image of casein micelles covering fat globules within milk as a colloid solution.

Fresh milk is a textbook example of a colloid – a solution consisting of fat and protein molecules, mainly casein, floating in a water-based fluid. [9] The structure of milk is schematically illustrated in Fig. 1. Fat globules are coated with protein and charged phospholipids. Such a formation protects the fat from being quickly digested by the bacteria that also exist in milk. Casein proteins under normal conditions are negatively charged and repel each other, so these formations naturally distribute evenly through the liquid. Normally, milk is slightly acidic (pH ca. 6.4-6.8), [10] while being sweet at the same time due to lactose, one of the carbohydrates in milk. When the acidity increases to a pH lower than 4, the proteins denature and are no longer charged. Thus, they bind to each other, or coagulate, into the clumps known as curds. The watery liquid that remains is called whey.

The acidity of milk is determined by bacteria which produce lactic acid. The acid lowers the pH of milk so the proteins can clump together. The bacteria living in milk naturally produce lactic acid as they digest lactose, so they can grow and reproduce. This occurs in raw milk as well as in pasteurized milk. Refrigerating milk slows the growth of bacteria. Conversely, warm milk helps the bacteria thrive and also increases the rate of the clumping reaction.
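As a rough sketch of the underlying chemistry (assuming homolactic fermentation, in which each lactose molecule is hydrolyzed and fermented into four molecules of lactic acid), the overall stoichiometry reads:

\[
\mathrm{C_{12}H_{22}O_{11}}\ \text{(lactose)} \;+\; \mathrm{H_2O} \;\longrightarrow\; 4\,\mathrm{CH_3CH(OH)COOH}\ \text{(lactic acid)}
\]

It is this acid that drives the pH below the coagulation threshold described above.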

Now, we can think of a few things that may speed up the souring process. The first could be the ozone that is formed during a thunderstorm. One study showed that a sufficient amount of ozone is generated at such times to coagulate milk by direct oxidation and the consequent production of lactic acid. [2] However, if this were the case, a similar effect would occur for sterilized milk. The corresponding studies were carried out by A. L. Treadwell, who reported that, indeed, the action of oxygen, or of oxygen and ozone, coagulated milk faster – but the effect was not observed if the milk had been sterilized. [2] The conclusion drawn from this study was that the souring was produced by an unusually rapid growth of bacteria in an oxygen-rich environment.

In the meantime, a number of other investigations suggested that the rapid souring of milk was most likely due to the atmosphere, which is well known to become sultry or hot just prior to a thunderstorm. This warm condition of the air is very favourable for the development of lactic acid in the milk. [3,4] Thus, these studies were also in favour of thunderstorms affecting the bacteria.

A fundamentally different explanation was tested by, e.g., A. Chizhevsky in Ref. [5]. It was suggested that electric fields with particular characteristics produced during thunderstorms could stimulate the souring process. To check this hypothesis, the coagulation of milk was studied under the influence of electric discharges of different strengths. Importantly, in these experiments the electric pulses were kept short to eliminate any thermal phenomena. Eventually, the coagulation observed for certain parameter ranges was explained by the breaking of the protein-colloid system in milk under the influence of the electric field.

Other experiments investigating the effect of electricity on the coagulation process in milk turned out to be astonishing. [6] When an electric current was passed directly through milk in a container, in all the test variations the level of acidity rose less quickly in the ‘electrified’ milk samples compared with the ‘control’ sample – which contradicted all the previous reports.

To conclude, while there is no established theory explaining why milk turns sour during thunderstorms, we cannot disregard the numerous reports of this curious phenomenon. [7] What scientists definitely know is that milk goes sour due to bacteria – bacilli acidi lactici – which produce lactic acid. These bacteria are known to be fairly inactive at low temperatures, which is why having a fridge is very convenient for milk-lovers. However, when the temperature rises, the bacteria multiply with increasing rapidity until, at ca. 50°C, it becomes too hot for them to survive. Thus, in pre-refrigerator days, when this phenomenon was most often reported, in thundery weather with its anomalous conditions the milk would often go off within a short time after being opened. Independently of the exact mechanism, i.e. increased bacterial activity or breaking of the protein-colloid system, the result is the same – curdled milk.

Should you ever witness this phenomenon yourself, do not be sad right away. Try adding a bit of brown sugar to your fresh milk curds…

— Mariia Filianina

Read more:

[1] F. M. van Helmont Franciscus “The Paradoxal Discourses of F. M. Van Helmont, Concerning the Macrocosm And Microcosm, Or The Greater and Lesser World, And their Union” set down in writing by J.B. and now published, London, 1685.

[2] A. L. Treadwell, “The Souring of Milk During Thunder-Storms”, Science Vol. XVIII, No. 425, 178 (1891).

[3] “Lightning and Milk”, Scientific American 13, 40, 315 (1858). doi:10.1038/scientificamerican06121858-315

[4] H. McClure, “Thunder and Sour Milk.” British Medical Journal vol. 2, 651 (1890).

[5] V. V. Fedynskii (Ed.), “The Earth in the Universe” (orig. “Zemlya vo vselennoi”), Moscow 1964. Translated from Russian by the Israel Program for Scientific Translations in 1968.

[6] W. G. Duffield and J. A. Murray, “Milk and Electrical Discharges”, Journal of the Röntgen Society 10(38), 9 (1914). doi:10.1459/jrs.194.0004

[7] “Influence of Thunderstorms on Milk”, The Creamery and Milk Plant Monthly 11, 40 (1922).

[8] K. Litzius, “How does a lightning bolt find its target?” Journal of Unsolved Questions 9(2) (2019).

[9] R. Jost (Ed.), “Milk and Dairy Products.” In Ullmann’s Encyclopedia of Industrial Chemistry (2007). doi: 10.1002/14356007.a16_589.pub3

[10] https://en.wikipedia.org/wiki/Milk

May 22 2019
 

Once, thunderstorms with thunder and lightning were interpreted as signs of the gods’ wrath; nowadays, we are taught the mechanics behind a thunderstorm in school. You are probably already thinking about ice crystals that are smashed together by strong winds inside clouds, creating static charges in the process. How does a lightning bolt, though, find its way from the cloud to the ground? This question still keeps scientists awake at night – and there is still no clear answer to how exactly the formation and movement of a lightning bolt work. This Question of the Month will give a brief summary of how a lightning bolt selects its target.

Lightning [1,2] always occurs when a large thunderstorm cloud with strong winds generates sufficient electrostatic charge that it must discharge towards the ground. The discharge itself occurs (simplified) in a two-step process, consisting of a preflash and a main lightning bolt: the preflash travels as a comparably weak (but still dangerous!) current downwards from the cloud. This usually happens in little jumps, which have been investigated with high-speed cameras. They show that the current path is apparently selected randomly: the bolt slows down at a given position and then randomly selects the next position to jump to. This random selection appears to happen within a sphere of a few tens of meters in diameter around the tip of the growing lightning bolt. The process also involves growing many tendrils with individual tips and thus covers a large area (see also Fig. 1). With this procedure, the lightning bolt eventually “feels” its way towards the ground until it reaches it, either directly or via a structure connected to it.

Figure 1: Lightning bolts are branching off into many tendrils. [3]

Therefore, if a conductive object reaches into such a sphere, the bolt will immediately jump to it and use it as a low-resistance shortcut to the ground – thereby, if possible, shortening the path for the discharge. This behavior leads to the curious effect of exclusion areas around structures that are protected with lightning rods, in which practically no ground strike will occur and a person will not be hit directly. Unfortunately, this does not completely protect the person, as the electricity can still be dangerous within the ground.

Once the preflash has found a path to the ground, the second phase starts, and the majority of the charge starts to flow, with up to 20,000 A, along the path found by the preflash. This is also the portion of the discharge that is visible to the bare eye. It can consist of several distinct discharges that all follow the path of ionized air left by the previous one, creating the characteristic flickering of a lightning bolt.

How the entire process from preflash to main discharge works is still not completely understood today, and much of the presented insight was simply gathered phenomenologically by camera imaging. Additionally, there are many more types of lightning bolts and related effects, which are relevant for our understanding of a variety of weather phenomena. All in all, thunderstorms are still something magical today, even if only figuratively.

— Kai Litzius

Further reading:

[1] http://stormhighway.com/cgdesc.php#part1

[2] https://what-if.xkcd.com/16/

[3] https://commons.wikimedia.org/wiki/File:Lightning_over_Oradea_Romania_2.jpg

[4] Chem. Unserer Zeit, 2019, 53. DOI: 10.1002/ciuz.201980045

Mar 05 2019
 

Genetic information is encoded in the deoxyribonucleic acid (DNA). In the form of a long double-helix molecule located in living cells, it governs most of an organism’s traits. Explicitly, information from genes is used to form functional gene products such as proteins. This process of gene expression is used by all known forms of life on earth to generate the macromolecular machinery for life. Thus, it poses the fundamental level of how the genotype causes the phenotype, i.e. the composite of an organism’s observable characteristics. Genomic modification is a powerful tool to amend those characteristics. Reproductive and environmentally caused changes to the DNA are a substrate for evolution. In nature, those changes happen and may cause favourable or unfavourable changes to the phenotype, which allow the cell or organism to improve or reduce its ability to survive and reproduce, respectively.

In the first half of the 20th century, several methods to alter the genetic structure of cells were discovered, which include exposing them to heat, X-rays, UV light, and chemicals [1-4]. A significant number of the crops cultivated today were developed using these methods of traditional mutagenesis; an example is Durum wheat, the most prevalent wheat for pasta production. With traditional mutagenesis, thousands of mutations are introduced at random within the DNA of the plant. A subsequent screening identifies and separates cells with favourable mutations in their DNA, followed by attempts to remove or reduce possibly unfavourable mutations in those cells by further mutagenesis or cross-breeding.
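
The mutate-then-screen logic of traditional mutagenesis can be illustrated with a toy simulation. Everything here is invented for illustration: the sequences, the number of mutations, and the simple fitness function stand in for a real phenotype screen:

```python
import random

def mutagenize(seq, n_mutations, rng):
    """Introduce random single-base changes, as a cartoon of traditional
    (radiation- or chemical-induced) mutagenesis."""
    bases = "ACGT"
    s = list(seq)
    for _ in range(n_mutations):
        i = rng.randrange(len(s))
        s[i] = rng.choice(bases)
    return "".join(s)

def screen(candidates, fitness):
    """Screening step: keep the candidate that scores best."""
    return max(candidates, key=fitness)

rng = random.Random(0)
wild_type = "ACGTACGTACGT"
target = "ACGAACGTTCGT"  # invented 'favourable' genotype for the toy fitness
fitness = lambda s: sum(a == b for a, b in zip(s, target))

pool = [mutagenize(wild_type, 3, rng) for _ in range(1000)]
print(screen(pool, fitness))
```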

As those methods are usually unspecific and complex, researchers have developed site-directed gene-editing techniques, the most successful of which is the so-called CRISPR/Cas9 method (clustered regularly interspaced short palindromic repeats). This method borrows from how bacteria defend themselves against viral invasion [6]. When the bacterium detects invading virus DNA, it forms two strands of RNA (single-helix molecules), one of which contains a sequence that matches that of the invading virus DNA and is hence called guide RNA. These two RNAs form a complex with a Cas9 protein, which, as a nuclease enzyme, can cleave DNA. When the guide RNA finds the target in the viral genome, the RNA-Cas9 complex locks onto a short sequence known as the PAM, and Cas9 unzips the viral DNA so that the guide RNA can pair with its matching strand. Cas9 then cleaves the viral DNA, forcing the cell to repair it [6]. As this repair process is error-prone, it may lead to mutations that disable certain genes, changing the phenotype. In 2012 and 2013 it was discovered that the guide RNA can be freely redesigned so that the system targets predetermined sites [5], and that, by modifying the enzyme, it works not only in bacteria and archaea but also in eukaryotes (plants and animals) [7].

Figure 1: CRISPR/Cas9 working principle. [8]
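
The targeting step can be made concrete with a toy sketch. The following Python snippet scans a DNA string for a site matching a guide sequence that is immediately followed by an ‘NGG’ PAM motif; all sequences are invented for illustration, and real guide RNAs are about 20 nt long:

```python
def find_cas9_targets(genome, guide):
    """Toy scan: report all positions where `guide` occurs in `genome`
    immediately followed by an 'NGG' PAM (any base, then two Gs)."""
    hits = []
    for i in range(len(genome) - len(guide) - 2):
        if genome[i:i + len(guide)] == guide:
            pam = genome[i + len(guide):i + len(guide) + 3]
            if pam[1:] == "GG":  # the 'N' may be any base
                hits.append(i)
    return hits

genome = "ATGCGTACCGATTGACGATCGTACTGAGGTTT"
guide = "GATTGACGATCGTACTG"  # invented; real guides are ~20 nt
print(find_cas9_targets(genome, guide))  # -> [9]
```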

Research published since has demonstrated the method’s potential for RNA-programmable genome editing. Modifications can be made so that, during the repair, an artificially designed DNA sequence pairs with the cleaved ends, recombines, and replaces the original sequence, introducing new genes into the genome [11,12]. The advantages of this technique over traditional gene-editing methods are manifold. It acts in a highly targeted, i.e. site- and therefore gene-specific, way in any known form of life. It is comparatively inexpensive, simple enough to be conducted in basic labs, effective, and fast in both preparation and realisation. The production of multiplex genetically modified mice, for instance, was reduced from up to two years to a few weeks [9], as CRISPR/Cas9 has the unique advantage over earlier genome-editing methods that multiplexable targeting is easily achieved by co-expressing Cas9 with multiple single-guide RNAs simultaneously. Consequently, within a few years of its discovery, it became the routine procedure for genome modification of virtually all model plants and animals.
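
The donor-based repair described above can likewise be cartooned as a string operation. This is only a sketch under invented sequences: the homology-arm length and every sequence here are illustrative assumptions, not a real protocol:

```python
def hdr_edit(genome, cut_site, donor, homology=3):
    """Toy homology-directed repair: after a cut at `cut_site`, a donor
    whose ends match the sequences flanking the cut (its 'homology arms')
    is recombined in, inserting the new payload it carries."""
    left, right = genome[:cut_site], genome[cut_site:]
    assert donor.startswith(left[-homology:]), "left homology arm mismatch"
    assert donor.endswith(right[:homology]), "right homology arm mismatch"
    return left[:-homology] + donor + right[homology:]

genome = "AAACCCGGGTTT"
donor = "CCC" + "ATAT" + "GGG"  # left arm + new payload + right arm
print(hdr_edit(genome, cut_site=6, donor=donor))  # -> AAACCCATATGGGTTT
```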

The availability of such a method evokes medical and botanical development interests. A plethora of possible medical applications are being discussed and researched, among them treating cancer or genetic disorders. For cancer research it is conceivable to induce a multitude of deliberate mutations to artificially create cells similar to cancerous cells, study the resulting modifications, and thus learn to inhibit their reproduction or the original mutation. Clinical research currently focuses on blood diseases and diseases related to haematopoietic cells, such as leukaemia, HBV, HIV, or haemophilia [13,14]. This is because, for the treatment of these diseases, the cells (blood cells or bone marrow) can be extracted from the body in an established way, their genome can be edited in vitro by the CRISPR/Cas9 method, and the cells can finally be reintroduced into the body. The advantage of the extraction is that no additional vector (an agent to help find the right cells in vivo) is required, and the genomic modification can be controlled ex vivo. While the editing efficiency with CRISPR/Cas9 can be extremely high, the resulting cell population will be inherently heterogeneous, both in the percentage of cells that were edited and in the specific genotype of the edited cells. Potentially problematic for in vivo applications is the bacterial origin of the endonuclease Cas9: a large portion of humans show humoral and cell-mediated immune responses to the Cas9 protein complex [10], most likely because of prior infection with related bacteria.

Although clinical applications of CRISPR/Cas9 grab a lot of media attention, agricultural applications draw even more commercial interest. The prospect here is the faster, cheaper, and more targeted development of crops than by traditional methods of mutagenesis, which are far more aggressive in comparison. The main aim is unchanged, though: improve plants regarding yield, resistance to diseases or vermin, and resilience to aridity, heat, cold, humidity, or acidity [15,16]. CRISPR/Cas9 is therefore considered an important method to improve agricultural food production in order to feed the earth’s ever-growing human population.

Regulation of plants modified in this way varies largely between countries. While Canada considers such plants equivalent to non-genetically-modified ones if no transgene was inserted, the USA assesses CRISPR plants on a case-by-case basis, gauging whether the modification could have arisen by natural mutation. On this basis it chose not to regulate mushrooms that do not turn brown and maize with an altered starch content. Last year the European Court of Justice classified all CRISPR/Cas9-modified plants as genetically modified organisms, reasoning that the risks of such a novel method are unknown, in contrast to traditional mutagenesis as an established method of plant breeding.

Instigated by genome editing in human embryonic cells in 2015 [18], a group of scientists called for a moratorium to discuss the possible risks and impact of the wide usage of the CRISPR/Cas9 technology, especially when it comes to mutations in humans [19]. At the 2015 International Summit on Human Gene Editing, leading international scientists considered the scientific and societal implications of genome editing. The issues discussed span clinical, agricultural, and environmental applications, with most attention focused on human germline editing, owing to its potential to eradicate genetic diseases and, ultimately, to alter the course of evolution. Some scientists advise banning CRISPR/Cas9-based human genome-editing research for the foreseeable future, whereas others favour rapid progress in developing it [20]. One argument of supporters of the latter viewpoint is that the majority of ethical concerns are effectively based on methodical uncertainties of the CRISPR/Cas9 method at its current state, which can be overcome only through extensive research. These methodical uncertainties include possible cleavage at undesired sites of the DNA, or insertion of wrong sequences at the cleavage site, resulting in the disabling of the wrong genes or even the creation of new genetic diseases.

Whilst a total ban is considered impractical because of the widespread accessibility and ease of use of this technology [21], the summit statement says that “It would be irresponsible to proceed with any clinical use of germline editing unless and until (i) the relevant safety and efficacy issues have been resolved . . . and (ii) there is broad societal consensus about the appropriateness of the proposed application.” The moral concerns about embryonic or germline treatment are based on the fact that CRISPR/Cas9 would not only allow the elimination of genetic diseases but also enable genetic human enhancement, from simple tweaks like eye colour or non-balding to severe modifications concerning bone density, muscular strength, or sensory and mental capabilities.

Although most scientists echo the summit statement, in 2018 a biochemist claimed to have created the first genetically edited human babies, twin sisters. After in vitro fertilization, he targeted a gene that codes for a protein that one HIV variant uses to enter cells, conferring a kind of HIV immunity, which is a very rare trait among humans [22]. His conduct was harshly criticised and widely condemned in the scientific community, and, after enormous public pressure, the responsible regulatory offices forbade any repetition.

Ultimately, the CRISPR/Cas9 technology is a prime example of the real-world societal implications of basic research and demonstrates researchers’ responsibilities. This also raises the question whether basic ethical schooling should be part of every researcher’s education.

— Alexander Kronenberg

Read more:

[1] K. M. Gleason (2017) “Hermann Joseph Muller’s Study of X-rays as a Mutagen”

[2] Muller, H. J. (1927). Science. 66 (1699): 84–87.

[3] Stadler, L. J.; G. F. Sprague (1936). Proc. Natl. Acad. Sci. U.S.A. US Department of Agriculture and Missouri Agricultural Experiment Station. 22 (10): 572–8.

[4] Auerbach, C.; Robson, J.M.; Carr, J.G. (March 1947). Science. 105 (2723): 243–7.

[5] M. Jinek, K. Chylinski, I. Fonfara, M. Hauer, J. A. Doudna, E. Charpentier. Science, 337, 2012, p. 816–821.

[6] R. Sorek, V. Kunin, P. Hugenholtz. Nature reviews. Microbiology. 6, 3, (2008), p. 181–186.

[7] Cong, L., et al., (2013). Science. 339 (6121) p. 819–823.

[8] https://commons.wikimedia.org/wiki/File:GRNA-Cas9.png

[9] H. Wang, et al., Cell, 153, 4, (2013), p. 910–918.

[10] D. L. Wagner, et al., Nature medicine. (2018).

[11] O. Shalem, N. E. Sanjana, F. Zhang; Nature reviews. Genetics 16, 5, (2015), p. 299–311.

[12] T. R. Sampson, D. S. Weiss; BioEssays 36, 1, (2014), p. 34–38.

[13] G. Lin, K. Zhang, J. Li; International journal of molecular sciences 16, 11, (2015), p. 26077–26086.

Mar 05 2019

Dr. Roman Stilling

Disclaimer: The opinions, views, and claims expressed in this essay are those of the author and do not necessarily reflect any opinion whatsoever of the members of the editorial board. The editorial board further reserves the right not to be responsible for the correctness of the information provided. Liability claims regarding damage caused by the use of any information provided will therefore be rejected.

Roman Stilling graduated with a B.Sc. in Biosciences from the University of Münster in 2008 and received a Ph.D. degree from the International Max Planck Research School for Neurosciences / University of Göttingen in 2013. Afterwards he joined the APC Microbiome Ireland in Cork, Ireland, as a postdoctoral researcher. Since 2016 he has been the scientific officer for the information initiative “Tierversuche verstehen” (“Understanding animal experiments”), coordinated by the Alliance of Science Organisations in Germany.


Ethical concerns about using animals in biomedical research have been raised since the 19th century. For example, in England the “Cruelty to Animals Act” was passed in 1876 as a result of a debate especially on the use of dogs under inhumane conditions, such as invasive physiological experiments or demonstrations without general anaesthesia. Interestingly, it was Charles Darwin who brought all his scientific and political gravitas to bear in pushing for regulation by law, while at the same time providing a highly nuanced argument for using animals to advance knowledge, especially in the quickly developing field of physiology [1,2]. In an 1881 letter to a Swedish colleague he wrote:

“[...] I fear that in some parts of Europe little regard is paid to the sufferings of animals, and if this be the case I should be glad to hear of legislation against inhumanity in any such country. On the other hand, I know that physiology cannot possibly progress except by means of experiments on living animals, and I feel the deepest conviction that he who retards the progress of physiology commits a crime against mankind.” [3]

Animal research as a moral dilemma

In this letter Darwin succinctly summarized the ethical dilemma at the core of the debate on using animals for research: whether we may cause harm to animals if doing so is necessary to advance science and medicine.

In fact, the ability to suffer is generally accepted as the single most morally relevant criterion when animals are considered as subjects of moral worth. This reasoning is based on the philosophy of Jeremy Bentham, whose thoughts on this matter culminated in the aphorism: “The question is not, Can they reason? nor, Can they talk? but, Can they suffer?” [4]

Today, animal welfare legislation in most countries is based on this notion, which has fundamental consequences for how different species of animals are protected by these regulations. For example, in the EU, only the use of animals within the taxonomic subphylum Vertebrata (i.e. vertebrates) is covered by the respective EU directive [5]. More recently, the use of Decapoda (e.g. crayfish, crabs, lobsters) and Cephalopoda (e.g. squids, octopuses) has also fallen within this regulation, since it is assumed that these animals have a nervous system complex enough to perceive pain and experience suffering.

Most current legislation in industrialized countries acknowledges that animals (not exclusively, but especially those able to suffer) have intrinsic value and a moral status that differs from that of other biological forms of life, such as plants, fungi, or bacteria, and from inanimate matter. At the same time, no country has established legislation that considers the moral status of any animal equal to that of a human being, irrespective of the developmental state or state of health of that human being.

Together, this reasoning has led to the recognition that legislation cannot follow a general one-size-fits-all rule; instead, a compromise needs to be implemented in which an ethical and scientific judgment is made for each individual experiment or study on a case-by-case basis.

Adherence to the 3R-principle is necessary but not sufficient for ethical justification of laboratory animal use

The moral dilemma of inflicting harm on animals to advance knowledge and medical progress was addressed in more detail in 1959, when William Russell and Rex Burch published “The principles of humane experimental technique”, in which they formulated the now-famous 3R-principle for the first time: Replace, reduce, refine [6]. This principle acknowledges the human benefit from animal experiments but provides a guideline to minimize suffering in animals: an experiment can be considered potentially ethically justifiable only if there is no alternative method to achieve the scientific goal, all measures have been taken to reduce the necessary number of animals in a given study, and the best possible conditions have been established to confine suffering to the necessary minimum. Meeting the 3R criteria is, however, a necessary but not sufficient requirement for the ethical justification of a particular experiment.

Today the 3R-principle is well accepted worldwide [7] as a formula to minimize animal suffering and has become an integral part of EU animal welfare regulations, which have been translated into national law in all EU member states.

Responsibility towards human life and safety – lessons from history

Another key aspect of research involving the use of animals is human safety, especially in the context of medical research on humans. The atrocities of medical experiments on humans in Nazi Germany led the international community to implement strong protection of human subjects and patients. In addition, drug scandals like the thalidomide birth-defect crisis of the 1950s and 1960s have led to profound changes in drug regulation. The results of this process have been condensed in the “Declaration of Helsinki”, adopted by the World Medical Association (WMA) in 1964. Importantly, this declaration states that medical research on human subjects is only justified if all other possible sources have been utilised to gain information about the efficacy and potential adverse effects of any new experimental therapy, prevention, or treatment. This explicitly includes information gained from experiments with animals [8], which has additionally been addressed in a dedicated WMA statement on animal use in biomedical research [9].

In analogy to the Helsinki Declaration, which has effectively altered the ethical landscape of human clinical research, members of the international research community have adopted the Basel Declaration to acknowledge their responsibility towards research animals by further advancing the implementation of ethical principles whenever animals are used in research [10]. Further goals of this initiative are to foster trust, transparency, and communication on animal research.

Fostering an evidence-based public debate on the ethics of animal research

Transparency and public dialogue are critical prerequisites for a thoughtful and balanced debate on the ethical implications of using animals in potentially harmful experiments.

However, a meaningful public debate about ethical considerations is only worthwhile if we agree on the facts regarding the usefulness of research on animals for scientific and medical progress.

Yet the contribution of animal models and toxicology testing to scientific and medical progress, as well as to subject and patient safety, is sometimes doubted by animal rights activists. Certainly, in most biomedical research areas, including those that involve animal experimentation, there is room for improvement, e.g. in reproducibility or in the translation of results from bench to bedside. However, there is widespread agreement among researchers and medical professionals, together with a large body of published evidence, on the principal usefulness of animal models in general. As in all science, constant improvement of models, and careful consideration of whether any model used is still state of the art at any given time, is crucial for scientific advancement. The responsibility to avoid animal suffering as much as possible also dictates that new scientific methods and models free of animal suffering be developed with both vigour and rigour.

A fruitful debate needs to be based on these insights, and evidence-based common ground needs to be established when discussing ethical considerations and stimulating new ideas. Finally, we need to acknowledge that we are always in the middle of a continuing thought process, in which we must democratically and carefully negotiate the importance of different views, values, and arguments.

Read more:

[1] Johnson, E. M. Charles Darwin and the Vivisection Outrage. The Primate Diaries (2011).

[2] Feller, D. Dog fight: Darwin as animal advocate in the anti-vivisection controversy of 1875. Stud. Hist. Philos. Sci. Part C Stud. Hist. Philos. Biol. Biomed. Sci. 40, 265-271 (2009).

[3] Darwin, C. R. 1881. Mr. Darwin on Vivisection. The Times (18 April): 10. (1881). Available at: http://darwin-online.org.uk/content/frameset?pageseq=1&itemID=F1352&viewtype=text. (Accessed: 25th October 2017)

[4] Bentham, J. An Introduction to the Principles of Morals and Legislation. (W. Pickering, 1823).

[5] DIRECTIVE 2010/63/EU OF THE EUROPEAN PARLIAMENT AND OF THE COUNCIL on the protection of animals used for scientific purposes. 2010/63/EU, (2010).

[6] Russell, W. M. S. & Burch, R. L. The principles of humane experimental technique. (Methuen, 1959).

[7] Guidelines for Researchers. ICLAS. Available at: http://iclas.org/guidelines-for-researchers. (Accessed: 29th November 2018)

[8] WMA – The World Medical Association. WMA Declaration of Helsinki – Ethical Principles for Medical Research Involving Human Subjects. Available at: https://www.wma.net/policies-post/wma-declaration-of-helsinki-ethical-principles-for-medical-research-involving-human-subjects/. (Accessed: 29th November 2018)

[9] WMA – The World Medical Association. WMA Statement on Animal Use in Biomedical Research. Available at: https://www.wma.net/policies-post/wma-statement-on-animal-use-in-biomedical-research/. (Accessed: 29th November 2018)

[10] Basel Declaration | Basel Declaration. Available at: https://www.basel-declaration.org/. (Accessed: 30th November 2018)

Oct 23 2017

Cueillette Urbaine, meaning “urban gathering” in French, is a company committed to greening cities by producing local organic food on available rooftops.

Cueillette Urbaine also aims to combine local urban food production and dining in the same space, where customers can gather and pick their own fruits and vegetables to be cooked afterwards. This not only removes the environmental cost of transport, it also makes it possible to recycle organic food waste, improve biodiversity in cities, manage rainwater, and more.

Cueillette Urbaine belongs to the new wave of urban farming. Nonetheless, growing food without soil and creating new ecosystems in the heart of cities is a real urban challenge. Scientific research is therefore needed to develop new cultivation technologies that ensure high-quality production. Indeed, bringing in soil from elsewhere is not a sustainable solution, as its environmental transport cost could be higher than that of carrying food from rural areas into the cities. Developing hydroponics, aquaponics or, as Cueillette Urbaine does, new cultivation substrates is therefore essential for sustainable food production in cities. For instance, Cueillette Urbaine is leading a research and development project to evaluate the effects of different types of substrates (coffee grounds, lawn clippings, compost, and so on) on plant growth. Second, we focus our research on the benefits of companion planting, in particular to avoid diseases, improve pollination, and ultimately create a balanced ecosystem. Finally, we work on wicking systems to avoid water stress.

Growth pots (©Cueillette Urbaine)

Moreover, during the past 10 years, policy and science have worked hand in hand to develop urban agriculture. There is a current need to define a proper institutional framework for urban agriculture, and this requires the collaborative research of different types of scientists: geographers, economists, agronomists, urbanists, sociologists, and others. Cueillette Urbaine chooses to foster the development of urban agriculture through action-oriented research, which means combining research and practical work. Transforming practice into knowledge is also a way to close the gap between policy and urban farming by providing policy makers with evidence-based information. Thanks to these encouraging results and all the proven benefits of urban farming, city administrations are paying increasing attention to the development of urban agriculture. For instance, the city of Paris aims to expand its urban food-production area to 120 ha by 2020!

Harvested vegetables (©Cueillette Urbaine)

For many years, urban agriculture emerged as something of a fashion trend. Today, the massive use of chemical fertilizers and pesticides has made a lot of land infertile, and in addition the expansion of cities is causing arable land to disappear. We believe that producing fruit and vegetables in the city will not replace conventional agriculture, but it is a necessity in order to supply city dwellers with fresh, local products without any transport.

— By courtesy of the Cueillette Urbaine company. Edited by Adrien Thurotte.

Contact: info@cueilletteurbaine.com

Website: www.cueilletteurbaine.com

Oct 23 2017

Samantha Jakuboski graduated with a Bachelor of Science in cellular and molecular biology from Columbia University (Barnard College). During her studies she dedicated much time to promoting eco-friendly behaviour and explaining major climate issues on blogs such as Nature Scitable [1] and EcoPlum [2].

JUnQ: You started writing for the Green Science blog on Nature Scitable six years ago, in ninth grade. It is pretty uncommon to have such a sensibility for climate and green science at that age. Why did you start writing?

Samantha: I believe that climate change is a major global threat and that action must be taken to mitigate its effects. But, in order to act, we must first educate. This is why I decided to start writing. I wanted to create a source where people my own age, the next generation of leaders, could go to learn about climate change. So, I wrote a blog proposal to Nature Journal detailing my plans, and they accepted it!

As a ninth-grader, I was by no means an expert on climate change. In fact, I was learning about climate change through my research for the blog posts. In a way, I believe that this naivety worked to my advantage. Since I was learning as I went along, I first had to explain concepts to myself before explaining them to my readers. As a result, I had a sense of what worked and what didn’t when explaining a concept to someone who is not very familiar with the topic. By writing at a level that was easy to understand, I hoped that students my age, as well as people of all backgrounds and ages, would be able to read my posts with ease, learn about climate change, and hopefully take steps to lead greener lives.

JUnQ: According to data cited in a blog article that you published on EcoPlum, 64% of Americans believe that the earth is warming, and among them only 52% agree that the warming is caused by human activity. Do you feel left alone in the struggle to convince people, or is the word finally beginning to spread?

Samantha: Since this 2014 poll was taken, the numbers have shifted upward only slightly. According to the May 2017 “Climate Change in the American Mind” survey conducted by the Yale Program on Climate Change Communication, 70% of Americans believe in climate change, with 58% of Americans believing that it is caused by human activity.

As someone who writes about climate change in the hope of raising awareness, I do find the 58% statistic to be low and a bit discouraging. However, I think it is also important to realize that we are making progress; 58% is the highest percentage recorded since the Yale survey was started in 2008.

JUnQ: President Trump’s position on climate change is to deny it. Immediately, governors, mayors, and others rose up against this position and promised to fulfil commitments from which the climate would benefit. Do you think these commitments can compensate for, or even outweigh, the harm that Trump’s climate politics could or will cause?

Samantha: While President Trump has accepted that climate change is indeed happening, he still, unfortunately, does not believe it is rooted in man-made activity. As a result of his weak stance, I definitely think that climate change believers at both the individual and corporate level are now more vocal, as evidenced by the We Are Still In [3] Paris climate agreement coalition and the People’s Climate March on Trump’s 100th day in office.

While our president may refuse to accept the anthropogenic roots of climate change, I think that if states, local governments, and businesses establish and work toward individual green goals, our nation can continue to make strides toward the 26-28% reduction in national greenhouse gases by 2025 that we pledged in the Paris Climate Agreement.

JUnQ: Does being aware mean acting on climate change in the everyday life of Americans (e.g. garbage sorting, saving water and/or energy, ecological cars, eating less to eat better)?

Samantha: Absolutely. If one is truly aware and educated, I don’t see how they cannot incorporate little acts of “greenness” into their daily lives.

JUnQ: How does one live green as a U.S. citizen? From a personal point of view, what has been done and what remains to be done?

Samantha: In my household, we recycle, use LED lightbulbs and energy-efficient appliances, compost, and try to reduce the amount of disposable paper and plastic items we purchase. We also unplug appliances, such as phone chargers and TVs, when we are not using them, since they can contribute to “vampire energy”: energy that is consumed even when the devices are not in use. Further, I love to run, and my father enjoys riding his bike, so rather than hopping in our car and driving, we take a more active approach when we need to get places. (I guess it helps that we also live in New York City, where everything is so close!) While these lifestyle changes are small, they do allow us to reduce our individual and household carbon footprints. When people ask me what they can do to live greener lives, I name these examples and tell them that small actions do add up and make a difference. However, there is still a lot of work to be done in motivating people to make these easy daily changes. Some people I know still don’t recycle!

JUnQ: And at a larger scale (cities, companies, states)?

Samantha: It is now up to businesses and local governments to lead the charge against climate change. And already, over 1,200 governors, mayors, colleges, businesses, and investors have signed the We Are Still In [3] agreement to ensure that the United States continues to reduce its carbon emissions.

Further, I think that our colleges and universities must prepare our students, especially business school students, to deal with the consequences of climate change so that our future leaders can realize their corporate social responsibility and make smart eco-friendly business decisions.

JUnQ: Among all the consequences of climate change, which one is the most unexpected and worrying?

Samantha: While few people may link climate change to conflict and terrorism, it appears that there may be some direct correlations. One of my friends at Barnard College recently wrote a dissertation on climate change as a precursor to conflict, specifically on how anthropogenic climate change and drought induced the Syrian Civil War. As resources such as water become scarcer, and agriculture is depressed by drought and rising temperatures, the prospect of future conflict does worry me.

Another unexpected consequence of climate change is the economic impact. When people think of climate change, they think of numbers such as the rise in temperatures or ocean levels. However, climate change will also affect the finances of future generations. In September, I wrote a post for EcoPlum called “Pay Up, Millenials.” In this post, I explained that people are less productive at extreme temperatures, thus causing a decrease in national GDP. Furthermore, as extreme weather caused by climate change continues to wreak havoc and cause billions of dollars in damage, taxpayers can expect to face higher taxes to pay for these costs. As a result of both lower GDP and increased taxes, a Demos and NextGen Climate analysis found that if no action is taken to combat climate change, a 21-year-old 2015 college graduate earning a median income can expect their lifetime income and wealth to decrease by $126,000 and $187,000, respectively. The predicted loss in wealth jumps to $764,000 for a college graduate born in 2015 earning a median income. Ouch.

JUnQ: Thank you very much for this interview!

— Adrien Thurotte


References
[1] https://www.nature.com/scitable/blog/green-science
[2] https://shop.ecoplum.com/blogs/sustainable-living/
[3] http://wearestillin.com