{"id":3824,"date":"2019-09-18T10:40:51","date_gmt":"2019-09-18T08:40:51","guid":{"rendered":"http:\/\/junq.info\/?p=3824"},"modified":"2019-09-29T18:57:01","modified_gmt":"2019-09-29T16:57:01","slug":"how-to-respond-to-the-potential-malicious-uses-of-artificial-intelligence","status":"publish","type":"post","link":"http:\/\/junq.info\/?p=3824","title":{"rendered":"How to respond to the potential malicious uses of artificial intelligence?"},"content":{"rendered":"\n<p>  Haydn Belfield <a href=\"http:\/\/junq.info\/wp-admin\/post.php?post=3824&amp;action=edit#_ftn1\">[1]<\/a>  is a Research Associate and Academic Project Manager at the University  of Cambridge\u2019s Centre for the Study of Existential Risk. He is also an  Associate Fellow at the Leverhulme Centre for the Future of  Intelligence. He works on the international security applications of  emerging technologies, especially artificial intelligence. He has a  background in policy and politics, including as a Senior Parliamentary  Researcher to a British Shadow Cabinet Minister, as a Policy Associate  to the University of Oxford\u2019s Global Priorities Project, and a degree in  Philosophy, Politics and Economics from Oriel College, University of  Oxford.<br \/><a href=\"http:\/\/junq.info\/wp-admin\/post.php?post=3824&amp;action=edit#_ftnref1\">[1]<\/a><a href=\"http:\/\/junq.info\/wp-admin\/hb492@cam.ac.uk\">hb492@cam.ac.uk<\/a><\/p>\n\n\n\n<div class=\"wp-block-image\"><figure class=\"aligncenter is-resized\"><img loading=\"lazy\" decoding=\"async\" src=\"http:\/\/junq.info\/wp-content\/uploads\/2019\/09\/Haydn.jpg\" alt=\"\" class=\"wp-image-3825\" width=\"278\" height=\"297\" srcset=\"http:\/\/junq.info\/wp-content\/uploads\/2019\/09\/Haydn.jpg 810w, http:\/\/junq.info\/wp-content\/uploads\/2019\/09\/Haydn-281x300.jpg 281w, http:\/\/junq.info\/wp-content\/uploads\/2019\/09\/Haydn-768x821.jpg 768w\" sizes=\"(max-width: 278px) 100vw, 278px\" \/><figcaption>  Haydn Belfield 
<\/figcaption><\/figure><\/div>\n\n\n\n<p>Artificial intelligence (AI) is beginning to change our world \u2013 for better and for worse. Like any other powerful and useful technology, it can be used both to help and to harm. We explored this in a major February 2018 report <em>The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation<\/em>.[1] We co-authored this report with 26 international experts from academia and industry to assess how criminals, terrorists and rogue states could maliciously use AI over the next five years, and how these misuses might be prevented and mitigated. In this piece I will cover recent advances in artificial intelligence, some of the new threats these pose, and what can be done about it.<\/p>\n\n\n\n<p>AI, according to Nilsson, \u201cis that activity devoted to making machines intelligent, and intelligence is that quality that enables an entity to function appropriately and with foresight in its environment\u201d.[2] It has been a field of study since at least Alan Turing\u2019s work in the 1940s, and perhaps since Ada Lovelace\u2019s in the 1840s. Most of the interest in recent years has come from the subfield of \u2018machine learning\u2019, in which instead of writing lots of explicit rules, one trains a system (or \u2018model\u2019) on data and the system \u2018learns\u2019 to carry out a particular task. Over the last few years there has been a notable increase in the capabilities of AI systems, and an increase in access to those capabilities.<\/p>\n\n\n\n<p>The increase in AI capabilities is often dated from 2012\u2019s seminal AlexNet paper.[3] This system achieved a large jump in performance on an image recognition task. 
This task has now been so comprehensively beaten that it has become a benchmark for new systems &#8211; \u201cthis method achieves state-of-the-art in less time, or at a lower cost\u201d. Advances in natural language processing (NLP) have led to systems capable of advanced translation, comprehension and analysis of text and audio \u2013 and indeed the creation of synthetic text (OpenAI\u2019s GPT-2) and audio (Google\u2019s Duplex). Generative Adversarial Networks (GANs) are capable of creating highly convincing synthetic images and videos. The UK company DeepMind achieved fame within the AI field with its systems capable of mastering Atari games from the 1980s such as Pong. But it broke into the popular imagination with its AlphaGo system\u2019s defeat of Lee Sedol at Go. AlphaZero, a successor program, was also superhuman at chess and shogi. AI systems have continued to match or surpass human performance at more games, and more complicated games: fast-paced, complex, \u2018real-time strategy\u2019 games such as Dota 2 and StarCraft II. <\/p>\n\n\n\n<p>This increase has been driven by key conceptual breakthroughs, the application of lots of money and talented people, and an increase in computing power (or \u2018compute\u2019). For example, training AlphaGo Zero used 300,000 times as much compute as AlexNet.[4]<\/p>\n\n\n\n<p>Access to AI systems has also increased. Most ML papers are\nfreely and openly published by default on the online repository <em>arXiv<\/em>.\nOften the code or trained AI system can be freely downloaded from code-hosting\nplatforms like GitHub, typically built with open-source frameworks like TensorFlow, which also\ntend to standardise programming methods. People new to the field can get up to speed through online\ncourse platforms such as Coursera, or the many tutorials available on YouTube. Instead\nof training their systems on their own computers, people can easily and cheaply\ntrain them on cloud computing providers such as Amazon Web Services or\nMicrosoft Azure. 
Indeed, the computer chips best suited to machine learning\n(GPUs and TPUs) are so expensive that it normally makes more sense to use a\ncloud provider and rent only the time one needs. Overall, then, it has become\nmuch easier, quicker and cheaper for someone to get up to speed and create a\nworking system of their own.<\/p>\n\n\n\n<p>These two processes have had many benefits: new scientific\nadvances, better and cheaper goods and services, and access to advanced\ncapabilities from around the world. However, they have also uncovered new vulnerabilities.\nOne is the discovery of \u2018adversarial examples\u2019 \u2013 adjustments to input data so\nminor as to be imperceptible to humans, but that cause a system to misclassify an\ninput. For example, misclassifying a picture of a stop sign as a 45 mph speed\nlimit sign.[5]<\/p>\n\n\n\n<p>These vulnerabilities have prompted some important work on \u2018AI safety\u2019, that is, reducing the risk of accidents involving AI systems in the short term [6,7] and the long term.[8] Our report focussed, however, on AI security: reducing the risk of malicious use of AI by humans. We looked at the short term: systems either currently in use or likely to be in use within the next five years.<\/p>\n\n\n\n<p>AI is a \u2018dual-use\u2019 technology &#8211; it can be used for good or\nill. Indeed, it has been described as an \u2018omni-use\u2019 technology, as it can be used\nin so many settings. Across many different areas, however, common threat factors\nemerge. Existing threats are expanding, as automation allows a greater scale of\nattacks. The skill transfer and diffusion of capabilities described above will\nallow a wider range of people to carry out attacks that are currently the preserve\nof experts. Novel threats are emerging, using the superhuman performance and\nspeed of AI systems, or attacking the unique vulnerabilities of AI systems. 
The\ncharacter of threats is also being altered: attacks are becoming more customised to\nparticular targets, and the growing distance between attacker and target is making attacks\nharder to attribute.<\/p>\n\n\n\n<p>These common factors will affect security in different ways\n&#8211; we split them into three domains.<\/p>\n\n\n\n<p>In \u2018digital security\u2019, for example, current \u2018spear phishing\u2019\nemails are tailor-made for a particular victim. An attacker trawls through all\nthe information they can find on a target, and drafts a message aimed at that\ntarget. This process could be automated with AI: a system could\ntrawl social media profiles for information and draft tailored synthetic text.\nAttacks would shift from being handcrafted to mass-produced.<\/p>\n\n\n\n<p>In \u2018physical security\u2019, for example, civilian drones are\nlikely to be repurposed for attacks. The Venezuelan regime claims to have been\ntargeted by an attempted drone assassination. Even if, as is most likely, this is\npropaganda, it gives an indication of threats to come. The failure of British\npolice for several days to deal with a remote-controlled drone over Gatwick Airport\ndoes not bode well.<\/p>\n\n\n\n<p>In \u2018political security\u2019 or \u2018epistemic security\u2019, the concern\nis twofold: that in repressive societies governments are using advanced data\nanalytics to better surveil their populations and profile dissidents; and that\nin democratic societies polities are being polarised and manipulated through\nsynthetic media and targeted political advertising.<\/p>\n\n\n\n<p>We made several recommendations for policy-makers, technical\nresearchers and engineers, company executives, and a wide range of other\nstakeholders. Since we published the report, it has received global media\ncoverage and has been welcomed by experts in different domains, such as AI policy,\ncybersecurity, and machine learning. 
We have subsequently consulted several\ngovernments, companies and civil society groups on the report\u2019s recommendations.\nIt was featured in the House of Lords Select Committee on AI\u2019s report.[9]\nWe have run a workshop series on epistemic security with the Alan Turing\nInstitute. The topic has received a great deal of coverage, due in part to the\nCambridge Analytica scandal and Zuckerberg\u2019s testimony to Congress. The\nAssociation for Computing Machinery (ACM) has called for impact assessment in\nthe peer review process. OpenAI decided not to publish the full details of\nits GPT-2 system due to concerns about synthetic media. On physical security,\nthe topic of Lethal Autonomous Weapons Systems has burst into the mainstream\nwith the controversy around Google\u2019s involvement in the US military\u2019s Project Maven.<\/p>\n\n\n\n<p>Despite these promising developments, there is still much\nmore to be done to research and develop policy around the malicious use of\nartificial intelligence, so that we can reap the benefits and avoid the misuse\nof this transformative technology. The technology is developing rapidly, and\nmalicious actors are quickly adapting it to harmful ends. There is no time to\nwait.<\/p>\n\n\n\n<p><strong>Read more:<\/strong><\/p>\n\n\n\n<table class=\"wp-block-table is-style-stripes\"><tbody><tr><td>\n  [1]\n  <\/td><td>\n  Brundage, M., Avin, S., <em>et al<\/em>. (2018). The Malicious Use of\n  Artificial Intelligence: Forecasting, Prevention, and Mitigation,\n  arXiv:1802.07228. \n  <\/td><\/tr><tr><td>\n  [2]\n  <\/td><td>\n  Nilsson, N. J. (2009). The Quest for Artificial Intelligence.\n  Cambridge University Press.\n  <\/td><\/tr><tr><td>\n  [3]\n  <\/td><td>\n  Krizhevsky, A., Sutskever, I., Hinton, G. E. (2012). ImageNet\n  classification with deep convolutional neural networks. <em>Advances in Neural\n  Information Processing Systems<\/em> (pp. 1097-1105).\n  <\/td><\/tr><tr><td>\n  [4]\n  <\/td><td>\n  Amodei, D., Hernandez, D. (2018). AI and Compute. 
OpenAI:\n  https:\/\/blog.openai.com\/ai-and-compute\/.\n  <\/td><\/tr><tr><td>\n  [5]\n  <\/td><td>\n  Karpathy, A. (2015). Breaking Convnets.\n  http:\/\/karpathy.github.io\/2015\/03\/30\/breaking-convnets.\n  <\/td><\/tr><tr><td>\n  [6]\n  <\/td><td>\n  Amodei, D., Olah, C., <em>et al.<\/em> (2016). Concrete Problems in AI\n  Safety, arXiv:1606.06565.\n  <\/td><\/tr><tr><td>\n  [7]\n  <\/td><td>\n  Leike, J., et al. (2017). AI Safety Gridworlds, arXiv:1711.09883.\n  <\/td><\/tr><tr><td>\n  [8]\n  <\/td><td>\n  Bostrom, N. (2014). Superintelligence: Paths, Dangers, Strategies. Oxford University Press.\n  <\/td><\/tr><tr><td>\n  [9]\n  <\/td><td>\n  House of Lords Select Committee on Artificial Intelligence (2018).\n  AI in the UK: ready, willing and able? Report of Session 2017\u201319, HL Paper\n  100.\n  <\/td><\/tr><\/tbody><\/table>\n","protected":false},"excerpt":{"rendered":"<p>Haydn Belfield [1] is a Research Associate and Academic Project Manager at the University of Cambridge\u2019s Centre for the Study of Existential Risk. He is also an Associate Fellow at the Leverhulme Centre for the Future of Intelligence. He works on the international security applications of emerging technologies, especially artificial intelligence. 
He has a background&hellip;&nbsp;<a href=\"http:\/\/junq.info\/?p=3824\" class=\"\" rel=\"bookmark\">Read More &raquo;<span class=\"screen-reader-text\">How to respond to the potential malicious uses of artificial intelligence?<\/span><\/a><\/p>\n","protected":false},"author":12,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"neve_meta_sidebar":"","neve_meta_container":"","neve_meta_enable_content_width":"","neve_meta_content_width":0,"neve_meta_title_alignment":"","neve_meta_author_avatar":"","neve_post_elements_order":"","neve_meta_disable_header":"","neve_meta_disable_footer":"","neve_meta_disable_title":"","footnotes":""},"categories":[84],"tags":[],"aioseo_notices":[],"_links":{"self":[{"href":"http:\/\/junq.info\/index.php?rest_route=\/wp\/v2\/posts\/3824"}],"collection":[{"href":"http:\/\/junq.info\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"http:\/\/junq.info\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"http:\/\/junq.info\/index.php?rest_route=\/wp\/v2\/users\/12"}],"replies":[{"embeddable":true,"href":"http:\/\/junq.info\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=3824"}],"version-history":[{"count":5,"href":"http:\/\/junq.info\/index.php?rest_route=\/wp\/v2\/posts\/3824\/revisions"}],"predecessor-version":[{"id":3855,"href":"http:\/\/junq.info\/index.php?rest_route=\/wp\/v2\/posts\/3824\/revisions\/3855"}],"wp:attachment":[{"href":"http:\/\/junq.info\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=3824"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"http:\/\/junq.info\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=3824"},{"taxonomy":"post_tag","embeddable":true,"href":"http:\/\/junq.info\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=3824"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}