{"id":721,"date":"2021-04-26T12:34:25","date_gmt":"2021-04-26T16:34:25","guid":{"rendered":"https:\/\/students.bowdoin.edu\/bowdoin-science-journal\/?p=721"},"modified":"2022-02-20T13:36:55","modified_gmt":"2022-02-20T18:36:55","slug":"what-is-more-urgent-for-ai-research-long-term-or-short-term-concerns-experts-disagree","status":"publish","type":"post","link":"https:\/\/students.bowdoin.edu\/bowdoin-science-journal\/science\/what-is-more-urgent-for-ai-research-long-term-or-short-term-concerns-experts-disagree\/","title":{"rendered":"What is more urgent for AI research: long-term or short-term concerns? Experts disagree"},"content":{"rendered":"<p><span style=\"font-weight: 400\"> In a 2015 <\/span><a href=\"https:\/\/youtu.be\/MnT1xgZgkpk\"><span style=\"font-weight: 400\">TED Talk<\/span><\/a><span style=\"font-weight: 400\">, philosopher and Founding Director of the Oxford Future of Humanity Institute Nick Bostrom discusses the prospect of machine superintelligence: AI that would supersede human-level general intelligence. He begins by noting that with the advent of machine-learning models, we have shifted into a new paradigm of algorithms that learn\u2014often from raw data, similar to the human infant (Bostrom, \u201cWhat Happens\u201d 3:26 &#8211; 3:49).<\/span><\/p>\n<p><span style=\"font-weight: 400\">We are, of course, still in the era of <\/span><a href=\"https:\/\/medium.com\/mapping-out-2050\/distinguishing-between-narrow-ai-general-ai-and-super-ai-a4bc44172e22\"><span style=\"font-weight: 400\">narrow AI<\/span><\/a><span style=\"font-weight: 400\">: the human brain possesses many capabilities beyond those of the most powerful AI. However, Bostrom notes that artificial general intelligence (AGI)\u2014AI that can perform any intellectual task a human can\u2014has been <\/span><a href=\"https:\/\/nickbostrom.com\/papers\/survey.pdf\"><span style=\"font-weight: 400\">projected<\/span><\/a><span style=\"font-weight: 400\"> by many experts to arrive around mid- to late-century (M\u00fcller and Bostrom, 1) and that the period in between the development of AGI and whatever comes next may not be long at all.<\/span><\/p>\n<p><span style=\"font-weight: 400\">Of course, Bostrom notes, the ultimate limits to information processing in the machine substrate lie far outside the limits of biological tissue due to factors such as size and speed difference (\u201cWhat Happens\u201d 5:05 &#8211; 5:43). So, Bostrom says, the potential for superintelligence lies dormant for now, but in this century, scientists may unlock a new path forward in AI. We might then see an intelligence explosion constituting a new shift in the knowledge substrate, and resulting in superintelligence (6:00 &#8211; 6:09).<\/span><\/p>\n<p><span style=\"font-weight: 400\">What we should worry about, Bostrom explains, are the consequences (which reach as far as existential risk) of creating an immensely powerful intelligence guided wholly by processes of optimization. Bostrom imagines that a superintelligent AI tasked with, for example, solving a highly complex mathematical problem, might view human morals as threats to a strictly mathematical approach. 
In this scenario, our future would be shaped by the preferences of the AI, for better or for worse (Bostrom, "What Happens" 10:02–10:28).

For Bostrom, then, the answer is to figure out how to create AI that uses its intelligence to learn what we value and is motivated to act in ways it predicts we will approve of. We would thus leverage machine intelligence as much as possible to solve the control problem: "the initial conditions for the intelligence explosion might need to be set up in just the right way, if we are to have a controlled detonation," Bostrom says ("What Happens" 14:33–14:41).

**Thinking too far ahead?**

[Figure: Experts disagree about what solutions are urgently needed in AI]

Many academics think that concerns about superintelligence are too indefinite and too far in the future to merit much discussion. These thinkers usually also argue that our energies are better spent on short-term AI concerns, given that AI is already reshaping our lives in profound and not always positive ways. In a 2015 [article](https://www.academia.edu/15037984/Singularitarians_AItheists_and_Why_the_Problem_with_Artificial_Intelligence_is_H_A_L_Humanity_At_Large_not_HAL), Oxford Internet Institute professor Luciano Floridi called discussions about a possible intelligence explosion "irresponsibly distracting," arguing that we need to take care of the "serious and pressing problems" of present-day digital technologies ("Singularitarians" 9-10).

**Beneficence versus non-maleficence**

In conversations about how to design AI systems that better serve the interests of humanity and promote the common good, a distinction is often made between the negative principle ("do no harm") and the positive principle ("do good").
Put another way, approaches to developing principled AI can aim either to ensure that systems are beneficent or to ensure that they are non-maleficent. In the news, as [one article](https://towardsdatascience.com/ai-ethics-first-do-no-harm-23fbff93017a) points out, the two mindsets can mean the difference between headlines like "Using AI to eliminate bias from hiring" and "AI-assisted hiring is biased. Here's how to make it more fair" (Bodnari, 2020).
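To make the contrast concrete: the second headline reflects an audit-first, "do no harm" mindset. The sketch below is purely illustrative (it is not drawn from any of the cited authors, and the group names, data, and threshold are invented for the example); it checks a hypothetical hiring model's recommendations for disparate selection rates across demographic groups, using the common four-fifths rule of thumb.

```python
# Illustrative "do no harm" audit of a hypothetical hiring model's outputs.
# All data and the 0.8 threshold are assumptions for this example.

def selection_rate(decisions):
    """Fraction of candidates the model recommends hiring (1 = hire)."""
    return sum(decisions) / len(decisions)

def parity_audit(decisions_by_group, threshold=0.8):
    """Flag groups whose selection rate falls below `threshold` times the
    highest group's rate (the 'four-fifths rule' heuristic)."""
    rates = {g: selection_rate(d) for g, d in decisions_by_group.items()}
    best = max(rates.values())
    return {g: (r, r >= threshold * best) for g, r in rates.items()}

# Hypothetical model outputs: 1 = recommended for hire, 0 = rejected.
decisions = {
    "group_a": [1, 1, 0, 1, 0, 1, 1, 0],  # 5/8 recommended
    "group_b": [0, 1, 0, 0, 1, 0, 0, 0],  # 2/8 recommended
}

for group, (rate, passes) in parity_audit(decisions).items():
    print(f"{group}: selection rate {rate:.1%}, {'ok' if passes else 'flagged'}")
```

A beneficence-oriented team might instead ask how the same model could actively improve hiring outcomes; the point here is only that the two principles suggest different default questions to put to a system.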
Thinkers like Bostrom who are concerned with long-term AI worries such as superintelligence tend to structure their arguments around the negative principle of non-maleficence. Though Bostrom does present a "common good principle" (312) in his 2014 book, *Superintelligence: Paths, Dangers, Strategies*, suggestions like this one appear alongside the broader argument that we must be very careful with AI development in order to avoid the wide-ranging harm that general machine intelligence could cause.

In a 2020 [article](https://link.springer.com/article/10.1007/s13347-020-00396-6), Floridi once again accuses those concerned with superintelligence of alarmism and irresponsibility, arguing that their worries mislead public opinion into fearing AI progress rather than understanding the potential and much-needed solutions AI could bring about. Echoing the beneficence principle, Floridi writes, "we need all the good technology that we can design, develop, and deploy to cope with these challenges, and all human intelligence we can exercise to put this technology in the service of a better future" ("New Winter" 2).

In his afterword, Bostrom echoes the non-maleficence principle: "I just happen to think that, at this point in history, whereas we might get by with a vague sense that there are (astronomically) great things to hope for if the machine intelligence transition goes well, it seems more urgent that we develop a precise detailed understanding of what specific things could go wrong—so that we can make sure to avoid them" (*Superintelligence* 324).

Considerations regarding the two principles in bioethics, the field where they originated, can be transferred to conversations about AI. Under the beneficence approach (do good = help the patient), one worry in the medical community is that doctors risk negatively interfering in their patients' lives or overstepping boundaries such as privacy. Similarly, in the superintelligence debate, the short-term, "do good now" camp perhaps risks sidelining preventative AI safety mechanisms in the pursuit of other, more pressing beneficent outcomes such as problem-solving or human rights compliance.

There are many other complications involved. If we take the beneficence approach, the loaded questions of "whose common good?" and of who makes the decisions are paramount. On the other hand, an approach that centers doing good arguably also centers humanity and compassion, whereas non-maleficence may lead to more mathematical or impersonal calculations of how best to avoid specific risks or outcomes.

**Bridging the gap**

The different perspectives on hopes for AI, and possible connections between them, are outlined in a [2019 paper](https://www.nature.com/articles/s42256-018-0003-2) by Stephen Cave and Seán S. ÓhÉigeartaigh of the Leverhulme Centre for the Future of Intelligence at the University of Cambridge, "Bridging near- and long-term concerns about AI."

The authors explain that researchers focused on the near term prioritize immediate or imminent challenges such as privacy, accountability, algorithmic bias, and the safety of systems that are close to deployment. Those working on the long term, by contrast, examine less certain concerns such as wide-scale job loss, superintelligence, and "fundamental questions about humanity's place in a world with intelligent machines" (Cave and ÓhÉigeartaigh 5).

Ultimately, Cave and ÓhÉigeartaigh argue that the disconnect between the two groups is a mistake, and that thinkers focused on one set of issues have good reasons to take seriously work done on the other.

The authors point to many benefits that insight from the present offers long-term research. For example, they write that immediate AI concerns will grow in importance as increasingly powerful systems are deployed, and that technical safety research done now could provide fundamental frameworks for future systems (5).

In considering what the long-term conversation has to offer us today, the authors write that "perhaps the most important point is that the medium to long term has a way of becoming the present. And it can do so unpredictably" (6). They emphasize that the impacts of both current and future AI systems may depend more on tipping points than on even progressions: "what the mainstream perceives to be distant-future speculation could therefore become reality sooner than expected" (6).

Whatever the controversies over whether we should take the prospect of superintelligence seriously, support for investment in AI safety research unites many experts across the board. At the least, simply joining the conversation means asking one question we might all agree is important: What does it mean to be human in a world increasingly shaped by the internet, digital technologies, algorithms, and machine learning?

**Works Cited**
\u201cAI Ethics: First Do No Harm.\u201d <\/span><i><span style=\"font-weight: 400\">Towards Data Science<\/span><\/i><span style=\"font-weight: 400\">, Sep 7, 2020, <\/span><a href=\"https:\/\/towardsdatascience.com\/ai-ethics-first-do-no-harm-23fbff93017a\"><span style=\"font-weight: 400\">https:\/\/towardsdatascience.com\/ai-ethics-first-do-no-harm-23fbff93017a<\/span><\/a><span style=\"font-weight: 400\">\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400\">Bostrom, Nick. <\/span><i><span style=\"font-weight: 400\">Superintelligence: Paths, Dangers, Strategies<\/span><\/i><span style=\"font-weight: 400\">, 2014, Oxford University Press.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400\">Bostrom, Nick. \u201cWhat happens when our computers get smarter than we are?\u201d <\/span><i><span style=\"font-weight: 400\">Ted<\/span><\/i><span style=\"font-weight: 400\">, March 2015, video,\u00a0<\/span><a href=\"https:\/\/www.ted.com\/talks\/nick_bostrom_what_happens_when_our_computers_get_smarter_than_we_are\"><span style=\"font-weight: 400\">https:\/\/www.ted.com\/talks\/nick_bostrom_what_happens_when_our_computers_get_smarter_than_we_are<\/span><\/a><span style=\"font-weight: 400\">\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400\">Cave, Stephen and \u00d3h\u00c9igeartaigh, Se\u00e1n S. \u201cBridging near- and long-term concerns about AI,\u201d <\/span><i><span style=\"font-weight: 400\">Nature <\/span><\/i><i><span style=\"font-weight: 400\">Machine\u00a0<\/span><\/i><i><span style=\"font-weight: 400\">Intelligence, <\/span><\/i><span style=\"font-weight: 400\">vol. 1, 2019, pp. 5-6. <\/span><a href=\"https:\/\/www.nature.com\/articles\/s42256-018-0003-2\"><span style=\"font-weight: 400\">https:\/\/www.nature.com\/articles\/s42256-018-0003-2<\/span><\/a><span style=\"font-weight: 400\">\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400\">Floridi, Luciano. \u201cAI and Its New Winter: from Myths to Realities,\u201d <\/span><i><span style=\"font-weight: 400\">Philosophy &amp; Technology, <\/span><\/i><span style=\"font-weight: 400\">vol. 33, <\/span><span style=\"font-weight: 400\">2020, pp. 1-3,\u00a0<\/span><span style=\"font-weight: 400\">SpringerLink. <\/span><a href=\"https:\/\/link.springer.com\/article\/10.1007\/s13347-020-00396-6\"><span style=\"font-weight: 400\">https:\/\/link.springer.com\/article\/10.1007\/s13347-020-00396-6<\/span><\/a><span style=\"font-weight: 400\">\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400\">Floridi, Luciano. \u201cSingularitarians, AItheists, and Why the Problem with Artificial Intelligence Is H.A.L. <\/span><span style=\"font-weight: 400\">(Humanity At\u00a0<\/span><span style=\"font-weight: 400\">Large), Not HAL.\u201d <\/span><i><span style=\"font-weight: 400\">APA Newsletter on Philosophy and Computers<\/span><\/i><span style=\"font-weight: 400\">, vol. 14, no. 2, Spring 2015, pp. 8-10.\u00a0<\/span><a href=\"https:\/\/www.academia.edu\/15037984\/\"><span style=\"font-weight: 400\">https:\/\/www.academia.edu\/15037984\/<\/span><\/a><span style=\"font-weight: 400\">\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400\">M\u00fcller, Vincent C. and Bostrom, Nick. \u2018Future progress in artificial intelligence: A Survey of Expert <\/span><span style=\"font-weight: 400\">Opinion.\u201d\u00a0<\/span><i><span style=\"font-weight: 400\">Fundamental Issues of Artificial Intelligence<\/span><\/i><span style=\"font-weight: 400\">. 