Bowdoin Science Journal


AI

Mimicking the Human Brain: The Role of Heterogeneity in Artificial Intelligence

April 10, 2022 by Jenna Albanese '24

Picture this: you’re in the passenger seat of a car, weaving through a metropolis – say, New York City. As expected, you see plenty of people: some rushed, some lingering, tourists and locals, old and young. But let’s zoom in: take just about any one of those individuals, and you will find 86 billion nerve cells, or neurons, in their brain carrying them through daily life. For comparison, the number of neurons in the human brain is about ten thousand times the number of residents in New York City.

But let’s zoom in even further: each one of those 86 billion neurons is ever-so-slightly different from the others. For example, while some neurons work extremely quickly, making decisions that guide basic processes in the brain, others work more slowly, basing their decisions on the activity of surrounding neurons. This variation in decision-making time among our neurons is called heterogeneity. Researchers have long known that heterogeneity exists, but until recently they were unsure how much it matters to our lives. It is just one example of the almost incomprehensible detail of the brain that makes human thinking so complex, and difficult even for modern researchers to fully understand.

Now, let’s zoom in again, but this time not on the person’s brain. Instead, let’s zoom in on the cell phone that individual might have in their pocket or hand. While a cell phone does not function exactly like the human brain, aspects of the device are certainly modeled after human thinking. Virtual assistants like Siri or Cortana, for instance, compose responses to general inquiries in ways that resemble human interaction.

This type of highly advanced digital experience is the result of artificial intelligence. Since the 1940s, elements of artificial intelligence have been modeled after features of the human brain, fashioned as neural networks composed of nodes, some serving as inputs and others as outputs. The nodes are comparable to brain cells, and they communicate with each other through a series of algorithms to produce outputs. However, in these technological brain models, every node is typically modeled in the same way in terms of the time it takes to respond to a given situation (ScienceDaily 2021). This is quite unlike the human brain, where heterogeneity ensures that each neuron responds to stimuli at a different speed. But does this even matter? Do intricate qualities of the brain like heterogeneity really make a difference in our thinking, or in digital functioning if incorporated into artificial intelligence?
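As a rough illustration of that contrast (a minimal sketch in Python, assuming a simple leaky-integrator node rather than any specific system’s architecture), a node’s response time can be written as a time constant, and a conventional network gives every node the same value:

```python
import numpy as np

# Illustrative sketch only (not the model from the study discussed below): a
# leaky-integrator "node" whose time constant tau sets how quickly its state
# tracks new input. In a standard artificial network, every node is usually
# given the same tau, i.e. the same response time.

def update_state(state, inputs, weights, tau, dt=1.0):
    """One Euler step: small tau -> fast response, large tau -> slow response."""
    drive = weights @ inputs                     # weighted input from connected nodes
    return state + (dt / tau) * (drive - state)  # state relaxes toward the drive

rng = np.random.default_rng(0)
weights = rng.normal(size=(4, 3))    # 4 nodes listening to 3 input signals
state = np.zeros(4)
tau = np.full(4, 20.0)               # identical time constants: the conventional setup

for _ in range(10):                  # feed a constant input and watch the states evolve
    state = update_state(state, np.ones(3), weights, tau)
print(state)
```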

The short answer is yes, at least in the case of heterogeneity. Researchers have recently investigated how heterogeneity influences an artificial neural network’s performance on visual and auditory classification tasks. In the study, each cell in the artificial neural network was assigned its own “time constant” – how long it takes to respond to a situation given the responses of nearby cells. In essence, the researchers varied the heterogeneity of the artificial neural networks. The results were astonishing: once heterogeneity was introduced, the networks completed the tasks more efficiently and accurately, with the strongest result a 15-20% improvement on auditory tasks (Perez-Nieves et al. 2021).
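A toy way to picture the manipulation (an illustrative sketch, not the spiking-network code from Perez-Nieves et al.; the gamma distribution below is just one plausible choice): the homogeneous condition gives every unit one shared time constant, while the heterogeneous condition lets each unit draw its own.

```python
import numpy as np

rng = np.random.default_rng(1)
n_units = 128

# Homogeneous condition: every unit shares a single response time constant.
tau_homogeneous = np.full(n_units, 20.0)

# Heterogeneous condition: each unit draws its own time constant, so some
# units react quickly while others integrate input over longer windows.
# (Gamma-distributed values are an illustrative choice, not the paper's recipe.)
tau_heterogeneous = rng.gamma(shape=3.0, scale=20.0 / 3.0, size=n_units)

print("homogeneous   mean/std:", tau_homogeneous.mean(), tau_homogeneous.std())
print("heterogeneous mean/std:", tau_heterogeneous.mean(), tau_heterogeneous.std())

# Training two otherwise identical networks, one with each set of time
# constants, on the same classification task is the kind of comparison the
# study describes.
```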

This result indicates that heterogeneity helps us think systematically, improve our task performance, and learn in changing conditions (Perez-Nieves et al. 2021). So perhaps it would be advantageous to incorporate heterogeneity into standard artificial intelligence models. With this change, technology’s way of “thinking” would come one step closer to functioning like a human brain, adopting a similar level of complexity and intricacy.

So, why does this matter? If parts of artificial intelligence are modeled closer and closer to how the human brain works, real-world benefits abound, and we’re talking on a level grander than virtual assistants. One prominent example is in head and neck cancer prognosis. Clinical predictors of head and neck cancer prognosis include factors like age, pathological findings, HPV status, and tobacco and alcohol consumption (Chinnery et al. 2020). With a multitude of factors at play, physicians spend excessive amounts of time analyzing head and neck cancer patients’ lifestyles in order to deduce an accurate prognosis. Alternatively, artificial intelligence could be used to model this complex web of factors for these cancer patients, and physicians’ time could be spent on other endeavors.

This type of clinical application is still far from implementation, but it remains in sight for modern researchers. As the brain is further explored and understood, more and more of the elements that comprise advanced human thinking can be incorporated into technology. Now, put yourself in the shoes of our New York City passerby: how would you feel if the small cell phone in your pocket were just as intelligent and efficient as the 86 billion neurons in your head? How about if that cell phone solved problems like you do and thought like you think, in essence serving as a smaller version of your own brain? It is almost unfathomable! Yet, by harnessing heterogeneity, researchers have come one step closer to realizing this goal.

References

Chinnery, T., Arifin, A., Tay, K. Y., Leung, A., Nichols, A. C., Palma, D. A., Mattonen, S. A., & Lang, P. (2020). Utilizing artificial intelligence for head and neck cancer outcomes prediction from imaging. Canadian Association of Radiologists Journal, 72(1), 73–85. https://doi.org/10.1177/0846537120942134.

Perez-Nieves, N., Leung, V. C. H., Dragotti, P. L., & Goodman, D. F. M. (2021). Neural heterogeneity promotes robust learning. Nature Communications, 12(1). https://doi.org/10.1038/s41467-021-26022-3. 

ScienceDaily. (2021, October 6). Brain cell differences could be key to learning in humans and AI. ScienceDaily. Retrieved February 27, 2022, from https://www.sciencedaily.com/releases/2021/10/211006112626.htm.

Filed Under: Computer Science and Tech, Psychology and Neuroscience Tagged With: AI, heterogeneity, neural network

What is more urgent for AI research: long-term or short-term concerns? Experts disagree

April 26, 2021 by Micaela Simeone '22

In a 2015 TED Talk, philosopher and Founding Director of the Oxford Future of Humanity Institute Nick Bostrom discusses the prospect of machine superintelligence: AI that would supersede human-level general intelligence. He begins by noting that with the advent of machine-learning models, we have shifted into a new paradigm of algorithms that learn—often from raw data, similar to the human infant (Bostrom, “What Happens” 3:26 – 3:49).

We are, of course, still in the era of narrow AI: the human brain possesses many capabilities beyond those of the most powerful AI. However, Bostrom notes that artificial general intelligence (AGI)—AI that can perform any intellectual task a human can—has been projected by many experts to arrive around mid- to late-century (Müller and Bostrom, 1) and that the period in between the development of AGI and whatever comes next may not be long at all.

Of course, Bostrom notes, the ultimate limits to information processing in the machine substrate lie far outside the limits of biological tissue due to factors such as size and speed difference (“What Happens” 5:05 – 5:43). So, Bostrom says, the potential for superintelligence lies dormant for now, but in this century, scientists may unlock a new path forward in AI. We might then see an intelligence explosion constituting a new shift in the knowledge substrate, and resulting in superintelligence (6:00 – 6:09).

What we should worry about, Bostrom explains, are the consequences (which reach as far as existential risk) of creating an immensely powerful intelligence guided wholly by processes of optimization. Bostrom imagines that a superintelligent AI tasked with, for example, solving a highly complex mathematical problem, might view human morals as threats to a strictly mathematical approach. In this scenario, our future would be shaped by the preferences of the AI, for better or for worse (Bostrom, “What Happens” 10:02 – 10:28).

For Bostrom, then, the answer is to figure out how to create AI that uses its intelligence to learn what we value and is motivated to perform actions that it would predict we will approve of. We would thus leverage this intelligence as much as possible to solve the control problem: “the initial conditions for the intelligence explosion might need to be set up in just the right way, if we are to have a controlled detonation,” Bostrom says (“What Happens” 14:33 – 14:41). 

Thinking too far ahead?

Experts disagree about what solutions are urgently needed in AI

Many academics think that concerns about superintelligence are too indefinite and too far in the future to merit much discussion. These thinkers usually also argue that our energies are better spent focused on short-term AI concerns, given that AI is already reshaping our lives in profound and not always positive ways. In a 2015 article, Oxford Internet Institute professor Luciano Floridi called discussions about a possible intelligence explosion “irresponsibly distracting,” arguing that we need to take care of the “serious and pressing problems” of present-day digital technologies (“Singularitarians” 9-10).

Beneficence versus non-maleficence

In conversations about how we can design AI systems that will better serve the interests of humanity and promote the common good, a distinction is often made between the negative principle (“do no harm”) and the positive principle (“do good”). Put another way, approaches toward developing principled AI can be either about ensuring that those systems are beneficent or ensuring they are non-maleficent. In the news, as one article points out, the two mindsets can mean the difference between headlines like “Using AI to eliminate bias from hiring” and “AI-assisted hiring is biased. Here’s how to make it more fair” (Bodnari, 2020).

Thinkers, like Bostrom, who are concerned with long-term AI worries such as superintelligence tend to structure their arguments more around the negative principle of non-maleficence. Though Bostrom does present a “common good principle” (312) in his 2014 book, Superintelligence: Paths, Dangers, Strategies, suggestions like this one sit alongside the broader consideration that we need to be very careful with AI development in order to avoid the wide-ranging harm possible with general machine intelligence.

In an article from last year, Floridi once again accuses those concerned with superintelligence of alarmism and irresponsibility, arguing that their worries mislead public opinion to be fearful of AI progress rather than knowledgeable about the potential and much-needed solutions AI could bring about. Echoing the beneficence principle, Floridi writes, “we need all the good technology that we can design, develop, and deploy to cope with these challenges, and all human intelligence we can exercise to put this technology in the service of a better future” (“New Winter” 2).

In his afterword, Bostrom echoes the non-maleficence principle when he writes, “I just happen to think that, at this point in history, whereas we might get by with a vague sense that there are (astronomically) great things to hope for if the machine intelligence transition goes well, it seems more urgent that we develop a precise detailed understanding of what specific things could go wrong—so that we can make sure to avoid them” (Superintelligence 324).

Considerations regarding the two principles within the field of bioethics (where they originated) can be transferred to conversations about AI. In taking the beneficence approach (do good = help the patient), one worry in the medical community is that doctors risk negatively interfering in their patients’ lives or overstepping boundaries such as privacy. Similarly, in the superintelligence debate, perhaps the short-term, “do good now” camp risks sidelining, for example, preventative AI safety mechanisms in the pursuit of other, more pressing beneficent outcomes such as problem-solving or human rights compliance.

There are many other complications involved. If we take the beneficence approach, the loaded questions of “whose common good?” and of who is making the decisions are paramount. On the other hand, taking an approach that centers doing good arguably also centers humanity and compassion, whereas non-maleficence may lead to more mathematical or impersonal calculations of how best to avoid specific risks or outcomes. 

Bridging the gap

The different perspectives around hopes for AI, and possible connections between them, are outlined in “Bridging near- and long-term concerns about AI,” a 2019 paper by Stephen Cave and Seán S. ÓhÉigeartaigh of the Leverhulme Centre for the Future of Intelligence at the University of Cambridge.

The authors explain that researchers focused on the near-term prioritize immediate or imminent challenges such as privacy, accountability, algorithmic bias, and the safety of systems that are close to deployment. On the other hand, those working on the long-term examine concerns that are less certain, such as wide-scale job loss, superintelligence, and “fundamental questions about humanity’s place in a world with intelligent machines” (Cave and ÓhÉigeartaigh, 5).

Ultimately, Cave and ÓhÉigeartaigh argue that the disconnect between the two groups is a mistake, and that thinkers focused on one set of issues have good reasons to take seriously work done on the other.

The authors point to many possible benefits available to long-term research with insight from the present. For example, they write that immediate AI concerns will grow in importance as increasingly powerful systems are deployed. Technical safety research done now, they explain, could provide fundamental frameworks for future systems (5).

In considering what the long-term conversation has to offer us today, the authors write that “perhaps the most important point is that the medium to long term has a way of becoming the present. And it can do so unpredictably” (6). They emphasize that the impacts of both current and future AI systems might depend more on tipping points than even progressions, writing, “what the mainstream perceives to be distant-future speculation could therefore become reality sooner than expected” (6).

Regardless of the controversies over whether we should take the prospect of superintelligence seriously, support for investments in AI safety research unites many experts across the board. At the least, simply joining the conversation means asking one question which we might all agree is important: What does it mean to be human in a world increasingly shaped by the internet, digital technologies, algorithms, and machine-learning?

Works Cited

Bodnari, Andreea. “AI Ethics: First Do No Harm.” Towards Data Science, Sep 7, 2020, https://towardsdatascience.com/ai-ethics-first-do-no-harm-23fbff93017a 

Bostrom, Nick. Superintelligence: Paths, Dangers, Strategies, 2014, Oxford University Press. 

Bostrom, Nick. “What happens when our computers get smarter than we are?” TED, March 2015, video, https://www.ted.com/talks/nick_bostrom_what_happens_when_our_computers_get_smarter_than_we_are

Cave, Stephen and ÓhÉigeartaigh, Seán S. “Bridging near- and long-term concerns about AI,” Nature Machine Intelligence, vol. 1, 2019, pp. 5-6. https://www.nature.com/articles/s42256-018-0003-2 

Floridi, Luciano. “AI and Its New Winter: from Myths to Realities,” Philosophy & Technology, vol. 33, 2020, pp. 1-3, SpringerLink. https://link.springer.com/article/10.1007/s13347-020-00396-6 

Floridi, Luciano. “Singularitarians, AItheists, and Why the Problem with Artificial Intelligence Is H.A.L. (Humanity At Large), Not HAL.” APA Newsletter on Philosophy and Computers, vol. 14, no. 2, Spring 2015, pp. 8-10. https://www.academia.edu/15037984/ 

Müller, Vincent C. and Bostrom, Nick. “Future Progress in Artificial Intelligence: A Survey of Expert Opinion.” Fundamental Issues of Artificial Intelligence. Synthese Library; Berlin: Springer, 2014, www.nickbostrom.com

Filed Under: Computer Science and Tech, Science Tagged With: AI, AI ethics, superintelligence

‘The Scariest Deepfake of All’: AI-Generated Text & GPT-3

March 1, 2021 by Micaela Simeone '22

Recent advances in machine-learning systems have led to both exciting and unnerving technologies—personal assistant bots, email spam filtering, and search engine algorithms are just a few omnipresent examples of technology made possible through these systems. Deepfakes (deep-learning fakes), or algorithm-generated synthetic media, constitute one example of a still-emerging and tremendously consequential development in machine learning. WIRED recently called AI-generated text “the scariest deepfake of all,” turning heads toward one of the most powerful text generators out there: artificial intelligence research lab OpenAI’s Generative Pre-trained Transformer 3 (GPT-3) language model.

GPT-3 is an autoregressive language model that uses its deep-learning experience to produce human-like text. Put simply, GPT-3 is directed to study the statistical patterns in a dataset of about a trillion words collected from the web and digitized books. GPT-3 then uses its digest of that massive corpus to respond to text prompts by generating new text with similar statistical patterns, endowing it with the ability to compose news articles, satire, and even poetry. 
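A toy sketch of what “autoregressive” means in practice (illustrative only; the lookup-table “model” below is invented and is nothing like OpenAI’s actual network or API): the system repeatedly predicts a distribution over the next token given the text so far, samples one token, and appends it.

```python
import random

# Stand-in "model": next-token probabilities keyed by the previous token.
# GPT-3 learns such patterns with a huge neural network over ~2048-token
# contexts; this hypothetical table just makes the generation loop visible.
toy_model = {
    ("the",): {"cat": 0.5, "dog": 0.5},
    ("cat",): {"sat": 1.0},
    ("dog",): {"ran": 1.0},
    ("sat",): {"quietly": 0.7, ".": 0.3},
    ("ran",): {".": 1.0},
    ("quietly",): {".": 1.0},
}

def generate(prompt_tokens, max_new_tokens=8):
    tokens = list(prompt_tokens)
    for _ in range(max_new_tokens):
        context = (tokens[-1],)          # condition on what has been generated so far
        dist = toy_model.get(context)
        if dist is None:                 # no prediction available: stop
            break
        next_token = random.choices(list(dist), weights=list(dist.values()))[0]
        tokens.append(next_token)
        if next_token == ".":            # end of "sentence"
            break
    return " ".join(tokens)

print(generate(["the"]))                 # e.g. "the cat sat quietly ."
```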

GPT-3’s creators designed the AI to learn language patterns and immediately saw GPT-3 scoring exceptionally well on reading-comprehension tests. But when OpenAI researchers configured the system to generate strikingly human-like text, they began to imagine how these generative capabilities could be used for harmful purposes. Previously, OpenAI had often released full code with its publications on new models. This time, GPT-3’s creators decided to hide its underlying code from the public, not wanting to disseminate the full model or the millions of web pages used to train the system. In OpenAI’s research paper on GPT-3, the authors note that “any socially harmful activity that relies on generating text could be augmented by powerful language models,” and that “the misuse potential of language models increases as the quality of text synthesis improves.”

Just as humans are prone to internalizing the belief systems “fed” to them, machine-learning systems mimic what’s in their training data. In GPT-3’s case, biases present in the vast training corpus of Internet text led the AI to generate stereotyped and prejudiced content. Preliminary testing at OpenAI has shown that GPT-3-generated content reflects gendered stereotypes and reproduces racial and religious biases. Because of already fragmented trust and pervasive polarization online, Internet users find it increasingly difficult to trust online content; widespread GPT-3-generated text would require us to be even more critical consumers of it. The ability of GPT-3 to mirror societal biases and prejudices in its generated text means that GPT-3 online might only give more voice to our darkest emotional, civic, and social tendencies.

Because GPT-3’s underlying code remains in the hands of OpenAI and its API (the interface where users can partially work with and test out GPT-3) is not freely accessible to the public, many concerns over its implications steer our focus toward a possible future where its synthetic text becomes ubiquitous online. Due to GPT-3’s frighteningly successful “conception” of natural language, as well as its creative capabilities and bias-susceptible processes, many are worried that a GPT-3-populated Internet could do a lot of harm to our information ecosystem. However, GPT-3 exhibits powerful affordances as well as limitations, and experts are asking us not to project too many fears about human-level AI onto GPT-3 just yet.

GPT-3: Online Journalist

GPT-3-generated news article that research participants had the greatest difficulty distinguishing from a human-written article

Fundamentally, concerns about GPT-3-generated text online come from an awareness of just how different a threat synthetic text poses compared with other forms of synthetic media. In a recent article, WIRED contributor Renee DiResta writes that, throughout the development of Photoshop and other image-editing CGI tools, we learned to develop a healthy skepticism, though without fully disbelieving such photos, because “we understand that each picture is rooted in reality.” She points out that generated media, such as deepfaked video or GPT-3 output, is different because there is no unaltered original, and we will have to adjust to a new level of unreality. In addition, synthetic text “will be easy to generate in high volume, and with fewer tells to enable detection.” Right now, it is possible to detect repetitive or recycled comments that use the same snippets of text to flood a comment section or persuade audiences. However, if such comments had been generated independently by an AI, DiResta notes, these manipulation campaigns would have been much harder to detect (a rough sketch of this contrast follows the quotation below):

“Undetectable textfakes—masked as regular chatter on Twitter, Facebook, Reddit, and the like—have the potential to be far more subtle, far more prevalent, and far more sinister … The ability to manufacture a majority opinion, or create a fake commenter arms race—with minimal potential for detection—would enable sophisticated, extensive influence campaigns.” – Renee DiResta, WIRED
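Here is that sketch (the comments and score interpretation are invented for illustration): verbatim recycling is easy to flag because recycled posts share long runs of identical word n-grams, whereas independently generated texts carry no such shared fingerprint.

```python
# Recycled comments share long runs of identical word n-grams, so a simple
# overlap score flags them; independently generated texts do not.

def word_ngrams(text, n=3):
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap(a, b):
    """Jaccard similarity between the two texts' sets of word trigrams."""
    sa, sb = word_ngrams(a), word_ngrams(b)
    return len(sa & sb) / len(sa | sb) if (sa | sb) else 0.0

original = "this policy is a disaster and everyone in this town knows it"
recycled = "honestly this policy is a disaster and everyone in this town knows it"
independent = "the proposal would hurt local businesses and should be withdrawn"

print(overlap(original, recycled))     # high -> likely copy-pasted campaign
print(overlap(original, independent))  # near zero -> no shared fingerprint
```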

In their paper “Language Models are Few-Shot Learners,” GPT-3’s developers discuss the potential for misuse and threat actors—those seeking to use GPT-3 for malicious or harmful purposes. The paper states that threat actors can be organized by skill and resource levels, “ranging from low or moderately skilled and resourced actors who may be able to build a malicious product to … highly skilled and well resourced (e.g. state-sponsored) groups with long-term agendas.” Interestingly, OpenAI researchers write that threat actor agendas are “influenced by economic factors like scalability and ease of deployment” and that ease of use is another significant incentive for malicious use of AI. It seems that the very principles that guide the development of many emerging AI models like GPT-3—scalability, accessibility, and stable infrastructure—could also be what position these models as perfect options for threat actors seeking to undermine personal and collective agency online.

Staying with the projected scenario of GPT-3 text becoming widespread online, it is useful to consider the already algorithmic nature of our interactions online. In her article, DiResta writes of the Internet that “algorithmically generated content receives algorithmically generated responses, which feeds into algorithmically mediated curation systems that surface information based on engagement.” Introducing an AI “voice” into this environment could make our online interactions even less human. One example of a possible algorithmic accomplice of GPT-3 is Google’s Autocomplete, which internalizes queries and often reflects “-ism” statements and biases while processing suggestions based on common searches. AI-generated text could populate such algorithms with even more problematic content and further narrow our control over how we acquire neutral, unbiased knowledge.

An Emotional Problem

Talk of GPT-3 passing the Turing Test reflects many concerns about creating increasingly powerful AI. GPT-3 seems to hint at the possibility of a future where AI is able to replicate those attributes we might hope are exclusively human—traits like creativity, ingenuity, and, of course, understanding language. As Microsoft AI Blog contributor Jennifer Langston writes in a recent post, “designing AI models that one day understand the world more like people starts with language, a critical component to understanding human intent.”

Of course, as a machine-learning model, GPT-3 relies on a neural network (inspired by neural pathways in the human brain) that can process language. Importantly, GPT-3 represents a massive acceleration in scale and computing power (rather than novel machine-learning techniques), which gives it the ability to exhibit something eerily close to human intelligence. A recent Vox article on the subject asks, “is human-level intelligence something that will require a fundamentally new approach, or is it something that emerges of its own accord as we pump more and more computing power into simple machine learning models?” For some, the idea that the only thing distinguishing human intelligence from our algorithms is our relative “computing power” is more than a little uncomfortable.

As mentioned earlier, GPT-3 has been able to exhibit creative and artistic qualities, generating a trove of literary content including poetry and satire. The attributes we’ve long understood to be distinctly human are now proving to be replicable by AI, raising new anxieties about humanity, identity, and the future.

GPT-3’s recreation of Allen Ginsberg’s “Howl”

GPT-3’s Limitations

While GPT-3 can generate impressively human-like text, most researchers maintain that this text is often “unmoored from reality,” and that, even with GPT-3, we are still far from reaching artificial general intelligence. In a recent MIT Technology Review article, author Will Douglas Heaven points out that GPT-3 often returns contradictions or nonsense because its process is not guided by any true understanding of reality. Ultimately, researchers believe that GPT-3’s human-like output and versatility are the results of excellent engineering, not genuine intelligence. GPT-3 uses many of its parameters to memorize Internet text that doesn’t generalize easily, and essentially parrots back “some well-known facts, some half-truths, and some straight lies, strung together in what first looks like a smooth narrative,” according to Douglas Heaven. As it stands today, GPT-3 is just an early glimpse of AI’s world-altering potential, and it remains a narrowly intelligent tool made by humans and reflecting our conceptions of the world.

A final point of optimism is that the field around ethical AI is ever-expanding, and developers at OpenAI are looking into the possibility of automatic discriminators that may have greater success than human evaluators at detecting AI model-generated text. In their research paper, developers wrote that “automatic detection of these models may be a promising area of future research.” Improving our ability to detect AI-generated text might be one way to regain agency in a possible future with bias-reproducing AI “journalists” or undetectable deepfaked text spreading misinformation online.
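As a minimal sketch of what such a discriminator could look like in principle (this is not OpenAI’s detector; the training snippets and labels below are invented, and a real system would need far more data and far richer features), one could train a binary classifier on labeled human-written versus model-generated text:

```python
# Toy "automatic discriminator": a binary classifier over labeled examples of
# human-written (0) vs. machine-generated (1) text. Invented data, tiny scale;
# real detectors need large corpora and stronger features.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

human_texts = [
    "Went to the farmers market today and the peaches were unbelievable.",
    "My flight got delayed twice, but honestly the crew handled it well.",
]
generated_texts = [
    "The peaches at the market were good. The market had many good peaches.",
    "The flight was delayed. The flight was delayed again. The crew was good.",
]

texts = human_texts + generated_texts
labels = [0] * len(human_texts) + [1] * len(generated_texts)

detector = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
detector.fit(texts, labels)

print(detector.predict(["The peaches were good. The peaches were very good."]))
```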

Ultimately, GPT-3 suggests that language is more predictable than many people assume, and it challenges common assumptions about what makes humans unique. Exactly what’s going on inside GPT-3 also isn’t entirely clear, pushing us to keep thinking about the AI “black box” problem and about methods for figuring out just how GPT-3 reiterates natural language after digesting millions of snippets of Internet text. However, perhaps GPT-3 gives us an opportunity to decide for ourselves whether even the most powerful of future text generators could undermine the distinctly human conception of the world and of poetry, language, and conversation. A tweet Douglas Heaven quotes in his article from user @mark_riedl provides one possible way to frame both our worries and hopes about tech like GPT-3: “Remember…the Turing Test is not for AI to pass, but for humans to fail.”

Filed Under: Computer Science and Tech, Science Tagged With: AI, AI ethics, artificial intelligence, GPT-3, online journalists, textfakes
