
Bowdoin Science Journal


Computer Science and Tech

Telepathic Technology: Novel Brain Implants Help Paralyzed Patients Communicate Again

December 3, 2023 by Alexa Comess '26

Whether through lively classroom discussions, profound conversations with friends, or political debates at family dinners, speech resides at the crux of human connection. Tragic, life-altering events, such as stroke, traumatic brain injury, and neurodegenerative diseases like ALS, can cause a loss of speech, often through vocal paralysis. When the nervous system is ravaged by disease or injury, severe paralysis and consequent locked-in syndrome can occur. In locked-in syndrome, a patient loses nearly all motor abilities and can communicate only by blinking or other minimal movements, rendering traditional speech aids such as typing and writing tools useless. While years of research have produced several assistive speech devices for patients with severe paralysis, these devices are often extremely limited in vocabulary and offer output that is choppy, slow, and inauthentic. Although they allow patients to communicate to some degree, their shortcomings strip away much of the character and connection a person derives from speech, leaving patients feeling socially and emotionally isolated (Ramsey and Crone, 2023).

A recent study published in Nature has partially remedied this issue. Through the development and use of brain-computer interfaces (BCIs), scientists have created a pathway that translates a patient’s neural signals into personalized text, speech audio, and an animated, expressive facial avatar. In a case study involving a 47-year-old woman with severe paralysis and complete loss of speech resulting from a brainstem stroke sustained 18 years earlier, Metzger et al. designed and implanted a BCI into the left hemisphere of the patient’s brain, centered on her central sulcus and spanning the regions associated with speech production and language perception. The first of its kind, this BCI harnesses electrocorticography (ECoG) to decode neural signals for attempted speech and vocal tract movements into corresponding words, phrases, and facial expressions. Similar to the more familiar and established electroencephalography (EEG), which uses small electrodes attached to the scalp to monitor electrophysiological activity in the brain, electrocorticography records electrical signals from electrodes placed directly on the exposed surface of the brain (“Electrocorticography”). The specific BCI used in this study contains a high-density array of 253 ECoG electrodes connected to a percutaneous pedestal connector, which allows the signals to be decoded and displayed on a computer interface (Metzger et al., 2023).

Following surgical implantation, the BCI and its connector were hooked up to a computer running deep learning models that had already been “trained” on probability data, giving them the predictive ability to decode phones (individual speech sounds), silences, and articulatory gestures into words and phrases. An additional probability-based feature called a connectionist temporal classification (CTC) loss function was added to the neural decoding network to distinguish the timing of the attempted speech and signals, allowing word order and the pauses between words to be identified. After processing through this system, the decoded signals could be translated directly into letters, discrete vocal speech units, or discrete articulatory gestures, which respectively produce artificial text, speech, and facial expressions (Fig. 1, Metzger et al., 2023).
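To make the decoding step more concrete, here is a minimal, illustrative sketch of how a CTC loss ties unaligned frames of neural features to a phone sequence. This is not the study’s actual model: the toy network, dimensions, and phone IDs below are placeholder assumptions, with only the 253-electrode input count taken from the paper.

```python
import torch
import torch.nn as nn

# Illustrative dimensions; only the electrode count comes from the study.
n_channels = 253   # ECoG electrodes in the implanted array
n_frames = 100     # time steps of neural features for one attempted sentence
n_phones = 41      # hypothetical phone inventory, plus a CTC "blank" at index 0

# A toy recurrent decoder mapping neural-feature frames to per-frame phone probabilities.
rnn = nn.GRU(input_size=n_channels, hidden_size=128, batch_first=True)
readout = nn.Linear(128, n_phones)

neural_features = torch.randn(1, n_frames, n_channels)  # stand-in for recorded signals
hidden, _ = rnn(neural_features)
log_probs = readout(hidden).log_softmax(dim=-1)          # (batch, time, phones)

# The CTC loss scores frame-level predictions against a phone sequence without
# a frame-by-frame alignment: it marginalizes over all possible alignments,
# which is what lets timing, word order, and pauses be inferred.
target = torch.tensor([[7, 19, 3, 22]])                  # hypothetical phone IDs
ctc = nn.CTCLoss(blank=0)
loss = ctc(log_probs.transpose(0, 1),                    # CTC expects (time, batch, classes)
           target,
           input_lengths=torch.tensor([n_frames]),
           target_lengths=torch.tensor([4]))
print(loss.item())
```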

 

Figure 1. Overview of multimodal speech decoding in a patient with vocal paralysis. a. Overview of the speech-decoding pathway, from neural signal to speech output. b. Magnetic resonance imaging (MRI) scan of the patient’s brain, showing stroke-induced atrophy in the brainstem resulting in severe paralysis. c. MRI of the patient’s brain, overlaid with the implanted BCI in its actual location. d. Examples of simple articulatory movements attempted by the patient, coupled with their corresponding electrode-activation maps. Bottom graphs depict evoked cortical activity for each type of movement, along with the mean ± standard error across trials.

 

This novel neuroprosthesis provides an unprecedented sense of personalization and authenticity to communication for patients with extreme paralysis. While previous assistive speech technology could at best reach a rate of 14 words per minute, the BCI used in the case study averages 78 words per minute, much closer to the roughly 150 words per minute of average adult speech. Additionally, the synthesized speech can be personalized to sound like the patient’s voice from before her vocal paralysis, and the facial expressions can be projected onto an avatar resembling her, adding another layer of personalization (Metzger et al., 2023).

Innovations in speech-targeted neuroprosthetic technology have the potential to change the lives of thousands of people living with severe paralysis. Naturally, these solutions are not perfect: the error rate of this particular BCI is approximately 20% for direct text decoding and 50% for direct speech decoding (Ramsey and Crone, 2023). Though these numbers may sound high, they mark a remarkable improvement over more established technologies and point to the vast potential of BCIs a few years down the road. Additionally, while this work is still in its early stages, several similar BCIs are being developed for the same purpose (Willett et al., 2023). As research continues, it will be fascinating to observe the effects of this revolutionary technology on patients’ lives.

 

References

Electrocorticography—An overview. (n.d.). ScienceDirect Topics. Retrieved November 4, 2023, from https://www.sciencedirect.com/topics/neuroscience/electrocorticography

Metzger, S. L., Littlejohn, K. T., Silva, A. B., Moses, D. A., Seaton, M. P., Wang, R., Dougherty, M. E., Liu, J. R., Wu, P., Berger, M. A., Zhuravleva, I., Tu-Chan, A., Ganguly, K., Anumanchipalli, G. K., & Chang, E. F. (2023). A high-performance neuroprosthesis for speech decoding and avatar control. Nature, 620(7976), 1037–1046. https://doi.org/10.1038/s41586-023-06443-4

Ramsey, N. F., & Crone, N. E. (2023). Brain implants that enable speech pass performance milestones. Nature, 620(7976), 954–955. https://doi.org/10.1038/d41586-023-02546-0

Willett, F. R., Kunz, E. M., Fan, C., Avansino, D. T., Wilson, G. H., Choi, E. Y., Kamdar, F., Glasser, M. F., Hochberg, L. R., Druckmann, S., Shenoy, K. V., & Henderson, J. M. (2023). A high-performance speech neuroprosthesis. Nature, 620(7976), 1031–1036. https://doi.org/10.1038/s41586-023-06377-x


ChatGPT Beats Humans in Emotional Awareness Test: What’s Next?

December 3, 2023 by Nicholas Enbar-Salo '27

In recent times, it can seem like everything revolves around artificial intelligence (AI). From AI-powered robots performing surgery to facial recognition on smartphones, AI has become an integral part of modern life. While AI has affected nearly every industry, most have adopted it slowly, trying to minimize the risks involved. One field with particularly great potential is mental health care. Indeed, some studies have already begun to explore how AI can assist mental health work. For instance, one study used AI to predict the probability of suicide from users’ health insurance records (Choi et al., 2018), while another showed that AI could identify people with depression based on their social media posts (Aldarwish & Ahmad, 2017).

Perhaps the most widespread AI technology is ChatGPT, a public natural-language-processing chatbot that can help with a plethora of tasks, from writing an essay to playing chess. Much has been discussed about the potential of such chatbots in mental health care and therapy, but few studies have been published on the matter. However, a study by Zohar Elyoseph and his team has started the conversation about the potential of chatbots, specifically ChatGPT, in therapy. In the study, the researchers gave ChatGPT the Levels of Emotional Awareness Scale (LEAS) to measure its capacity for emotional awareness (EA), a core component of empathy and an essential skill for therapists (Elyoseph et al., 2023). The LEAS presents 20 scenarios in which someone experiences an event that would elicit an emotional response, and the test-taker must describe what emotions the person is likely feeling. ChatGPT was examined twice, one month apart, to test two different versions of the model and to see whether updates released during that month would improve its performance on the LEAS. On both examinations, two licensed psychologists scored ChatGPT’s responses to ensure the reliability of its score. On the first examination, in January 2023, ChatGPT achieved a score of 85 out of 100, compared with averages of 56.21 for French men and 58.94 for French women. On the second examination, in February 2023, ChatGPT achieved a score of 98: nearly perfect, a significant improvement on the already high 85 a month earlier, and higher than most licensed psychologists score (Elyoseph et al., 2023).

This study suggests that ChatGPT is not only more capable than humans at EA but also rapidly improving at it. This has significant implications for in-person therapy. While there is more to being a good therapist than emotional awareness, it is a major part of the job. Based on this study, chatbots like ChatGPT could come to rival, or possibly even replace, therapists if developers can cultivate the other interpersonal traits of good therapists.

However, ChatGPT and AI more broadly need more work before they can be implemented in the mental health field in this manner. To start, while AI is capable of the technical aspects of therapy, such as giving sound advice and validating a client’s emotions, ChatGPT and other chatbots sometimes give “illusory responses”: fabricated responses that they present as legitimate (Hagendorff et al., 2023). For example, ChatGPT will sometimes say “5 + 5 = 11” if you ask what 5 + 5 is, even though the answer is clearly wrong. While this is an obvious example of an illusory response, harm can be done if the user cannot distinguish between real and illusory responses on more complex subjects. Such responses can be extremely harmful in settings like therapy, where clients rely on a therapist for guidance; if that guidance were fabricated, it could harm rather than help the client. Furthermore, there are concerns regarding the dehumanization of therapy, the loss of jobs for therapists, and breaches of client privacy if AI were to replace therapists (Abrams, 2023).

Fig 1. Sample conversation with Woebot, which provides basic therapy to users. Adapted from Darcy et al., 2021. 

However, rudimentary AI programs that aim to bolster mental health infrastructure are already emerging. Replika, for instance, is an avatar-based chatbot that offers therapeutic conversation and saves previous conversations so it can recall them in the future. Woebot provides a similar service (Figure 1), delivering cognitive behavioral therapy (CBT) for anxiety and depression (Pham et al., 2022). While some are wary of applications like these, they deserve to be embraced: as they become more refined, they could provide a low-commitment, accessible source of mental health care for those unable to reach out to a therapist, whether because they are nervous about seeing a real therapist, live in rural areas without convenient access to one, or lack the financial means for mental health support. AI can also serve as a tool for therapists in the office. For example, Eleos, a natural language processing application, can take notes and highlight themes and risks for therapists to review after a session (Abrams, 2023).

There are certainly drawbacks to AI in therapy, such as the dehumanization of therapy, that may not have a solution and could therefore limit AI’s influence in the field. Some people may never trust AI to give them empathetic advice. But people said the same when robotic surgeries entered clinical settings, and most have since embraced them thanks to their superb success rates. Regardless of whether these problems are resolved, AI in the mental health industry has massive potential, and we must ensure that the risks and drawbacks of the technology are addressed so that we can realize that potential and bring better options to those who need them.

 

Citations

Abrams, Z. (2023, July 1). AI is changing every aspect of psychology. Here’s what to watch for. Monitor on Psychology, 54(5). https://www.apa.org/monitor/2023/07/psychology-embracing-ai

Aldarwish, M. M., & Ahmad, H. F. (2017). Predicting depression levels using social media posts. Proceedings of the 2017 IEEE 13th International Symposium on Autonomous Decentralized Systems (ISADS), 277–280.

Choi, S. B., Lee, W., Yoon, J. H., Won, J. U., & Kim, D. W. (2018). Ten-year prediction of suicide death using Cox regression and machine learning in a nationwide retrospective cohort study in South Korea. Journal of Affective Disorders, 231, 8–14.

Darcy, A., Daniels, J., Salinger, D., Wicks, P., & Robinson, A. (2021). Evidence of human-level bonds established with a digital conversational agent: Cross-sectional, retrospective observational study. JMIR Formative Research, 5(5), e27868. https://doi.org/10.2196/27868

Elyoseph, Z., Hadar-Shoval, D., Asraf, K., & Lvovsky, M. (2023). ChatGPT outperforms humans in emotional awareness evaluations. Frontiers in Psychology, 14, 1199058.

Hagendorff, T., Fabi, S., & Kosinski, M. (2023). Human-like intuitive behavior and reasoning biases emerged in large language models but disappeared in ChatGPT. Nature Computational Science, 3, 833–838.

Pham, K. T., Nabizadeh, A., & Selek, S. (2022). Artificial intelligence and chatbots in psychiatry. Psychiatric Quarterly, 93, 249–253.




Breakthrough in Gene Sequencing and Identification of Leukemia-causing Genes in Iran

November 6, 2022 by Ruby Pollack '25

A research group based in Iran conducted a study to validate the use of gene-sequencing technology for discovering the genes that cause chronic myeloid leukemia (CML) in three existing cancer patients. CML is a monoclonal disease, meaning it derives from a single blood-forming (hematopoietic) cell. CML accounts for around 15% of leukemia cases in adults, and leukemia is the most common blood cancer in adults older than 55. The blast phase is the stage of chronic leukemia in which tiredness, fever, and an enlarged spleen appear. A blast crisis occurs when blasts, or immature white blood cells, make up 20% or more of the blood or bone marrow. These blasts multiply uncontrollably, crowding out and halting the production of the red blood cells and platelets necessary for survival, and the crowding out of healthy blood cells weakens the immune system. This article focuses on using integrated genomic sequencing to assess common gene variants associated with CML, in order to understand the fundamental mechanisms behind the blast crisis.

The researchers used whole exome sequencing (WES) as part of an integrated approach that also included chromosome and RNA sequencing. Using blood samples from the blast-phase patients, they applied WES to identify genes that had been modified, deleted, or incorrectly copied. Genes exert extraordinary power over how our bodies are regulated, and cancer can arise when genes do not work as intended. There are five classes of cancer-causing genes. Mutations can change how genes activate other cells: a mutation in a signaling-pathway component that relays information between cells can lead cells to multiply erratically. Another class is transcription factors, proteins that bind to a region of DNA to switch genes on or off. Modifications to these repressor and activator genes can lead to an unregulated abundance of cells, and such unsuppressed replication results in tumors. Some mutations during DNA replication are common, and repair proteins normally correct them; when a mis-replicated gene escapes repair, it can give rise to mutations in these other classes of cancer-causing genes.

CML cells derive from the bone marrow and are progenitor cells, meaning they can become white blood cells, red blood cells, or platelets, depending on the body’s needs. The researchers’ goal in applying WES to these existing blast-crisis patients was to discover essential variants and to find similarities and differences in the patients’ genetic makeup. They then divided their findings into PIFs (potentially important findings) and PAFs (potentially actionable findings). Using WES, they detected 16 PIFs affecting all five known classes of cancer-causing genes.
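The study describes this triage of variants only at a high level, so the sketch below is merely a guess at the general shape of such a pipeline, not the team’s in-house algorithm; the thresholds, field names, and example variants are all invented for illustration.

```python
# Toy variant-filtering sketch; thresholds and fields are assumptions,
# not the study's actual in-house criteria.
variants = [
    {"gene": "GENE_A", "population_freq": 0.30,   "impact": "synonymous", "in_cancer_db": False},
    {"gene": "GENE_B", "population_freq": 0.001,  "impact": "frameshift", "in_cancer_db": True},
    {"gene": "GENE_C", "population_freq": 0.0005, "impact": "missense",   "in_cancer_db": False},
]

def is_potentially_important(v):
    # Keep rare variants with a damaging predicted effect.
    return v["population_freq"] < 0.01 and v["impact"] in {"missense", "frameshift", "nonsense"}

def is_potentially_actionable(v):
    # Treat a PIF as "actionable" if it is already linked to cancer in a curated database.
    return is_potentially_important(v) and v["in_cancer_db"]

pifs = [v["gene"] for v in variants if is_potentially_important(v)]
pafs = [v["gene"] for v in variants if is_potentially_actionable(v)]
print(pifs)  # ['GENE_B', 'GENE_C']
print(pafs)  # ['GENE_B']
```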

The researchers conducted integrated sequencing on the three patients, using an in-house filtering algorithm to investigate how leukemia develops, and found that combining integrated genomic sequencing with RNA sequencing is an accurate way to identify and confirm leukemia-associated variants. All patients were based in Iran: patient one was a 66-year-old woman with a blast level of 25%, patient two a 55-year-old woman with a blast level of 35%, and patient three a 45-year-old man with a blast level of 40%. The blast percentage indicates what proportion of a person’s blood consists of immature white blood cells; the higher the blast level, the more immunocompromised a patient becomes, as CML’s white blood cells block the flow of red blood cells and blunt the immune system’s response. Using WES and RNA sequencing, the researchers discovered multiple similarities and differences among the patients. Patients one and two both had abnormal karyotypes (a karyotype is a person’s complete set of chromosomes) in the form of a Philadelphia chromosome, which arises when pieces of chromosomes 9 and 22 break off and fuse together, leaving chromosome 22 abnormally small and resulting in CML. Patients two and three both had multiple chromosome deletions, duplications, and modifications, all of which prior research has linked to leukemia.

 

The study affirms the importance of being able to model and analyze CML’s leukemogenesis, with a view toward more timely treatment and effective management of blood-borne illness. Using gene-sequencing technology, CML’s transition to the blast phase can be detected more accurately and effectively than in previous studies. The researchers identified variants in all five classes; one important finding was a shared deletion of a transcription factor gene on chromosome 17p, a defect observed in roughly 45% of blast-phase patients, creating a marker for identification and a possible treatment target. The study points to the importance of developing more streamlined processes that make detection more accurate and timely.

 

Kazemi-Sefat, G. E., et al. (2022). Integrated genomic sequencing in myeloid blast crisis chronic myeloid leukemia (MBC-CML), identified potentially important findings in the context of leukemogenesis model. Scientific Reports, 12(1).


Mimicking the Human Brain: The Role of Heterogeneity in Artificial Intelligence

April 10, 2022 by Jenna Albanese '24

Picture this: you’re in the passenger seat of a car, weaving through an urban metropolis, say, New York City. As expected, you see plenty of people: the rushed and the lingering, tourists and locals, old and young, et cetera. But let’s zoom in: take just about any one of those individuals, and you will find 86 billion nerve cells, or neurons, in their brain carrying them through daily life. For comparison, that means the number of neurons in the human brain is about ten thousand times the number of residents in New York City.

But let’s zoom in even further: each one of those 86 billion neurons is ever-so-slightly different from the others. For example, while some neurons work extremely quickly, making decisions that guide basic processes in the brain, others work more slowly, basing their decisions on surrounding neurons’ activity. This difference in decision-making time among our neurons is called heterogeneity. Until recently, researchers were unsure of heterogeneity’s importance in our lives, though its existence was certain. This is just one example of the almost incomprehensible detail of the brain that makes human thinking so complex, and even difficult for modern researchers to fully understand.

Now, let’s zoom in again, but this time not on the person’s brain. Instead, let’s zoom into the cell phone this individual might have in their pocket or their hand. While a cell phone does not function exactly the same as the human brain, aspects of the device are certainly modeled after human thinking. Virtual assistants, like Siri or Cortana, for instance, compose responses to general inquiries that resemble human interaction.

This type of highly advanced digital experience is the result of artificial intelligence. Since the 1940s, elements of artificial intelligence have been modeled after features of the human brain, fashioned as a neural network composed of nodes, some serving as inputs and others as outputs. The nodes are comparable to brain cells, and they communicate with each other through a series of algorithms to produce outputs. However, in these technological brain models, every node is typically modeled in the same way in terms of the time it takes to respond to a given situation (ScienceDaily, 2021). This is quite unlike the human brain, where heterogeneity ensures that each neuron responds to stimuli at its own speed. But does this even matter? Do intricate qualities of the brain like heterogeneity really make a difference in our thinking, or in digital functioning if incorporated into artificial intelligence?

The short answer is yes, at least in the case of heterogeneity. Researchers recently investigated how heterogeneity influences an artificial neural network’s performance on visual and auditory classification tasks. In the study, each unit within a network was assigned its own “time constant,” which is how long the cell takes to respond to a situation given the responses of nearby cells. In essence, the researchers varied the heterogeneity of the artificial neural networks. The results were astonishing: once heterogeneity was introduced, the artificial neural networks completed the tasks more efficiently and accurately. The strongest result revealed a 15-20% improvement on auditory tasks as soon as heterogeneity was introduced to the artificial neural network (Perez-Nieves et al. 2021).
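As a rough illustration of what varying time constants means, the sketch below simulates simple leaky-integrator units: one group shares a single time constant, while the other gets per-unit constants. This is a toy model, not the spiking networks used by Perez-Nieves et al., and all the numbers are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
n_units, n_steps, dt = 5, 50, 1.0

# Homogeneous network: every unit shares one time constant.
tau_homogeneous = np.full(n_units, 20.0)
# Heterogeneous network: each unit gets its own time constant,
# loosely mirroring the per-neuron "time constants" varied in the study.
tau_heterogeneous = rng.uniform(5.0, 40.0, size=n_units)

def run_leaky_units(tau, inputs):
    """Simulate leaky integrator units: dv/dt = (-v + input) / tau."""
    v = np.zeros(n_units)
    trace = []
    for x in inputs:
        v += dt * (-v + x) / tau
        trace.append(v.copy())
    return np.array(trace)

inputs = rng.normal(size=(n_steps, n_units))
homo = run_leaky_units(tau_homogeneous, inputs)
hetero = run_leaky_units(tau_heterogeneous, inputs)

# With one shared tau, all units filter input on the same timescale; with
# heterogeneous taus, the network spans fast and slow responses at once,
# which is the property the study links to better task performance.
print(homo[-1].round(3), hetero[-1].round(3))
```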

This result indicates that heterogeneity helps us think systematically, improve our task performance, and learn in changing conditions (Perez-Nieves et al. 2021). So perhaps it would be advantageous to incorporate heterogeneity into standard artificial intelligence models. With this change, technology’s way of “thinking” will come one step closer to functioning like a human brain, adopting a similar level of complexity and intricacy.

So, why does this matter? If parts of artificial intelligence are modeled closer and closer to how the human brain works, real-world benefits abound, and we’re talking on a level grander than virtual assistants. One prominent example is in head and neck cancer prognosis. Clinical predictors of head and neck cancer prognosis include factors like age, pathological findings, HPV status, and tobacco and alcohol consumption (Chinnery et al. 2020). With a multitude of factors at play, physicians spend excessive amounts of time analyzing head and neck cancer patients’ lifestyles in order to deduce an accurate prognosis. Alternatively, artificial intelligence could be used to model this complex web of factors for these cancer patients, and physicians’ time could be spent on other endeavors.

This type of clinical application is still far from implementation, but remains in sight for modern researchers. As the brain is further explored and understood, more and more of the elements that comprise advanced human thinking can be incorporated into technology. Now, put yourself in the shoes of our New York City passerby: how would you feel if the small cell phone in your pocket was just as intelligent and efficient as the 86 billion neurons in your head? How about if that cell phone solved problems like you do and thought like you think, in essence serving as a smaller version of your own brain? It is almost unfathomable! Yet, by harnessing heterogeneity, researchers have come one step closer toward realizing this goal.

 

References

Chinnery, T., Arifin, A., Tay, K. Y., Leung, A., Nichols, A. C., Palma, D. A., Mattonen, S. A., & Lang, P. (2020). Utilizing artificial intelligence for head and neck cancer outcomes prediction from imaging. Canadian Association of Radiologists Journal, 72(1), 73–85. https://doi.org/10.1177/0846537120942134.

Perez-Nieves, N., Leung, V. C. H., Dragotti, P. L., & Goodman, D. F. M. (2021). Neural heterogeneity promotes robust learning. Nature Communications, 12(1). https://doi.org/10.1038/s41467-021-26022-3. 

ScienceDaily. (2021, October 6). Brain cell differences could be key to learning in humans and AI. ScienceDaily. Retrieved February 27, 2022, from https://www.sciencedaily.com/releases/2021/10/211006112626.htm


What is more urgent for AI research: long-term or short-term concerns? Experts disagree

April 26, 2021 by Micaela Simeone '22

In a 2015 TED Talk, philosopher and Founding Director of the Oxford Future of Humanity Institute Nick Bostrom discusses the prospect of machine superintelligence: AI that would supersede human-level general intelligence. He begins by noting that with the advent of machine-learning models, we have shifted into a new paradigm of algorithms that learn—often from raw data, similar to the human infant (Bostrom, “What Happens” 3:26 – 3:49).

We are, of course, still in the era of narrow AI: the human brain possesses many capabilities beyond those of the most powerful AI. However, Bostrom notes that artificial general intelligence (AGI)—AI that can perform any intellectual task a human can—has been projected by many experts to arrive around mid- to late-century (Müller and Bostrom, 1) and that the period in between the development of AGI and whatever comes next may not be long at all.

Of course, Bostrom notes, the ultimate limits to information processing in the machine substrate lie far outside the limits of biological tissue, owing to factors such as differences in size and speed (“What Happens” 5:05 – 5:43). So, Bostrom says, the potential for superintelligence lies dormant for now, but in this century scientists may unlock a new path forward in AI. We might then see an intelligence explosion constituting a new shift in the knowledge substrate, resulting in superintelligence (6:00 – 6:09).

What we should worry about, Bostrom explains, are the consequences (which reach as far as existential risk) of creating an immensely powerful intelligence guided wholly by processes of optimization. Bostrom imagines that a superintelligent AI tasked with, for example, solving a highly complex mathematical problem, might view human morals as threats to a strictly mathematical approach. In this scenario, our future would be shaped by the preferences of the AI, for better or for worse (Bostrom, “What Happens” 10:02 – 10:28).

For Bostrom, then, the answer is to figure out how to create AI that uses its intelligence to learn what we value and is motivated to perform actions that it would predict we will approve of. We would thus leverage this intelligence as much as possible to solve the control problem: “the initial conditions for the intelligence explosion might need to be set up in just the right way, if we are to have a controlled detonation,” Bostrom says (“What Happens” 14:33 – 14:41). 

Thinking too far ahead?

Experts disagree about what solutions are urgently needed in AI

Many academics think that concerns about superintelligence are too indefinite and too far in the future to merit much discussion. These thinkers usually also argue that our energies are better spent focused on short-term AI concerns, given that AI is already reshaping our lives in profound and not always positive ways. In a 2015 article, Oxford Internet Institute professor Luciano Floridi called discussions about a possible intelligence explosion “irresponsibly distracting,” arguing that we need to take care of the “serious and pressing problems” of present-day digital technologies (“Singularitarians” 9-10).

Beneficence versus non-maleficence

In conversations about how we can design AI systems that will better serve the interests of humanity and promote the common good, a distinction is often made between the negative principle (“do no harm”) and the positive principle (“do good”). Put another way, approaches toward developing principled AI can be either about ensuring that those systems are beneficent or ensuring they are non-maleficent. In the news, as one article points out, the two mindsets can mean the difference between headlines like “Using AI to eliminate bias from hiring” and “AI-assisted hiring is biased. Here’s how to make it more fair” (Bodnari, 2020).

Thinkers, like Bostrom, concerned with long-term AI worries such as superintelligence tend to structure their arguments more around the negative principle of non-maleficence. Though Bostrom does present a “common good principle” (312) in his 2014 book, Superintelligence: Paths, Dangers, Strategies, suggestions like this one are offered alongside the broader consideration that we need to be very careful with AI development in order to avoid the wide-ranging harm possible with general machine intelligence.

In an article from last year, Floridi once again accuses those concerned with superintelligence of alarmism and irresponsibility, arguing that their worries mislead public opinion to be fearful of AI progress rather than knowledgeable about the potential and much-needed solutions AI could bring about. Echoing the beneficence principle, Floridi writes, “we need all the good technology that we can design, develop, and deploy to cope with these challenges, and all human intelligence we can exercise to put this technology in the service of a better future” (“New Winter” 2).

In his afterword, Bostrom echoes the non-maleficence principle when he writes, “I just happen to think that, at this point in history, whereas we might get by with a vague sense that there are (astronomically) great things to hope for if the machine intelligence transition goes well, it seems more urgent that we develop a precise detailed understanding of what specific things could go wrong—so that we can make sure to avoid them” (Superintelligence 324).

Considerations regarding the two principles within the field of bioethics (where they originated) can be transferred to conversations about AI. In taking the beneficence approach (do good = help the patient), one worry in the medical community is that doctors risk negatively interfering in their patients’ lives or overstepping boundaries such as privacy. Similarly, with the superintelligence debate, perhaps the short-term, “do good now” camp risks sidelining, for example, preventative AI safety mechanisms in the pursuit of other more pressing beneficent outcomes such as problem-solving or human rights compliance.

There are many other complications involved. If we take the beneficence approach, the loaded questions of “whose common good?” and of who is making the decisions are paramount. On the other hand, taking an approach that centers doing good arguably also centers humanity and compassion, whereas non-maleficence may lead to more mathematical or impersonal calculations of how best to avoid specific risks or outcomes. 

Bridging the gap

The different perspectives around hopes for AI and possible connections between them are outlined in a 2019 paper by Stephen Cave and Seán S. ÓhÉigeartaigh from the Leverhulme Centre for the Future of Intelligence at the University of Cambridge called “Bridging near- and long-term concerns about AI.”

The authors explain that researchers focused on the near-term prioritize immediate or imminent challenges such as privacy, accountability, algorithmic bias, and the safety of systems that are close to deployment. On the other hand, those working on the long-term examine concerns that are less certain, such as wide-scale job loss, superintelligence, and “fundamental questions about humanity’s place in a world with intelligent machines” (Cave and ÓhÉigeartaigh, 5).

Ultimately, Cave and ÓhÉigeartaigh argue that the disconnect between the two groups is a mistake, and that thinkers focused on one set of issues have good reasons to take seriously work done on the other.

The authors point to many possible benefits available to long-term research with insight from the present. For example, they write that immediate AI concerns will grow in importance as increasingly powerful systems are deployed. Technical safety research done now, they explain, could provide fundamental frameworks for future systems (5).

In considering what the long-term conversation has to offer us today, the authors write that “perhaps the most important point is that the medium to long term has a way of becoming the present. And it can do so unpredictably” (6). They emphasize that the impacts of both current and future AI systems might depend more on tipping points than on steady progressions, writing, “what the mainstream perceives to be distant-future speculation could therefore become reality sooner than expected” (6).

Regardless of the controversies over whether we should take the prospect of superintelligence seriously, support for investments in AI safety research unites many experts across the board. At the least, simply joining the conversation means asking one question which we might all agree is important: What does it mean to be human in a world increasingly shaped by the internet, digital technologies, algorithms, and machine-learning?

Works Cited

Bodnari, Andreea. “AI Ethics: First Do No Harm.” Towards Data Science, Sep 7, 2020, https://towardsdatascience.com/ai-ethics-first-do-no-harm-23fbff93017a 

Bostrom, Nick. Superintelligence: Paths, Dangers, Strategies, 2014, Oxford University Press. 

Bostrom, Nick. “What happens when our computers get smarter than we are?” TED, March 2015, video, https://www.ted.com/talks/nick_bostrom_what_happens_when_our_computers_get_smarter_than_we_are 

Cave, Stephen and ÓhÉigeartaigh, Seán S. “Bridging near- and long-term concerns about AI,” Nature Machine Intelligence, vol. 1, 2019, pp. 5-6. https://www.nature.com/articles/s42256-018-0003-2 

Floridi, Luciano. “AI and Its New Winter: from Myths to Realities,” Philosophy & Technology, vol. 33, 2020, pp. 1-3, SpringerLink. https://link.springer.com/article/10.1007/s13347-020-00396-6 

Floridi, Luciano. “Singularitarians, AItheists, and Why the Problem with Artificial Intelligence Is H.A.L. (Humanity At Large), Not HAL.” APA Newsletter on Philosophy and Computers, vol. 14, no. 2, Spring 2015, pp. 8-10. https://www.academia.edu/15037984/ 

Müller, Vincent C., and Bostrom, Nick. “Future Progress in Artificial Intelligence: A Survey of Expert Opinion.” Fundamental Issues of Artificial Intelligence, Synthese Library, Springer, 2014, www.nickbostrom.com


‘The Scariest Deepfake of All’: AI-Generated Text & GPT-3

March 1, 2021 by Micaela Simeone '22

Recent advances in machine-learning systems have led to both exciting and unnerving technologies: personal assistance bots, email spam filtering, and search engine algorithms are just a few omnipresent examples of technology made possible through these systems. Deepfakes (deep-learning fakes), or algorithm-generated synthetic media, constitute one example of a still-emerging and tremendously consequential development in machine learning. WIRED recently called AI-generated text “the scariest deepfake of all,” turning heads toward one of the most powerful text generators out there: artificial intelligence research lab OpenAI’s Generative Pre-trained Transformer 3 (GPT-3) language model.

GPT-3 is an autoregressive language model that uses its deep-learning experience to produce human-like text. Put simply, GPT-3 is directed to study the statistical patterns in a dataset of about a trillion words collected from the web and digitized books. GPT-3 then uses its digest of that massive corpus to respond to text prompts by generating new text with similar statistical patterns, endowing it with the ability to compose news articles, satire, and even poetry. 
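The autoregressive idea can be shown with a toy model. In the sketch below, a bigram table stands in for GPT-3’s learned statistics, and text is generated one token at a time by sampling from the distribution over what tends to come next. GPT-3 conditions on far longer contexts with a vastly larger neural network, but the generation loop has the same shape; the corpus and prompt here are invented for illustration.

```python
import random
from collections import Counter, defaultdict

# Tiny toy corpus standing in for GPT-3's ~trillion-word training set.
corpus = "the cat sat on the mat and the cat saw the dog".split()

# "Training": count which word follows which (a bigram model -- GPT-3 learns
# far richer statistics, but the autoregressive loop below is the same idea).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def generate(prompt_word, length=8, seed=0):
    """Extend a prompt by repeatedly sampling the next token."""
    random.seed(seed)
    out = [prompt_word]
    for _ in range(length):
        counts = following[out[-1]]
        if not counts:          # no observed continuation: stop
            break
        words, weights = zip(*counts.items())
        out.append(random.choices(words, weights=weights)[0])
    return " ".join(out)

print(generate("the"))
```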

GPT-3’s creators designed the AI to learn language patterns and immediately saw GPT-3 scoring exceptionally well on reading-comprehension tests. But when OpenAI researchers configured the system to generate strikingly human-like text, they began to imagine how these generative capabilities could be used for harmful purposes. Previously, OpenAI had often released full code with its publications on new models. This time, GPT-3’s creators decided to hide its underlying code from the public, not wanting to disseminate the full model or the millions of web pages used to train the system. In OpenAI’s research paper on GPT-3, the authors note that “any socially harmful activity that relies on generating text could be augmented by powerful language models,” and “the misuse potential of language models increases as the quality of text synthesis improves.”

Just like humans are prone to internalizing the belief systems “fed” to us, machine-learning systems mimic what’s in their training data. In GPT-3’s case, biases present in the vast training corpus of Internet text led the AI to generate stereotyped and prejudiced content. Preliminary testing at OpenAI has shown that GPT-3-generated content reflects gendered stereotypes and reproduces racial and religious biases. Because of already fragmented trust and pervasive polarization online, Internet users find it increasingly difficult to trust online content. GPT-3-generated text online would require us to be even more critical consumers of online content. The ability for GPT-3 to mirror societal biases and prejudices in its generated text means that GPT-3 online might only give more voice to our darkest emotional, civic, and social tendencies.

Because GPT-3’s underlying code remains in the hands of OpenAI and its API (the interface where users can partially work with and test out GPT-3) is not freely accessible to the public, many concerns over its implications steer our focus towards a possible future where its synthetic text becomes ubiquitous online. Due to GPT-3’s frighteningly successful “conception” of natural language as well as creative capabilities and bias-susceptible processes, many are worried that a GPT-3-populated Internet could do a lot of harm to our information ecosystem. However, GPT-3 exhibits powerful affordances as well as limitations, and experts are asking us not to project too many fears about human-level AI onto GPT-3 just yet.

GPT-3: Online Journalist

GPT-3-generated news article that research participants had the greatest difficulty distinguishing from a human-written article

Fundamentally, concerns about GPT-3-generated text online come from an awareness of just how different a threat synthetic text poses compared with other forms of synthetic media. In a recent article, WIRED contributor Renee DiResta writes that, throughout the development of Photoshop and other image-editing CGI tools, we learned to develop a healthy skepticism, though without fully disbelieving such photos, because “we understand that each picture is rooted in reality.” She points out that generated media, such as deepfaked video or GPT-3 output, is different because there is no unaltered original, and we will have to adjust to a new level of unreality. In addition, synthetic text “will be easy to generate in high volume, and with fewer tells to enable detection.” Right now, it is possible to detect repetitive or recycled comments that use the same snippets of text to flood a comment section or persuade audiences. However, if such comments had been generated independently by an AI, DiResta notes, these manipulation campaigns would have been much harder to detect:

“Undetectable textfakes—masked as regular chatter on Twitter, Facebook, Reddit, and the like—have the potential to be far more subtle, far more prevalent, and far more sinister … The ability to manufacture a majority opinion, or create a fake commenter arms race—with minimal potential for detection—would enable sophisticated, extensive influence campaigns.” – Renee DiResta, WIRED
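To see why verbatim recycling is detectable today while independently generated text would not be, here is a small, illustrative duplicate-detector based on overlap of word 3-grams (Jaccard similarity). The example comments are invented, and real moderation systems use far more sophisticated signals.

```python
# Flag comments that recycle the same snippets of text by comparing
# the overlap of their word 3-grams (Jaccard similarity).
def shingles(text, n=3):
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def similarity(a, b):
    sa, sb = shingles(a), shingles(b)
    if not sa or not sb:
        return 0.0
    return len(sa & sb) / len(sa | sb)

comments = [
    "This policy is a disaster and everyone knows it is a disaster",
    "Honestly this policy is a disaster and everyone knows it",
    "I had a great experience with the new policy rollout",
]

# Copy-pasted campaigns score high; independently generated texts would not.
for i in range(len(comments)):
    for j in range(i + 1, len(comments)):
        print(i, j, round(similarity(comments[i], comments[j]), 2))
```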

In their paper “Language Models are Few-Shot Learners,” GPT-3’s developers discuss the potential for misuse and threat actors—those seeking to use GPT-3 for malicious or harmful purposes. The paper states that threat actors can be organized by skill and resource levels, “ranging from low or moderately skilled and resourced actors who may be able to build a malicious product to … highly skilled and well resourced (e.g. state-sponsored) groups with long-term agendas.” Interestingly, OpenAI researchers write that threat actor agendas are “influenced by economic factors like scalability and ease of deployment” and that ease of use is another significant incentive for malicious use of AI. It seems that the very principles that guide the development of many emerging AI models like GPT-3—scalability, accessibility, and stable infrastructure—could also be what position these models as perfect options for threat actors seeking to undermine personal and collective agency online.

Staying with the projected scenario of GPT-3 text becoming widespread online, it is useful to consider the already algorithmic nature of our interactions online. In her article, DiResta writes about the Internet that “algorithmically generated content receives algorithmically generated responses, which feeds into algorithmically mediated curation systems that surface information based on engagement.” Introducing an AI “voice” into this environment could make our online interactions even less human. One example of a possible algorithmic accomplice for GPT-3 is Google’s autocomplete algorithms, which internalize queries and often reflect “-ism” statements and biases while generating suggestions based on common searches. The presence of AI-generated text could populate Google’s algorithms with even more problematic content and further narrow our ability to control how we acquire neutral, unbiased knowledge.

An Emotional Problem

Talk of GPT-3 passing The Turing Test reflects many concerns about creating increasingly powerful AI. GPT-3 seems to hint at the possibility of a future where AI is able to replicate those attributes we might hope are exclusively human—traits like creativity, ingenuity, and, of course, understanding language. As Microsoft AI Blog contributor Jennifer Langston writes in a recent post, “designing AI models that one day understand the world more like people starts with language, a critical component to understanding human intent.” 

Of course, as a machine-learning model, GPT-3 relies on a neural network (inspired by neural pathways in the human brain) that can process language. Importantly, GPT-3 represents a massive acceleration in scale and computing power (rather than novel ML techniques), which gives it the ability to exhibit something eerily close to human intelligence. A recent Vox article on the subject asks, “is human-level intelligence something that will require a fundamentally new approach, or is it something that emerges of its own accord as we pump more and more computing power into simple machine learning models?” For some, the idea that the only thing distinguishing human intelligence from our algorithms is our relative “computing power” is more than a little uncomfortable.

As mentioned earlier, GPT-3 has been able to exhibit creative and artistic qualities, generating a trove of literary content including poetry and satire. The attributes we’ve long understood to be distinctly human are now proving to be replicable by AI, raising new anxieties about humanity, identity, and the future.

GPT-3’s recreation of Allen Ginsberg’s “Howl”

GPT-3’s Limitations

While GPT-3 can generate impressively human-like text, most researchers maintain that this text is often “unmoored from reality,” and that even with GPT-3 we are still far from reaching artificial general intelligence. In a recent MIT Technology Review article, author Will Douglas Heaven points out that GPT-3 often returns contradictions or nonsense because its process is not guided by any true understanding of reality. Ultimately, researchers believe that GPT-3’s human-like output and versatility are the results of excellent engineering, not genuine intelligence. GPT-3 uses many of its parameters to memorize Internet text that doesn’t generalize easily, and essentially parrots back “some well-known facts, some half-truths, and some straight lies, strung together in what first looks like a smooth narrative,” according to Douglas Heaven. As it stands today, GPT-3 is just an early glimpse of AI’s world-altering potential, and it remains a narrowly intelligent tool made by humans that reflects our conceptions of the world.

A final point of optimism is that the field around ethical AI is ever-expanding, and developers at OpenAI are looking into the possibility of automatic discriminators that may have greater success than human evaluators at detecting AI model-generated text. In their research paper, developers wrote that “automatic detection of these models may be a promising area of future research.” Improving our ability to detect AI-generated text might be one way to regain agency in a possible future with bias-reproducing AI “journalists” or undetectable deepfaked text spreading misinformation online.
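The paper does not spell out how such discriminators would work, so the following is only a minimal sketch of the general approach, assuming a labeled corpus of human-written and model-generated text: a bag-of-words classifier built with scikit-learn. The four inline “documents” are invented placeholders, and research-grade detectors rely on much richer signals than word counts.

```python
# A minimal sketch of an automatic discriminator, assuming you already have
# labeled examples of human-written and model-generated text. The tiny
# in-line "dataset" here is purely illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "The council met on Tuesday to debate the zoning proposal.",        # human
    "Local residents voiced concerns about parking and noise.",         # human
    "The zoning proposal, which is a proposal about zoning, passed.",   # machine-like
    "Parking is a concern. Noise is a concern. Concerns were voiced.",  # machine-like
]
labels = ["human", "human", "machine", "machine"]

# TF-IDF features plus logistic regression: a weak baseline, but it shows
# the shape of the approach.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(texts, labels)
print(clf.predict(["Concerns about the zoning proposal were concerns."]))
```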

Ultimately, GPT-3 suggests that language is more predictable than many people assume, and it challenges common assumptions about what makes humans unique. Moreover, exactly what’s going on inside GPT-3 isn’t entirely clear, challenging us to keep thinking about the AI “black box” problem and about methods to figure out just how GPT-3 reiterates natural language after digesting millions of snippets of Internet text. Perhaps, though, GPT-3 gives us an opportunity to decide for ourselves whether even the most powerful of future text generators could undermine the distinctly human conception of the world and of poetry, language, and conversation. A tweet Douglas Heaven quotes in his article from user @mark_riedl provides one possible way to frame both our worries and hopes about tech like GPT-3: “Remember…the Turing Test is not for AI to pass, but for humans to fail.”

