
Bowdoin Science Journal


Breakthrough in Gene Sequencing and Identification of Leukemia-causing Genes in Iran

November 6, 2022 by Ruby Pollack '25

A research group based in Iran conducted a study to validate gene-sequencing technology for discovering the genes that cause Chronic Myeloid Leukemia (CML) in three pre-existing cancer patients. CML is a monoclonal disease, meaning it derives from a single blood-forming (hematopoietic) cell. CML accounts for around 15% of leukemias in adults, and leukemia is the most common blood cancer in adults older than 55. The blast phase is the stage of chronic leukemia in which tiredness, fever, and an enlarged spleen are present. A blast crisis occurs when 20% or more of the blood or bone marrow consists of blasts, or immature white blood cells. These blasts multiply uncontrollably, crowding out and halting the production of red blood cells and platelets, which are necessary for survival; this crowding of red blood cells by white cells often weakens the immune system. This article focuses on using integrated genomic sequencing to assess common gene variants associated with CML and to understand the fundamental mechanisms behind the blast crisis.

Researchers used Whole Exome Sequencing (WES) as part of an integrated approach that also included chromosome and RNA sequencing. Using blood samples from patients in the blast phase, they applied WES to identify genes that had been modified, deleted, or incorrectly copied. Genes exert extraordinary control over how our bodies are regulated, and cancer can arise when genes do not work as intended. There are five classes of cancer-causing genes. One class involves cell signaling: a mutation in a signal-pathway component that spreads information between cells can lead cells to multiply erratically. Another class is transcription factors, proteins that bind to a region of DNA to start or stop transcription; modifications to these repressor/activator genes can lead to an unregulated abundance of cells, since unsuppressed replication produces tumors through unregulated cell growth. Some mutations during replication are common, and certain proteins repair the affected genes; when a gene is mis-replicated and not repaired, it can contribute to the other classes of cancer-causing genes.

CML cells are derived from the bone marrow and are progenitor cells, meaning they can become white blood cells, red blood cells, or platelets, depending on the body’s needs. The researchers’ goal in using WES on these pre-existing blast-crisis patients was to discover essential variants and to find similarities and differences in the patients’ genetic makeup. Researchers then divided their findings into PIFs (potentially important findings) and PAFs (potentially actionable findings). WES detected 16 PIFs, which affected all five known classes of cancer-causing genes.

Researchers conducted integrated sequencing on three patients, using an in-house filtering algorithm, to discover how leukemia develops, and found that combining integrated genomic sequencing with RNA sequencing is an accurate way to find and confirm leukemia variants. All patients were based in Iran: patient one was a 66-year-old female with a blast level of 25%, patient two a 55-year-old female with a blast level of 35%, and patient three a 45-year-old male with a blast level of 40%. The blast percentage indicates what fraction of a person’s blood is made up of these immature white blood cells; the higher the blast level, the more immunocompromised the patient becomes, as CML’s white blood cells block the flow of red blood cells and impede the immune response. Using WES and RNA sequencing, researchers discovered multiple similarities and differences among the patients. Patients one and two both had abnormal karyotypes (a karyotype is a person’s complete set of chromosomes) in the form of a Philadelphia chromosome, which forms when pieces of chromosomes 9 and 22 break off and swap places. A direct consequence is an abnormally short chromosome 22, which can result in CML. Patients two and three both had multiple chromosome deletions, duplications, and modifications, all of which prior research links to leukemia.
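The blast-percentage criterion described above can be sketched as a short classifier. This is an illustrative Python sketch, not the researchers' code; the 20% threshold and the three patients' figures are taken from the article.

```python
# Classify CML patients by blast percentage.
# Blast crisis: >= 20% of blood or bone marrow cells are blasts (per the article).
BLAST_CRISIS_THRESHOLD = 20  # percent

# The three patients described in the study.
patients = [
    {"id": 1, "age": 66, "sex": "F", "blast_pct": 25},
    {"id": 2, "age": 55, "sex": "F", "blast_pct": 35},
    {"id": 3, "age": 45, "sex": "M", "blast_pct": 40},
]

def in_blast_crisis(blast_pct: float) -> bool:
    """Return True if the blast percentage meets the blast-crisis threshold."""
    return blast_pct >= BLAST_CRISIS_THRESHOLD

for p in patients:
    phase = "blast crisis" if in_blast_crisis(p["blast_pct"]) else "chronic phase"
    print(f"Patient {p['id']}: {p['blast_pct']}% blasts -> {phase}")
```

All three patients exceed the 20% cutoff, which is why each was studied as a blast-crisis case.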

 

The study affirms the importance of being able to model and analyze CML’s leukemogenesis in pursuit of more timely treatment and effective management of blood-borne illness. Using gene-sequencing technology, CML’s transition to the blast phase can be detected more accurately and efficiently than in previous studies. The researchers identified variants in all five classes; one important finding was a shared deletion affecting a transcription factor on chromosome 17p, a defect observed in roughly 45% of blast-phase patients, making it a potential marker for identification and treatment. This study points to the importance of identifying and developing more streamlined processes that make detection accurate and timely.

 

Kazemi-Sefat, G. E., et al. (2022). Integrated genomic sequencing in myeloid blast crisis chronic myeloid leukemia (MBC-CML), identified potentially important findings in the context of leukemogenesis model. Scientific Reports, 12(1).

Filed Under: Biology, Computer Science and Tech, Science

The Kleptomania Connection between Serotonin and Stealing

April 15, 2022 by Luv Kataria '24

Although many people steal in response to economic hardship, either perceived or actual, some individuals only steal to satisfy a powerful urge. These individuals may have an impulse control disorder known as kleptomania. People with kleptomania experience a sense of relief from stealing, so they steal to get rid of their anxiety (Talih, 2011). The prevalence of kleptomania in the U.S. is estimated to be 6 people per 1000, which is equivalent to more than 1.5 million kleptomaniacs in the U.S. population (Aboujaoude et al., 2004).
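That prevalence figure can be checked with simple arithmetic. The population number below is an assumed round figure for illustration; only the 6-per-1000 rate comes from the cited study.

```python
# Back-of-the-envelope estimate of U.S. kleptomania cases.
prevalence_per_1000 = 6        # from Aboujaoude et al. (2004)
us_population = 300_000_000    # assumed round figure for illustration

estimated_cases = us_population * prevalence_per_1000 / 1000
print(f"Estimated cases: {estimated_cases:,.0f}")
```

With a population near 300 million, 6 per 1000 works out to about 1.8 million, consistent with the article's "more than 1.5 million."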

What exactly causes this impulse to steal? Kleptomania has a range of biological, psychological, and sociological risk factors. One of the main biological factors has to do with neurotransmitters, such as serotonin (Sulthana, 2015). Serotonin plays an important role in our bodies, contributing to emotions and judgment, and low serotonin levels have been linked to impulsive and aggressive behaviors (Williams, 2002). The serotonin system is also thought to be involved in “increased cognitive impulsivity,” as has been observed in individuals with a higher number of kleptomania symptoms (Ascher & Levounis, 2014).

Throughout the nervous system, serotonin transporters (SERT) take up serotonin that is released from neurons (Rudnick, 2007). These transporters can also be found on blood platelets and take up serotonin from the blood plasma (Mercado & Kilic, 2010). We can study these particular transporters to better understand the levels of serotonin in one’s blood and how that relates to their level of impulsiveness.

A 2010 study looked into the relationship between the platelet serotonin transporter, impulsivity, and gender. They found that while women were, in general, more impulsive than men, there was only a positive correlation between the number of transporters and impulsivity in men. This means that higher amounts of platelet serotonin transporters and lower levels of serotonin are related to more impulsivity in men, but not in women. It was also found that higher amounts of SERT transporters were linked to more “aggressive” behaviors. The authors came to the conclusion that, even though women were found to display more impulsivity than men, serotonin plays a larger role in impulsivity with men than it does with women (Marazziti et al., 2010).

Understanding the relationship between serotonin, impulsivity, and kleptomania has helped pioneer specific treatments, including Selective Serotonin Reuptake Inhibitors (SSRIs). Since impulsivity is linked to low levels of serotonin, SSRIs address this by limiting the reuptake of serotonin: they block serotonin transporters, leading to a buildup of serotonin in the synapse (Sulthana, 2015). There is no cure for kleptomania, but SSRIs help to control the impulse to steal.
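The reuptake mechanism can be illustrated with a toy one-compartment model: serotonin enters the synapse at a constant release rate and is removed in proportion to its concentration, so the steady-state level is the release rate divided by the reuptake rate. All rates below are hypothetical, chosen only to show the direction of the SSRI effect.

```python
def steady_state_serotonin(release_rate: float, reuptake_rate: float) -> float:
    """Steady state of ds/dt = release_rate - reuptake_rate * s, i.e. s* = release / reuptake."""
    return release_rate / reuptake_rate

release = 1.0      # arbitrary units per second (hypothetical)
reuptake = 0.5     # per second, proportional removal by SERT (hypothetical)
ssri_block = 0.7   # fraction of transporters blocked by the SSRI (hypothetical)

baseline = steady_state_serotonin(release, reuptake)
with_ssri = steady_state_serotonin(release, reuptake * (1 - ssri_block))

print(f"Baseline synaptic serotonin: {baseline:.2f}")
print(f"With SSRI blocking 70% of SERT: {with_ssri:.2f}")
```

Blocking transporters lowers the effective reuptake rate, so the steady-state synaptic serotonin rises, which is the buildup described above.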

Overall, kleptomania is a secretive disorder, for which many people don’t seek help due to the legal system and the social stigma around theft. Thus, very little is known about what causes kleptomania, but trying to understand it through its link with neurotransmitters has uncovered potential causes and helped develop treatments. 

 

References

Ascher, M. S., & Levounis, P. (Eds.). (2014). The behavioral addictions. American Psychiatric Publishing.

Aboujaoude, E., Gamel, N., & Koran, L. M. (2004a). Overview of kleptomania and phenomenological description of 40 patients. Primary Care Companion to The Journal of Clinical Psychiatry, 6(6), 244–247. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC535651/ 

Marazziti, D., Baroni, S., Masala, I., Golia, F., Consoli, G., Massimetti, G., Picchetti, M., Dell’Osso, M. C., Giannaccini, G., Betti, L., Lucacchini, A., & Ciapparelli, A. (2010). Impulsivity, gender, and the platelet serotonin transporter in healthy subjects. Neuropsychiatric Disease and Treatment, 6, 9–15. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2951061/ 

Mercado, C. P., & Kilic, F. (2010). Molecular mechanisms of SERT in platelets: regulation of plasma serotonin levels. Molecular interventions, 10(4), 231–241. https://doi.org/10.1124/mi.10.4.6 

Rudnick, G. (2007). Sert, serotonin transporter. In S. J. Enna & D. B. Bylund (Eds.), XPharm: The Comprehensive Pharmacology Reference (pp. 1–6). Elsevier. https://doi.org/10.1016/B978-008055232-3.60442-8

Sulthana, N., Singh, M., & Vijaya, K. (2015). Kleptomania-the Compulsion to Steal. Am. J. Pharm. Tech. Res, 5(3). 

Talih, F. R. (2011b). Kleptomania and potential exacerbating factors. Innovations in Clinical Neuroscience, 8(10), 35–39. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3225132/ 

Williams, J. (2002). Pyromania, Kleptomania, and Other Impulse-Control Disorders. Enslow.

Filed Under: Biology, Psychology and Neuroscience, Science Tagged With: kleptomania, serotonin, SERT

The Power of Plant Cells: An Interview with Luis Vidali, PhD

December 5, 2021 by Luke Taylor '24

Walking around campus, we are surrounded by plants of various sizes — pines, grass, bushes, mosses. Despite the variety of size and characteristics, all these plants share a similar structure: their cytoskeleton. The cytoskeleton is the protein fibers found within the liquid cytoplasm of plant cells that maintain and modify their physical structure. It performs the same function in plants as the bone skeleton does in animals. But how does it function? How does a tiny seed develop into a large, sturdy tree?  

On October 11th I met with Dr. Luis Vidali, a scientist researching mechanisms in plant cell growth and reproduction, with a focus on studying the cytoskeleton of the moss species Physcomitrium patens. Born in Mexico before moving to the US to continue his studies after college, Dr. Vidali received his doctorate at the University of Massachusetts, Amherst, and is currently Associate Professor of Biology and Biotechnology at the Worcester Polytechnic Institute in Worcester, MA. 

 

Interview Transcript*: 

*At the time of the interview verbatim quotes could not be recorded. This transcript is based on notes taken during the interview and the transcript was submitted to Professor Vidali prior to publication to make sure his words were accurately represented. 

 

Luke Taylor: If you were to explain the implications of your research to the general public in a few words or sentences, what would you say? 

Dr. Luis Vidali: I study how plant cells grow, especially how plant cells take up more space as they grow. Studying the growth of plant cells is important because plants are integral to our everyday lives, providing food, fibers, and fuels. Plants are responsible for all of these, and all plants are made of microscopic cells with defined shapes. If you want to understand how plants grow, you need to understand plant cell growth.

 

LT: Why is moss such a good model? 

LV: The model I use is Physcomitrium patens: spreading earth moss. We want plant models that grow fast and have a short reproductive cycle, to expedite the pace of researching the cells. Additionally, plants have a reproductive cycle that consists of alternation of generations, where the gametophyte is haploid (has one copy of each chromosome in its cells) and the sporophyte is diploid (has two copies). The generation we primarily use is the gametophyte, and because it has fewer chromosomes per cell, we can identify mutations in its genome and find a demonstrated phenotype much faster than with diploid cells. The moss cells will eventually become identical to each other, allowing for easy control of experimentation without self-breeding techniques.

 

LT: How do you circumvent the alternation of generations cycle with moss cells if you primarily work with the gametophyte? 

LV: The diploid sporophyte of the moss we work with makes brown capsules. We are not interested in these capsules; we are interested in the more dominant gametophyte. The reason we can circumvent the alternation of generations is that the spores develop protonemata, which are the filaments of cells growing from the moss gametophyte. We grind the moss every week to prevent the sporophyte from developing and propagating spores. The tools we use for this grinding are blenders not unlike a kitchen blender, though a two-shaft, two-probe homogenizer could be used as well.

 

LT: You state that the cytoskeleton is one of the most conserved structures in plants, animals, and fungi. From what you have researched in plant cytoskeletal structure and function, which functions in plant cytoskeletons do you think may be conserved (paralleled) by fungi and animals, and which structures and functions do you believe diverged? 

LV: Conserved structures in all eukaryotic cells include the separation of chromosomes by the microtubules in the mitotic spindle, and the polarized transport of vesicles  mediated by actin. In evolution, cell division divergence includes the use of the phragmoplast exclusively in some green algae and plants. The phragmoplast is a complex including microtubules and actin which mediates the production of the cell plate during cytokinesis of the plant cell. In contrast, fungi and animal cells use actin and myosin to make the contractile ring, which squeezes the two daughter cells apart. 

 

LT: Myosin is one of the proteins of study in your lab. From my understanding, myosin is associated with animal muscle cells. How does myosin in plant cells relate to myosin use in animal cells? 

LV: Plant cells have only two classes of myosin proteins, whereas animal cells have several more classes, the most abundant being myosin II (which explains why animal muscle contraction may be the first thing to come to mind when one hears of myosin). Myosin class II in animal cells makes contractile filaments with actin, whereas myosin class I mediates vesicle transportation with actin. These myosins in animal cells are related to stress fibers and their contractile nature. Plant cells lack these myosins: they have only myosin classes VIII and XI. Myosin class XI is functionally homologous to myosin class V, which was present in the last common ancestor of plants and animals and mediates the transport of vesicles. The presence of myosin XI in plant cells shows the conserved nature of vesicle transportation in eukaryotic cells. In fungi, class V, I, and II myosins are present, and class II has a function like that of the contractile ring seen in animal cells. This is an example of the phylogenetic closeness of the fungi kingdom to the animal kingdom, compared with that of the plant kingdom to the animal kingdom. Myosin VIII in plant cells specifically mediates vesicle transportation at the phragmoplast and plasmodesmata, a function specific to plant cells.

 

LT: You have a collaboration with the department of physics at WPI. What does this entail in terms of your research and methods? Do you find the interdisciplinary nature of your research to be more enlightening about phytological research? How do you apply the principles of physics in your research?  

LV: In my lab we use biophysical and mathematical techniques to model the diffusion of vesicles and molecules in the cell. To do this, we first need to measure the diffusion coefficient of the particles. The diffusion coefficient describes how fast particles move through space and has units of µm²/s. Because the motion of individual particles is difficult to measure directly, we instead use the diffusion coefficient to estimate how quickly particles cover a given area: treating the area covered as a function of time, a molecule or vesicle with a larger diffusion coefficient covers a given space more quickly. In our experiments, we use reaction-diffusion calculations to measure how long vesicles bind to myosin, and we see that the vesicles bind to the myosin for very brief periods of time. These mathematical and physical models of diffusion allow us to understand the systems better and to model changes in the rate of vesicle transport and secretion.
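The relationship Dr. Vidali describes can be illustrated with a short simulation. In two dimensions, Brownian motion satisfies ⟨r²⟩ = 4Dt, so a diffusion coefficient can be recovered from simulated particle tracks. This is a toy sketch under that standard relation, not the lab's analysis code, and every parameter value below is made up.

```python
import math
import random

def estimate_diffusion_coefficient(d_true, dt, n_steps, n_particles, seed=0):
    """Simulate 2D Brownian motion and recover D from the mean squared displacement.

    Each step per axis is Gaussian with variance 2*D*dt, so <r^2> = 4*D*t in 2D.
    """
    rng = random.Random(seed)
    sigma = math.sqrt(2 * d_true * dt)  # per-axis step standard deviation
    total_r2 = 0.0
    for _ in range(n_particles):
        x = y = 0.0
        for _ in range(n_steps):
            x += rng.gauss(0.0, sigma)
            y += rng.gauss(0.0, sigma)
        total_r2 += x * x + y * y
    msd = total_r2 / n_particles  # mean squared displacement at final time t
    t = n_steps * dt
    return msd / (4 * t)          # estimated D, in the units of the inputs

d_est = estimate_diffusion_coefficient(d_true=0.5, dt=0.01, n_steps=100, n_particles=2000)
print(f"Estimated D: {d_est:.3f} (true value 0.5)")
```

Doubling the diffusion coefficient doubles the mean squared displacement at any given time, which is why a particle with a larger D covers a given area more quickly.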

We also use physics and mathematics to study the mechanical properties of plant cell walls. For our purposes, we model the plant cell wall as a thin shell of a complex polysaccharide matrix, which behaves like a balloon. The osmotic pressure of the plant cell applies turgor pressure to the cell wall, causing it to expand and assume a more rigid form. Generally speaking, we study the cell wall at the material rather than the molecular level. We use an elastic model for the material properties of the cell wall, considering the tensions, stresses, and strains that turgor pressure from osmosis exerts on the wall. The purpose of our model is to make predictions about how the plant cell behaves, and as we continue to test these predictions, we update the model’s parameters accordingly.
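The balloon analogy can be made quantitative with the standard thin-shell approximation: for a spherical shell of radius r and wall thickness t under internal (turgor) pressure P, Laplace's law gives a wall stress of σ = P·r / (2t). The numerical values below are assumed, order-of-magnitude illustrations, not measurements from Dr. Vidali's lab.

```python
def wall_stress_sphere(pressure, radius, thickness):
    """Laplace's law for a thin spherical shell: sigma = P * r / (2 * t).

    Valid when the wall is thin relative to the radius, as assumed for the cell wall.
    """
    return pressure * radius / (2 * thickness)

pressure = 0.5e6     # Pa, turgor pressure (assumed)
radius = 10e-6       # m, cell radius (assumed)
thickness = 200e-9   # m, wall thickness (assumed)

stress = wall_stress_sphere(pressure, radius, thickness)
print(f"Wall stress: {stress / 1e6:.1f} MPa")
```

Because the wall is so thin relative to the cell radius, even a modest turgor pressure produces a much larger wall stress, which is why the pressurized wall behaves like a taut balloon.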

 

Related papers by Dr. Vidali and colleagues:  

Bibeau, J.P., Furt, F., Mousavi, S.I., Kingsley, J.L., Levine,M.F., Tüzel, E. and Vidali, L. (2020) In vivo interactions between myosin XI, vesicles and filamentous actin are fast and transient in Physcomitrella patens. J. Cell Sci. (2020) 133, jcs234682 doi: 10.1242/jcs.234682  

Chelladurai, D., Galotto, G., Petitto, J., Vidali, L., and Wu, M. (2020). Inferring lateral tension distribution in wall structures of single cells. Eur Phys J Plus 135, 662. https://doi.org/10.1140/epjp/s13360-020-00670-8. 

Galotto, G., Abreu, I., Sherman, C.A., Liu, B., Gonzalez-Guerrero, M., and Vidali, L. (2020) Chitin triggers calcium-mediated immune response in the plant model Physcomitrella patens. Molecular Plant-Microbe Interactions. doi: 10.1094/MPMI-03-20-0064-R 

Kingsley, J.L., Bibeau, J.P., Mousavi, S.I., Unsal, C., Chen, Z., Huang, X., Vidali, L., and Tüzel, E. (2018) Characterization of cell boundary and confocal effects improves quantitative FRAP analysis. Biophysical Journal. 114:1153-1164. doi:10.1016/j.bpj.2018.01.01. 

Filed Under: Biology, Math and Physics, Science Tagged With: Cell Biology, Cytoskeleton, Luis Vidali, Moss, Plants

When We Fall Asleep

December 5, 2021 by Grant Griesman

When our bodies shut down at night, our brains transport us into strange, convoluted alternate realities. Dreams range from the mundane to the fantastical, from classrooms to castles. Despite the sheer absurdity of many dreams, they always feel real. But where do dreams even come from, and why do we have them?

Definitions of a dream range from the generous “subjective experience during sleep” to the more specific “immersive spatiotemporal hallucination” (Siclari et al., 2020). Taken either way, dreams are characterized by increased blood flow to regions of the brain called the amygdala, hippocampus, and anterior cingulate cortex. The significant role that these regions play in regulating our emotions may explain the intense emotional aspect of many dreams (Schwartz & Maquet, 2002). 

There are five stages of sleep. The first four stages are collectively categorized as non-Rapid Eye Movement, or NREM, sleep. Accordingly, the fifth stage is referred to as the REM stage. During REM sleep, our eyeballs flit back and forth underneath our eyelids, our muscles are paralyzed to prevent self-injury from dream enactment, and our brain activity reflects that of wakefulness (Siclari et al., 2020). Although dreams are more common in REM sleep, recent research has shown that shorter and less bizarre dreams occur during NREM sleep as well (Nielsen, 2000).

It seems like something as peculiar as dreaming should have a distinct purpose. However, the exact function of dreams is still unknown. One theory speculates that dreams are simply a byproduct of other brain activity, such as memory consolidation, that occurs during sleep. Sigmund Freud, often considered the father of psychoanalysis, believed that dreams allowed for the disguised fulfillment of the sexual and aggressive desires of the id. According to Freud, the id is the component of our personality that lies below our consciousness and drives primitive, aggressive desires. Other theories suggest that dreaming is evolutionarily advantageous because it allows us to practice behaviors important to our survival in our sleep, preparing us for the same events in wakefulness. These behaviors include hunting, mating, responding to threats, and socializing (Siclari et al., 2020).

Some people seem to remember their dreams every night, while others claim to never dream at all. Dream recall averages at about one dream a week, but this varies widely. Practices such as keeping a dream journal and setting an alarm during a period of likely REM sleep improve recall.

 Recall is inherently easier with nightmares. By definition, nightmares cause awakening, while “bad dreams” contain similar emotionally troubling content but do not induce awakening (Robert & Zadra, 2014). There is evidence for a genetic predisposition to nightmares (Hublin et al., 1999).

Lucid dreams are a fascinating type of REM dreaming in which the individual is aware they are dreaming and may even be able to control the dream. Lucid dreams activate brain areas usually associated with insight and agency in wakefulness (Dresler et al., 2012). They also elicit the same eye movements and respiration patterns. For example, when asked to dive into a pool in their lucid dream, subjects briefly stopped breathing — as if they were underwater. The perception of time is also similar; counting from 0 to 10 in a lucid dream takes about as long as it does in real life. Lucid dreams provide particularly valuable insights into the mechanisms of dreaming because the dreamer can communicate with the researcher through pre-determined eye movements (Erlacher et al., 2014).

So what happens when we miss out on REM sleep and REM dreams? Unfortunately, modern society gives us plenty of chances to find out. Substances, especially alcohol and marijuana, decrease the time we spend in REM sleep. Medications such as benzodiazepines, antidepressants, and, ironically, sleeping pills also decrease REM sleep. Furthermore, exposure to artificial light before bed and the use of an alarm clock limit REM sleep. Collectively, the impact of these behaviors can hinder immune function, memory consolidation, and mood regulation (Naiman, 2017). 

Despite everything that scientists have discovered about dreams, there is still much about them that remains a mystery. Recently, researchers have been trying to interpret the content of dreams by using brain scans and machine learning to decode certain patterns of brain activity (Horikawa et al., 2013). For now, however, we can only take what we do know and marvel at the rest. Every night brings its own all-expenses-paid adventure into another reality.

References

Dresler, M., Wehrle, R., Spoormaker, V. I., Koch, S. P., Holsboer, F., Steiger, A., Obrig, H., Sämann, P. G., & Czisch, M. (2012). Neural Correlates of Dream Lucidity Obtained from Contrasting Lucid versus Non-Lucid REM Sleep: A Combined EEG/fMRI Case Study. Sleep, 35(7), 1017–1020. https://doi.org/10.5665/sleep.1974

Erlacher, D., Schädlich, M., Stumbrys, T., & Schredl, M. (2014). Time for actions in lucid dreams: Effects of task modality, length, and complexity. Frontiers in Psychology, 4, 1013. https://doi.org/10.3389/fpsyg.2013.01013

Horikawa, T., Tamaki, M., Miyawaki, Y., & Kamitani, Y. (2013). Neural Decoding of Visual Imagery During Sleep. Science, 340(6132), 639–642.

Hublin, C., Kaprio, J., Partinen, M., & Koskenvuo, M. (1999). Nightmares: Familial aggregation and association with psychiatric disorders in a nationwide twin cohort. American Journal of Medical Genetics, 88(4), 329–336. https://doi.org/10.1002/(SICI)1096-8628(19990820)88:4<329::AID-AJMG8>3.0.CO;2-E

Naiman, R. (2017). Dreamless: The silent epidemic of REM sleep loss. Annals of the New York Academy of Sciences, 1406(1), 77–85. https://doi.org/10.1111/nyas.13447

Nielsen, T. A. (2000). A review of mentation in REM and NREM sleep: “Covert” REM sleep as a possible reconciliation of two opposing models. Behavioral and Brain Sciences, 23(6), 851–866. https://doi.org/10.1017/S0140525X0000399X

Robert, G., & Zadra, A. (2014). Thematic and Content Analysis of Idiopathic Nightmares and Bad Dreams. Sleep, 37(2), 409–417. https://doi.org/10.5665/sleep.3426

Schwartz, S., & Maquet, P. (2002). Sleep imaging and the neuro-psychological assessment of dreams. Trends in Cognitive Sciences, 6(1), 23–30. https://doi.org/10.1016/S1364-6613(00)01818-0

Siclari, F., Valli, K., & Arnulf, I. (2020). Dreams and nightmares in healthy adults and in patients with sleep and neurological disorders. The Lancet Neurology, 19(10), 849–859. https://doi.org/10.1016/S1474-4422(20)30275-1

Filed Under: Psychology and Neuroscience, Science Tagged With: dreams, REM, sleep

What is more urgent for AI research: long-term or short-term concerns? Experts disagree

April 26, 2021 by Micaela Simeone '22

In a 2015 TED Talk, philosopher and Founding Director of the Oxford Future of Humanity Institute Nick Bostrom discusses the prospect of machine superintelligence: AI that would supersede human-level general intelligence. He begins by noting that with the advent of machine-learning models, we have shifted into a new paradigm of algorithms that learn—often from raw data, similar to the human infant (Bostrom, “What Happens” 3:26 – 3:49).

We are, of course, still in the era of narrow AI: the human brain possesses many capabilities beyond those of the most powerful AI. However, Bostrom notes that artificial general intelligence (AGI)—AI that can perform any intellectual task a human can—has been projected by many experts to arrive around mid- to late-century (Müller and Bostrom, 1) and that the period in between the development of AGI and whatever comes next may not be long at all.

Of course, Bostrom notes, the ultimate limits to information processing in the machine substrate lie far outside the limits of biological tissue due to factors such as size and speed difference (“What Happens” 5:05 – 5:43). So, Bostrom says, the potential for superintelligence lies dormant for now, but in this century, scientists may unlock a new path forward in AI. We might then see an intelligence explosion constituting a new shift in the knowledge substrate, and resulting in superintelligence (6:00 – 6:09).

What we should worry about, Bostrom explains, are the consequences (which reach as far as existential risk) of creating an immensely powerful intelligence guided wholly by processes of optimization. Bostrom imagines that a superintelligent AI tasked with, for example, solving a highly complex mathematical problem, might view human morals as threats to a strictly mathematical approach. In this scenario, our future would be shaped by the preferences of the AI, for better or for worse (Bostrom, “What Happens” 10:02 – 10:28).

For Bostrom, then, the answer is to figure out how to create AI that uses its intelligence to learn what we value and is motivated to perform actions that it would predict we will approve of. We would thus leverage this intelligence as much as possible to solve the control problem: “the initial conditions for the intelligence explosion might need to be set up in just the right way, if we are to have a controlled detonation,” Bostrom says (“What Happens” 14:33 – 14:41). 

Thinking too far ahead?

Experts disagree about what solutions are urgently needed in AI

Many academics think that concerns about superintelligence are too indefinite and too far in the future to merit much discussion. These thinkers usually also argue that our energies are better spent focused on short-term AI concerns, given that AI is already reshaping our lives in profound and not always positive ways. In a 2015 article, Oxford Internet Institute professor Luciano Floridi called discussions about a possible intelligence explosion “irresponsibly distracting,” arguing that we need to take care of the “serious and pressing problems” of present-day digital technologies (“Singularitarians” 9-10).

Beneficence versus non-maleficence

In conversations about how we can design AI systems that will better serve the interests of humanity and promote the common good, a distinction is often made between the negative principle (“do no harm”) and the positive principle (“do good”). Put another way, approaches toward developing principled AI can be either about ensuring that those systems are beneficent or ensuring they are non-maleficent. In the news, as one article points out, the two mindsets can mean the difference between headlines like “Using AI to eliminate bias from hiring” and “AI-assisted hiring is biased. Here’s how to make it more fair” (Bodnari, 2020).

Thinkers, like Bostrom, concerned with long-term AI worries such as superintelligence tend to structure their arguments more around the negative principle of non-maleficence. Though Bostrom does present a “common good principle” (312) in his 2014 book, Superintelligence: Paths, Dangers, Strategies, suggestions like this one appear alongside the broader consideration that we need to be very careful with AI development in order to avoid the wide-ranging harm possible with general machine intelligence.

In an article from last year, Floridi once again accuses those concerned with superintelligence of alarmism and irresponsibility, arguing that their worries mislead public opinion to be fearful of AI progress rather than knowledgeable about the potential and much-needed solutions AI could bring about. Echoing the beneficence principle, Floridi writes, “we need all the good technology that we can design, develop, and deploy to cope with these challenges, and all human intelligence we can exercise to put this technology in the service of a better future” (“New Winter” 2).

In his afterword, Bostrom echoes the non-maleficence principle when he writes, “I just happen to think that, at this point in history, whereas we might get by with a vague sense that there are (astronomically) great things to hope for if the machine intelligence transition goes well, it seems more urgent that we develop a precise detailed understanding of what specific things could go wrong—so that we can make sure to avoid them” (Superintelligence 324).

Considerations regarding the two principles within the field of bioethics (where they originated) can be transferred to conversations about AI. In taking the beneficence approach (do good = help the patient), one worry in the medical community is that doctors risk negatively interfering in their patients’ lives or overstepping boundaries such as privacy. Similarly, in the superintelligence debate, the short-term, “do good now” camp perhaps risks sidelining preventative AI safety mechanisms, for example, in the pursuit of other more pressing beneficent outcomes such as problem-solving or human rights compliance.

There are many other complications involved. If we take the beneficence approach, the loaded questions of “whose common good?” and of who is making the decisions are paramount. On the other hand, taking an approach that centers doing good arguably also centers humanity and compassion, whereas non-maleficence may lead to more mathematical or impersonal calculations of how best to avoid specific risks or outcomes. 

Bridging the gap

The different perspectives around hopes for AI, and possible connections between them, are outlined in a 2019 paper, “Bridging near- and long-term concerns about AI,” by Stephen Cave and Seán S. ÓhÉigeartaigh of the Leverhulme Centre for the Future of Intelligence at the University of Cambridge.

The authors explain that researchers focused on the near-term prioritize immediate or imminent challenges such as privacy, accountability, algorithmic bias, and the safety of systems that are close to deployment. On the other hand, those working on the long-term examine concerns that are less certain, such as wide-scale job loss, superintelligence, and “fundamental questions about humanity’s place in a world with intelligent machines” (Cave and ÓhÉigeartaigh, 5).

Ultimately, Cave and ÓhÉigeartaigh argue that the disconnect between the two groups is a mistake, and that thinkers focused on one set of issues have good reasons to take seriously work done on the other.

The authors point to many possible benefits available to long-term research with insight from the present. For example, they write that immediate AI concerns will grow in importance as increasingly powerful systems are deployed. Technical safety research done now, they explain, could provide fundamental frameworks for future systems (5).

In considering what the long-term conversation has to offer us today, the authors write that “perhaps the most important point is that the medium to long term has a way of becoming the present. And it can do so unpredictably” (6). They emphasize that the impacts of both current and future AI systems may depend more on tipping points than on steady progressions, writing, “what the mainstream perceives to be distant-future speculation could therefore become reality sooner than expected” (6).

Regardless of the controversies over whether we should take the prospect of superintelligence seriously, support for investments in AI safety research unites many experts across the board. At the least, simply joining the conversation means asking one question which we might all agree is important: What does it mean to be human in a world increasingly shaped by the internet, digital technologies, algorithms, and machine-learning?

Works Cited

Bodnari, Andreea. “AI Ethics: First Do No Harm.” Towards Data Science, Sep 7, 2020, https://towardsdatascience.com/ai-ethics-first-do-no-harm-23fbff93017a 

Bostrom, Nick. Superintelligence: Paths, Dangers, Strategies. Oxford University Press, 2014. 

Bostrom, Nick. “What happens when our computers get smarter than we are?” TED, March 2015, video, https://www.ted.com/talks/nick_bostrom_what_happens_when_our_computers_get_smarter_than_we_are 

Cave, Stephen and ÓhÉigeartaigh, Seán S. “Bridging near- and long-term concerns about AI,” Nature Machine Intelligence, vol. 1, 2019, pp. 5-6. https://www.nature.com/articles/s42256-018-0003-2 

Floridi, Luciano. “AI and Its New Winter: from Myths to Realities,” Philosophy & Technology, vol. 33, 2020, pp. 1-3, SpringerLink. https://link.springer.com/article/10.1007/s13347-020-00396-6 

Floridi, Luciano. “Singularitarians, AItheists, and Why the Problem with Artificial Intelligence Is H.A.L. (Humanity At Large), Not HAL.” APA Newsletter on Philosophy and Computers, vol. 14, no. 2, Spring 2015, pp. 8-10. https://www.academia.edu/15037984/ 

Müller, Vincent C. and Bostrom, Nick. “Future Progress in Artificial Intelligence: A Survey of Expert Opinion.” Fundamental Issues of Artificial Intelligence, Synthese Library, Berlin: Springer, 2014, www.nickbostrom.com


‘The Scariest Deepfake of All’: AI-Generated Text & GPT-3

March 1, 2021 by Micaela Simeone '22

Recent advances in machine-learning systems have led to both exciting and unnerving technologies—personal assistant bots, email spam filtering, and search engine algorithms are just a few omnipresent examples of technology made possible through these systems. Deepfakes (“deep learning fakes”), or algorithm-generated synthetic media, constitute one example of a still-emerging and tremendously consequential development in machine learning. WIRED recently called AI-generated text “the scariest deepfake of all,” turning heads toward one of the most powerful text generators out there: artificial intelligence research lab OpenAI’s Generative Pre-trained Transformer 3 (GPT-3) language model.

GPT-3 is an autoregressive language model that uses its deep-learning experience to produce human-like text. Put simply, GPT-3 is directed to study the statistical patterns in a dataset of about a trillion words collected from the web and digitized books. GPT-3 then uses its digest of that massive corpus to respond to text prompts by generating new text with similar statistical patterns, endowing it with the ability to compose news articles, satire, and even poetry. 
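GPT-3 itself is a transformer with billions of parameters, but the autoregressive idea it relies on—learn which tokens tend to follow which, then repeatedly sample the next token—can be sketched with a toy bigram model. Everything below (the corpus, the function names) is illustrative, not OpenAI’s code:

```python
import random
from collections import defaultdict

def train_bigram(words):
    """Record which word follows which: the simplest 'statistical pattern' of text."""
    follows = defaultdict(list)
    for prev, nxt in zip(words, words[1:]):
        follows[prev].append(nxt)
    return follows

def generate(follows, start, length, seed=0):
    """Autoregressively sample each next word from the observed distribution."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        options = follows.get(out[-1])
        if not options:  # dead end: this word was never followed by anything
            break
        out.append(rng.choice(options))
    return " ".join(out)

corpus = "the model reads the web and the model writes text like the web".split()
model = train_bigram(corpus)
print(generate(model, "the", 8))  # new text sharing the corpus's word-to-word statistics
```

Scaling this idea up—from counting word pairs to predicting tokens with a huge neural network over a trillion-word corpus—is, loosely, what separates the toy from GPT-3.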

GPT-3’s creators designed the AI to learn language patterns and immediately saw GPT-3 scoring exceptionally well on reading-comprehension tests. But when OpenAI researchers configured the system to generate strikingly human-like text, they began to imagine how these generative capabilities could be used for harmful purposes. Previously, OpenAI had often released full code with its publications on new models. This time, GPT-3’s creators decided to withhold its underlying code from the public, not wanting to disseminate the full model or the millions of web pages used to train the system. In OpenAI’s research paper on GPT-3, the authors note that “any socially harmful activity that relies on generating text could be augmented by powerful language models,” and that “the misuse potential of language models increases as the quality of text synthesis improves.” 

Just as humans are prone to internalizing the belief systems “fed” to us, machine-learning systems mimic what is in their training data. In GPT-3’s case, biases present in the vast training corpus of Internet text led the AI to generate stereotyped and prejudiced content. Preliminary testing at OpenAI has shown that GPT-3-generated content reflects gendered stereotypes and reproduces racial and religious biases. With trust already fragmented and polarization pervasive online, Internet users find it increasingly difficult to trust what they read, and widespread GPT-3-generated text would require us to be even more critical consumers of online content. GPT-3’s ability to mirror societal biases and prejudices in its generated text means that, online, it might only give more voice to our darkest emotional, civic, and social tendencies.
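How a statistical model inherits skew from its training data can be seen even in a toy word-counting sketch. The corpus below is fabricated to contain a deliberate 80/20 pronoun skew; real bias audits of GPT-3 measure associations like these at vastly larger scale:

```python
from collections import Counter

# Hypothetical, deliberately skewed "training corpus".
corpus = (
    "the doctor said he would help . " * 8
    + "the doctor said she would help . " * 2
).split()

# What the model "learns" to put after "said" is exactly the skew it was fed:
after_said = Counter(nxt for prev, nxt in zip(corpus, corpus[1:]) if prev == "said")
print(after_said)  # the 80/20 skew is memorized, and sampling from it reproduces it
```

Nothing in the counting step is prejudiced; the output simply echoes the distribution of the input, which is precisely why biased training text yields biased generations.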

Because GPT-3’s underlying code remains in the hands of OpenAI and its API (the interface where users can partially work with and test out GPT-3) is not freely accessible to the public, many concerns over its implications steer our focus toward a possible future where its synthetic text becomes ubiquitous online. Given GPT-3’s frighteningly successful “conception” of natural language, as well as its creative capabilities and bias-susceptible processes, many worry that a GPT-3-populated Internet could do serious harm to our information ecosystem. However, GPT-3 exhibits limitations as well as powerful affordances, and experts are asking us not to project too many fears about human-level AI onto GPT-3 just yet.

GPT-3: Online Journalist

GPT-3-generated news article that research participants had the greatest difficulty distinguishing from a human-written article

Fundamentally, concerns about GPT-3-generated text online come from an awareness of just how different a threat synthetic text poses compared with other forms of synthetic media. In a recent article, WIRED contributor Renee DiResta writes that, throughout the development of Photoshop and other image-editing CGI tools, we learned to develop a healthy skepticism toward photos, though without fully disbelieving them, because “we understand that each picture is rooted in reality.” She points out that generated media, such as deepfaked video or GPT-3 output, is different because there is no unaltered original, and we will have to adjust to a new level of unreality. In addition, synthetic text “will be easy to generate in high volume, and with fewer tells to enable detection.” Right now, it is possible to detect repetitive or recycled comments that use the same snippets of text to flood a comment section or persuade audiences. However, if such comments had been generated independently by an AI, DiResta notes, these manipulation campaigns would have been much harder to detect:

“Undetectable textfakes—masked as regular chatter on Twitter, Facebook, Reddit, and the like—have the potential to be far more subtle, far more prevalent, and far more sinister … The ability to manufacture a majority opinion, or create a fake commenter arms race—with minimal potential for detection—would enable sophisticated, extensive influence campaigns.” – Renee DiResta, WIRED
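The detection gap DiResta describes can be made concrete: copy-pasted astroturfing shares verbatim snippets, which simple word-shingle overlap catches, while independently generated comments would share none. A minimal sketch (all example comments are invented):

```python
def shingles(text, k=3):
    """The set of k-word snippets in a comment; recycled text shares snippets."""
    words = text.lower().split()
    return {" ".join(words[i:i + k]) for i in range(len(words) - k + 1)}

def jaccard(a, b):
    """Overlap between two snippet sets: 1.0 = identical, 0.0 = nothing shared."""
    return len(a & b) / len(a | b) if (a or b) else 0.0

original = "this policy is terrible and must be stopped right now"
recycled = "this policy is terrible and must be stopped immediately"
fresh = "i disagree with the proposal for several unrelated reasons"

print(jaccard(shingles(original), shingles(recycled)))  # high: flagged as recycled
print(jaccard(shingles(original), shingles(fresh)))     # zero: looks independent
```

A fleet of AI-written comments would all score like `fresh` against one another, which is exactly why DiResta expects such campaigns to slip past snippet-matching defenses.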

In their paper “Language Models are Few-Shot Learners,” GPT-3’s developers discuss the potential for misuse and threat actors—those seeking to use GPT-3 for malicious or harmful purposes. The paper states that threat actors can be organized by skill and resource levels, “ranging from low or moderately skilled and resourced actors who may be able to build a malicious product to … highly skilled and well resourced (e.g. state-sponsored) groups with long-term agendas.” Interestingly, OpenAI researchers write that threat actor agendas are “influenced by economic factors like scalability and ease of deployment” and that ease of use is another significant incentive for malicious use of AI. It seems that the very principles that guide the development of many emerging AI models like GPT-3—scalability, accessibility, and stable infrastructure—could also be what position these models as perfect options for threat actors seeking to undermine personal and collective agency online.

Staying with the projected scenario of GPT-3 text becoming widespread online, it is useful to consider the already algorithmic nature of our interactions online. In her article, DiResta writes of the Internet that “algorithmically generated content receives algorithmically generated responses, which feeds into algorithmically mediated curation systems that surface information based on engagement.” Introducing an AI “voice” into this environment could make our online interactions even less human. One example of a possible algorithmic accomplice of GPT-3 is Google’s Autocomplete, whose algorithms internalize queries and often reflect “-ism” statements and biases when producing suggestions based on common searches. AI-generated texts could populate such algorithms with even more problematic content and further erode our control over how we acquire neutral, unbiased knowledge.

An Emotional Problem

Talk of GPT-3 passing the Turing Test reflects many concerns about creating increasingly powerful AI. GPT-3 seems to hint at the possibility of a future where AI is able to replicate the attributes we might hope are exclusively human—traits like creativity, ingenuity, and, of course, understanding language. As Microsoft AI Blog contributor Jennifer Langston writes in a recent post, “designing AI models that one day understand the world more like people starts with language, a critical component to understanding human intent.” 

Of course, as a machine-learning model, GPT-3 relies on a neural network (inspired by neural pathways in the human brain) that can process language. Importantly, GPT-3 represents a massive acceleration in scale and computing power (rather than novel ML techniques), which gives it the ability to exhibit something eerily close to human intelligence. A recent Vox article on the subject asks, “is human-level intelligence something that will require a fundamentally new approach, or is it something that emerges of its own accord as we pump more and more computing power into simple machine learning models?” For some, the idea that the only thing distinguishing human intelligence from our algorithms is relative “computing power” is more than a little uncomfortable. 

As mentioned earlier, GPT-3 has been able to exhibit creative and artistic qualities, generating a trove of literary content including poetry and satire. The attributes we’ve long understood to be distinctly human are now proving to be replicable by AI, raising new anxieties about humanity, identity, and the future.

GPT-3’s recreation of Allen Ginsberg’s “Howl”

GPT-3’s Limitations

While GPT-3 can generate impressively human-like text, most researchers maintain that this text is often “unmoored from reality,” and that, even with GPT-3, we are still far from reaching artificial general intelligence. In a recent MIT Technology Review article, author Will Douglas Heaven points out that GPT-3 often returns contradictions or nonsense because its process is not guided by any true understanding of reality. Ultimately, researchers believe that GPT-3’s human-like output and versatility are the results of excellent engineering, not genuine intelligence. GPT-3 uses many of its parameters to memorize Internet text that doesn’t generalize easily, and essentially parrots back “some well-known facts, some half-truths, and some straight lies, strung together in what first looks like a smooth narrative,” according to Heaven. As it stands today, GPT-3 is just an early glimpse of AI’s world-altering potential, and it remains a narrowly intelligent tool made by humans, reflecting our conceptions of the world.

A final point of optimism is that the field around ethical AI is ever-expanding, and developers at OpenAI are looking into the possibility of automatic discriminators that may have greater success than human evaluators at detecting AI model-generated text. In their research paper, developers wrote that “automatic detection of these models may be a promising area of future research.” Improving our ability to detect AI-generated text might be one way to regain agency in a possible future with bias-reproducing AI “journalists” or undetectable deepfaked text spreading misinformation online.
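The discriminators OpenAI envisions are trained classifiers (often language models themselves), but the underlying intuition—machine text tends to have statistical “tells” such as repetition and low vocabulary diversity—can be sketched with two hand-rolled features. The thresholds below are illustrative guesses, not calibrated values from OpenAI’s research:

```python
def type_token_ratio(text):
    """Vocabulary diversity: distinct words over total words."""
    words = text.lower().split()
    return len(set(words)) / len(words) if words else 0.0

def repeated_trigram_fraction(text):
    """Share of 3-word snippets that are repeats; looping or parroting raises it."""
    words = text.lower().split()
    grams = [" ".join(words[i:i + 3]) for i in range(len(words) - 2)]
    return 1 - len(set(grams)) / len(grams) if grams else 0.0

def looks_generated(text, ttr_floor=0.5, rep_ceiling=0.1):
    """Flag text that is unusually repetitive or low-diversity (toy heuristic)."""
    return (type_token_ratio(text) < ttr_floor
            or repeated_trigram_fraction(text) > rep_ceiling)

looping = "the cat sat on the mat " * 5
human = "experts disagree about how soon powerful language models will reshape online discourse"
print(looks_generated(looping), looks_generated(human))  # True False
```

A heuristic this crude is trivially fooled, which is the paper’s point: as generators improve, detection itself must become a machine-learning problem.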

Ultimately, GPT-3 suggests that language is more predictable than many people assume, and it challenges common assumptions about what makes humans unique. Moreover, exactly what’s going on inside GPT-3 isn’t entirely clear, challenging us to keep thinking about the AI “black box” problem and about methods for figuring out just how GPT-3 reiterates natural language after digesting millions of snippets of Internet text. However, perhaps GPT-3 gives us an opportunity to decide for ourselves whether even the most powerful of future text generators could undermine the distinctly human conception of the world and of poetry, language, and conversation. A tweet Heaven quotes in his article from user @mark_riedl provides one possible way to frame both our worries and hopes about tech like GPT-3: “Remember…the Turing Test is not for AI to pass, but for humans to fail.”

