
Bowdoin Science Journal


Mauricio Cuba Almeida '27

Ethical ramifications of AI-powered medical diagnoses

December 7, 2025 by Mauricio Cuba Almeida '27

Incredible advancements in artificial intelligence (AI) have recently paved the way for its use in healthcare settings. Implementation of AI has the potential to address worker shortages in the medical field, speed the discovery of new drugs, and improve diagnoses (Bajwa et al., 2021). Benji Feldheim, a writer for the American Medical Association, applauds AI for restoring the “human side” of medicine: AI scribes, for example, ease the documentation burden doctors face, reducing burnout and improving doctors’ interactions with patients (Feldheim, 2025). Another example is Delphi-2M, an AI model developed by Shmatko et al. (2025) that can accurately predict a patient’s next 20 years of disease burden (i.e., which diseases they will contract and when). AI is evidently a promising technology already capable of improving lives, but there are reasons for skepticism: these same uses of AI raise concerns about fairness and clinical safety. After a brief synopsis of Shmatko et al.’s Delphi-2M, I evaluate the ethical ramifications of AI-powered diagnoses and related clinical tools.

Delphi-2M is an AI model trained on over 400,000 patient histories from a UK database to forecast an individual’s 20-year disease trajectory. Like chatbots such as ChatGPT, Delphi-2M is a large language model (LLM), a type of AI that recognizes and reproduces patterns from large amounts of data. Just as a chatbot learns which words are likely to appear alongside other words in order to form sentences, Delphi-2M learns from its vast training set of medical records to predict a patient’s disease trajectory from real-world patterns. As Yonghui Wu puts it in her summary of Shmatko et al.’s work, becoming a smoker may be followed by a future diagnosis of lung cancer; these are the kinds of patterns Delphi-2M recognizes. To do this, Delphi-2M is fed “tokens” that link diseases or health factors to specific times in a person’s life, like chickenpox at age 2 or smoking at age 41 (Figure 1). Delphi-2M then outputs new tokens predicting which diseases will occur in an individual’s life and when, like the onset of respiratory disorders at age 71 as a result of smoking. After training, Delphi-2M was tested by predicting the medical histories of 1.9 million patients not included in the original training set; it at least partially predicted the patterns in individuals’ diagnoses in 97% of cases.

Figure 1. Visualization of Delphi-2M input and output (Wu, 2025).
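To make the token idea concrete, here is a toy Python sketch of the input/output format described above. The event names, ages, and the hard-coded "prediction" rule are invented for illustration; the real Delphi-2M is a learned model that extracts these transition statistics from hundreds of thousands of records rather than from an if-statement.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class HealthToken:
    """One event in a patient's history: a diagnosis or lifestyle factor plus an age."""
    event: str   # e.g. "chickenpox" or "smoking" (illustrative labels, not Delphi-2M's vocabulary)
    age: float   # age in years when the event occurred

# A toy patient history, analogous to the tokenized records Delphi-2M trains on.
history = [
    HealthToken("chickenpox", 2.0),
    HealthToken("smoking", 41.0),
]

def predict_next(history):
    """Stand-in for the model: map a known pattern to a plausible future token.
    The real model learns such transitions from ~400,000 patient records."""
    latest = max(t.age for t in history)
    if any(t.event == "smoking" for t in history):
        return HealthToken("respiratory disorder", latest + 30.0)
    return HealthToken("healthy follow-up", latest + 10.0)

print(predict_next(history))  # a predicted (event, age) token extending the trajectory
```

The essential point is only the data shape: the model consumes a time-stamped sequence of events and emits further (event, age) pairs.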

Nonetheless, we must hold AI used to diagnose patients to a higher level of scrutiny than AI used commercially. LLMs are imperfect: they are subject to algorithmic bias and misuse that begin even before a model is built. Shmatko et al. (2025), for example, address shortcomings of the training data used for Delphi-2M, explaining that data from a mostly white, older subset of the UK population does not entirely generalize to very different demographics. Though Shmatko et al. found success testing the model against a Danish database after training it on UK patients, I am still concerned about how Delphi-2M would perform on non-European and younger demographics, or on groups underrepresented in the training data. Facial recognition is a prime example of AI underperforming when training datasets lack diverse representation: systems designed to recognize faces have historically underperformed on individuals with feminine features or darker skin due to unrepresentative training data (Hardesty, 2018). With this in mind, it is important that training data for diagnostic AI represent all demographics before widespread implementation.
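One concrete safeguard is a representation audit before training. The sketch below compares each demographic group's share of a training cohort with its share of a reference population; all the counts and group labels are invented for illustration, and a real audit would use actual patient metadata and more careful population benchmarks.

```python
from collections import Counter

# Hypothetical demographic labels for a 100-person training cohort versus a
# 100-person reference population. The skew mimics, loosely, a cohort that
# overrepresents one group; the numbers are made up.
training_cohort = ["white"] * 94 + ["black"] * 2 + ["asian"] * 3 + ["other"] * 1
reference_population = ["white"] * 82 + ["black"] * 4 + ["asian"] * 9 + ["other"] * 5

def representation_gap(sample, population):
    """Percentage-point gap between each group's share of the training sample
    and its share of the target population. Positive = overrepresented."""
    n_s, n_p = len(sample), len(population)
    s, p = Counter(sample), Counter(population)
    return {group: 100 * (s[group] / n_s - p[group] / n_p) for group in p}

gaps = representation_gap(training_cohort, reference_population)
print(gaps)  # e.g. "white" overrepresented by ~12 points, "asian" under by ~6
```

A check like this does not fix bias, but it flags, before any model is trained, which groups a diagnostic tool is likely to serve poorly.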

Furthermore, Cabitza et al. (2017) wrote on some of the unintended consequences of machine learning in healthcare, postulating that widespread implementation of these tools could also erode physicians’ skills. Though convenient in the short run, overreliance on AI worries Cabitza et al., as studies show physicians aided by AI were less sensitive and accurate in diagnosing patients. Mammogram readers, for instance, were 14% less sensitive when presented with images marked by computer-aided detection (Povyakalo et al., 2013). Though this study focused on image-based diagnoses, widespread use of Delphi-2M could plausibly produce the same deskilling in physicians. Delphi-2M is also an exclusively text-based model, which, as Cabitza et al. detail, means these diagnostic algorithms do not incorporate crucial contextual elements that are “psychological, relational, social, and organizational” in nature. In one real-world example Cabitza et al. describe, an AI model predicted a lower mortality risk for patients with both pneumonia and asthma than for pneumonia patients without asthma. Knowing that asthma is not a protective factor for pneumonia patients, the researchers traced the discrepant output to hospital procedures that admitted pneumonia patients with asthma directly to intensive care, giving them better outcomes. This crucial piece of context, difficult to represent in a prognostic model, led to an error a physician would not make. AI is thus limited by the information it can train on.
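The 14% figure refers to reader sensitivity, which is straightforward to compute. The counts below are invented purely to illustrate the metric; they are not Povyakalo et al.'s data, which reported the drop for a subgroup of mammogram readers.

```python
def sensitivity(true_positives, false_negatives):
    """Sensitivity (recall): the fraction of actual cases the reader catches."""
    return true_positives / (true_positives + false_negatives)

# Illustrative counts for 100 cancer-positive mammograms read with and
# without computer-aided detection marks (numbers are fabricated).
unaided = sensitivity(true_positives=86, false_negatives=14)  # 0.86
aided = sensitivity(true_positives=74, false_negatives=26)    # 0.74

relative_drop = (unaided - aided) / unaided
print(f"relative drop in sensitivity: {relative_drop:.0%}")
```

Framed this way, "14% less sensitive" means the aided readers missed measurably more real cases, which is exactly the automation-bias failure mode Cabitza et al. warn about.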

Though these new advancements in healthcare AI are promising, they have their limits. Tools like Delphi-2M spot patterns across vast clinical histories that no single clinician could feasibly track, yet the benefits depend on who is represented in the data, how predictions are explained and used, and whether safeguards are in place when they fail. Before AI is implemented in healthcare, we must demand representative training sets, validation across diverse populations, clear disclosure of uncertainty and limitations, and constant human involvement that resists automation bias and deskilling. In short, diagnostic AI should supplement, not replace, clinical judgment, and it should be developed with privacy, equity, and patient trust at the forefront. Only then will these systems reliably improve care rather than merely appear to.


References

Bajwa, J., Munir, U., Nori, A., & Williams, B. (2021). Artificial intelligence in healthcare: transforming the practice of medicine. Future Healthcare Journal, 8(2), e188–e194. https://doi.org/10.7861/fhj.2021-0095

Cabitza, F., Rasoini, R., & Gensini, G. F. (2017). Unintended consequences of machine learning in medicine. JAMA, 318(6), 517. https://doi.org/10.1001/jama.2017.7797

Feldheim, B. (2025, June 12). AI scribes save 15,000 hours—and restore the human side of medicine. American Medical Association. https://www.ama-assn.org/practice-management/digital-health/ai-scribes-save-15000-hours-and-restore-human-side-medicine

Hardesty, L. (2018, February 11). Study finds gender and skin-type bias in commercial artificial-intelligence systems. MIT News. https://news.mit.edu/2018/study-finds-gender-skin-type-bias-artificial-intelligence-systems-0212

Povyakalo, A. A., Alberdi, E., Strigini, L., & Ayton, P. (2013). How to discriminate between computer-aided and computer-hindered decisions. Medical Decision Making, 33(1), 98–107. https://doi.org/10.1177/0272989X12465490

Wu, Y. (2025). AI uses medical records to accurately predict onset of disease 20 years into the future. Nature, 647(8088), 44–45. https://doi.org/10.1038/d41586-025-02971-3

Filed Under: Biology, Computer Science and Tech, Psychology and Neuroscience, Science

Motor Brain-Computer Interface Reanimates Paralyzed Hand

May 4, 2025 by Mauricio Cuba Almeida '27

Over five million people in the United States live with paralysis (Armour et al., 2016), representing a large portion of the US population. Though the extent of paralysis varies from person to person, most people with paralysis experience unmet needs that subtract from their overall life satisfaction. A survey of those with paralysis revealed “peer support, support for family caregivers, [and] sports activities” as domains where individuals with paralysis experienced less fulfillment, with lower household income predicting a higher likelihood of unmet needs (Trezzini et al., 2019). Consequently, individuals with sufficient motor function have turned to video games as a means to meet some of these needs, as video games are sources of recreation, artistic expression, social connectedness, and enablement (Cairns et al., 2019). Oftentimes, however, these individuals are limited in which games they can engage with, as they often “avoid multiplayer games with able-bodied players” (Willsey et al., 2025). Thus, Willsey and colleagues (2025) explore brain-computer interfaces as a potential solution for restoring more sophisticated motor control not just over video games, but over digital interfaces used for social networking or remote work.

Brain-computer interfaces (BCIs) are devices that read and analyze brain activity to produce commands relayed to output devices, with the intent of restoring useful bodily function (Shih et al., 2012). Willsey et al. explain that current motor BCIs cannot distinguish the brain activity corresponding to movements of different fingers, so BCIs have instead relied on detecting the more general movement of grasping a hand, treating the fingers as one group. This limits BCIs to fewer dimensions of control: moving a computer’s point-and-click cursor, say, rather than typing. Hence, Willsey et al. seek to expand BCIs to allow for greater object manipulation, implementing finger decoding that differentiates the brain signals for individual fingers and allows for “typing, playing a musical instrument or manipulating a multieffector digital interface such as a video game controller.” Improving BCIs also involves continuous finger decoding, since finger decoding has mostly been done retrospectively, where finger signals are not classified until after the brain data is analyzed.
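A minimal sketch of what continuous finger decoding means computationally: at each time bin, a decoder maps a vector of neural features to a velocity for each controlled dimension. The linear readout, random weights, and channel count below are stand-ins for illustration; the actual decoder in Willsey et al. is far more sophisticated and is fit to real intracortical recordings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy continuous decoder: firing-rate features from recording channels are
# mapped, bin by bin, to velocities for three finger groups plus a second
# thumb axis (four control dimensions, as in Willsey et al.'s system).
N_CHANNELS, N_DIMS = 96, 4  # channel count is illustrative

# Decoder weights; in reality these are learned from days of training data.
W = rng.normal(size=(N_DIMS, N_CHANNELS)) * 0.1

def decode(firing_rates):
    """Map one time bin of firing rates to finger-group velocities."""
    return W @ firing_rates

# Five consecutive bins of (simulated) neural features, decoded as they arrive.
stream = rng.normal(size=(5, N_CHANNELS))
velocities = np.array([decode(x) for x in stream])
print(velocities.shape)  # (5, 4): one 4-D control vector per time bin
```

The contrast with retrospective decoding is just where this loop runs: here every bin is turned into a control command immediately, rather than after the whole recording has been collected.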

Willsey et al. developed a BCI system capable of decoding three independent finger groups (with the thumb decoded into two dimensions), allowing for four total dimensions of control. By training on the participant’s brain activity over nine days as they attempted to move individual fingers, the BCI learned to distinguish the neural activity corresponding to each finger movement. These four dimensions of control are well demonstrated in a quadcopter simulation, in which a patient with an implanted BCI manipulates a virtual hand to fly a quadcopter drone through the hoops of an obstacle course. Many applications beyond video games are apparent: these finger controls could be extended to a robotic hand or even reanimate a paralyzed limb.

Finger movement is decoded into three distinct groups (differentiated by color; Willsey et al., 2025).
Participant navigates the quadcopter through a hoop using decoded finger movements (Willsey et al., 2025).


The patient’s feelings of social connectedness, enablement, and recreation were greatly improved. Willsey et al. note that the patient often looked forward to the quadcopter sessions, frequently “[asking] when the next quadcopter session was.” Not only did the patient enjoy controlling the quadcopter, but they found the training far from tedious and the controls intuitive. To date, this finger BCI is the most capable motor BCI demonstrated, and it will serve as a valuable model for non-motor BCIs like Brain2Char, a system for decoding text from brain recordings.

However, BCIs raise significant ethical considerations that must be addressed alongside their development. Are users responsible for all outputs of a BCI, even unintended ones? Because BCIs decode brain signals and train on data from a very controlled setting, there is always the potential for natural “noise” to upset a delicate BCI model. Ideally, BCIs would be trained on a participant’s brain activity in a variety of circumstances to mitigate these errors. Furthermore, BCIs may further stigmatize motor disabilities by pushing individuals toward restoring “normal” abilities. I am particularly concerned about the cost of this technology. As with most new clinical technologies, implementation is expensive and prices out individuals of lower socioeconomic status, often the very people who face the greatest need for technologies like BCIs; as mentioned earlier, lower household income predicts more unmet needs for individuals with paralysis. Nonetheless, so long as motor BCIs are developed responsibly and efforts are made to ensure their affordability, they hold great promise.


References

Armour, B. S., Courtney-Long, E. A., Fox, M. H., Fredine, H., & Cahill, A. (2016). Prevalence and Causes of Paralysis—United States, 2013. American Journal of Public Health, 106(10), 1855–1857. https://doi.org/10.2105/ajph.2016.303270

Cairns, P., Power, C., Barlet, M., Haynes, G., Kaufman, C., & Beeston, J. (2019). Enabled players: The value of accessible digital games. Games and Culture, 16(2), 262–282. https://doi.org/10.1177/1555412019893877

Shih, J. J., Krusienski, D. J., & Wolpaw, J. R. (2012). Brain-Computer interfaces in medicine. Mayo Clinic Proceedings, 87(3), 268–279. https://doi.org/10.1016/j.mayocp.2011.12.008

Trezzini, B., Brach, M., Post, M., & Gemperli, A. (2019). Prevalence of and factors associated with expressed and unmet service needs reported by persons with spinal cord injury living in the community. Spinal Cord, 57(6), 490–500. https://doi.org/10.1038/s41393-019-0243-y

Willsey, M. S., Shah, N. P., Avansino, D. T., Hahn, N. V., Jamiolkowski, R. M., Kamdar, F. B., Hochberg, L. R., Willett, F. R., & Henderson, J. M. (2025). A high-performance brain–computer interface for finger decoding and quadcopter game control in an individual with paralysis. Nature Medicine. https://doi.org/10.1038/s41591-024-03341-8

Filed Under: Computer Science and Tech, Psychology and Neuroscience, Science

Machine learning and algorithmic bias

December 8, 2024 by Mauricio Cuba Almeida '27

Algorithms permeate modern society, especially AI algorithms. Artificial intelligence (AI) is built with various techniques, like machine learning, deep learning, and natural language processing, that train AI to mimic humans at a certain task. Healthcare, loan approval, and security surveillance are a few industries that have begun using AI (Alowais et al., 2023; Purificato et al., 2022; Choung et al., 2024). Most people will continue to interact with AI, often inadvertently, on a daily basis.

However, what problems does an increasingly algorithmic society face? Sina Fazelpour and David Danks explore this question in their article on algorithmic bias, and the problem they identify is that AI perpetuates bias. At its most neutral, Fazelpour and Danks (2021) explain, algorithmic bias is some “systematic deviation in algorithm output, performance, or impact, relative to some norm or standard,” meaning that algorithms can be biased against a moral, statistical, or social norm. Fazelpour and Danks use a running example of a university training an AI algorithm on past student data to predict future student success. This algorithm exhibits a statistical bias if its predictions of student success are discordant with what has happened historically (in the training data). Similarly, it exhibits a moral bias if it illegitimately depends on a student’s gender to produce a prediction. This is already seen in facial recognition algorithms that “perform worse for people with feminine features or darker skin” and in recidivism prediction models that rate people of color as higher risk (Fazelpour & Danks, 2021). Clearly, algorithmic biases have the potential to preserve or exacerbate existing injustices under the guise of being “objective.”
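Both senses of bias can be phrased as measurable deviations from a norm. The toy audit below checks a hypothetical student-success predictor against a statistical norm (does the prediction rate match the observed base rate?) and one common moral/fairness norm (demographic parity across genders). Every record is fabricated for illustration, and this is only one of many possible fairness criteria.

```python
# (predicted_success, actual_success, gender) for eight fabricated students.
records = [
    (1, 1, "F"), (0, 1, "F"), (0, 0, "F"), (0, 1, "F"),
    (1, 1, "M"), (1, 0, "M"), (1, 1, "M"), (0, 0, "M"),
]

def positive_rate(rows):
    """Fraction of rows the model predicts as 'success'."""
    return sum(pred for pred, _, _ in rows) / len(rows)

by_gender = {g: [r for r in records if r[2] == g] for g in ("F", "M")}

# Statistical check: prediction rate versus the historical base rate.
pred_rate = positive_rate(records)
base_rate = sum(actual for _, actual, _ in records) / len(records)

# Moral/fairness check: demographic parity gap between genders.
parity_gap = positive_rate(by_gender["M"]) - positive_rate(by_gender["F"])

print(f"prediction rate {pred_rate:.2f} vs base rate {base_rate:.2f}")
print(f"gender parity gap: {parity_gap:.2f}")
```

In this fabricated data the model predicts success for 75% of men but only 25% of women, a gap that would warrant scrutiny regardless of whether gender appears explicitly as an input.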

Algorithmic bias manifests through different means. As Fazelpour and Danks discuss, harmful bias can take root even before an algorithm is created if values and norms are not deeply considered. In the example of a student-success prediction model, universities must make value judgments specifying which target variables define “student success,” whether grades, respect from peers, or post-graduation salary. The more complex the goal, the more difficult and contested the choice of target variables becomes; indeed, choosing target variables is itself a source of algorithmic bias. As Fazelpour and Danks explain, enrollment or financial aid decisions based on predicted student success may discriminate against minority students if first-year performance feeds into the prediction, since minority students may face additional challenges in their first year.

Using biased training data will also produce a biased AI algorithm. In other words, bias in the measured world is reflected in AI algorithms that mimic our world. For example, recruiting AI that reviews resumes is often trained on a company’s existing employees. In many cases, so-called gender-blind recruiting AI has discriminated against women by picking up on gendered information in a resume that was absent from the resumes of a majority-male workplace (Pisanelli, 2022; Parasurama & Sedoc, 2021). Fazelpour and Danks also note that biased data can arise from limitations and biases in measurement methods. This is what happens when facial recognition systems are trained predominantly on white faces: the systems are less effective on individuals who do not resemble the data they were trained on.

Alternatively, Fazelpour and Danks argue, users’ misinterpretations of predictive algorithms may produce biased results. An algorithm is optimized for one purpose, and users may, without realizing it, apply it to another. A user could inadvertently interpret predicted “student success” as a metric for grades rather than what the algorithm was optimized to predict (e.g., likelihood of dropping out). Decisions stemming from such misinterpretations are doomed to be biased, and not just for the aforementioned reasons. Misunderstanding algorithmic predictions also leads to poor decisions when the variables that predict an outcome are assumed to cause it: students in advanced courses may be predicted to have higher student success, but as Fazelpour and Danks put it, we should not enroll every underachieving student in an advanced course. Models should also be applied in contexts similar to the one in which the historical data was collected, a constraint that matters more the longer a model is used, as present data drifts away from the historical training data. In other words, a student success model built for a small private college should not be deployed at a large public university, nor many years later.
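The drift problem can be made concrete: a model that encodes a historical decision boundary silently degrades once the incoming data shifts. The sketch below uses an invented GPA cutoff and invented cohorts; a real audit would track a deployed model's accuracy on fresh cohorts over time.

```python
import random

random.seed(1)

def predict(gpa, cutoff=3.0):
    """A 'model' that learned a fixed GPA cutoff from historical data."""
    return gpa >= cutoff

def accuracy(students):
    """Fraction of (gpa, actually_succeeded) pairs the model gets right."""
    return sum(predict(g) == s for g, s in students) / len(students)

# Historical cohort: success genuinely tracked the 3.0 cutoff when trained.
historical = [(g, g >= 3.0) for g in (random.uniform(2.0, 4.0) for _ in range(500))]

# Later cohort: grade inflation shifts GPAs up by 0.5, so the same students
# now carry higher numbers while their true outcomes are unchanged.
later = [(g + 0.5, g >= 3.0) for g in (random.uniform(2.0, 4.0) for _ in range(500))]

print(f"historical accuracy: {accuracy(historical):.2f}")
print(f"after drift:         {accuracy(later):.2f}")
```

The model's rule never changed; only the world did, which is exactly why Fazelpour and Danks caution against deploying a model far from the context in which its data was collected.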

Fazelpour and Danks establish that algorithmic bias is nearly impossible to eliminate, since solutions must often engage with the complexities of our society. The authors delve into several technical solutions, such as optimizing an algorithm with “fairness” as a constraint or training it on corrected historical data. These quickly reveal themselves to be problematic, as determining what counts as fair is itself a difficult value judgment. Nonetheless, algorithms provide tremendous benefit to us, even in moral and social ways: they can identify biases and serve as better alternatives to flawed human practices. Fazelpour and Danks conclude that algorithms should continue to be studied in order to identify, mitigate, and prevent bias.

References

Alowais, S. A., Alghamdi, S. S., Alsuhebany, N., Alqahtani, T., Alshaya, A. I., Almohareb, S. N., Aldairem, A., Alrashed, M., Saleh, K. B., Badreldin, H. A., Yami, M. S. A., Harbi, S. A., & Albekairy, A. M. (2023). Revolutionizing healthcare: the role of artificial intelligence in clinical practice. BMC Medical Education, 23(1). https://doi.org/10.1186/s12909-023-04698-z

Choung, H., David, P., & Ling, T. (2024). Acceptance of AI-Powered Facial Recognition Technology in Surveillance scenarios: Role of trust, security, and privacy perceptions. Technology in Society, 102721. https://doi.org/10.1016/j.techsoc.2024.102721

Fazelpour, S., & Danks, D. (2021). Algorithmic bias: Senses, sources, solutions. Philosophy Compass, 16(8). https://doi.org/10.1111/phc3.12760

Parasurama, P., & Sedoc, J. (2021, December 16). Degendering resumes for fair algorithmic resume screening. arXiv.org. https://arxiv.org/abs/2112.08910

Pisanelli, E. (2022). Your resume is your gatekeeper: Automated resume screening as a strategy to reduce gender gaps in hiring. Economics Letters, 221, 110892. https://doi.org/10.1016/j.econlet.2022.110892

Purificato, E., Lorenzo, F., Fallucchi, F., & De Luca, E. W. (2022). The use of responsible artificial intelligence techniques in the context of loan approval processes. International Journal of Human-Computer Interaction, 39(7), 1543–1562. https://doi.org/10.1080/10447318.2022.2081284

Filed Under: Computer Science and Tech Tagged With: AI, AI ethics, artificial intelligence, Ethics

Treating allergic asthma with bacteria

April 21, 2024 by Mauricio Cuba Almeida '27

The prevalence of allergic diseases has increased globally since the 1960s. Between 1982 and 1997, the prevalence of asthma and hay fever in Australian schoolchildren rose from 12.9% to 38.6% and from 22.5% to 44.0%, respectively (Downs et al., 2001). Similar trends are observed worldwide (Thomsen, 2015; Turke, 2017). Allergic asthma affects about 12 million individuals in the U.S., and its prevalence continues to rise (Gutowska-Ślesik et al., 2023; Genentech, n.d.). The immune system, the body’s defense against pathogens like viruses and bacteria, is also where allergies begin: when the immune system regularly overreacts to a harmless substance, one is said to have an allergic disease (Allergies and the Immune System, 2021). Allergies are the subject of many immunological studies due to their health effects. Asthma, for example, is characterized by airway inflammation that ranges from minor to life-threatening.

Theories have surfaced to explain the dramatic increase in allergic diseases. One leading theory, the Hygiene Hypothesis, claims that exposure to certain microbial species, like bacteria, is important for the proper development of our immune system, and that the lack of such exposure contributes to allergic disease (Bloomfield et al., 2006). Researchers have therefore investigated the mechanisms by which these species deter allergic disease, particularly asthma. A 2023 study by Yao and colleagues focuses on PepN, a bacterial protein that has shown promise in previous studies, to uncover how the immune system changes when exposed to such allergy-reducing microbes.

Alveolar macrophages (AMs) are immune cells found in the lungs that substantially influence the development of asthma. AMs produce signalers that either encourage or inhibit inflammation in the lungs, which makes them targets for asthma treatments. In fact, when activated, AMs undergo reprogramming that transforms them into a pro-inflammatory or an anti-inflammatory macrophage (the anti-inflammatory type referred to as a CD11chigh macrophage). Yao and colleagues believe this process to be a theoretical foundation of the Hygiene Hypothesis, suggesting that asthma can be treated or prevented by deliberately transforming macrophages to be anti-inflammatory.

Yao and colleagues induced allergic asthma in mice with intranasally administered allergens. The control group received no further treatment as a baseline; mice in the experimental group were exposed to the bacterial protein PepN multiple times before and after asthma was induced. Yao and colleagues then dissected the mice, examining the CD11chigh macrophages and other forces at play.

After comparing the control and experimental groups, PepN was found to recruit macrophages from the bone marrow into the respiratory tract and transform them into anti-inflammatory cells through changes in the macrophages’ metabolism. PepN also encouraged the proliferation of CD11chigh macrophages already residing in the lungs. Together, these effects culminated in a protective effect against allergic asthma (see Figure 1).

Figure 1. The proposed mechanism by which PepN reduces inflammation in allergic asthma. PepN encourages the proliferation of CD11chigh macrophages in the lungs and recruits additional macrophages which also differentiate into CD11chigh macrophages. Monocytes and CD11int macrophages are earlier forms of the CD11chigh macrophage. Adapted from Yao et al. (2023).

For the future, Yao and colleagues believe more research is necessary to determine other mechanisms of the Hygiene Hypothesis. Though there are limitations to their current study, Yao and colleagues provide a new idea for the prevention and treatment of allergic asthma: targeting CD11chigh macrophages to combat asthmatic inflammation.


References

Allergies and the immune system. (2021, August 8). Johns Hopkins Medicine. https://www.hopkinsmedicine.org/health/conditions-and-diseases/allergies-and-the-immune-system

Bloomfield, S. F., Stanwell‐Smith, R., Crevel, R., & Pickup, J. C. (2006). Too clean, or not too clean: the Hygiene Hypothesis and home hygiene. Clinical & Experimental Allergy/Clinical and Experimental Allergy, 36(4), 402–425. https://doi.org/10.1111/j.1365-2222.2006.02463.x

Downs, S. H., Marks, G. B., Sporik, R., Belosouva, E. G., Car, N., & Peat, J. K. (2001). Continued increase in the prevalence of asthma and atopy. Archives of Disease in Childhood, 84(1), 20–23. https://doi.org/10.1136/adc.84.1.20

Genentech: Asthma statistics. (n.d.). https://www.gene.com/patients/disease-education/asthma-statistics#:~:text=Prevalence,asthma%20sufferers%20in%20the%20U.S

Gutowska-Ślesik, J., Samoliński, B., & Krzych‐Fałta, E. (2023). The increase in allergic conditions based on a review of literature. Postępy Dermatologii I Alergologii, 40(1), 1–7. https://doi.org/10.5114/ada.2022.119009

Thomsen, S. F. (2015). Epidemiology and natural history of atopic diseases. European Clinical Respiratory Journal, 2(1), 24642. https://doi.org/10.3402/ecrj.v2.24642

Turke, P. W. (2017). Childhood food allergies. Evolution, Medicine and Public Health, 2017(1), 154–160. https://doi.org/10.1093/emph/eox014

Yao, S., Weng, D., Wang, Y., Zhang, Y., Huang, Q., Wu, K., Li, H., Zhang, X., Yin, Y., & Xu, W. (2023). The preprogrammed anti-inflammatory phenotypes of CD11chigh macrophages by Streptococcus pneumoniae aminopeptidase N safeguard from allergic asthma. Journal of Translational Medicine, 21(1). https://doi.org/10.1186/s12967-023-04768-2

Filed Under: Biology
