Bowdoin Science Journal

Mauricio Cuba Almeida

Motor Brain-Computer Interface Reanimates Paralyzed Hand

May 4, 2025 by Mauricio Cuba Almeida

Over five million people in the United States live with paralysis (Armour et al., 2016). Though the extent of paralysis varies from person to person, most people with paralysis experience unmet needs that detract from their overall life satisfaction. A survey of those with paralysis identified "peer support, support for family caregivers, [and] sports activities" as domains where individuals with paralysis experienced less fulfillment, with lower household income predicting a higher likelihood of unmet needs (Trezzini et al., 2019). Consequently, individuals with sufficient motor function have turned to video games to meet some of these needs, as video games are sources of recreation, artistic expression, social connectedness, and enablement (Cairns et al., 2019). These individuals are limited, however, in which games they can engage with, as they often "avoid multiplayer games with able-bodied players" (Willsey et al., 2025). Willsey and colleagues (2025) therefore explore brain-computer interfaces as a potential solution for restoring more sophisticated control, not just of video games, but of digital interfaces used for social networking or remote work.

Brain-computer interfaces (BCIs) are devices that read and analyze brain activity in order to produce commands that are then relayed to output devices, with the intent of restoring useful bodily function (Shih et al., 2012). Willsey et al. explain that current motor BCIs are unable to distinguish between the brain activity corresponding to the movement of different fingers, so BCIs have instead relied on detecting the more general movement of grasping a hand, where the fingers are treated as one group. This limits BCIs to controlling fewer dimensions of an instrument: moving and clicking a computer cursor, for example, rather than typing on a keyboard. Hence, Willsey et al. seek to expand BCIs to allow for greater object manipulation, implementing finger decoding that differentiates the brain's output signals for different fingers and allows for "typing, playing a musical instrument or manipulating a multieffector digital interface such as a video game controller." Improving BCIs also involves continuous finger decoding: to date, finger decoding has mostly been done retrospectively, with finger signals classified only after the brain data has been recorded and analyzed.
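
To make the difference between retrospective and continuous decoding concrete, here is a minimal Python sketch. It is not the authors' actual pipeline: the ridge-regression decoder, the array sizes, and the random data standing in for binned neural features are all assumptions for illustration, and the decoder in Willsey et al. (2025) is far more sophisticated.

# Minimal sketch of continuous finger decoding (illustrative only).
# Random arrays stand in for binned neural features; the real decoder
# and feature pipeline in Willsey et al. (2025) differ.
import numpy as np
from sklearn.linear_model import Ridge

# Training data: one row per short time bin of neural activity
# (e.g., threshold crossings per electrode), one column per channel.
X_train = np.random.rand(5000, 192)   # placeholder neural features
y_train = np.random.rand(5000, 4)     # 4 control dimensions (3 finger groups, thumb in 2D)

decoder = Ridge(alpha=1.0).fit(X_train, y_train)

def decode_bin(neural_features):
    """Map one bin of neural features to finger-group velocities."""
    return decoder.predict(neural_features.reshape(1, -1))[0]

# Continuous (closed-loop) use: decode every new bin as it arrives,
# rather than classifying the whole recording after the fact.
for t in range(100):
    new_bin = np.random.rand(192)      # stand-in for a live neural sample
    velocities = decode_bin(new_bin)   # e.g., [group 1, group 2, thumb-x, thumb-y]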

Willsey et al. developed a BCI system capable of decoding three independent finger groups (with the thumb decoded in two dimensions), allowing for four total dimensions of control. By training on the participant's brain activity over nine days as they attempted to move individual fingers, the BCI learned to distinguish the patterns of activity corresponding to different finger movements. These four dimensions of control are demonstrated in a quadcopter simulation, in which the participant, with an implanted BCI, manipulates a virtual hand to fly a quadcopter drone through the hoops of an obstacle course. Many applications beyond video games are apparent: these finger controls could be extended to a robotic hand or even used to reanimate a paralyzed limb.
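
How four decoded dimensions might drive a quadcopter can be pictured with a small, purely hypothetical sketch. The function name and the axis assignments below are invented for illustration and are not taken from the paper.

# Illustrative mapping from four decoded finger dimensions to quadcopter
# commands (hypothetical axis assignments; the study's mapping may differ).
def fingers_to_quadcopter(group_one, group_two, thumb_x, thumb_y):
    """Translate decoded finger-group positions (roughly in [-1, 1])
    into quadcopter control axes."""
    return {
        "throttle": group_one,   # flexing one finger group climbs/descends
        "yaw":      group_two,   # the second group turns the craft
        "pitch":    thumb_y,     # thumb forward/back tilts forward/back
        "roll":     thumb_x,     # thumb left/right banks left/right
    }

print(fingers_to_quadcopter(0.4, -0.1, 0.0, 0.7))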

Figure: Finger movement is decoded into three distinct groups (differentiated by color; Willsey et al., 2025).

Figure: Participant navigates the quadcopter through a hoop using decoded finger movements (Willsey et al., 2025).

The patient's feelings of social connectedness, enablement, and recreation improved greatly. Willsey et al. note that the patient often looked forward to the quadcopter sessions, frequently "[asking] when the next quadcopter session was." Not only did the patient enjoy controlling the quadcopter, they also found the training untedious and the controls intuitive. This finger BCI is the most capable motor BCI to date, and it will serve as a valuable model for non-motor BCIs, like Brain2Char, a system for decoding text from brain recordings.

However, BCIs raise significant ethical considerations that must be addressed alongside their development. Are users responsible for all outputs from a BCI, even when those outputs are unintended? Because BCIs decode brain signals and are trained on data from a very controlled setting, there is always the potential for natural "noise" to upset a delicate BCI model. Ideally, BCIs would be trained on a participant's brain activity across a variety of circumstances to mitigate these errors. Furthermore, BCIs may further stigmatize motor disabilities by encouraging individuals toward restoring "normal" abilities. I am particularly concerned about the cost of this technology. As with most new clinical technologies, implementation is expensive and ends up pricing out individuals of lower socioeconomic status, who are often the people with the greatest need for technologies like BCIs. As mentioned earlier, lower household income predicts more unmet needs among individuals with paralysis. Nonetheless, so long as motor BCIs are developed responsibly and efforts are made to ensure their affordability, they hold great promise.

 

References

Armour, B. S., Courtney-Long, E. A., Fox, M. H., Fredine, H., & Cahill, A. (2016). Prevalence and Causes of Paralysis—United States, 2013. American Journal of Public Health, 106(10), 1855–1857. https://doi.org/10.2105/ajph.2016.303270

Cairns, P., Power, C., Barlet, M., Haynes, G., Kaufman, C., & Beeston, J. (2019). Enabled players: The value of accessible digital games. Games and Culture, 16(2), 262–282. https://doi.org/10.1177/1555412019893877

Shih, J. J., Krusienski, D. J., & Wolpaw, J. R. (2012). Brain-Computer interfaces in medicine. Mayo Clinic Proceedings, 87(3), 268–279. https://doi.org/10.1016/j.mayocp.2011.12.008

Trezzini, B., Brach, M., Post, M., & Gemperli, A. (2019). Prevalence of and factors associated with expressed and unmet service needs reported by persons with spinal cord injury living in the community. Spinal Cord, 57(6), 490–500. https://doi.org/10.1038/s41393-019-0243-y

Willsey, M. S., Shah, N. P., Avansino, D. T., Hahn, N. V., Jamiolkowski, R. M., Kamdar, F. B., Hochberg, L. R., Willett, F. R., & Henderson, J. M. (2025). A high-performance brain–computer interface for finger decoding and quadcopter game control in an individual with paralysis. Nature Medicine. https://doi.org/10.1038/s41591-024-03341-8

Filed Under: Computer Science and Tech, Psychology and Neuroscience, Science

Machine learning and algorithmic bias

December 8, 2024 by Mauricio Cuba Almeida

Algorithms permeate modern society, especially AI algorithms. Artificial intelligence (AI) is built with various techniques, like machine learning, deep learning, or natural language processing, that train a system to mimic humans at a certain task. Healthcare, loan approval, and security surveillance are a few industries that have begun using AI (Alowais et al., 2023; Purificato et al., 2022; Choung et al., 2024). Most people will continue to interact with AI on a daily basis, often without realizing it.

However, what problems does an increasingly algorithmic society face? Authors Sina Fazelpour and David Danks explore this question in the context of algorithmic bias. Indeed, the problem they identify is that AI perpetuates bias. At its most neutral, Fazelpour and Danks (2021) explain, algorithmic bias is some "systematic deviation in algorithm output, performance, or impact, relative to some norm or standard," meaning an algorithm can be biased relative to a moral, statistical, or social norm. Fazelpour and Danks use a running example of a university training an AI algorithm on past student data to predict future student success. This algorithm exhibits a statistical bias if its predictions of student success are discordant with what has happened historically (in the training data). Similarly, the algorithm exhibits a moral bias if it illegitimately depends on a student's gender to produce a prediction. This is already seen in facial recognition algorithms that "perform worse for people with feminine features or darker skin" and in recidivism prediction models that rate people of color as higher risk (Fazelpour & Danks, 2021). Clearly, algorithmic biases have the potential to preserve or exacerbate existing injustices under the guise of being "objective."
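
One way to make the statistical sense of bias concrete is to audit a model's outputs by group. The toy sketch below uses made-up admission data and a hypothetical cutoff; any systematic gap in selection or error rates between groups would be a "systematic deviation relative to a norm" in Fazelpour and Danks' terms.

# Toy audit of group-wise disparities in a model's predictions.
# The data, groups, and 0.5 cutoff are hypothetical.
import pandas as pd

df = pd.DataFrame({
    "predicted_success": [0.9, 0.4, 0.8, 0.3, 0.7, 0.2],
    "actual_success":    [1,   1,   1,   0,   1,   0],
    "group":             ["A", "B", "A", "B", "A", "B"],
})
df["admitted"] = df["predicted_success"] >= 0.5

for group, sub in df.groupby("group"):
    selection_rate = sub["admitted"].mean()
    error_rate = (sub["admitted"] != sub["actual_success"].astype(bool)).mean()
    print(f"Group {group}: selection rate={selection_rate:.2f}, error rate={error_rate:.2f}")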

Algorithmic bias can manifest through different means. As Fazelpour and Danks discuss, harmful bias can appear even before an algorithm is built if values and norms are not carefully considered. In the example of a student-success prediction model, universities must make value judgments about which target variables define "student success," whether that is grades, respect from peers, or post-graduation salary. The more complex the goal, the more difficult and contested the choice of target variables becomes. Indeed, choosing target variables is itself a source of algorithmic bias. As Fazelpour and Danks explain, enrollment or financial aid decisions based on predicted student success may discriminate against minority students if first-year performance is used in that prediction, since minority students may face additional challenges early on.
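
A toy example of the target-variable problem: the two hypothetical "success" definitions below, first-year GPA versus eventual graduation, label different students as successful, so the value judgment is baked in before any model is trained. The numbers are invented for illustration.

# Two competing definitions of "student success" applied to the same records.
import pandas as pd

students = pd.DataFrame({
    "first_year_gpa": [2.4, 3.6, 2.9, 3.8, 2.2],
    "graduated":      [1,   1,   1,   0,   1],
})

label_a = students["first_year_gpa"] >= 3.0   # success = strong first year
label_b = students["graduated"] == 1          # success = finishing the degree

print("Students the two definitions disagree on:",
      (label_a != label_b).sum(), "of", len(students))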

Training on biased data will also produce bias in an AI algorithm; in other words, bias in the measured world is reflected in algorithms that mimic our world. For example, recruiting AI that reviews resumes is often trained on the employees a company has already hired. In many cases, so-called gender-blind recruiting AI has discriminated against women by picking up on gendered information in a resume that was absent from the resumes of a majority-male workplace (Pisanelli, 2022; Parasurama & Sedoc, 2021). Fazelpour and Danks also note that biased data can arise from limitations and biases in measurement methods. This is what happens when facial recognition systems are trained predominantly on white faces: the resulting systems are less effective for individuals who do not resemble the data the algorithm was trained on.
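
A rough sketch of the "degendering" idea, along the lines of Parasurama and Sedoc (2021), is shown below. The term list is illustrative and far from complete; in practice, as the studies above show, gendered signal also hides in subtler proxies that simple word removal cannot catch.

# Rough sketch of stripping overtly gendered terms from resume text
# before training a screening model. The term list is an assumption.
import re

GENDERED_TERMS = {"he", "she", "him", "her", "his", "hers", "mr", "mrs", "ms",
                  "fraternity", "sorority", "women's", "men's"}

def degender(text: str) -> str:
    tokens = re.findall(r"[\w']+|[^\w\s]", text.lower())
    return " ".join(t for t in tokens if t not in GENDERED_TERMS)

print(degender("She led the Women's Chess Club and managed her team's budget."))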

Alternatively, Fazelpour and Danks argue, users' misinterpretations of predictive algorithms may produce biased results. An algorithm is optimized for one purpose, and users may unknowingly apply it to another. A user could inadvertently interpret predicted "student success" as a measure of grades when the algorithm was actually optimized to predict something else (e.g., likelihood of dropping out). Decisions stemming from such misinterpretations are bound to be biased, and not just for the aforementioned reasons. Misunderstandings of algorithmic predictions also lead to poor decisions when the variables predicting an outcome are assumed to cause that outcome: students in advanced courses may be predicted to have higher student success, but as Fazelpour and Danks put it, we should not enroll every underachieving student in an advanced course. Finally, models should be applied in contexts similar to the one in which the historical data were collected, a condition that becomes harder to satisfy the longer a model is used and the more present data drifts from the historical training data. In other words, a student-success model built for a small private college should not be deployed at a large public university, nor many years later.
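
Checking whether deployment data still resembles the historical training data is one guard against this kind of drift. The sketch below, with fabricated GPA distributions, uses a two-sample Kolmogorov-Smirnov test as one possible drift check; the data, feature, and significance threshold are assumptions, not anything from the article.

# Simple drift check: does a feature's current distribution still match
# the distribution the model was trained on? Data is fabricated.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
train_gpa = rng.normal(3.0, 0.4, 2000)    # feature distribution at training time
current_gpa = rng.normal(3.3, 0.5, 500)   # the same feature, years later

stat, p_value = ks_2samp(train_gpa, current_gpa)
if p_value < 0.01:
    print(f"GPA distribution has shifted (KS={stat:.2f}); retrain or re-validate.")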

Fazelpour and Danks establish that algorithmic bias is nearly impossible to eliminate; solutions often must engage with the complexities of our society. The authors discuss several technical solutions, such as optimizing an algorithm with "fairness" as a constraint or training it on corrected historical data. These quickly reveal themselves to be problematic, as determining what counts as fair is itself a difficult value judgment. Nonetheless, algorithms provide tremendous benefit to us, even in moral and social ways: they can identify biases and serve as better alternatives to human practices. Fazelpour and Danks conclude that algorithms should continue to be studied in order to identify, mitigate, and prevent bias.
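
As one concrete instance of "fairness as a constraint," the sketch below picks group-specific thresholds so that selection rates come out roughly equal across groups (demographic parity). This is only one of many competing fairness criteria, which is exactly the value judgment at issue, and the scores are hypothetical.

# Sketch of enforcing roughly equal selection rates via group-specific
# thresholds (one contested notion of fairness). Scores are made up.
import numpy as np

scores = {"A": np.array([0.9, 0.7, 0.8, 0.6]),
          "B": np.array([0.5, 0.4, 0.6, 0.3])}
target_rate = 0.5   # admit roughly half of each group

thresholds = {g: np.quantile(s, 1 - target_rate) for g, s in scores.items()}
decisions = {g: (s >= thresholds[g]) for g, s in scores.items()}
print(thresholds, {g: d.mean() for g, d in decisions.items()})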

References

Alowais, S. A., Alghamdi, S. S., Alsuhebany, N., Alqahtani, T., Alshaya, A. I., Almohareb, S. N., Aldairem, A., Alrashed, M., Saleh, K. B., Badreldin, H. A., Yami, M. S. A., Harbi, S. A., & Albekairy, A. M. (2023). Revolutionizing healthcare: the role of artificial intelligence in clinical practice. BMC Medical Education, 23(1). https://doi.org/10.1186/s12909-023-04698-z

Choung, H., David, P., & Ling, T. (2024). Acceptance of AI-Powered Facial Recognition Technology in Surveillance scenarios: Role of trust, security, and privacy perceptions. Technology in Society, 102721. https://doi.org/10.1016/j.techsoc.2024.102721

Fazelpour, S., & Danks, D. (2021). Algorithmic bias: Senses, sources, solutions. Philosophy Compass, 16(8). https://doi.org/10.1111/phc3.12760

Parasurama, P., & Sedoc, J. (2021, December 16). Degendering resumes for fair algorithmic resume screening. arXiv.org. https://arxiv.org/abs/2112.08910

Pisanelli, E. (2022). Your resume is your gatekeeper: Automated resume screening as a strategy to reduce gender gaps in hiring. Economics Letters, 221, 110892. https://doi.org/10.1016/j.econlet.2022.110892

Purificato, E., Lorenzo, F., Fallucchi, F., & De Luca, E. W. (2022). The use of responsible artificial intelligence techniques in the context of loan approval processes. International Journal of Human-Computer Interaction, 39(7), 1543–1562. https://doi.org/10.1080/10447318.2022.2081284

Filed Under: Computer Science and Tech Tagged With: AI, AI ethics, artificial intelligence, Ethics

Treating allergic asthma with bacteria

April 21, 2024 by Mauricio Cuba Almeida

The prevalence of allergic diseases has increased globally since the 1960s. Between 1982 and 1997, the prevalence of asthma and hay fever in Australian schoolchildren rose from 12.9% to 38.6% and from 22.5% to 44.0%, respectively (Downs et al., 2001), and similar trends are observed worldwide (Thomsen, 2015; Turke, 2017). Allergic asthma affects about 12 million individuals in the U.S., and its prevalence continues to rise (Gutowska-Ślesik et al., 2023; Genentech, n.d.). The immune system is the body's defense against pathogens like viruses and bacteria, and it is also where allergies begin: when the immune system regularly overreacts to a harmless substance, one is said to have an allergic disease (Allergies and the Immune System, 2021). Allergies are the subject of many immunological studies because of their health effects. Asthma, for example, is characterized by airway inflammation that can range from minor to life-threatening.

Theories have surfaced to explain this dramatic increase in allergic disease. One leading theory is the Hygiene Hypothesis, which holds that exposure to certain microbial species, like bacteria, is important for the proper development of our immune system, and that a lack of such exposure contributes to allergic disease (Bloomfield et al., 2006). Researchers have therefore investigated the mechanisms by which these species deter allergic disease, particularly asthma. A 2023 study by Yao and colleagues focuses on PepN, a bacterial protein that has shown promise in previous studies, to uncover how the immune system changes when exposed to such allergy-protective bacterial products.

Alveolar macrophages (AMs) are immune cells found in the lungs that substantially influence the development of asthma. AMs produce signaling molecules that either encourage or inhibit inflammation in the lungs, which makes them targets for asthma treatments. When activated, AMs undergo reprogramming that transforms them into pro-inflammatory or anti-inflammatory macrophages (the anti-inflammatory form is referred to as a CD11c^high macrophage). Yao and colleagues believe this process to be the theoretical foundation of the Hygiene Hypothesis, suggesting that asthma could be treated or prevented by deliberately transforming macrophages to be anti-inflammatory.

Yao and colleagues induced allergic asthma in mice using intranasally administered allergens. To serve as a baseline, the control group received no further treatment; in the experimental group, mice were exposed to the bacterial protein PepN multiple times before and after asthma was induced. Yao and colleagues then dissected the mice, examining the CD11c^high macrophages and the other factors at play.

Comparing the control and experimental groups, the researchers found that PepN recruits macrophages from the bone marrow into the respiratory tract and, through changes in the macrophages' metabolism, transforms them to be anti-inflammatory. PepN also encouraged the proliferation of CD11c^high macrophages already residing in the lungs. Together, these effects culminated in a protective effect against allergic asthma (see Fig. 1).

Figure 1. The proposed mechanism by which PepN reduces inflammation in allergic asthma. PepN encourages the proliferation of CD11c^high macrophages in the lungs and recruits additional macrophages, which also differentiate into CD11c^high macrophages. Monocytes and CD11c^int macrophages are earlier forms of the CD11c^high macrophage. Adapted from Yao et al. (2023).

Looking ahead, Yao and colleagues believe more research is necessary to uncover other mechanisms underlying the Hygiene Hypothesis. Though their current study has limitations, Yao and colleagues provide a new avenue for the prevention and treatment of allergic asthma: targeting CD11c^high macrophages to combat asthmatic inflammation.

 

References

Allergies and the immune system. (2021, August 8). Johns Hopkins Medicine. https://www.hopkinsmedicine.org/health/conditions-and-diseases/allergies-and-the-immune-system

Bloomfield, S. F., Stanwell‐Smith, R., Crevel, R., & Pickup, J. C. (2006). Too clean, or not too clean: the Hygiene Hypothesis and home hygiene. Clinical & Experimental Allergy/Clinical and Experimental Allergy, 36(4), 402–425. https://doi.org/10.1111/j.1365-2222.2006.02463.x

Downs, S. H., Marks, G. B., Sporik, R., Belosouva, E. G., Car, N., & Peat, J. K. (2001). Continued increase in the prevalence of asthma and atopy. Archives of Disease in Childhood, 84(1), 20–23. https://doi.org/10.1136/adc.84.1.20

Genentech: Asthma statistics. (n.d.). https://www.gene.com/patients/disease-education/asthma-statistics#:~:text=Prevalence,asthma%20sufferers%20in%20the%20U.S

Gutowska-Ślesik, J., Samoliński, B., & Krzych‐Fałta, E. (2023). The increase in allergic conditions based on a review of literature. Postępy Dermatologii I Alergologii, 40(1), 1–7. https://doi.org/10.5114/ada.2022.119009

Thomsen, S. F. (2015). Epidemiology and natural history of atopic diseases. European Clinical Respiratory Journal, 2(1), 24642. https://doi.org/10.3402/ecrj.v2.24642

Turke, P. W. (2017). Childhood food allergies. Evolution, Medicine and Public Health, 2017(1), 154–160. https://doi.org/10.1093/emph/eox014

Yao, S., Weng, D., Wang, Y., Zhang, Y., Huang, Q., Wu, K., Li, H., Zhang, X., Yin, Y., & Xu, W. (2023). The preprogrammed anti-inflammatory phenotypes of CD11chigh macrophages by Streptococcus pneumoniae aminopeptidase N safeguard from allergic asthma. Journal of Translational Medicine, 21(1). https://doi.org/10.1186/s12967-023-04768-2

Filed Under: Biology
