
Bowdoin Science Journal


When Distraction Helps: Music as a Tool for Focus in ADHD Cases

December 24, 2025 by Martina Tognato Guaqueta

Attention Deficit Hyperactivity Disorder (ADHD) is a developmental disorder characterized by inattention, hyperactivity, and impulsivity, and it is one of the most commonly diagnosed neurodevelopmental disorders in children. ADHD takes a toll not only on an individual's academic sphere but also on their social sphere (Martin-Moratinos et al., 2023). From struggling to focus in class to having impulsiveness strain their interpersonal relationships, symptoms permeate children's worlds. The diagnostic process includes a variety of tests, interviews, questionnaires, and, in children, observation (ADHD Screening). For example, the Wechsler Intelligence Scale for Children-Revised (WISC-R), developed in 1974, attempts to measure intelligence by testing 10 abilities. This test, among others such as the Wide Range Achievement Test-Revised (WRAT), was used in the diagnostic process for the participants in Abikoff et al. (1996).

In 1983, Andries Frans Sanders proposed an under-arousal account within his cognitive-energetic model: those with ADHD have abnormally low levels of physiological arousal and, in turn, seek out input via hyperactivity (Sanders, 1983). Abikoff et al. (1996) used music as high-salience, extra-task stimulation to investigate this theory, assessing its impact on the arithmetic performance of a group of 40 second graders with ADHD; the theorized goal was to reach an optimal level of arousal. The researchers gave the group of 40 boys an arithmetic test matched to their ability and had them complete it under different conditions. Rather than just silence or music, they added a speech condition. Each student completed three arithmetic tasks of the same difficulty: one during 10 minutes of their 3 favorite songs on loop, one during 10 minutes of background speech, and one during 10 minutes of silence.

From these results, the researchers concluded that music was the ideal condition for kids with ADHD: accuracy under the music condition was 82%, compared with 77% for speech and 79% for silence. Furthermore, in the control group of non-disabled students at the same grade level, changes in condition did not affect performance. A nuance that surfaced through the analysis was that the effects of the conditions were order-dependent: children who had music as their first condition had more than twice as many correct answers as those who had music as their second or third condition.

These results lend support to the under-arousal theory. Distractibility is thought to be heightened in those with ADHD and is often attributed to extraneous stimuli unrelated to assigned work; Abikoff et al. offer a counter to this view. Building on this study, facilitation strategies centered on auditory stimulation can be developed to better support students with ADHD.

Despite the strengths of this study, it is also important to address some key limitations. Since the study took place in 1996, more recent developments in ADHD research must be considered, as newer findings could overturn older theories. Even so, recent work continues to support the idea that music can help with focus in ADHD patients (Martin-Moratinos et al., 2023). For example, Madjar et al. (2020) showed improved reading scores in students with ADHD who were exposed to music while reading. These later studies also help offset the original study's limitations of a relatively small sample size (40 participants) composed entirely of boys. Because Abikoff et al. (1996) only studied boys, it is hard to know whether the same pattern would hold for girls with ADHD, especially since girls tend to show fewer outward hyperactive symptoms and more subtle, internalized ones. However, later work that did include girls, like Madjar et al. (2020), who tested mixed-gender preadolescents, also found that music boosted performance for students with ADHD. This suggests that the helpful effect of music is not limited to boys, even if the way it supports attention might look a little different across genders.

Ultimately, the management of ADHD in and out of the classroom requires an individualized, holistic approach. Demonstrating that music can serve as a coping mechanism can usher it into treatment plans as a practical tool. In the same spirit, further development of this finding could deepen our understanding of how other types of stimuli (visual, tactile, or olfactory) affect ADHD management. Overall, the study's results open the door to using music not just as background noise, but as a strategic tool for cultivating focus in children with ADHD. As researchers expand this work with larger, mixed-gender samples and broader types of sensory stimulation, we move closer to individualized interventions that address the whole child, both inside and outside the classroom.

References:

Abikoff, H., Courtney, M. E., Szeibel, P. J., & Koplewicz, H. S. (1996). The effects of auditory stimulation on the arithmetic performance of children with ADHD and nondisabled children. Journal of Learning Disabilities, 29(3), 238–246. https://doi.org/10.1177/002221949602900302 

ADHD Screening: What To Expect. (n.d.). Cleveland Clinic. Retrieved December 24, 2025, from https://my.clevelandclinic.org/health/diagnostics/24758-adhd-screening 

Everything You Need to Know About ADHD. (n.d.). Retrieved December 24, 2025, from https://www.adhdevidence.org/blog/everything-you-need-to-know-about-adhd 

Madjar, N., Gazoli, R., Manor, I., & Shoval, G. (2020). Contrasting effects of music on reading comprehension in preadolescents with and without ADHD. Psychiatry Research, 291, 113207. https://doi.org/10.1016/j.psychres.2020.113207 

Martin-Moratinos, M., Bella-Fernández, M., & Blasco-Fontecilla, H. (2023). Effects of Music on Attention-Deficit/Hyperactivity Disorder (ADHD) and Potential Application in Serious Video Games: Systematic Review. Journal of Medical Internet Research, 25, e37742. https://doi.org/10.2196/37742 

Sanders, A. F. (1983). Towards a model of stress and human performance. Acta Psychologica, 53(1), 61–97. https://doi.org/10.1016/0001-6918(83)90016-1

Filed Under: Science

The Science of When You Exercise

December 21, 2025 by Ericah Folden

People often think the most important aspects of how exercise affects your overall health are how hard you work, how much weight you can lift, or how far you can run. However, two recent studies have uncovered another factor that might be just as important for maximizing health benefits: when you exercise. One study looked at the impact of exercise timing on the growth of muscle tissue in mice, while another tracked a large population of people to see how their exercise habits affected sleep quality. Together, these studies show that when exercise takes place matters more than most people believe.

The first study, done by Liu et al. and published in Nature Communications, looked at how timing of exercise in mice affected long-term health (Liu et al., 2025). Mice, like people, have a circadian rhythm, which is a 24-hour internal clock in the body that regulates and affects energy, metabolism, and sleep. Muscles in the body also have internal clocks, which decide when to burn fat or sugar.

In the study, Liu et al. had two groups of mice run at a low intensity and low volume on treadmills at different times of day: one group exercised before sleep and the other right after waking up. Training lasted for several months; the researchers measured the mice's body weight throughout the study and measured their strength, endurance, and blood sugar before and after, all of which are indicators of long-term exercise results. The results were quite clear. Mice that exercised before sleep showed greater physical and metabolic improvements after the period of consistent exercise, meaning they gained less fat, had more endurance, and showed better blood sugar control. The group of mice that exercised after waking saw less improvement in these areas (Liu et al., 2025).

The second study, done by Leota et al. and also published in Nature Communications, tracked the health data of over 14,000 human participants using fitness wearables over four million nights of sleep (Leota et al., 2025). The researchers wanted to see whether exercising in the evening, before bedtime, affected sleep quality.

The researchers found that the later and harder people worked out, the more their sleep was affected. When people exercised four or more hours before going to bed, their sleep was normal, regardless of the intensity of the workout. When people exercised two to four hours before bed, they took longer to fall asleep and slept less. When people exercised two hours or less before bed, especially at high intensity, sleep noticeably worsened: some took over an hour longer to fall asleep, slept about 40 minutes less overall, and had a higher heart rate throughout the night (Leota et al., 2025).

Although the mouse study found that exercise before bed improved overall health, the human study found that the closer exercise got to bedtime, the worse sleep became, which is itself known to negatively impact recovery and overall health. While these studies may seem contradictory, they align once exercise intensity is considered: high-intensity training in the evening harms sleep, while low- to moderate-intensity evening exercise supports muscle growth and recovery without impacting sleep.

Although the two studies differed, they arrived at the same key conclusion: the body works best when its internal cycles, like the circadian rhythm, are not disrupted. Intense exercise, such as heavy lifting or sprinting, activates the body's sympathetic nervous system, the part of the nervous system responsible for the "fight or flight" response. Sleep, along with recovery, lowered heart rate, and relaxation, is governed by the parasympathetic nervous system, otherwise known as the "rest and digest" state. While sympathetic activation is good for exercise and performance, it is not good when the body needs to sleep. Instead of letting the body settle down, it keeps the body revved up, reducing sleep time and quality, and therefore overall recovery and future performance.

Because of the busyness of daily life, it is not always possible to perfectly time every workout, and evening workouts are often unavoidable given many people's schedules. However, taken together, these studies show that evening workouts are not automatically bad for overall health. In fact, they can even enhance the benefits of exercise, as long as intensity is adjusted according to how close the workout falls to bedtime. More than four hours before bedtime, high-intensity exercise can take place without risk to sleep quality or physical health. Four hours or less before bedtime, it is better to opt for lower-intensity exercise, which will allow you to sleep better and recover more quickly. In the end, both studies show that being slightly more intentional about when and how hard you train can make a real difference in your sleep, recovery, and overall performance.

Works Cited

Liu, J., Xiao, F., Choubey, A., Kumar S, U., Wang, Y., Hong, S., Yang, T., Otlu, H. G., Oturmaz, E. S., Loro, E., Sun, Y., Saha, P., Khurana, T. S., Chen, L., Hou, X., & Sun, Z. (2025). Muscle Rev-erb controls time-dependent adaptations to chronic exercise in mice. Nature Communications, 16(1), 5708. https://doi.org/10.1038/s41467-025-60520-y

Leota, J., Presby, D. M., Le, F., Czeisler, M. É., Mascaro, L., Capodilupo, E. R., Wiley, J. F., Drummond, S. P. A., Rajaratnam, S. M. W., & Facer-Childs, E. R. (2025). Dose-response relationship between evening exercise and sleep. Nature Communications, 16(1), 3297. https://doi.org/10.1038/s41467-025-58271-x

Filed Under: Biology, Chemistry and Biochemistry, Science

Cause of Sea Star Wasting Disease Epidemic Linked to Common Bacteria

December 16, 2025 by Ella Ong

Fig. 1. Photo of a sunflower sea star (Pycnopodia helianthoides) in a kelp forest. (Mazza, Marco. The Independent, June 21, 2024.)

Since its emergence in 2013, sea star wasting disease (SSWD) has quickly spread along the west coast of North America, infecting dozens of sea star species from Mexico to Alaska and upending marine ecosystems. A variety of causes of SSWD have been proposed over the past decade, but no clear cause has been isolated for what is now considered one of the largest marine epidemics. Sunflower sea stars, or Pycnopodia helianthoides, are considered one of the most vulnerable species to SSWD, with billions dying from SSWD since its emergence. Although sunflower sea stars once inhabited the entirety of the west coast of North America, they are now considered functionally extinct in much of their southern range. Over 87% of the population has been lost in the remaining northern areas, earning the species a classification of critically endangered. The large-scale decline of sunflower sea stars due to SSWD has had a cascading effect on ecosystems, in which sea urchin populations have experienced uninhibited growth in the absence of predation. This ecological imbalance has led to the mass destruction of kelp forests and the creation of “urchin barrens” (locations where a previous kelp forest was destroyed by sea urchin overgrazing), demonstrating the profound impact SSWD has on kelp ecosystems and the species that rely on them.

After a series of exposure experiments and genetic sequencing tests of sunflower sea stars infected with SSWD, scientists identified the common bacterium Vibrio pectenicida as a causative agent (a pathogen that directly leads to disease, but may occur under the influence of other environmental or physical conditions) for SSWD. These findings may have lasting impacts on attempts to stem the spread and population losses caused by SSWD, including future efforts to recover the population of sunflower sea stars. 

Over the course of three years (2021-2024), scientists conducted a total of seven exposure experiments on sunflower sea stars, infecting exposed stars with SSWD using tissue extracts, coelomic fluid injections (coelomic fluid, which circulates immune system cells, is an essential fluid for sea stars, similar to blood), and tank water from diseased sunflower sea stars. Healthy sunflower sea stars were collected in Washington state or raised at Friday Harbor Laboratories and were first isolated in a 2-week quarantine period to ensure that collected stars did not develop SSWD after potential exposure in the wild. All exposure methods led to transmission of SSWD, with 92% (46/50) of exposed individuals displaying symptoms. The disease stages were progressively categorized as "arm twisting," "arm autotomy" (self-amputation of arms), and "mortality." Stars exposed to SSWD often died between 6 days and 2 weeks post-exposure, usually within a week after showing the first symptoms of the disease.

While using diseased coelomic fluid and tissue sample injections to infect healthy sea stars, scientists also utilized control samples, in which tissues or coelomic fluid from a diseased star were first treated with heat or filtered before injection into a healthy star. All 54 individuals injected with treated samples survived, with limited indications of SSWD. Most sea stars injected with untreated tissue (24 out of 26) or coelomic fluid (16 out of 18) samples from diseased stars contracted SSWD. The dramatic decrease in disease spread after heat treatment indicated that the causative agent (pathogen) of SSWD was likely cellular.

Fig. 2. Diagram of the exposure experiment process using treated and untreated Vibrio pectenicida bacteria and diseased tissues. (Prentice et al., 2025)

After identifying that the cause of SSWD was likely cellular, scientists genetically sequenced diseased sea star coelomic fluid and tissues from both in-lab sea stars and sea stars at field outbreak sites. Coelomic fluid from healthy stars and stars exposed to SSWD was also collected to contrast the microbes present in sea stars at all disease stages. After RNA and DNA analysis (particularly using 16S ribosomal RNA gene amplicon datasets), the most significant microbial difference between healthy and diseased groups was the bacterium V. pectenicida (r² ≥ 0.90), which was abundant in samples from stars with SSWD and nearly absent in samples from healthy stars. This difference in microbial presence allowed scientists to pinpoint V. pectenicida as a likely causative agent of SSWD. The small bacterial loads of V. pectenicida found in some healthy stars led scientists to propose that sea stars can remain healthy with low concentrations of the bacterium under ideal environmental conditions. This may indicate that outbreaks occur when environmental conditions (such as increasing temperatures) compromise the star's immune system and allow the bacterium to flourish.

After genetic sequencing identified V. pectenicida as a candidate causative agent of SSWD, scientists conducted a series of exposure experiments using pure V. pectenicida cultures isolated from infected stars. When injected into healthy sea stars, cultures of V. pectenicida strains FHCF-3 and FHCF-5 caused SSWD. Healthy sea stars were then injected with high (10^5 colony-forming units) or low (10^3 c.f.u.) doses of V. pectenicida strain FHCF-3, or with heat-treated controls. 13 out of 14 stars injected with living bacteria contracted SSWD and died, while all stars injected with heat-treated (dead) bacteria survived. The disease progressed faster in stars injected with the higher concentration of strain FHCF-3, with mortality occurring 6-11 days post-exposure; the group exposed to the lower concentration of live bacteria progressed through the disease more slowly, with mortality occurring 11-16 days post-exposure.

Fig. 3. Chart of disease progression in sunflower sea stars using different methods of exposure to SSWD. Visual representations of disease symptoms are displayed below. (Prentice et al., 2025)

After identifying V. pectenicida as a strong possible cause of SSWD, gene sampling was also conducted at field sites across British Columbia in May and October 2023. Although no individuals sampled at the five sites exhibited signs of SSWD or had V. pectenicida in May, V. pectenicida was identified in two outbreak populations in October. Vibrio pectenicida was found in 16% of healthy stars from visually unaffected sites, 74% of visually normal stars in outbreak sites, and 86% of diseased stars in outbreak sites. The analysis of a genetic database from southeast Alaska in 2016 during an SSWD outbreak also found V. pectenicida in both diseased and normal stars in outbreak sites but not healthy sites, suggesting that V. pectenicida also played a role in past outbreaks of SSWD. Scientists hypothesized that instances of Vibrio pectenicida in apparently disease-free stars may be due to exposure to other diseased stars in the wild. 

The discovery of V. pectenicida as a contributing cause of SSWD has strong implications for future research and conservation efforts for struggling sea star populations. V. pectenicida was found globally (from Australia to Asia to Europe to the US) between 2009 and 2019 in a variety of marine hosts, particularly in shellfish and bivalve aquaculture. Future research can focus on the mechanism of V. pectenicida as a pathogen, on further mapping where the bacterium can be found, and on its modes of transmission, both between sea stars and from prey shellfish populations. Scientists proposed that warming oceans due to climate change may make stars more vulnerable to outbreaks of V. pectenicida and other pathogens that thrive in warmer environments, consistent with an observed association between SSWD and warming water temperatures. Since sea stars respond to unfavorable environmental conditions (such as warming water) with symptoms similar to SSWD, it has been difficult to classify SSWD outbreaks. The discovery of V. pectenicida as a causative agent allows researchers to use V. pectenicida as an indicator of SSWD in sampling, supporting the expansion of sampling across different environments and sea star species. This is essential for continuing to understand SSWD and crafting a response to protect struggling sea star populations and affected ecosystems.

References:

Mazza, Marco. “How Sunflower Stars Can Save California’s Vanishing Kelp Forests.” The Independent, Santa Barbara Independent, 21 June 2024, https://www.independent.com/2024/06/21/how-sunflower-stars-can-save-californias-vanishing-kelp-forests/ 

Prentice, M.B., Crandall, G.A., Chan, A.M. et al. “Vibrio pectenicida strain FHCF-3 is a causative agent of sea star wasting disease.” Nat Ecol Evol 9, 1739–1751 (2025). https://doi.org/10.1038/s41559-025-02797-2

Filed Under: Biology, Environmental Science and EOS, Science

Pumping Without Pedaling: How Corners Turn Timing into Speed

December 15, 2025 by Justin Zhang

Watch a skilled rider enter a berm: they arrive tall, compress as the turn loads up, and rise on exit – no pedaling, yet they launch out faster. This isn’t magic; it’s timing that lets the ground do positive work on you. This reciprocal motion between the bike and the rider is called pumping, evident in three places: rollers, banked corners (berms), and jumps. This article focuses on the physics in berms and a recent model by Golembiewski and colleagues that computes an optimal pumping rhythm through corners. We finish with brief notes on extending the same logic to rollers and jumps.

Fig 1: Rider pumping through a berm at UCI World Championships (Velosolutions Global, 2024)

The Basic Physics in a Berm 

The ground must push hard on the bike to bend its path toward the center. That "heaviness" is the normal load N. A compact way to sketch the load you feel is, roughly,

N ≈ m ( g cos β + (v²/R) sin β + a_r ), with m the supported mass.

The first term is gravity on a bank with tilt β, the second term is the centripetal demand of the turn (speed v, radius R), and the third term is what you add by moving your body normal to the surface (a_r: positive when you compress, negative when you unweight). Even if the radius R stays roughly constant through the main arc, N ramps up when you go from straight to arc (entry) and drops when you go from arc back to straight (exit). Those ramps are the windows that matter: moments when a sliver of the ground's reaction force points forward along the bike's path (call it T). The instantaneous power is roughly P = T·v, and that forward work is what you use to gain speed.
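As a quick numeric illustration of that sketch (the numbers here are made up for illustration, not taken from the paper or the article):

```python
import numpy as np

# Toy numbers (assumed, not from the paper): 95 kg bike + rider,
# 30-degree bank, 9 m/s through a 5 m radius berm, and the rider
# compressing at 3 m/s^2 along the surface normal.
m = 95.0                 # supported mass (kg)
g = 9.81                 # gravity (m/s^2)
beta = np.radians(30)    # bank tilt
v, R = 9.0, 5.0          # speed (m/s) and turn radius (m)
a_r = 3.0                # rider's normal acceleration (compressing)

# N ~ m * (g*cos(beta) + (v^2/R)*sin(beta) + a_r)
N = m * (g * np.cos(beta) + (v**2 / R) * np.sin(beta) + a_r)
print(f"normal load in the turn: {N:.0f} N (vs. {m * g:.0f} N at rest)")
```

The point is only the ordering of magnitudes: the turn roughly doubles the load, and a well-timed compression adds to it exactly when part of the ground's reaction can point forward.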

The bike has two wheels, splitting the pumping motion into two time frames: a short bar press as the front wheel hits the entry ramp, then a short pedal press as the rear reaches it. Those two brief pulses create two small forward pushes per berm. Note that this isn't conservation of angular momentum (mvr), because the ground is doing external work as you apply compressive forces at the right time.

Why this matters. This turns “pump the berm” from a vibe into a repeatable rule you can coach, measure, and design for: press twice in the entry window, glide out, and you bank real, compounding speed—no pedaling required.

Inside the Research: A Two-Mass Model on a Banked Ribbon

The paper starts with a cartoon model where the bike and the rider are represented by two points—centers of mass xb and xr—joined by a massless link of length l(t).

Fig 2: Simplified Bike & Rider Model

To give these points a world to live in, they build a 3D banked surface, called S, using a set of parametric equations:
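The equations themselves are in the paper; as a rough stand-in, a standard torus patch consistent with Fig. 4 (centerline radius R, tube radius r) would be g(Φ, Θ) = ((R + r cos Θ) cos Φ, (R + r cos Θ) sin Φ, r sin Θ). Treat this as an illustrative assumption rather than the paper's exact surface.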

Think of g as a recipe that turns the pair "where you are around the track (Φ)" and "where you are across the bank (Θ)" into a 3-D position. Rather than let the bike wander anywhere on S, the authors choose a riding line by prescribing Θ as a function of Φ.

Subsequently, the authors derive a position equation for the bike–rider system that depends only on Φ — the progress angle around the banked turn — under the riding-line assumption that the rider holds an inner line on the straights and shifts toward the outer (higher) line near the apex.

Fig 3: Visualization of Riding Path

Fig 4: Two-mass model on torus surface

To simplify the problem, the authors also introduce an upright constraint (Fig. 4): the imaginary line between the bike and the rider is always perpendicular to the track surface. The rider moves only orthogonally to the surface, with no fore–aft lean, so l(t) is exactly "how much you squat or extend" relative to the bank. Under that constraint, an explicit expression for the rider position (g̃) is derived.

This equation takes Φ (the progress angle around the banked turn) and l (the distance between the bike's COM and the rider's COM) as input, and outputs a point in 3D space.

Setting up an Equation of Motion

The authors model the bike–rider system with positions that vary over time. The bike position is xb(t) and the rider position is xr(t). Velocity is the time derivative of position (how fast the points move), and acceleration is the time derivative of velocity. A single "squat/extend" degree of freedom along the surface normal is captured by the body–bike separation l(t). In everyday terms, l̇ is how quickly you are moving up or down, and l̈ is how hard you accelerate that motion. This variable l̈(t) is the core of the study — it becomes the control input in the simulation. From the position formulas, the paper computes the speeds (kinetic energy) and heights (potential energy) of both masses:

  • Kinetic energy K = (bike term) + (rider term)
  • Potential energy U = (gravity acting on each mass via its z-height)

Rather than listing every individual force, they use the standard energy approach to produce a single, compact equation that governs motion along the track. Written the way it appears in the paper, it’s an implicit ordinary differential equation (ODE) in the along-track angle φ(t) that also depends on your body motion l(t):
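Assembled from the coefficient functions described next, it has the schematic form

M(φ, l) φ̈ + F(φ, l) φ̇² + Q(φ, l, l̇) φ̇ + P(φ, l, l̇, l̈) = 0

(a reconstruction of the equation's shape, with the exact coefficient expressions left to the paper).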

The terms mean:

  • M(φ, l) φ̈ — the effective inertia for turning the system around the track.
  • F(φ, l) φ̇2 — curvature/banking effects that grow with speed.
  • Q(φ, l, l̇) φ̇ — coupling between your height change and along-track motion.
  • P(φ, l, l̇, l̈) — the part driven by your deliberate squat/extend acceleration (l̈), i.e., the “pump.”

Intuition: When you accelerate your body normal to the surface while the berm sets the contact frame, the last term acts like a small forward push in the along-track equation. That is the mechanism the model quantifies.

Setting Up an Optimal Control Problem

The paper asks a simple question: If you’re not allowed to pedal, how should you squat and extend to get through a banked turn the fastest? To answer it, they turn riding into a decision-making problem a computer can solve. This is called an optimal control problem.

Fig 5: Optimal Control Setup

State: Inside the integral of Fig. 5, x(t) is the state vector—it stores four numbers at every instant:

  • Where you are around the corner (an angle): φ(t).
  • How fast you’re sweeping around (angular rate; higher rate = higher speed): φ̇(t).
  • How tall you are above the bike along the bank’s normal (body–bike separation): l(t).
  • How quickly that height is changing (going up or down): l̇(t).

The researchers bundle these into a compact vector, x(t) = [φ(t), φ̇(t), l(t), l̇(t)]ᵀ, which the computer updates over time, simulating the rider's progress around the track.

Reward and punishment inside the integral: The cost being minimized adds a reward for making progress/speed and a penalty for harsh pumping:

  • qᵀx(t) (Fig 5) is a linear reward/penalty on the state. In the paper, q = [−65, −65, 0, 0]ᵀ. Because we minimize J, those negative weights reward larger φ (angle covered) and φ̇ (speed). In short: the optimizer prefers going farther and faster.
  • u(t)² penalizes violent pumping. Here u(t) = l̈(t) is the rider's normal acceleration (how hard you compress or unweight). Sudden, large inputs make u² jump, increasing the cost, so the optimizer favors smooth, well-timed pulses over thrashing.
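Putting those pieces together, the objective being minimized has the schematic form J = ∫₀ᵀ ( qᵀx(t) + u(t)² ) dt. This is reconstructed from the description above and Fig. 5 rather than copied verbatim from the paper.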

Units intuition: the integrand is “cost per second.” Integrating over dt gives a total cost. Lower J means you went farther/faster while using less harsh acceleration.

Control (what you choose): Your single decision signal is how hard you accelerate your body up or down relative to the bike, along the bank’s normal direction—the essence of pumping:

  • Positive control: compress (drive yourself down)
  • Negative control: unweight (pop up)

They call this input u(t) = l̈(t).

The notation u(⋅) ∈ PC ([0,T], ℝ) (Fig 5) means the control is piecewise-continuous over the time window — mostly smooth with at most a few kinks.

Dynamics constraint (obeying physics): The model provides an equation tying together how the state changes when you pick a control, ẋ(t) = f(x(t), u(t)), with a specified starting condition x(0) = x₀. The equation basically means: given the track shape and gravity, if you push this hard right now, this rule predicts how your position, speed, and body height will evolve next. The solver enforces this rule at every tiny time step so it never "cheats."
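To make the setup concrete, here is a minimal sketch of this kind of optimal control problem in Python with CasADi and IPOPT (the paper used CasADi from MATLAB). The dynamics function f below is a toy placeholder, not the paper's M, F, Q, P model; the state weights and body-motion bounds are the ones quoted in this article.

```python
import casadi as ca
import numpy as np

T = 5.0          # horizon (s), matching the paper's 5-second segment
N = 100          # number of control intervals
dt = T / N

opti = ca.Opti()
X = opti.variable(4, N + 1)   # state: [phi, phi_dot, l, l_dot]
U = opti.variable(1, N)       # control: u = l_ddot (squat/extend acceleration)

q = ca.DM([-65, -65, 0, 0])   # state weights from the paper

def f(x, u):
    """Toy stand-in for the paper's track dynamics (illustrative only)."""
    phi, phi_dot, l_dot = x[0], x[1], x[3]
    # placeholder along-track acceleration; the real M, F, Q, P terms
    # come from the banked-surface geometry derived in the paper
    phi_ddot = 0.1 * u * ca.sin(phi) - 0.05 * phi_dot
    return ca.vertcat(phi_dot, phi_ddot, l_dot, u)

cost = 0
for k in range(N):
    # forward-Euler dynamics constraint: x_{k+1} = x_k + dt * f(x_k, u_k)
    opti.subject_to(X[:, k + 1] == X[:, k] + dt * f(X[:, k], U[:, k]))
    # running cost: linear state reward/penalty plus squared control effort
    cost += (ca.dot(q, X[:, k]) + U[:, k] ** 2) * dt

opti.minimize(cost)

# experiment-derived bounds on body-bike separation and pumping acceleration
opti.subject_to(opti.bounded(0.278, X[2, :], 0.596))
opti.subject_to(opti.bounded(-8.66, U, 30.15))
opti.subject_to(X[:, 0] == ca.DM([0, np.pi / 3, 0.45, 0]))  # initial state

opti.solver("ipopt")
sol = opti.solve()
print(sol.value(X[0, -1]))    # how far around the track the rider got
```

Swapping the placeholder f for the paper's derived dynamics would turn this scaffold into a faithful reproduction of their experiment.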

Setting Realistic Constraints

The researchers then determined realistic bounds for the body–bike separation and for the rider's acceleration relative to the bike by motion-capturing a real setup.

Fig 6: Experiment Setup
Fig 7: Camera image with marked points

They use motion capture to track 46 markers at 100 Hz and infer the rider's CoM and bike reference points, measuring what a human can actually do. The results are the graphs below:

Fig 8: Absolute distance between the rider's CoM and the bike
Fig 9: Acceleration of the rider's CoM relative to the bike

We can see the graphs roughly mirror the motion you feel in real life:

  1. Riders enter the berm pushing the bike down and extending length.
  2. Riders maintain pressure throughout the berm and compress near the end.
  3. When riders exit the berm, they push the bike down again and re-extend.

In this real-world experiment, researchers observed the body–bike length to range over 0.27803 m ≤ l(t) ≤ 0.59559 m and the body–bike (pumping) acceleration to range over −8.6648 m/s² ≤ l̈(t) ≤ 30.1478 m/s².

The researchers then substituted these bounds into their optimal control problem to determine the optimal pumping technique mathematically. 

What the Model Predicts (and how it matches good riding)

The researchers solved the optimal-control problem for a 5-second ride segment that includes two steep corners and two short straights. They start the rider in a neutral body position, traveling at an angular speed of φ̇ = π/3 rad/s (about 9.43 m/s bike speed), entering at the beginning section between two opposing corners. They then plug the physical parameters into the dynamics (equation of motion):

  • mb = 15 kg (bike mass)
  • mr = 80 kg (rider mass)
  • ggrav = 9.8067 m/s²
  • R = 3 m
  • r = 1 m
  • λ = 3

The track constants R, r, and λ determine the sharpness and banking of the corner (here R = 3 m, r = 1 m, λ = 3). Finally, using MATLAB (via CasADi) and IPOPT, the researchers solve the optimal-control problem, successfully simulating a complete cycle through the track and producing the graphs below:

Fig 10: Simulation results

What the optimal solution does:

  • Kinematics (Fig. 10a): φ(T) ends close to 2π — a full pass through the track section, including both steep curves.
  • Pose evolution (Fig. 10b): The optimal profile drives the body high at entry (near lmax), then compresses toward lmin across each corner, and re-extends later. In short: enter tall, compress through the berm, then re-extend for each berm. We can also see this visualized in Fig 3.
  • Speed gain (Fig. 10c): Remarkably, between t = 0 and t ≈ 2.8 s (the first berm), the bike speed increases by Δvb ≈ 1.49 m/s — generated without pedaling, purely by the reciprocal mass motion (squat/extend).
  • Control signal (Fig. 10d): The input u*(t) = l̈(t) shows short downward-acceleration bursts (negative then positive spikes) clustered in each corner. These spikes are the timed "pump" impulses that increase normal load precisely when the track's orientation gives a small forward component — at corner entry and exit — producing useful forward work.

They ran a comparison without pumping (set u(t) = 0, but tested several starting heights l(0)). The fastest no-input case took 6.13 s to reach the same terminal angle φ(T). With the optimal pumping u*(t), the time dropped to 5.00 s. That’s a time saving of Δt = 1.13 s — about 18.43% faster for the same path segment.

Bottom line

Within the paper’s simplified two-mass, upright model (with experiment-derived bounds), pumping through a berm means spending your limited normal-acceleration budget inside the corner (to harvest speed) and avoiding payback at the exit. The solver’s best answer matches what skilled riders do: enter tall, compress through the berm, re-extend later. The numbers quantify the payoff: roughly ≈ 1.5 m/s speed gain per corner and ≈ 18% reduction in lap time for the segment.

From Model to Trail: a Real-World Translation in Riding Technique

The paper’s takeaway is: spend your limited “normal-acceleration budget” inside the corner and don’t pay it back on exit. In practice, that means arrive tall, compress through the corner, and re-extend later (ideally once you’re back on a straight).

What the model doesn’t capture (and how real riders adapt) is:

  • Only normal motion. The paper restricts the rider to move orthogonal to the track’s surface (no fore–aft pump). On dirt, riders can pump slightly forward/back as well. That can shift the pattern earlier, often giving a high → low → high within a single corner, instead of the model’s high → low (then high on the straight).
  • One fixed line. The solver rides a prescribed line (inner on straights, drifting outward at apex). Outside the lab you can pick lines that change banking and gravity use:
    • High → low line: drop from high entry to lower apex/exit to cash in gravitational energy while you compress.
    • Low → high line: for traction or setup, at the cost of more input work.
  • Two contacts, richer phasing. With front and rear wheels you get two timed opportunities per corner (bar press as the front enters the load ramp, pedal press as the rear reaches it). Skilled riders also unweight the front earlier to keep exit smooth.

Beyond Berms: Brief Notes on Rollers and Jumps

Rollers (smooth, wave-like undulations): You speed up by placing two brief load pulses around each crest — a bar press as the front rolls over the crest, a pedal press as the rear follows — then staying light on the upslope so you don't give the energy back. Each of the two pulses creates a small forward push. Add them up, subtract gravity and drag, and you get your net acceleration.

  1. Front wheel at crest (arms compact, legs half-extended). You’re coiling up at the exact moment curvature flips.
  2. Front on downside; rear crosses crest (arm push grows to full extension; legs fully compressed). Two quick injections: a bar press as the front tips over (raising front normal load just as the ground points forward), then a pedal press as the rear crests. Each creates a small forward contact component
  3. Front in ravine; rear on downside (arms finish extension, legs extend down the back face). Keep pedal pressure while the rear is still on the back face. Begin to unweight the bars as the front meets the upslope to avoid negative work.
  4. Front on upslope; rear in ravine (legs reach full extension; arms half-compressed; front unweighted). Now the ground would slow you (upslope). You keep the front light and use the up-kick to pop your mass upward—maintaining speed while the bike climbs under you.

Jumps: Preload on the run-up, then choose: extend at the lip at the point of maximum normal force, where m·v²·κ peaks (you can look at my personal research if interested), to trade speed for height, or stay light to preserve forward speed. The same "press-when-helpful, light-when-hurtful" rule applies, just with a vertical-energy trade at takeoff.

Conclusion

Pumping is a control problem you solve with your body: press exactly while the turn makes you heavy, and rise while it makes you light. The research formalizes this with a minimal two-mass model and an optimizer that times a compress–extend input to harvest the berm’s geometry—a clean physics story for the “free speed” riders feel in corners.

Bibliography

Velosolutions Global. (2024, March 6). 2024 qualifier events announced for UCI Pump Track World Championships. Pinkbike. https://www.pinkbike.com/news/2024-qualifier-events-announced-for-uci-pump-track-world-championships.html

Golembiewski, J., Schmidt, M., Terschluse, B., Jaitner, T., Liebig, T., & Faulwasser, T. (2023). The dynamics of a bicycle on a pump track—First results on modeling and optimal control (arXiv Preprint No. 2311.07251). arXiv. https://doi.org/10.48550/arXiv.2311.07251

Filed Under: Math and Physics, Science

Effect of Dental Malocclusions on Posture in Children

December 12, 2025 by Lily Warmuth '28


It is estimated that over six million patients seek orthodontic treatment every year to improve their malocclusion, or misalignment of teeth (Hung et al. 2023). Given how many people value this treatment, it is not surprising that the way our teeth fit together affects the way we eat, talk, breathe, and even our posture. The musculoskeletal system (shoulders, spine, muscles) and the stomatognathic system (teeth, jaws, chewing muscles, tongue, lips) are separate systems of the body that interact in intricate ways. For example, a misalignment of teeth alters the muscle-use patterns in our cheeks to compensate for the disparity, which in turn affects the neck muscles connected to our facial muscles. Through a slight discrepancy in tooth alignment, the whole head can shift into a different position, impacting one's health (Bardellini et al. 2022). Unfortunately, the intersection of posture and dental malocclusions is a scarcely researched field. Seeing how much dental alignment affects the rest of the body, it is important to research and understand the factors that influence it.

One study published in 2022 by a group of Italian researchers (Bardellini et al.) examined how these systems work together and the effects of correcting dental malocclusions through orthodontic treatment on the posture of children. While there are many different classifications and types of dental malocclusions, this article specifically analyzes patients using Angle's classification, which defines three types of malocclusion: class I, II, or III (Fig. 1). Each is described by the position of the lower (mandibular) and upper (maxillary) molars. In class I, the molars fit together in a standard way; however, malocclusions are still present in teeth other than the molars. In Angle's class II, the lower molar sits farther back (distal) than the upper molar. Lastly, in class III the lower molar sits too far in front of the upper molar (Campbell and Goldstein 2021).

Angle's classification of occlusion illustrated with dental diagrams and hand analogies: normal occlusion, Class I, Class II, and Class III malocclusions.
Figure 1: Simulate Angle’s classification of malocclusion by hands. Xie, Zhiwei, Fuying Yang, Sujuan Liu, and Min Zong. 2023. “The ‘Hand as Foot’ Teaching Method in Angle’s Classification of Malocclusion.” Asian Journal of Surgery 46 (2): 1063

The patients who participated in the study were assessed by two clinicians who evaluated their dental occlusions according to Angle's classification. The type of dental-skeletal malocclusion within Angle's classification did not play a role in deciding which patients to include. Most patients in this study exhibited a class II malocclusion, followed by class I and class III. Patients with scoliosis, chronic diseases affecting balance, macrotrauma, or cleft lip or palate, or who required physical therapy, were excluded to ensure that any improvement in posture depended only on malocclusions and orthodontic treatment. Since the study aimed to find a connection between misalignment of teeth and posture in children, the patients belonged to the age group of 9-12 (Bardellini et al. 2022).

Bardellini and her team investigated the postures and weight distribution of patients before and after the treatment using multiple methods, such as vertical laser line (VLL) and stabilo-baropodometric analysis.   

To examine posture with the VLL, the patients were positioned in a standardized stance (relaxed posture, arms at their sides) in front of a white wall, and a single vertical laser line (VLL) was projected onto them (Bardellini et al. 2022). Posture was then examined for two factors: the position of the head in relation to the VLL, and any excess of extension or flexion. In a standard position, the head is centered so that the line crosses the tragus—the pointy piece of cartilage close to the cheek (Fig. 2).

Figure 2: Tragus – anatomical structures. Source: IMAIOS, "Tragus – Anatomical Structures," accessed November 14, 2025.

If the line did not cross the tragus, the patient's head was in either a forward or a backward position. Extension and flexion were examined by asking the patients to open their mouths as wide as possible. If the head moved away from the VLL, it indicated either an excess of extension—the head bent backwards—or of flexion—the head bent forwards (Fig. 3, Bardellini et al. 2022).

Figure 3: Improvement of the head position (evidenced with the "open mouth test") in six patients (a-b-c-d-e-f). Bardellini, Elena, Maria Gabriella Gulino, Stefania Fontana, Francesca Amadori, Massimo Febbrari, and Alessandra Majorana. 2022. "Can the Treatment of Dental Malocclusions Affect the Posture in Children?" May 1, 2022: 245

The VLL test indicated that 16 out of 60 patients had a backward head position and 29 a forward position, while 10 showed an excess of extension when opening their mouths and 31 an excess of flexion. Only seven patients already had a correct position, meaning that in 75% of patients dental misalignment influenced head position in relation to the VLL, and in 68.33% it influenced flexion or extension.

After determining the posture of the head, the researchers examined the weight distribution of the participants using a stabilo-baropodometric platform. The patients were asked to stand on a carpet under which the platform (40x40 cm) was placed. The platform measured foot typology and the distribution of weight across the two feet. Foot typology can be divided into three kinds: normal, cavus (extreme arch), or flat (underdeveloped arch), and it can differ between feet, with both feet showing the same type or different types. The ideal distribution of body weight between feet is symmetrical, at about 50% on each foot (Bardellini et al. 2022).

Through measurements obtained with the stabilo-baropodometric platform, the study found 45 cases (both or one side) of cavus feet and 6 of flat feet (both sides). Hence, 85% of patients had a typology that incorrectly supported their body. Additionally, about 70% of patients had an unequal weight distribution between their two feet, exacerbating bad posture. An incorrect spread of body weight can be identical on both feet—either too much pressure on the ball of the foot or on the heel—or it can vary between feet (i.e., one foot shows increased pressure at the heel and the other at the ball of the foot) (Bardellini et al. 2022).

After the classification of malocclusion was identified and the posture (VLL) and weight distribution (stabilo-baropodometric platform) were measured, the patients were treated with an individually prepared Mouth Slow Balance (MSB) device (Fig. 4), which works by repositioning the tongue, widening the maxilla (upper jaw), and maintaining the mandible's (lower jaw) relation to the maxilla (Bardellini et al. 2019, Bardellini et al. 2022). The authors describe the MSB device as an "evolution of the Bionator", a retainer-like appliance that adjusts the bite (Fig. 5, Bardellini et al. 2019, p. 243).

Figure 4: The MSB (mouth slow balance) Class III device. Bardellini, E., M. G. Gulino, S. Fontana, J. Merlo, M. Febbrari, and A. Majorana. 2019. "Long-term evaluation of the efficacy on the podalic support and postural control of a new elastic functional orthopaedic device for the correction of Class III malocclusion." European Journal of Paediatric Dentistry, no. 3: 200.
Figure 5: The Bionator appliance. Pakshir, Hamidreza, Ali Mokhtar, Alireza Darnahal, Zinat Kamali, Mohammad Hadi Behesti, and Abdolreza Jamilian. 2017. "Effect of Bionator and Farmand Appliance on the Treatment of Mandibular Deficiency in Prepubertal Stage." Turkish Journal of Orthodontics 30 (1): 16

The patients were observed during their treatments for four years (2014-2018), and by the end, 51 out of 60 patients exhibited a correction of their malocclusions, ending up either fully aligned or class I (Bardellini et al. 2022). The remaining patients either dropped out of the study (3 patients) or reached a correction after the observed time frame (6 patients).

Of the 53 patients, 23 obtained the ideal position and 19 saw an improvement without complete correction of head position. In 10 cases, patients were found to have been overcorrected. At the beginning of the four-year observation period, 15 patients had a correct position in the VLL posture assessment; after treatment, 7 kept their correct position, while 8 developed a forward position. Additionally, two patients who showed a backward position before treatment had developed a forward position by the end (Bardellini et al. 2022).

Bardellini et al. (2022) also found significant improvements in posture on the VLL open-mouth exam: 53.3% of patients now kept their tragus on the laser line while opening their mouths, where they previously hyper-extended or hyper-flexed.

Fifty-three participants (88%) improved their foot typology, of whom 17 achieved a complete correction. Before treatment, only 15% of participants had a "normal" typology, which increased to 28% after treatment. However, the number of cases in which weight distribution varied between feet increased significantly, from 18 to 37, with seven patients developing a weight-distribution imbalance they previously didn't show. Overall, many cases also exhibited improvement without complete correction, which decreased the median support discrepancy over the course of the treatment (Bardellini et al. 2022).

These findings support Bardellini et al.'s hypothesis that posture is in fact altered by dental malocclusions. They explain that, through a complex chain of muscles across different systems, muscle groups alter their activation patterns in ways that disturb posture, specifically the position of the head and the support of the feet. A connection between the muscles around our cheeks (masticatory) and neck (cervical) had already been established in previous research (Bardellini et al. 2022), and trunk muscles (abdomen, chest, back) are also connected to these muscles. Since the misalignment of teeth affects the so-called mandibular elevator muscles, which are part of our cheek muscles, the change flows over into other muscle systems (cervical and trunk) acting on our posture. Our strategies for balancing are spread primarily across the trunk, head, and pelvis, which means that a mispositioned head leads the body to compensate using the trunk and pelvis (Bardellini et al. 2022). In this way, the wrong posture shifts the center of gravity.

Although Bardellini et al. found significant evidence of a correlation between dental malocclusions and posture, they acknowledge that theirs is one of few studies focusing on this specific alteration in posture, emphasizing that more research needs to be done.

Furthermore, the results may have been skewed because the team did not account for the natural changes in growing children, which may also influence posture, weight distribution, and more. For this specific study, however, it would have been unethical to keep a control group of untreated children in order to compare the effects of treatment versus no treatment (Bardellini et al. 2022).

Bardellini and her team have produced one of the few trailblazing research articles that examine the impact of malocclusions on posture, specifically targeting the head and feet. As mentioned before, little research has been done on this topic, yet it can prove vital for child development: correcting posture early on can improve a person's quality of life for decades, impacting everyday tasks. Hopefully, more researchers will recognize the importance of this subject and contribute new findings in the future.


References:

Bardellini, E., M. G. Gulino, S. Fontana, J. Merlo, M. Febbrari, and A. Majorana. 2019. “Long-term evaluation of the efficacy on the podalic support and postural control of a new elastic functional orthopaedic device for the correction of Class III malocclusion.” European Journal of Paediatric Dentistry, no. 3: 199–203. https://doi.org/10.23804/ejpd.2019.20.03.06. 

Bardellini, Elena, Maria Gabriella Gulino, Stefania Fontana, Francesca Amadori, Massimo Febbrari, and Alessandra Majorana. 2022. "Can the Treatment of Dental Malocclusions Affect the Posture in Children?" Journal of Clinical Pediatric Dentistry 46 (3). https://doi.org/10.17796/1053-4625-46.3.11

Campbell, Stephen, and Gary Goldstein. 2021. “Angle’s Classification–A Prosthodontic Consideration: Best Evidence Consensus Statement.” Journal of Prosthodontics (United States) 30 (S1): 67–71. https://doi.org/10.1111/jopr.13307. 

Hung, Man, Golnoush Zakeri, Sharon Su, and Amir Mohajeri. 2023. “Profile of Orthodontic Use across Demographics.” Dentistry Journal 11 (12): 291. https://doi.org/10.3390/dj11120291. 

IMAIOS. “Tragus.” e-Anatomy, accessed November 20, 2025. https://www.imaios.com/en/e-anatomy/anatomical-structures/tragus-1536888748. 

Pakshir, Hamidreza, Ali Mokhtar, Alireza Darnahal, Zinat Kamali, Mohammad Hadi Behesti, and Abdolreza Jamilian. 2017. “Effect of Bionator and Farmand Appliance on the Treatment of Mandibular Deficiency in Prepubertal Stage.” Turkish Journal of Orthodontics 30 (1): 15–20. https://doi.org/10.5152/TurkJOrthod.2017.1604. 

Xie, Zhiwei, Fuying Yang, Sujuan Liu, and Min Zong. 2023. “The ‘Hand as Foot’ Teaching Method in Angle’s Classification of Malocclusion.” Asian Journal of Surgery 46 (2): 1062–64. https://doi.org/10.1016/j.asjsur.2022.07.130. 

Filed Under: Biology, Science Tagged With: Dentistry, Orthodontics, Posture, Treatment Outcomes

Ethical ramifications of AI-powered medical diagnoses

December 7, 2025 by Mauricio Cuba Almeida '27

Incredible advancements in artificial intelligence (AI) have recently paved the way for its use in healthcare settings. Implementation of AI has the potential to address worker shortages in the medical field, lead to the discovery of new drugs, and improve diagnoses (Bajwa et al., 2021). Benji Feldheim, a writer for the American Medical Association, applauds AI for restoring the "human side" of medicine: AI scribes, for example, ease the documentation burden doctors face, reducing burnout and improving doctors' interactions with patients as a result (Feldheim, 2025). Another example is the AI model developed by Shmatko et al. (2025), known as Delphi-2M, which is capable of accurately predicting a patient's next 20 years of disease burden (i.e., what diseases they will contract and when). Evidently, AI is a promising technology already capable of improving lives. Yet these same advances raise concerns about fairness and clinical safety. After a brief synopsis of Shmatko et al.'s Delphi-2M, I evaluate the ethical ramifications of AI-powered diagnoses and related clinical tools.

Delphi-2M is an AI model trained on over 400,000 patient histories from a UK database to forecast an individual's 20-year disease trajectory. Like chatbots such as ChatGPT, Delphi-2M is a large language model (LLM), a type of AI that can recognize and reproduce patterns from large amounts of data. Just as chatbots pick up on which words are likely to appear alongside other words in order to form sentences, Delphi-2M learns from its vast training set of medical records to predict a patient's disease trajectory from real-world patterns. As Yonghui Wu puts it in her summary of Shmatko et al.'s work, it's like how becoming a smoker may be followed by a future diagnosis of lung cancer—these are the patterns Delphi-2M recognizes. To do this, Delphi-2M is fed "tokens" that link diseases or health factors to specific times in a person's life, like chickenpox at age 2 or smoking at age 41 (Figure 1). Delphi-2M then outputs new tokens predicting which diseases will occur in an individual's life and when, like the onset of respiratory disorders at age 71 as a result of smoking. After being trained, Delphi-2M was tested by predicting the medical histories of 1.9 million patients not included in the original training set. Shmatko et al. demonstrate this AI to have great success in predicting disease trajectories, as it at least partially predicts patterns in individuals' diagnoses in 97% of cases.

Figure 1: Visualization of Delphi-2M input and output (Wu, 2025).
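To make the token idea concrete, here is a toy sketch of what such an input sequence might look like in Python. The structure and labels are hypothetical illustrations of the description above, not Shmatko et al.'s actual data schema.

```python
from dataclasses import dataclass

@dataclass
class HealthToken:
    """One event in a patient's timeline: a diagnosis or lifestyle factor."""
    age_years: float   # when the event occurred
    event: str         # disease code or health factor (hypothetical labels)

# A miniature patient history, encoded the way the article describes:
# each token ties a health event to a point in the person's life.
history = [
    HealthToken(2.0, "chickenpox"),
    HealthToken(41.0, "smoking_start"),
]

# A Delphi-2M-style model would read this sequence and emit candidate
# future tokens, e.g. HealthToken(71.0, "respiratory_disorder"),
# each with an estimated probability and timing.
for token in history:
    print(f"age {token.age_years:>5.1f}: {token.event}")
```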

Nonetheless, we must hold AI used to diagnose patients to a higher level of scrutiny than AI used commercially. LLMs are not perfect: they are subject to algorithmic bias and misuse that begin even before their creation. Shmatko et al. (2025), for example, address some shortcomings of the training data used for Delphi-2M, explaining that data from a mostly white, older subset of the UK population is not entirely generalizable to very different demographics. Though Shmatko et al. found success testing the model against a Danish database after training it on UK patients, I am still concerned about how Delphi-2M would perform on non-European and younger demographics, or on those underrepresented in training data. Facial recognition is a prime example of AI underperforming when training datasets lack diverse representation: systems designed to recognize faces have historically underperformed on individuals with feminine features or darker skin due to unrepresentative training data (Hardesty, 2018). With this in mind, it is important that training data for diagnostic AI be representative of all demographics prior to widespread implementation.

Furthermore, Cabitza et al. (2017) wrote on some of the unintended consequences of machine learning in healthcare, postulating that widespread implementation of these tools could reduce the skill of physicians. Though convenient in the short run, overreliance on AI worries Cabitza et al., as studies show physicians aided by AI were less sensitive and accurate in diagnosing patients. Mammogram readers, for instance, were 14% less sensitive in their diagnostics when presented with images marked by computer-aided detection (Povyakalo et al., 2013). Though this study focused on image-based diagnosis, widespread use of Delphi-2M could plausibly produce the same deskilling in physicians. Delphi-2M is also exclusively a text-based model, which, as Cabitza et al. detail, means these diagnostic algorithms do not incorporate crucial contextual elements that are “psychological, relational, social, and organizational” in nature. A real-world example Cabitza et al. describe is an AI model that predicted a lower mortality risk for pneumonia patients with asthma than for pneumonia patients without asthma. Knowing that asthma is not a protective factor for pneumonia patients, the researchers involved found that the discrepant output reflected hospital procedures: pneumonia patients with asthma were admitted directly to intensive care, giving them better health outcomes. This missing piece of crucial information, which was difficult to represent in these prognostic models, led to an error a physician would not make. AI, in short, is limited by the information it can train on.
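
The asthma example is essentially a confounding problem, and a few invented numbers make it easy to see. In the sketch below, asthma patients fare better only because an unrecorded variable (direct ICU admission) intervenes:

```python
# Toy numbers (invented for illustration) showing how a hidden policy
# can make asthma look protective: pneumonia patients with asthma were
# routed to intensive care, so their raw mortality is lower even though
# asthma itself confers no protection.
patients = [
    # (has_asthma, got_icu_care, died)
    *[(True,  True,  False)] * 95,  # asthma -> ICU, mostly survive
    *[(True,  True,  True)]  * 5,
    *[(False, False, False)] * 85,  # no asthma -> ward, higher mortality
    *[(False, False, True)]  * 15,
]

def mortality(group):
    """Fraction of a patient group that died."""
    return sum(died for *_, died in group) / len(group)

asthma = [p for p in patients if p[0]]
no_asthma = [p for p in patients if not p[0]]
print(f"asthma: {mortality(asthma):.0%}, no asthma: {mortality(no_asthma):.0%}")
# A model trained only on (asthma, died) would learn the spurious rule
# "asthma lowers risk" because the ICU variable is missing.
```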

Though these new advancements in healthcare AI are promising, they have their limits. Tools like Delphi-2M spot patterns across vast clinical histories that no single clinician could feasibly track, yet the benefits depend on who is represented in the data, how predictions are explained and used, and whether safeguards are in place when they fail. Before AI is implemented in healthcare, we must demand representative training sets, validation across diverse populations, clear disclosure of uncertainty and limitations, and constant human involvement that resists automation bias and deskilling. In short, diagnostic AI should supplement, not replace, clinical judgment, and it should be developed with privacy, equity, and patient trust at the forefront. Only then will these systems reliably improve care rather than merely appear to.

 

References

Bajwa, J., Munir, U., Nori, A., & Williams, B. (2021). Artificial intelligence in healthcare: transforming the practice of medicine. Future Healthcare Journal, 8(2), e188–e194. https://doi.org/10.7861/fhj.2021-0095

Cabitza, F., Rasoini, R., & Gensini, G. F. (2017). Unintended consequences of machine learning in medicine. JAMA, 318(6), 517. https://doi.org/10.1001/jama.2017.7797

Feldheim, B. (2025, June 12). AI scribes save 15,000 hours—and restore the human side of medicine. American Medical Association. https://www.ama-assn.org/practice-management/digital-health/ai-scribes-save-15000-hours-and-restore-human-side-medicine

Hardesty, L. (2018, February 11). Study finds gender and skin-type bias in commercial artificial-intelligence systems. MIT News. https://news.mit.edu/2018/study-finds-gender-skin-type-bias-artificial-intelligence-systems-0212

Povyakalo, A. A., Alberdi, E., Strigini, L., & Ayton, P. (2013). How to discriminate between computer-aided and computer-hindered decisions. Medical Decision Making, 33(1), 98–107. https://doi.org/10.1177/0272989X12465490

Wu, Y. (2025). AI uses medical records to accurately predict onset of disease 20 years into the future. Nature, 647(8088), 44–45. https://doi.org/10.1038/d41586-025-02971-3

Filed Under: Biology, Computer Science and Tech, Psychology and Neuroscience, Science

Phytoplankton and Ocean Warming: Uneven Adaptations at the Base of the Marine Food Web

December 7, 2025 by Ella Scott '28


Global warming is steadily transforming Earth’s oceans. Between 1901 and 2023, sea surface temperatures have increased at an average rate of 0.14℉ per decade (US EPA, 2016). This seemingly small thermal shift is enough to disrupt circulation patterns, alter nutrient availability, and restructure entire marine communities. As oceans absorb over 90% of excess atmospheric heat, they become both a buffer against and a victim of climate change (Climate Change, 2025). Among the many organisms affected by these changes, phytoplankton—the microscopic, photosynthetic organisms that drift near the ocean’s surface—serve as a critical case study. These single-celled producers are responsible for about half of Earth’s oxygen production, and they form the foundation of aquatic food webs, converting sunlight into chemical energy that sustains nearly all marine life (Hook, 2023). Therefore, understanding how phytoplankton respond to warming is essential for predicting the future of marine ecosystems.

Phytoplankton are highly sensitive to temperature fluctuations. Since their metabolic processes, growth rates, and enzymatic activities are temperature-dependent, even minor thermal changes can reshape their abundance and distribution. When waters warm beyond a species’ thermal tolerance, populations may decline or shift toward cooler regions (Barton et al., 2016). At the microscopic level, these shifts can cascade upward through the food web, reducing food availability for zooplankton, fish, and the higher-level predators that feed on them, such as sharks, whales, and seals. However, one key question remains: can phytoplankton adapt to rising temperatures, or will their thermal limits determine the structure of future marine ecosystems?

Huertas et al. (2011) directly addressed this question through controlled laboratory experiments designed to measure the capacity of phytoplankton to evolve under warming. The researchers selected twelve species representing a range of environments—freshwater, coastal, open-ocean, and coral symbiotic systems—to test whether thermal tolerance varied among ecological types. To simulate long-term warming, they employed a “ratchet technique,” in which phytoplankton populations were gradually exposed to higher temperatures. Each population started from a single cloned cell to remove preexisting genetic variation. Then, the cell cultures were repeatedly grown and transferred into warmer conditions, forcing the populations to either adapt to the changes through genetic mutations or face extinction.
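
For intuition, the ratchet protocol can be caricatured as a loop: raise the temperature one step, and the population persists only if a rare tolerance mutation has appeared. The toy simulation below is my own sketch with made-up parameters, not the authors’ experimental design:

```python
# Toy simulation of the "ratchet" protocol (invented parameters): a
# clonal population is repeatedly transferred to warmer water and
# survives a step only if a rare heat-tolerance mutation appears.
import random

random.seed(1)
MUTATION_RATE = 1e-4   # chance a cell gains +1 deg C tolerance per round
POP_SIZE = 100_000     # cells transferred at each step

def ratchet(start_temp, tolerance, max_temp):
    temp = start_temp
    while temp < max_temp:
        temp += 1  # turn the ratchet: raise temperature one step
        if temp <= tolerance:
            continue  # population already tolerates this temperature
        # Did any cell mutate to tolerate the new temperature?
        mutants = sum(random.random() < MUTATION_RATE for _ in range(POP_SIZE))
        if mutants == 0:
            return temp - 1  # extinction: last survivable temperature
        tolerance = temp     # adapted population carries on
    return temp

print("population adapted up to", ratchet(20, 25, 40), "deg C")
```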

The results revealed striking differences among species. Freshwater species, such as Scenedesmus intermedius, exhibited remarkable resilience, adapting to temperatures as high as 40°C. Coastal species like Tetraselmis suecica and Dictyosphaerium chlorelloides tolerated up to 35°C, while open-ocean species such as Emiliania huxleyi and Monochrysis lutheri showed little to no capacity for adaptation. Coral symbionts (Symbiodinium species) demonstrated limited but detectable resistance, reflecting the thermal stress already observed in coral reef environments. Importantly, adaptation was not simply a case of short-term acclimation. The researchers found that resistant populations arose at different times across replicate cultures. This serves as evidence that adaptation stemmed from rare, spontaneous genetic mutations instead of physiological flexibility. Growth rates of adapted populations diverged significantly from their ancestral strains, confirming that true evolutionary change had occurred.

These findings carry major implications for understanding the ecological future of the oceans. If phytoplankton species differ so widely in their ability to adapt, warming will likely reorganize marine communities from the bottom up. Species capable of rapid genetic adaptation may dominate, while others could decline or disappear. This uneven resilience could favor smaller, faster-growing species, altering nutrient cycling and potentially weakening the ocean’s ability to sequester carbon. Because phytoplankton drive roughly half of global primary production, any restructuring of these communities could ripple through food webs, climate regulation, and fisheries.

While Huertas et al. focused on individual species in controlled conditions, Poloczanska et al. (2016) broaden this picture to the scale of global ecosystems. Their review synthesized nearly 2,000 observations of marine organisms responding to climate change, confirming that uneven adaptation is already occurring across taxa and ocean regions. On average, species distributions are shifting towards the north and south poles by about 72 kilometers per decade, and spring life-cycle events such as breeding or migration are advancing by four days per decade. Warm-water species are becoming more abundant, while cold-water species decline. Coral calcification, the process by which corals take in calcium and carbonate ions to build their exoskeletons, is weakening under combined warming and acidification stress. These patterns mirror the interspecific variability observed by Huertas et al.; some organisms adjust successfully to changing conditions, while others falter. The broader conclusion is that climate change does not affect marine life uniformly: it selectively reshapes communities based on biological flexibility, dispersal ability, and evolutionary potential.

Fig 1. Global distribution of documented marine biological responses to climate change across major ocean regions (Poloczanska et al., 2016). Bars show the proportion of observed responses as consistent (dark blue), equivocal (light blue), or no change (yellow). Numbers indicate total observations per region; symbols identify taxa with ≥10 observations. Background colors represent regional sea-surface warming from 1950–2009 (yellow: low; orange: medium; red: high). Regions are defined by ecological structure and oceanographic features. These data reveal that climate-driven shifts in abundance, distribution, and phenology vary sharply across ocean basins, mirroring the uneven adaptive capacities described by Huertas et al. (2011).

Together, these studies illustrate both the mechanisms and the consequences of ocean warming. Huertas et al. provide mechanistic insight, showing that adaptation in phytoplankton depends on genetic change and that some species are inherently more capable than others. Building on this, Poloczanska et al. reveal how these species-level differences scale up, driving global shifts in abundance, distribution, and ecosystem structure. The two perspectives complement one another; laboratory experiments explain how adaptation might occur, while global syntheses show where and to what extent it already has.

As climate change accelerates, understanding the adaptability of foundational organisms like phytoplankton becomes increasingly urgent. Their evolutionary potential will determine not only the structure of marine ecosystems, but also the ocean’s capacity to regulate the planet’s climate. By linking experimental evidence with global ecological trends, researchers are beginning to map out a future ocean defined by winners and losers—a mosaic of adaptation, migration, and loss. The challenge ahead lies in predicting how these microscopic shifts will ripple through the web of life that depends on them.


References:

Barton, A. D., Irwin, A. J., Finkel, Z. V., & Stock, C. A. (2016). Anthropogenic climate change drives shift and shuffle in North Atlantic phytoplankton communities. Proceedings of the National Academy of Sciences, 113(11), 2964–2969. https://doi.org/10.1073/pnas.1519080113 

Climate Change: Ocean Heat Content | NOAA Climate.gov. (2025, June 26). https://www.climate.gov/news-features/understanding-climate/climate-change-ocean-heat-content 

Hook, B. (2023, May 31). Phenomenal Phytoplankton: Scientists Uncover Cellular Process Behind Oxygen Production | Scripps Institution of Oceanography. https://scripps.ucsd.edu/news/phenomenal-phytoplankton-scientists-uncover-cellular-process-behind-oxygen-production 

Huertas, I. E., Rouco, M., López-Rodas, V., & Costas, E. (2011). Warming will affect phytoplankton differently: Evidence through a mechanistic approach. Proceedings of the Royal Society B: Biological Sciences, 278(1724), 3534–3543. https://doi.org/10.1098/rspb.2011.0160 

Poloczanska, E. S., Burrows, M. T., Brown, C. J., García Molinos, J., Halpern, B. S., Hoegh-Guldberg, O., Kappel, C. V., Moore, P. J., Richardson, A. J., Schoeman, D. S., & Sydeman, W. J. (2016). Responses of Marine Organisms to Climate Change across Oceans. Frontiers in Marine Science, 3. https://doi.org/10.3389/fmars.2016.00062 

US EPA, O. (2016, June 27). Climate Change Indicators: Sea Surface Temperature [Reports and Assessments]. https://www.epa.gov/climate-indicators/climate-change-indicators-sea-surface-temperature 

 

Filed Under: Biology, Environmental Science and EOS, Science

Biological ChatGPT: Rewriting Life With Evo 2

May 4, 2025 by Jenna Lam '28

What makes life life? Is there underlying code that, when written or altered, can be used to replicate or even create life? On February 19, 2025, scientists from Arc Institute, NVIDIA, Stanford, Berkeley, and UC San Francisco released Evo 2, a generative machine learning model that may help answer these questions. Unlike its precursor Evo 1, which was released a year earlier, Evo 2 is trained on genomic data from eukaryotes as well as prokaryotes. In total, it is trained on 9.3 trillion nucleotides from over 130,000 genomes, making it the largest AI model in biology. You can think of it as ChatGPT for creating genetic code—only it “thinks” in the language of DNA rather than human language, and it is being used to solve the most pressing health and disease challenges (rather than calculus homework).

Computers, defined broadly, are devices that store, process, and display information. Digital computers, such as your laptop or phone, function based on binary code—the most basic form of computer data composed of 0s and 1s, representing a current that is on or off. Evo 2 centers around the idea that DNA functions as nature’s “code,” which, through protein expression and organismal development, creates “computers” of life. Rather than binary, organisms function according to genetic code, made up of A, T, C, G, and U–the five major nucleotide bases that constitute DNA and RNA.
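
The analogy can be made literal: just as binary encodes information in two symbols, a genomic model ingests sequence written in a five-letter alphabet. Here is a minimal sketch (the integer mapping is illustrative, not Evo 2’s actual tokenizer):

```python
# Sketch of the "DNA as code" analogy: just as digital computers encode
# information in 0s and 1s, a genomic model encodes sequence in a small
# nucleotide alphabet (this token mapping is purely illustrative).
NUCLEOTIDE_TOKENS = {"A": 0, "T": 1, "C": 2, "G": 3, "U": 4}

def encode(sequence):
    """Map a nucleotide string to integer tokens a model can ingest."""
    return [NUCLEOTIDE_TOKENS[base] for base in sequence.upper()]

print(encode("ATCG"))  # -> [0, 1, 2, 3]
```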

Although Evo 2 can potentially design code for artificial life, it has not yet designed an entire genome and is not being used to create artificial organisms. Instead, Evo 2 is being used to (1) predict genetic abnormalities and (2) generate genetic code.

Figure: Functions of Evo 2 in biology at the cellular/organismal, protein, RNA, and epigenome levels. Adapted from https://www.biorxiv.org/content/10.1101/2025.02.18.638918v1.full

Accurate over 90% of the time, Evo 2 can predict which mutations in BRCA1 (a gene central to understanding breast cancer) are benign versus potentially pathogenic. This is significant, since each gene is composed of hundreds to thousands of nucleotides, and a mutation in a single nucleotide (termed a single nucleotide variant, or SNV) can have drastic consequences for protein structure and function. Being able to computationally pinpoint dangerous mutations thus reduces the time and money spent testing each mutation in a lab, and paves the way for developing more targeted drugs.
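
One common way sequence models flag harmful variants, likely in the spirit of what Evo 2 does, is to compare how probable the model finds the mutated sequence versus the reference. The sketch below uses a crude stand-in scoring function in place of a trained model:

```python
# Sketch of likelihood-based variant scoring (the `log_likelihood`
# function is a crude stand-in, not the real Evo 2 model): score a
# single-nucleotide variant by how much less probable the mutated
# sequence looks than the reference.
def log_likelihood(sequence):
    # Stand-in scorer: crudely penalize a disrupted motif. A real
    # score would come from the trained model's sequence probability.
    return -sequence.count("TAG")

def variant_score(reference, position, new_base):
    """Negative scores suggest the variant is more likely deleterious."""
    variant = reference[:position] + new_base + reference[position + 1:]
    return log_likelihood(variant) - log_likelihood(reference)

ref = "CCTCGCC"
print(variant_score(ref, 3, "A"))  # hypothetical SNV: C -> A, prints -1
```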

Secondly, Evo 2 can design genetic code for highly specialized and controlled proteins, which opens many fruitful possibilities for synthetic biology (making synthetic molecules using biological systems), from pharmaceuticals to plastic-degrading enzymes. It can generate entire mitochondrial genomes, minimal bacterial genomes, and entire yeast chromosomes–a feat that had not been accomplished before.

A notable complexity of eukaryotic genomes is their many-layered epigenomic interactions: the complex power of the environment in controlling gene expression. Evo 2 accounts for this by using models of epigenomic structures, made possible through inference-time scaling. Put simply, inference-time scaling is a technique developed by NVIDIA that allows AI models to take time to “think” by evaluating multiple solutions before selecting the best one.
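
In miniature, this best-of-N idea looks like the sketch below (the generator and scorer are stand-ins, not NVIDIA’s implementation): sample several candidates, score each, keep the winner:

```python
# Minimal best-of-N sketch of inference-time scaling (generator and
# scorer are stand-ins): sample candidates, score them, keep the best.
import random

def generate_candidate(rng):
    # Stand-in generator: a random 12-base DNA snippet.
    return "".join(rng.choice("ATCG") for _ in range(12))

def score(candidate):
    # Stand-in scorer: closeness to a desired 50% GC content.
    gc = (candidate.count("G") + candidate.count("C")) / len(candidate)
    return -abs(gc - 0.5)

rng = random.Random(0)
candidates = [generate_candidate(rng) for _ in range(16)]
best = max(candidates, key=score)  # "thinking": evaluate all, pick one
print("best of 16:", best, "score:", score(best))
```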

How is Evo 2 so knowledgeable, despite only being one year old? The answer lies in deep learning.

Just as in large language models, or LLMs (think ChatGPT, Gemini, etc.), Evo 2 decides what genes should look like by “training” on massive amounts of previously known data. Where LLMs train on text, Evo 2 trains on the entire genomes of over 130,000 organisms. This training, the processing of massive amounts of data, is central to deep learning. In training, individual pieces of data called tokens are fed into a “neural network,” a collection of software functions that communicate data to one another. As the name suggests, neural networks are modeled after the human nervous system, whose individual neurons are analogous to software functions. Just like brain cells, “neurons” in the network can both take in information and produce output by communicating with other neurons. Each neural network has multiple layers, each with a certain number of neurons. Within each layer, each neuron sends information to every neuron in the next layer, allowing the model to process and distill large amounts of data. The more neurons involved, the more fine-tuned the final output will be.
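
A fully connected pass is only a few lines of NumPy. This sketch shows the “every neuron talks to every neuron in the next layer” structure described above, with invented layer sizes:

```python
# Minimal fully connected forward pass (toy sizes, random weights):
# every neuron in one layer sends its output to every neuron in the
# next, exactly as described above.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=4)         # 4 input "neurons"
W1 = rng.normal(size=(4, 8))   # 4 -> 8 fully connected weights
W2 = rng.normal(size=(8, 2))   # 8 -> 2 output layer

hidden = np.tanh(x @ W1)       # each hidden neuron hears all 4 inputs
output = hidden @ W2           # each output hears all 8 hidden neurons
print(output)
```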

This neural network then attempts to solve a problem. Since practice makes perfect, the network attempts the problem over and over; each time, it strengthens the successful neural connections while diminishing others. This is called adjusting parameters: parameters are variables within a model that dictate how it behaves and what it produces, and tuning them minimizes error and increases accuracy. Evo 2 was trained at 7-billion- and 40-billion-parameter scales with a 1-million-token context window, meaning the genomic data was fed through many neurons and fine-tuned many times.
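
Here is that “practice makes perfect” loop in miniature, with a single parameter (a toy sketch, not Evo 2’s actual training code): each repetition nudges the weight in the direction that shrinks the error, which is what adjusting parameters means at scale:

```python
# One trainable parameter, fifty "practice repetitions": gradient
# descent nudges the weight toward the value that minimizes error.
x, target = 2.0, 10.0
w = 1.0        # a single trainable parameter
lr = 0.05      # learning rate: size of each adjustment

for step in range(50):
    prediction = w * x
    error = prediction - target
    gradient = 2 * error * x   # derivative of squared error w.r.t. w
    w -= lr * gradient         # strengthen/weaken the "connection"

print(w)  # approaches 5.0, since 5.0 * 2 = 10
```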

Figure: Example neural network, modeled using TensorFlow. Adapted from playground.tensorflow.org

The idea of anyone being able to create genetic code may spark fear; however, Evo 2’s developers have prevented the model from returning productive answers to inquiries about pathogens, and the dataset was carefully curated to exclude pathogens that infect humans and other complex organisms. Furthermore, the positive possibilities of Evo 2 likely extend well beyond what we are currently aware of: scientists believe Evo 2 will advance our understanding of biological systems by generalizing across massive genomic data of known biology. This may reveal higher-level patterns and unearth more biological truths from a bird’s-eye view.

It’s important to note that Evo 2 is a foundational model, emphasizing generalist capabilities over task-specific optimization. It was intended to be a foundation for scientists to build upon and adapt for their own projects. Because it is open source, anyone can access the model code and training data. Anyone (even you!) can generate their own strings of genetic code with Evo Designer.

Biotechnology is rapidly advancing. For example, DNA origami allows scientists to fold DNA into highly specialized nanostructures of any shape–including smiley faces and maps of China–potentially allowing scientists to use DNA code to design biological robots much smaller than any robot we have today. These tiny robots could target highly specific areas of the body, such as receptors on cancer cells. Evo 2, with its design abilities, opens up many possibilities for DNA origami. From gene therapy, to mutation prediction, to miniature smiley faces, it is clear that computation is becoming increasingly important in understanding the most obscure intricacies of life–and we are just at the start.

 

Garyk Brixi, Matthew G. Durrant, Jerome Ku, Michael Poli, Greg Brockman, Daniel Chang, Gabriel A. Gonzalez, Samuel H. King, David B. Li, Aditi T. Merchant, Mohsen Naghipourfar, Eric Nguyen, Chiara Ricci-Tam, David W. Romero, Gwanggyu Sun, Ali Taghibakshi, Anton Vorontsov, Brandon Yang, Myra Deng, Liv Gorton, Nam Nguyen, Nicholas K. Wang, Etowah Adams, Stephen A. Baccus, Steven Dillmann, Stefano Ermon, Daniel Guo, Rajesh Ilango, Ken Janik, Amy X. Lu, Reshma Mehta, Mohammad R.K. Mofrad, Madelena Y. Ng, Jaspreet Pannu, Christopher Ré, Jonathan C. Schmok, John St. John, Jeremy Sullivan, Kevin Zhu, Greg Zynda, Daniel Balsam, Patrick Collison, Anthony B. Costa, Tina Hernandez-Boussard, Eric Ho, Ming-Yu Liu, Thomas McGrath, Kimberly Powell, Dave P. Burke, Hani Goodarzi, Patrick D. Hsu, Brian L. Hie (2025). Genome modeling and design across all domains of life with Evo 2. bioRxiv preprint doi: https://doi.org/10.1101/2025.02.18.638918.

 

Filed Under: Biology, Computer Science and Tech, Science Tagged With: AI, Computational biology

Motor Brain-Computer Interface Reanimates Paralyzed Hand

May 4, 2025 by Mauricio Cuba Almeida '27

Over five million people in the United States live with paralysis (Armour et al., 2016), a large portion of the US population. Though the extent of paralysis varies from person to person, most people with paralysis experience unmet needs that detract from their overall life satisfaction. A survey of those with paralysis revealed “peer support, support for family caregivers, [and] sports activities” as domains where individuals with paralysis experienced less fulfillment, with lower household income predicting a higher likelihood of unmet needs (Trezzini et al., 2019). Consequently, individuals with sufficient motor function have turned to video games as a means to meet some of these needs, as video games are sources of recreation, artistic expression, social connectedness, and enablement (Cairns et al., 2019). Oftentimes, however, these individuals are limited in which games they can engage with, as they often “avoid multiplayer games with able-bodied players” (Willsey et al., 2025). Thus, Willsey and colleagues (2025) explore brain-computer interfaces as a potential solution for restoring more sophisticated motor control, not just of video games but of digital interfaces used for social networking or remote work.

Brain-computer interfaces (BCIs) are devices that read and analyze brain activity in order to produce commands that are then relayed to output devices, with the intent of restoring useful bodily function (Shih et al., 2012). Willsey et al. explain that current motor BCIs cannot distinguish the brain activity corresponding to the movement of different fingers, so they instead detect the more general movement of grasping a hand (with the fingers treated as one group). This limits BCIs to controlling fewer dimensions of an instrument: point-and-click cursor control, say, rather than typing on a keyboard. Hence, Willsey et al. seek to expand BCIs to allow for greater object manipulation, implementing finger decoding that differentiates the brain output signals for different fingers and allows for “typing, playing a musical instrument or manipulating a multieffector digital interface such as a video game controller.” Improving BCIs also involves continuous finger decoding, as finger decoding has mostly been done retrospectively, with finger signals not classified and read until after the brain data is analyzed.
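
As a cartoon of the decoding step (toy features and a nearest-centroid rule of my own choosing, not the authors’ pipeline), a classifier can map a window of neural activity to one of three finger groups:

```python
# Toy finger-group decoder (not the study's method): assign a window
# of neural-activity features to the nearest finger-group centroid
# learned from labeled movement attempts.
import numpy as np

rng = np.random.default_rng(0)
GROUPS = ["thumb", "index_middle", "ring_little"]

# Hypothetical training result: one 2-channel feature centroid per group.
centroids = {g: rng.normal(loc=i * 3, size=2) for i, g in enumerate(GROUPS)}

def decode(features):
    """Assign a feature vector to the nearest finger-group centroid."""
    return min(GROUPS, key=lambda g: np.linalg.norm(features - centroids[g]))

# A new activity window near the "index_middle" pattern decodes correctly.
sample = centroids["index_middle"] + rng.normal(scale=0.1, size=2)
print(decode(sample))  # -> 'index_middle'
```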

Willsey et al. developed a BCI system capable of decoding three independent finger groups (with the thumb decoded in two dimensions), allowing four total dimensions of control. By training on the participant’s brain activity over nine days as they attempted to move individual fingers, the BCI learned to distinguish the brain signals corresponding to different finger movements. These four dimensions of control are well demonstrated in a quadcopter simulation, in which a patient with an implanted BCI manipulates a virtual hand to fly a drone through the hoops of an obstacle course. Many applications beyond video games are apparent: these finger controls could be extended to a robotic hand or used to reanimate a paralyzed limb.
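
Once finger states are decoded, mapping them onto flight axes is straightforward in principle. The axis assignments below are my invention for illustration, not the study’s actual mapping:

```python
# Illustrative mapping from four decoded finger dimensions to four
# quadcopter flight axes (axis assignments invented for this sketch).
def to_control(thumb_x, thumb_y, index_middle, ring_little):
    """Each decoded value in [-1, 1] drives one flight axis."""
    return {
        "roll":     thumb_x,       # thumb left/right
        "pitch":    thumb_y,       # thumb up/down
        "throttle": index_middle,  # index/middle flexion
        "yaw":      ring_little,   # ring/little flexion
    }

print(to_control(0.2, -0.5, 0.8, 0.0))
```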

Figure: Finger movement is decoded into three distinct groups, differentiated by color (Willsey et al., 2025).
Figure: Participant navigates a quadcopter through a hoop via decoded finger movements (Willsey et al., 2025).


The patient’s feelings of social connectedness, enablement, and recreation were greatly improved. Willsey et al. note that the patient often looked forward to the quadcopter sessions, frequently “[asking] when the next quadcopter session was.” Not only did the patient find enjoyment in controlling the quadcopter, but they also found the training not tedious and the controls intuitive. To date, this finger BCI proves to be the most capable motor BCI, and it will serve as a valuable model for non-motor BCIs, like Brain2Char, a system for decoding text from brain recordings.

However, BCIs raise significant ethical considerations that must be addressed alongside their development. Are users responsible for all outputs from a BCI, even unintended ones? Given that BCIs decode brain signals and train on data from a very controlled setting, there is always the potential for natural “noise” that may upset a delicate BCI model. Ideally, BCIs would be trained on a participant’s brain activity in a variety of circumstances to mitigate these errors. Furthermore, BCIs may further stigmatize motor disabilities by pushing individuals toward restoring “normal” abilities. I am particularly concerned about the cost of this technology. As with most new clinical technologies, implementation is expensive and ends up pricing out individuals of lower socioeconomic status, often the very individuals with the greatest need for technologies like BCIs. As mentioned earlier, lower household income predicts more unmet needs for individuals with paralysis. Nonetheless, so long as they are developed responsibly and efforts are made to ensure their affordability, motor BCIs hold great promise.

 

References

Armour, B. S., Courtney-Long, E. A., Fox, M. H., Fredine, H., & Cahill, A. (2016). Prevalence and Causes of Paralysis—United States, 2013. American Journal of Public Health, 106(10), 1855–1857. https://doi.org/10.2105/ajph.2016.303270

Cairns, P., Power, C., Barlet, M., Haynes, G., Kaufman, C., & Beeston, J. (2019). Enabled players: The value of accessible digital games. Games and Culture, 16(2), 262–282. https://doi.org/10.1177/1555412019893877

Shih, J. J., Krusienski, D. J., & Wolpaw, J. R. (2012). Brain-Computer interfaces in medicine. Mayo Clinic Proceedings, 87(3), 268–279. https://doi.org/10.1016/j.mayocp.2011.12.008

Trezzini, B., Brach, M., Post, M., & Gemperli, A. (2019). Prevalence of and factors associated with expressed and unmet service needs reported by persons with spinal cord injury living in the community. Spinal Cord, 57(6), 490–500. https://doi.org/10.1038/s41393-019-0243-y

Willsey, M. S., Shah, N. P., Avansino, D. T., Hahn, N. V., Jamiolkowski, R. M., Kamdar, F. B., Hochberg, L. R., Willett, F. R., & Henderson, J. M. (2025). A high-performance brain–computer interface for finger decoding and quadcopter game control in an individual with paralysis. Nature Medicine. https://doi.org/10.1038/s41591-024-03341-8

Filed Under: Computer Science and Tech, Psychology and Neuroscience, Science

Computer Vision Ethics

May 4, 2025 by Madina Sotvoldieva '28

Computer vision (CV) is a field of computer science that allows computers to “see” or, in more technical terms, to recognize, analyze, and respond to visual data such as videos and images. CV is widely used in our daily lives, from something as simple as recognizing handwritten text to something as complex as analyzing and interpreting MRI scans. With the advent of modern AI in the last few years, CV has been improving rapidly. However, like any subfield of AI today, CV carries its own set of ethical, social, and political implications, especially when used to analyze people’s visual data.
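
For a sense of how accessible these capabilities are, the short example below uses OpenCV’s bundled Haar cascade to detect faces in an image (“photo.jpg” is a placeholder path); this is precisely the kind of capability whose ethics the rest of the article examines:

```python
# Minimal face-detection example using OpenCV's bundled Haar cascade.
# "photo.jpg" is a placeholder path; supply any local image.
import cv2

img = cv2.imread("photo.jpg")                     # load the input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)      # cascade expects grayscale
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

print(f"found {len(faces)} face(s)")
for (x, y, w, h) in faces:                        # draw a box on each face
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("faces.jpg", img)                     # save the annotated copy
```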

Although CV has been around for some time, there is limited work on its ethical limitations within the general AI field. In the existing literature, authors have categorized six ethical themes: espionage, identity theft, malicious attacks, copyright infringement, discrimination, and misinformation [1]. As seen in Figure 1, one of the main CV applications is face recognition, which can also lead to issues of error, function creep (the expansion of a technology beyond its original purpose), and privacy [2].

Figure 1: Specific applications of CV that could be used for identity theft.

To discuss CV’s ethics, the authors of the article take a critical approach, evaluating its implications through the framework of power dynamics. The three types of power analyzed are dispositional, episodic, and systemic power [3].

Dispositional Power

Dispositional power is defined as the ability to bring about a significant outcome [4]. When people gain this power, they feel empowered to explore new opportunities, and their scope of agency increases (they become more independent in their actions) [5]. However, CV can threaten dispositional power in several ways, ultimately reducing people’s autonomy.

One way CV disempowers people is by limiting their control over information. Since CV works with both pre-existing and real-time camera footage, people are often unaware that they are being recorded and often cannot avoid it. The technology makes it hard for people to control the data gathered about them; protecting their personal information might get as extreme as hiding their faces.

Beyond being limited in controlling what data is gathered about them, advanced technologies make it extremely difficult for an average person to know what specific information can be retrieved from visual data. CV might also disempower people from following their own judgment by communicating who they are for them (automatically inferring their race, gender, or mood), creating a forced moral environment (where people act out of fear of being watched rather than from their own intentions), and potentially fostering over-dependence on computers (e.g., relying on face recognition to interpret emotions).

In all these and other ways, CV undermines the foundation of dispositional power by limiting people’s ability to control their information, make independent decisions, express themselves, and act freely.

Episodic Power

Episodic power, often referred to as “power-over,” describes the direct exercise of power by one individual or group over another. CV can both create new power and improve the efficiency of existing power [6]. While this isn’t always a bad thing (for example, parents watching over children), problems arise when CV makes that control too invasive or one-sided, especially in ways that limit people’s freedom to act independently.

With CV taking security cameras to the next level, opportunities such as baby-room monitoring or fall detection for elderly people open up to us. However, CV also raises the issue of automated surveillance, which can lead to over-enforcement at scales ranging from private individuals to large organizations (workplaces, insurance companies, etc.). Other power-dynamic shifts need to be considered as well: smart doorbells, for example, capture far more than the person at the door and might violate a neighbor’s privacy by creating peer-to-peer surveillance.

These examples show that while CV may offer convenience or safety, it can also tip power balances in ways that reduce personal freedom and undermine one’s autonomy.

Systemic Power

Systemic power is not viewed as an individual exercise of power, but rather as a set of societal norms and practices that affect people’s autonomy by determining what opportunities people have, what values they hold, and what choices they make. CV can strengthen systemic power by making law enforcement more efficient through smart cameras and by increasing businesses’ profits through business intelligence tools.

However, CV can also reinforce pre-existing systemic societal injustices. One example is flawed facial recognition: algorithms are more likely to correctly recognize white and male faces [7], which has led to a number of false arrests. This can leave people with unequal opportunities (when biased systems are used in hiring processes) or harm their self-worth (when they are falsely identified as criminals).

Another matter of systemic power is the environmental cost of CV. AI systems rely on vast amounts of data, which require intensive energy to process and store. As societies become increasingly dependent on AI technologies like CV, those trying to protect the environment have little ability to resist or reshape these damaging practices. The power lies with tech companies and industries, leaving citizens without the means to challenge the system; and when the system becomes hard to challenge or change, ethical concerns about CV arise.

Conclusion

Computer vision is a powerful tool that keeps evolving each year. We already see numerous applications in our daily lives, from self-checkouts in stores and smart doorbells to autonomous vehicles and tumor detection. Alongside CV’s potential to improve our lives and make them safer, there are a number of ethical limitations to consider. We need to critically examine how CV affects people’s autonomy, can create one-sided power dynamics, and reinforces societal prejudices. As we rapidly transition into an AI-driven world, there is more to come in the field of computer vision; in the pursuit of innovation, however, we should ensure that progress does not come at the cost of our ethical values.

References:

[1] Lauronen, M.: Ethical issues in topical computer vision applications. Information Systems, Master’s Thesis. University of Jyväskylä. (2017). https://jyx.jyu.fi/bitstream/handle/123456789/55806/URN%3aNBN%3afi%3ajyu-201711084167.pdf?sequence=1&isAllowed=y

[2] Brey, P.: Ethical aspects of facial recognition systems in public places. J. Inf. Commun. Ethics Soc. 2(2), 97–109 (2004). https://doi.org/10.1108/14779960480000246

[3] Haugaard, M.: Power: a “family resemblance concept.” Eur. J. Cult. Stud. 13(4), 419–438 (2010)

[4] Morriss, P.: Power: a philosophical analysis. Manchester University Press, Manchester, New York (2002)

[5] Morriss, P.: Power: a philosophical analysis. Manchester University Press, Manchester, New York (2002)

[6] Brey, P.: Ethical aspects of facial recognition systems in public places. J. Inf. Commun. Ethics Soc. 2(2), 97–109 (2004). https://doi.org/10.1108/14779960480000246

[7] Buolamwini, J., Gebru, T.: Gender shades: intersectional accuracy disparities in commercial gender classification. Conference on Fairness, Accountability, and Transparency, pp. 77–91 (2018)

Coeckelbergh, M.: AI ethics. MIT Press (2020)

Filed Under: Computer Science and Tech, Science Tagged With: AI, AI ethics, artificial intelligence, Computer Science and Tech, Computer Vision, Ethics, Technology
