
Bowdoin Science Journal


Computer Vision Ethics

May 4, 2025 by Madina Sotvoldieva

Computer vision (CV) is a field of computer science that allows computers to “see” or, in more technical terms, recognize, analyze, and respond to visual data such as videos and images. CV is widely used in our daily lives, from something as simple as recognizing handwritten text to something as complex as analyzing and interpreting MRI scans. With the advances in AI over the last few years, CV has also been improving rapidly. However, like any subfield of AI today, CV carries its own set of ethical, social, and political implications, especially when used to analyze people’s visual data.
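
To ground what “seeing” means in practice, the following is a minimal sketch of a classic CV task: detecting faces in a photo using OpenCV’s bundled Haar-cascade detector. The input filename is hypothetical, and the example is purely illustrative rather than any specific system discussed in this post.

```python
# A minimal sketch of a classic computer-vision task: detecting faces in an image.
# Assumes OpenCV is installed (pip install opencv-python); "photo.jpg" is a hypothetical input file.
import cv2

# Load the image and convert it to grayscale, which the detector expects.
image = cv2.imread("photo.jpg")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Use the pretrained Haar-cascade face detector that ships with OpenCV.
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
detector = cv2.CascadeClassifier(cascade_path)

# Each detection is a bounding box (x, y, width, height) around a face.
faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
for (x, y, w, h) in faces:
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imwrite("photo_with_faces.jpg", image)
print(f"Detected {len(faces)} face(s)")
```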

Although CV has been around for some time, there is limited work on its ethical limitations within the broader AI ethics literature. In the existing literature, authors have categorized six ethical themes: espionage, identity theft, malicious attacks, copyright infringement, discrimination, and misinformation [1]. As seen in Figure 1, one of the main applications of CV is face recognition, which can also lead to issues of error, function creep (the expansion of a technology beyond its original purpose), and privacy [2].

Figure 1: Specific applications of CV that could be used for identity theft.

To discuss the ethics of CV, the authors of the article take a critical approach, evaluating its implications through the framework of power dynamics. The three types of power analyzed are dispositional, episodic, and systemic power [3].

Dispositional Power

Dispositional power is defined as the ability to bring about a significant outcome [4]. When people gain that power, they feel empowered to explore new opportunities, and their scope of agency increases (they become more independent in their actions) [5]. However, CV can threaten this dispositional power in several ways, ultimately reducing people’s autonomy.

One way CV disempowers people is by limiting their control over their information. Since CV works with both pre-existing and real-time camera footage, people are often unaware that they are being recorded and often cannot avoid it. This makes it hard for people to control the data that is gathered about them, and protecting their personal information can become as extreme as having to hide their faces.

Beyond having limited control over what data is gathered about them, advanced technologies make it extremely difficult for an average person to know what specific information can be retrieved from visual data. CV can also disempower people from following their own judgment by communicating who they are on their behalf (automatically inferring their race, gender, and mood), by creating a forced moral environment (in which people act out of fear of being watched rather than from their own intentions), and by encouraging over-dependence on computers (e.g., relying on face recognition to interpret emotions).

In all these and other ways, CV undermines the foundation of dispositional power by limiting people’s ability to control their information, make independent decisions, express themselves, and act freely.

Episodic Power

Episodic power, often referred to as “power-over,” describes the direct exercise of power by one individual or group over another. CV can both create new forms of power and increase the efficiency of existing ones [6]. While this is not always a bad thing (for example, parents watching over children), problems arise when CV makes that control too invasive or one-sided, especially in ways that limit people’s freedom to act independently.

With CV taking security cameras to the next level, opportunities such as baby-room monitoring or fall detection for elderly people open up to us. However, it also raises the issue of automated surveillance, which can lead to over-enforcement at scales ranging from private individuals to larger organizations (workplaces, insurance companies, etc.). Other shifts in power dynamics need to be considered as well: smart doorbells, for example, capture far more than the person at the door and can violate a neighbor’s privacy by creating peer-to-peer surveillance.

These examples show that while CV may offer convenience or safety, it can also tip power balances in ways that reduce personal freedom and undermine one’s autonomy.

Systemic Power

Systemic power is viewed not as an individual exercise of power, but rather as a set of societal norms and practices that affect people’s autonomy by determining what opportunities people have, what values they hold, and what choices they make. CV can strengthen systemic power by making law enforcement more efficient through smart cameras and by increasing businesses’ profits through business intelligence tools.

However, CV can also reinforce pre-existing systemic injustices. One example is flawed facial recognition, where algorithms are more likely to correctly recognize White people and males [7], which has already led to a number of false arrests. This can leave people with unequal opportunities (when biased systems are used in hiring processes) or harm their self-worth (when they are falsely identified as criminals).
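
To make such disparities concrete, the sketch below (not taken from the article) shows one way to audit a recognition system’s error rate per demographic group. The handful of records is invented; a real audit would use a labeled benchmark such as the one in the Gender Shades study [7].

```python
# Illustrative sketch: auditing a classifier's error rate per demographic group.
# The records below are made up; a real audit would use a labeled benchmark dataset.
from collections import defaultdict

# (group, true identity, predicted identity) for a handful of hypothetical test cases
results = [
    ("darker-skinned female", "A", "B"),
    ("darker-skinned female", "A", "A"),
    ("lighter-skinned male",  "A", "A"),
    ("lighter-skinned male",  "B", "B"),
    ("darker-skinned female", "B", "A"),
    ("lighter-skinned male",  "A", "A"),
]

errors = defaultdict(int)
totals = defaultdict(int)
for group, truth, pred in results:
    totals[group] += 1
    errors[group] += (truth != pred)

# A large gap between groups is the kind of disparity reported in audits of commercial systems.
for group in totals:
    rate = errors[group] / totals[group]
    print(f"{group}: error rate {rate:.0%} ({errors[group]}/{totals[group]})")
```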

Another matter of systemic power is the environmental cost of CV. AI systems rely on vast amounts of data, which require intensive energy to process and store. As societies become increasingly dependent on AI technologies like CV, those trying to protect the environment have little ability to resist or reshape these damaging practices. The power lies with tech companies and industries, leaving citizens without the means to challenge the system. When the system becomes harder to challenge or change, that is when ethical concerns about CV arise.

Conclusion

Computer vision is a powerful tool that keeps evolving each year. We already see numerous applications of it in our daily lives, from self-checkouts in stores and smart doorbells to autonomous vehicles and tumor detection. Alongside the potential CV holds for improving our lives and making them safer, there are a number of ethical limitations that should be considered. We need to critically examine how CV affects people’s autonomy, creates one-sided power dynamics, and reinforces societal prejudices. As we rapidly transition into an AI-driven world, there is more to come in the field of computer vision. In the pursuit of innovation, however, we should ensure that progress does not come at the cost of our ethical values.

References:

[1] Lauronen, M.: Ethical issues in topical computer vision applications. Information Systems, Master’s Thesis. University of Jyväskylä. (2017). https://jyx.jyu.fi/bitstream/handle/123456789/55806/URN%3aNBN%3afi%3ajyu-201711084167.pdf?sequence=1&isAllowed=y

[2] Brey, P.: Ethical aspects of facial recognition systems in public places. J. Inf. Commun. Ethics Soc. 2(2), 97–109 (2004). https://doi.org/10.1108/14779960480000246

[3] Haugaard, M.: Power: a “family resemblance concept.” Eur. J. Cult. Stud. 13(4), 419–438 (2010)

[4] Morriss, P.: Power: a philosophical analysis. Manchester University Press, Manchester, New York (2002)

[5] Morriss, P.: Power: a philosophical analysis. Manchester University Press, Manchester, New York (2002)

[6] Brey, P.: Ethical aspects of facial recognition systems in public places. J. Inf. Commun. Ethics Soc. 2(2), 97–109 (2004). https://doi.org/10.1108/14779960480000246

[7] Buolamwini, J., Gebru, T.: Gender shades: intersectional accuracy disparities in commercial gender classification. Conference on Fairness, Accountability, and Transparency, pp. 77–91 (2018)

Coeckelbergh, M.: AI ethics. MIT Press (2020)

Filed Under: Computer Science and Tech, Science Tagged With: AI, AI ethics, artificial intelligence, Computer Science and Tech, Computer Vision, Ethics, Technology

Machine learning and algorithmic bias

December 8, 2024 by Mauricio Cuba Almeida

Algorithms permeate modern society, especially AI algorithms. Artificial intelligence (AI) is built with various techniques, such as machine learning, deep learning, and natural language processing, that train it to mimic humans at a certain task. Healthcare, loan approval, and security surveillance are a few of the industries that have begun using AI (Alowais et al., 2023; Purificato et al., 2022; Choung et al., 2024). Most people will continue to interact with AI on a daily basis, often inadvertently.

However, what are the problems faced by an increasing algorithmic society? Authors Sina Fazelpour and David Danks, in their article, explore this question in the context of algorithmic bias. Indeed, the problem they identify is that AI perpetuates bias. At its most neutral, Fazelpour and Danks (2021) explain that algorithmic bias is some “systematic deviation in algorithm output, performance, or impact, relative to some norm or standard,” suggesting that algorithms can be biased against a moral, statistical, or social norm. Fazelpour and Danks use a running example of a university training an AI algorithm with past student data to predict future student success. Thus, this algorithm exhibits a statistical bias if student success predictions are discordant with what has happened historically (in training data). Similarly, the algorithm exhibits a moral bias if it illegitimately depends on the student’s gender to produce a prediction. This is seen already in facial recognition algorithms that “perform worse for people with feminine features or darker skin” or recidivism prediction models that rate people of color as higher risk (Fazelpour & Danks, 2021). Clearly, algorithmic biases have the potential to preserve or exacerbate existing injustices under the guise of being “objective.” 
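
One way to read the “systematic deviation” definition quantitatively is to compare an algorithm’s output against a chosen reference norm. The toy sketch below, with entirely made-up numbers, measures how far a model’s predicted success rates drift from historical rates for two hypothetical groups.

```python
# Hypothetical sketch: "statistical bias" as a systematic deviation of an algorithm's
# output from a reference norm (here, historical outcome rates per group). Numbers are invented.
historical_success = {"group_x": 0.62, "group_y": 0.60}   # observed past success rates
predicted_success  = {"group_x": 0.64, "group_y": 0.41}   # model's predicted success rates

for group in historical_success:
    deviation = predicted_success[group] - historical_success[group]
    print(f"{group}: deviation from historical norm = {deviation:+.2f}")

# A large, one-sided deviation for a single group (group_y here) is the kind of
# systematic departure from a statistical norm that the authors describe.
```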

Algorithmic bias can manifest through different means. As Fazelpour and Danks discuss, harmful bias can appear even before an algorithm is created if values and norms are not deeply considered. In the example of a student-success prediction model, universities must make value judgments when specifying which target variables define “student success,” whether that is grades, respect from peers, or post-graduation salary. The more complex the goal, the more difficult and contested the choice of target variables becomes. Indeed, choosing target variables is itself a source of algorithmic bias. As Fazelpour and Danks explain, enrollment or financial aid decisions based on predicted student success may discriminate against minority students if first-year performance is used in that prediction, since minority students may face additional challenges during that first year.

Using biased training data will also lead to bias in an AI algorithm. In other words, bias in the measured world is reflected in the algorithms that mimic it. For example, recruiting AI that reviews resumes is often trained on the resumes of employees the company has already hired. In many cases, so-called gender-blind recruiting AI has discriminated against women by using gendered information on a resume that was absent from the resumes of a majority-male workplace (Pisanelli, 2022; Parasurama & Sedoc, 2021). Fazelpour and Danks also mention that biased data can arise from limitations and biases in measurement methods. This is what happens when facial recognition systems are trained predominantly on white faces: the resulting systems are less effective for individuals who do not resemble the data the algorithm was trained on.
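
As a hypothetical illustration of how biased training data propagates, the sketch below trains a tiny logistic-regression screener on an invented, majority-male hiring history. Gender itself is never given to the model, but a correlated proxy feature is, and the model learns to penalize it.

```python
# Invented example: a resume screener trained on a majority-male hiring history.
# "Gender" is not a feature, but a correlated proxy (e.g., membership in a
# women's organization) is, and the model learns to use it.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Features: [years_experience, mentions_womens_org]; label: 1 = was hired historically
X = np.array([
    [5, 0], [6, 0], [4, 0], [7, 0], [5, 0],   # mostly men hired in the past
    [6, 1], [5, 1], [7, 1],                   # comparable resumes with the proxy, not hired
])
y = np.array([1, 1, 1, 1, 1, 0, 0, 0])

model = LogisticRegression(max_iter=1000).fit(X, y)
print("coefficient on proxy feature:", model.coef_[0][1])   # strongly negative

# Two otherwise-identical resumes now receive different scores because of the proxy.
print(model.predict_proba([[6, 0], [6, 1]])[:, 1])
```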

Alternatively, users’ misinterpretations of predictive algorithms may produce biased results, Fazelpour and Danks argue. An algorithm is optimized for one purpose, and users may, without realizing it, apply it to another. A user could inadvertently interpret predicted “student success” as a measure of grades rather than what the algorithm was actually optimized to predict (e.g., the likelihood of dropping out). Decisions stemming from misinterpreted predictions are prone to bias, and not just for the aforementioned reasons: misunderstandings of algorithmic predictions also lead to poor decisions when the variables that predict an outcome are assumed to cause that outcome. Students in advanced courses may be predicted to have higher student success, but as Fazelpour and Danks put it, we should not respond by enrolling every underachieving student in an advanced course. Such models should also be applied in a context similar to the one in which the historical data was collected, a consideration that becomes more important the longer a model is used, as present data drifts away from the historical training data. In other words, a student-success model created for a small private college should not be deployed at a large public university, nor many years later.
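
A crude way to catch this kind of context mismatch is to check whether the data a model now sees still resembles the data it was trained on. The sketch below, with invented feature names and thresholds, flags features whose deployment-time mean has drifted far from the training mean.

```python
# Hedged sketch: a simple drift check for a hypothetical student-success model.
# Feature names, values, and the one-standard-deviation threshold are all invented.
import numpy as np

train = {"hs_gpa": np.array([3.1, 3.4, 3.0, 3.6, 3.2]),       # historical training data
         "credits_term1": np.array([16, 14, 15, 16, 12])}
today = {"hs_gpa": np.array([3.5, 3.7, 3.8, 3.6, 3.9]),       # data the model sees now
         "credits_term1": np.array([18, 20, 19, 18, 20])}

for feature in train:
    shift = abs(today[feature].mean() - train[feature].mean()) / train[feature].std()
    flag = "WARNING: possible drift" if shift > 1.0 else "ok"
    print(f"{feature}: shift = {shift:.2f} training standard deviations ({flag})")
```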

Fazelpour and Danks establish that algorithmic bias is nearly impossible to eliminate; solutions often must engage with the complexities of our society. The authors delve into several technical solutions, such as optimizing an algorithm with “fairness” as a constraint or training it on corrected historical data. These quickly reveal themselves to be problematic, because determining what counts as fair is itself a difficult value judgment. Nonetheless, algorithms provide tremendous benefit to us, even in moral and social ways: they can help identify biases and serve as better alternatives to flawed human practices. Fazelpour and Danks conclude that algorithms should continue to be studied in order to identify, mitigate, and prevent bias.
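
As a rough illustration of what “optimizing with fairness as a constraint” can look like (this is not the authors’ method), the sketch below adds a demographic-parity penalty to a logistic-regression loss on synthetic data, trading prediction accuracy against equal average predictions across two groups.

```python
# Illustrative sketch: logistic regression trained with an added "demographic parity"
# penalty, so the optimizer trades accuracy against equal average predictions
# across two groups. All data is synthetic and the penalty weight is arbitrary.
import numpy as np

rng = np.random.default_rng(0)
n = 200
group = rng.integers(0, 2, n)                   # a protected attribute (0 or 1)
x = rng.normal(size=(n, 2)) + group[:, None]    # feature distributions differ by group
y = (x[:, 0] + rng.normal(scale=0.5, size=n) > 0.5).astype(float)

w, b, lam = np.zeros(2), 0.0, 2.0               # lam weights the fairness penalty

for _ in range(2000):
    p = 1 / (1 + np.exp(-(x @ w + b)))          # predicted probabilities
    grad_w = x.T @ (p - y) / n                  # gradient of the usual log-loss
    grad_b = np.mean(p - y)
    # Demographic parity gap: difference in mean predicted probability by group
    gap = p[group == 1].mean() - p[group == 0].mean()
    s = p * (1 - p)
    dgap_w = ((x[group == 1] * s[group == 1, None]).mean(axis=0)
              - (x[group == 0] * s[group == 0, None]).mean(axis=0))
    dgap_b = s[group == 1].mean() - s[group == 0].mean()
    grad_w += lam * 2 * gap * dgap_w            # gradient of the squared-gap penalty
    grad_b += lam * 2 * gap * dgap_b
    w -= 0.1 * grad_w
    b -= 0.1 * grad_b

print("demographic parity gap after training:", round(float(gap), 3))
```

Even in this toy example, someone had to decide that equal average predictions across groups are the relevant notion of “fairness,” which is precisely the kind of value judgment the authors argue an algorithm cannot settle on its own.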

References

Alowais, S. A., Alghamdi, S. S., Alsuhebany, N., Alqahtani, T., Alshaya, A. I., Almohareb, S. N., Aldairem, A., Alrashed, M., Saleh, K. B., Badreldin, H. A., Yami, M. S. A., Harbi, S. A., & Albekairy, A. M. (2023). Revolutionizing healthcare: the role of artificial intelligence in clinical practice. BMC Medical Education, 23(1). https://doi.org/10.1186/s12909-023-04698-z

Choung, H., David, P., & Ling, T. (2024). Acceptance of AI-Powered Facial Recognition Technology in Surveillance scenarios: Role of trust, security, and privacy perceptions. Technology in Society, 102721. https://doi.org/10.1016/j.techsoc.2024.102721

Fazelpour, S., & Danks, D. (2021). Algorithmic bias: Senses, sources, solutions. Philosophy Compass, 16(8). https://doi.org/10.1111/phc3.12760

Parasurama, P., & Sedoc, J. (2021, December 16). Degendering resumes for fair algorithmic resume screening. arXiv.org. https://arxiv.org/abs/2112.08910

Pisanelli, E. (2022). Your resume is your gatekeeper: Automated resume screening as a strategy to reduce gender gaps in hiring. Economics Letters, 221, 110892. https://doi.org/10.1016/j.econlet.2022.110892

Purificato, E., Lorenzo, F., Fallucchi, F., & De Luca, E. W. (2022). The use of responsible artificial intelligence techniques in the context of loan approval processes. International Journal of Human-Computer Interaction, 39(7), 1543–1562. https://doi.org/10.1080/10447318.2022.2081284

Filed Under: Computer Science and Tech Tagged With: AI, AI ethics, artificial intelligence, Ethics

From Crystal Balls to Blue Flies: Death Prediction in the Modern Scientific World

November 6, 2022 by Alexa Comess

To most people, the phrase “death prediction” conjures distant images of glowing crystal balls, vibrant tarot cards, or the mystical fortune tellers in popular movies like Big and Ghost. Despite the terrifying implications of a finite and predictable death, a societal obsession with it pervades our media, culture, and everyday life. Though death prediction has historically been confined to fiction and spirituality, scientific advances are transforming it into an imminent next step.

Until the early 2010s, death prediction in the scientific sphere was limited to chronological age. The basic understanding that humans are more likely to die as they reach the end of their average life span was, and in many ways still is, the foundation of any scientific attempt to predict mortality (Gaille et al.). More recently, however, researchers have discovered observable markers of “physiological age”: traits independent of chronological age that indicate when an individual organism is near the end of its life. In a 2012 experiment, Rera et al. found a reliable predictor in the model organism Drosophila melanogaster, otherwise known as the common fruit fly. According to the study, the flies enter an identifiable “pre-death stage” marked by an increase in intestinal permeability, which can accurately predict when they are near the end of their life. Increases in intestinal permeability were tracked by feeding the flies a non-digestible blue dye and observing whether their intestinal walls allowed the dye to pass through, causing them to turn blue externally (in flies with normal levels of intestinal permeability, the dye remained confined to the digestive tract). The high intestinal permeability associated with the blue flies and the pre-death stage was appropriately dubbed the “Smurf phenotype.”

Drosophila with the Smurf phenotype were observed to have significantly shorter remaining life spans than their age-matched non-Smurf counterparts. While the link between intestinal dysfunction and approaching death is still not fully understood, recent data point to changes in immunity-related gene expression and the aging fly’s microbiome as potential causes. These changes can be caused by old age or by other afflictions; in Smurf flies that were significantly below the average lifespan of the species, other morbidities were often observed, such as mitochondrial dysfunction, increased internal bacterial load, and insulin resistance. Evidently, the flies’ transition into their pre-death stage, the Smurf phenotype, is a more accurate and comprehensive predictor of death than chronological age (Rera et al.). This biological phenomenon was later observed in other animals as well, notably zebrafish and nematodes (Gaille et al.). The prevalence of this observable transition across multiple organisms, coupled with its accuracy, is slowly but surely turning mortality prediction into a reality.

Surprisingly, death prediction in humans is not far behind the developments seen in Drosophila and other model organisms. In a 2019 UCLA clinical trial, scientists tentatively showed that intestinal permeability is linked to approaching mortality in humans. While the trial was small and needs to be replicated, it provided evidence that increased intestinal permeability has significant potential as a predictor of human mortality (Angarita et al.).

Outside of intestinal permeability, scientists have discovered other ways to predict a pre-death stage in humans. In a 2014 study, Pinto et al. theorized that olfaction could serve as another indicator, since it relies heavily on peripheral and central cell regeneration, which tends to degrade near the end of an individual’s life due to old age or other morbidities. In the study, roughly 3,000 adults aged 57–85 were asked to identify five common odorants in a forced-choice test. After five years, the scientists collected data on which subjects were still alive and analyzed the connection between olfactory ability and mortality within that five-year span. The findings were startling: the mortality rate was four times higher for adults with complete loss of smell than for adults with a fully intact sense of smell (Pinto et al.).
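
The headline comparison here is a simple rate ratio. The sketch below uses made-up counts purely to show the arithmetic; only the roughly fourfold ratio reflects the study’s reported finding.

```python
# Hypothetical counts illustrating the kind of comparison Pinto et al. report:
# 5-year mortality in adults with complete smell loss versus an intact sense of smell.
anosmic   = {"died": 39, "total": 100}   # made-up numbers
normosmic = {"died": 10, "total": 100}   # made-up numbers

rate_anosmic   = anosmic["died"] / anosmic["total"]
rate_normosmic = normosmic["died"] / normosmic["total"]

print(f"mortality, complete smell loss: {rate_anosmic:.0%}")
print(f"mortality, intact smell:        {rate_normosmic:.0%}")
print(f"rate ratio: {rate_anosmic / rate_normosmic:.1f}x")   # roughly 4x, as in the study
```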

Figure: (A) Olfactory dysfunction versus 5-year mortality, separated by age group; (B) progression of errors in scent identification versus 5-year mortality (Pinto et al.).

While other methods of death prediction exist, such as biomarkers, genetic screenings, and demographic studies, the discovery of pre-death indicators like intestinal permeability and olfactory decline grants us a unique and improved perspective on mortality. As research on this subject continues to grow, we must question the implications these developments have for our society: how will we reckon with the seemingly impossible ability to predict the future? Is it possible to enjoy life with exact knowledge of its end? How will health and wealth inequalities affect the commercialization of testing for death prediction? The moral and ethical dilemmas arising from this development are boundless.

Though these questions do not have definite answers, they must remain present in our discussions of death prediction. While scientific innovations like mortality predictors hold great promise for advancing society, they also have the capacity to exacerbate inequity and other social ills. As rapid development continues in the scientific world, we must maintain both an open mind and an understanding of the complex challenges change poses to our world.

Works Cited

Angarita, Stephanie A. K., et al. “Quantitative Measure of Intestinal Permeability Using Blue Food Coloring.” Journal of Surgical Research, vol. 233, Jan. 2019, pp. 20–25. DOI.org (Crossref), https://doi.org/10.1016/j.jss.2018.07.005.

Gaille, Marie, et al. “Ethical and Social Implications of Approaching Death Prediction in Humans – When the Biology of Ageing Meets Existential Issues.” BMC Medical Ethics, vol. 21, no. 1, Dec. 2020, p. 64. DOI.org (Crossref), https://doi.org/10.1186/s12910-020-00502-5.

Pinto, Jayant M., et al. “Olfactory Dysfunction Predicts 5-Year Mortality in Older Adults.” PLoS ONE, edited by Thomas Hummel, vol. 9, no. 10, Oct. 2014, p. e107541. DOI.org (Crossref), https://doi.org/10.1371/journal.pone.0107541.

Rera, Michael, et al. “Intestinal Barrier Dysfunction Links Metabolic and Inflammatory Markers of Aging to Death in Drosophila.” Proceedings of the National Academy of Sciences, vol. 109, no. 52, Dec. 2012, pp. 21528–33. DOI.org (Crossref), https://doi.org/10.1073/pnas.1215849110.

Cover Image Credit: https://www.npr.org/2019/07/26/745361267/hello-brave-new-world

Filed Under: Biology, Science Tagged With: Biology, Death Prediction, Ethics
