
Bowdoin Science Journal



Computer Vision Ethics

May 4, 2025 by Madina Sotvoldieva

Computer vision (CV) is a field of computer science that allows computers to “see” or, in more technical terms, recognize, analyze, and respond to visual data, such as videos and images. CV is widely used in our daily lives, from something as simple as recognizing handwritten text to something as complex as analyzing and interpreting MRI scans. With the advent of AI in the last few years, CV has also been improving rapidly. However, just like any subfield of AI nowadays, CV has its own set of ethical, social, and political implications, especially when used to analyze people’s visual data.
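To make the idea of a computer “seeing” concrete, here is a minimal sketch that uses the open-source OpenCV library to find faces in an image with a pretrained Haar-cascade detector. The file names are placeholders, and this is only an illustrative example, not a method from the article discussed below.

```python
# Minimal face-detection sketch using OpenCV's pretrained Haar cascade.
# "photo.jpg" is a placeholder path; any local image works.
import cv2

image = cv2.imread("photo.jpg")                      # load pixels as a NumPy array
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)       # cascades operate on grayscale

# Load a face detector that ships with OpenCV.
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
detector = cv2.CascadeClassifier(cascade_path)

# Each detection is an (x, y, width, height) box around a candidate face.
faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imwrite("photo_with_faces.jpg", image)
print(f"Detected {len(faces)} face(s)")
```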

Although CV has been around for some time, there is limited work on its ethical implications within the broader AI literature. Among the existing studies, authors have categorized six ethical themes: espionage, identity theft, malicious attacks, copyright infringement, discrimination, and misinformation [1]. As seen in Figure 1, one of the main applications of CV is face recognition, which can also lead to issues of error, function creep (the expansion of a technology beyond its original purposes), and privacy [2].

Figure 1: Specific applications of CV that could be used for identity theft.

To discuss CV’s ethics, the authors of the article take a critical approach, evaluating its implications through the framework of power dynamics. The three types of power analyzed are dispositional, episodic, and systemic power [3].

Dispositional Power

Dispositional power is defined as the ability to bring about a significant outcome [4]. When people gain that power, they feel empowered to explore new opportunities, and their scope of agency increases (they become more independent in their actions) [5]. However, CV can threaten dispositional power in several ways, ultimately reducing people’s autonomy.

One way CV disempowers people is by limiting their control over their own information. Since CV works with both pre-existing and real-time camera footage, people are often unaware that they are being recorded and often cannot avoid it. The technology makes it hard for people to control what data is gathered about them, and protecting their personal information may require measures as extreme as hiding their faces.

Beyond being limited in controlling what data is gathered about them, advanced technologies make it extremely difficult for an average person to know what specific information can be retrieved from visual data. CV can also disempower people from following their own judgment by communicating who they are for them (automatically inferring their race, gender, and mood), creating a forced moral environment (where people act out of fear of being watched rather than from their own intentions), and potentially fostering over-dependence on computers (e.g., relying on face recognition to interpret emotions).

In all these and other ways, CV undermines the foundation of dispositional power by limiting people’s ability to control their information, make independent decisions, express themselves, and act freely.

Episodic Power

Episodic power, often referred to as power-over, is the direct exercise of power by one individual or group over another. CV can both grant new power and improve the efficiency of existing power [6]. While this is not always a bad thing (for example, parents watching over children), problems arise when CV makes that control too invasive or one-sided, especially in ways that limit people’s freedom to act independently.

With CV taking security cameras to the next level, opportunities such as baby-room monitoring and fall detection for elderly people open up. However, it also raises the issue of surveillance automation, which can lead to over-enforcement at scales ranging from private individuals to larger institutions (workplaces, insurance companies, etc.). Other shifts in power dynamics also need to be considered; for example, smart doorbells can capture far more than the person at the door and may violate neighbors’ privacy by creating peer-to-peer surveillance.

These examples show that while CV may offer convenience or safety, it can also tip power balances in ways that reduce personal freedom and undermine one’s autonomy.

Systemic Power

Systemic power is viewed not as an individual exercise of power but as a set of societal norms and practices that affect people’s autonomy by determining what opportunities they have, what values they hold, and what choices they make. CV can strengthen systemic power by making law enforcement more efficient through smart cameras and by increasing businesses’ profits through business-intelligence tools.

However, CV can also reinforce pre-existing systemic injustices. One example is flawed facial recognition: the algorithms are more likely to correctly recognize White and male faces [7], which has led to a number of false arrests. Such bias can deprive people of equal opportunities (when biased systems are used in hiring) or harm their self-worth (when they are falsely identified as criminals).
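The disparity described in [7] can be made concrete by computing error rates separately for each demographic group. The sketch below does this over a handful of hypothetical predictions; the groups, labels, and numbers are invented for illustration and are not results from the cited study.

```python
# Sketch: measuring how a face-recognition (or any) classifier's error rate
# differs across demographic groups. All data here is hypothetical.
from collections import defaultdict

# (group, true_label, predicted_label) -- toy records, not real results
records = [
    ("lighter-skinned male", 1, 1), ("lighter-skinned male", 0, 0),
    ("darker-skinned female", 1, 0), ("darker-skinned female", 0, 1),
    ("darker-skinned female", 1, 1), ("lighter-skinned male", 1, 1),
]

errors = defaultdict(lambda: [0, 0])   # group -> [wrong, total]
for group, truth, pred in records:
    errors[group][0] += int(truth != pred)
    errors[group][1] += 1

for group, (wrong, total) in errors.items():
    print(f"{group}: error rate {wrong / total:.0%} ({wrong}/{total})")
```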

Another matter of systemic power is the environmental cost of CV. AI systems rely on vast amounts of data, which require intensive energy to process and store. As societies become increasingly dependent on AI technologies like CV, those trying to protect the environment have little ability to resist or reshape these damaging practices. The power lies with tech companies and industries, leaving citizens without the means to challenge the system, and it is when the system becomes hard to challenge or change that the ethical concerns around CV become most acute.

Conclusion

Computer vision is a powerful tool that keeps evolving each year. We already see numerous applications of it in our daily lives, from self-checkouts in stores and smart doorbells to autonomous vehicles and tumor detection. Alongside the potential CV holds for improving our lives and making them safer, there are a number of ethical limitations that should be considered. We need to critically examine how CV affects people’s autonomy, creates one-sided power dynamics, and reinforces societal prejudices. As we rapidly transition into an AI-driven world, there is more to come in the field of computer vision. In the pursuit of innovation, however, we should ensure that progress does not come at the cost of our ethical values.

References:

[1] Lauronen, M.: Ethical issues in topical computer vision applications. Information Systems, Master’s Thesis. University of Jyväskylä. (2017). https://jyx.jyu.fi/bitstream/handle/123456789/55806/URN%3aNBN%3afi%3ajyu-201711084167.pdf?sequence=1&isAllowed=y

[2] Brey, P.: Ethical aspects of facial recognition systems in public places. J. Inf. Commun. Ethics Soc. 2(2), 97–109 (2004). https://doi.org/10.1108/14779960480000246

[3] Haugaard, M.: Power: a “family resemblance concept.” Eur. J. Cult. Stud. 13(4), 419–438 (2010)

[4] Morriss, P.: Power: a philosophical analysis. Manchester University Press, Manchester, New York (2002)

[5] Morriss, P.: Power: a philosophical analysis. Manchester University Press, Manchester, New York (2002)

[6] Brey, P.: Ethical aspects of facial recognition systems in public places. J. Inf. Commun. Ethics Soc. 2(2), 97–109 (2004). https://doi.org/10.1108/14779960480000246

[7] Buolamwini, J., Gebru, T.: Gender shades: intersectional accuracy disparities in commercial gender classification. Conference on Fairness, Accountability, and Transparency, pp. 77–91 (2018)

[8] Coeckelbergh, M.: AI ethics. MIT Press (2020)

Filed Under: Computer Science and Tech, Science Tagged With: AI, AI ethics, artificial intelligence, Computer Science and Tech, Computer Vision, Ethics, Technology

Machine learning and algorithmic bias

December 8, 2024 by Mauricio Cuba Almeida

Algorithms permeate modern society, especially AI algorithms. Artificial intelligence (AI) is built with various techniques, like machine learning, deep learning, and natural language processing, that train it to mimic humans at certain tasks. Healthcare, loan approval, and security surveillance are a few industries that have begun using AI (Alowais et al., 2023; Purificato et al., 2022; Choung et al., 2024). Most people will continue to interact with AI, often inadvertently, on a daily basis.

However, what are the problems faced by an increasing algorithmic society? Authors Sina Fazelpour and David Danks, in their article, explore this question in the context of algorithmic bias. Indeed, the problem they identify is that AI perpetuates bias. At its most neutral, Fazelpour and Danks (2021) explain that algorithmic bias is some “systematic deviation in algorithm output, performance, or impact, relative to some norm or standard,” suggesting that algorithms can be biased against a moral, statistical, or social norm. Fazelpour and Danks use a running example of a university training an AI algorithm with past student data to predict future student success. Thus, this algorithm exhibits a statistical bias if student success predictions are discordant with what has happened historically (in training data). Similarly, the algorithm exhibits a moral bias if it illegitimately depends on the student’s gender to produce a prediction. This is seen already in facial recognition algorithms that “perform worse for people with feminine features or darker skin” or recidivism prediction models that rate people of color as higher risk (Fazelpour & Danks, 2021). Clearly, algorithmic biases have the potential to preserve or exacerbate existing injustices under the guise of being “objective.” 
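One way to operationalize “systematic deviation relative to some norm” is to pick a specific norm, such as demographic parity, and measure how far a model’s outputs deviate from it. The sketch below computes the gap in predicted “success” rates between two hypothetical student groups; the data are invented, and this metric is only one of many possible norms.

```python
# Sketch: checking a set of predictions against one possible fairness norm,
# demographic parity (equal positive-prediction rates across groups).
# Predictions and group labels are hypothetical.

predictions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]          # 1 = "predicted to succeed"
groups      = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

def positive_rate(group: str) -> float:
    picked = [p for p, g in zip(predictions, groups) if g == group]
    return sum(picked) / len(picked)

gap = positive_rate("A") - positive_rate("B")
print(f"Group A rate: {positive_rate('A'):.2f}")
print(f"Group B rate: {positive_rate('B'):.2f}")
print(f"Demographic-parity gap: {gap:+.2f}")   # 0 means parity under this norm
```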

Algorithmic bias can manifest through different means. As Fazelpour and Danks discuss, harmful bias can appear even before an algorithm is created if values and norms are not carefully considered. In the example of a student-success prediction model, universities must make value judgments when specifying which target variables define “student success,” whether grades, respect from peers, or post-graduation salary. The more complex the goal, the more difficult and contested the choice of target variables becomes. Indeed, choosing target variables is itself a source of algorithmic bias. As Fazelpour and Danks explain, enrollment or financial aid decisions based on predicted student success may discriminate against minority students if first-year performance is used in that prediction, since minority students may face additional challenges.

Using biased training data will also lead to bias in an AI algorithm. In other words, bias in the measured world will be reflected in AI algorithms that mimic our world. For example, recruiting AI that reviews resumes is often trained on the employees a company has already hired. In many cases, so-called gender-blind recruiting AI has discriminated against women by using gendered information in resumes that was absent from the resumes of a majority-male workplace (Pisanelli, 2022; Parasurama & Sedoc, 2021). Fazelpour and Danks also note that biased data can arise from limitations and biases in measurement methods. This is what happens when facial recognition systems are trained predominantly on white faces. Consequently, these systems are less effective for individuals who do not look like the data the algorithm was trained on.
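As a toy illustration of the resume example above, the sketch below trains a tiny screener on fabricated data with no explicit gender field; because the historical “hired” labels are skewed, the model still assigns weight to words that act as proxies for gender. All resumes, labels, and outcomes here are invented.

```python
# Toy sketch: a resume screener with no explicit gender field can still learn
# gendered proxies if the historical "hired" labels are skewed. All data is fake.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

resumes = [
    "captain of rugby team, finance intern",         # hired in the biased history
    "finance intern, chess club president",          # hired
    "women's soccer captain, finance intern",        # not hired in the biased history
    "volunteer at women's shelter, finance intern",  # not hired
]
hired = [1, 1, 0, 0]

vec = CountVectorizer()
X = vec.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# Inspect which tokens push the score up or down: a proxy word like "women"
# gets negative weight even though gender was never an input column.
for token, weight in sorted(zip(vec.get_feature_names_out(), model.coef_[0]),
                            key=lambda t: t[1]):
    print(f"{token:>12s}  {weight:+.2f}")
```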

Alternatively, users’ misinterpretations of predictive algorithms may produce biased results, Fazelpour and Danks argue. An algorithm is optimized for one purpose, and without realizing it, users may apply the algorithm to another. A user could inadvertently interpret predicted “student success” as a measure of grades rather than what the algorithm is actually optimized to predict (e.g., likelihood of dropping out). Decisions stemming from misinterpretations of algorithmic predictions are doomed to be biased, and not just for the aforementioned reasons. Misunderstandings of algorithmic predictions also lead to poor decisions when the variables that predict an outcome are assumed to cause that outcome. Students in advanced courses may be predicted to have higher student success, but as Fazelpour and Danks put it, we shouldn’t enroll every underachieving student in an advanced course. Such models should also be applied in a context similar to the one in which the historical data was collected; this matters more the longer a model is used, as present data begins to differ from the historical training data. In other words, a student-success model created for a small private college should not be deployed at a large public university, nor many years later.
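One simple way to notice that the deployment context has drifted from the historical training data is to compare feature distributions between the two, for instance with a two-sample Kolmogorov-Smirnov test. The sketch below applies this to a hypothetical “incoming GPA” feature; the numbers and the 0.05 cutoff are illustrative assumptions, not part of the paper.

```python
# Sketch: flagging distribution drift between historical training data and
# current data before trusting a student-success model's predictions.
# Feature values are hypothetical.
from scipy.stats import ks_2samp

historical_gpa = [3.1, 3.4, 2.9, 3.6, 3.2, 3.0, 3.5, 3.3]   # data the model was trained on
current_gpa    = [2.4, 2.7, 2.5, 2.9, 2.6, 2.8, 2.3, 2.5]   # today's applicant pool

stat, p_value = ks_2samp(historical_gpa, current_gpa)
print(f"KS statistic = {stat:.2f}, p-value = {p_value:.3f}")

# A small p-value suggests the populations differ, i.e. the model is being
# asked to predict outside the context its training data came from.
if p_value < 0.05:
    print("Warning: input distribution has drifted from the training data.")
```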

Fazelpour and Danks establish that algorithmic bias is nearly impossible to eliminate—solutions often must engage with the complexities of our society. The authors delve into several technical solutions, such as optimizing an algorithm using “fairness” as a constraint or training an algorithm on corrected historical data. This quickly reveals itself to be problematic, as determining fairness is a difficult value judgment. Nonetheless, algorithms provide tremendous benefit to us, even in moral and social ways. Algorithms can identify biases and serve as better alternatives to human practices. Fazelpour and Danks conclude that algorithms should continue to be studied in order to identify, mitigate, and prevent bias.
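As a rough sketch of what “optimizing with fairness as a constraint” can look like in practice, the example below post-processes a model’s scores by choosing a separate decision threshold per group so that acceptance rates match. It is not the authors’ method; the scores, groups, and target rate are invented, and deciding what should be equalized is exactly the kind of value judgment Fazelpour and Danks describe.

```python
# Sketch: one crude fairness intervention -- per-group decision thresholds chosen
# so that both groups receive positive decisions at (roughly) the same rate.
# Scores and group labels are hypothetical.

scores = {"A": [0.9, 0.8, 0.7, 0.6, 0.4], "B": [0.7, 0.5, 0.4, 0.3, 0.2]}
target_rate = 0.4   # value judgment: why this rate, and why parity at all?

thresholds = {}
for group, s in scores.items():
    ranked = sorted(s, reverse=True)
    k = max(1, round(target_rate * len(ranked)))     # how many to accept
    thresholds[group] = ranked[k - 1]                # lowest accepted score

for group, s in scores.items():
    accepted = [x for x in s if x >= thresholds[group]]
    print(f"group {group}: threshold {thresholds[group]:.2f}, "
          f"accept rate {len(accepted)/len(s):.0%}")
```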

References

Alowais, S. A., Alghamdi, S. S., Alsuhebany, N., Alqahtani, T., Alshaya, A. I., Almohareb, S. N., Aldairem, A., Alrashed, M., Saleh, K. B., Badreldin, H. A., Yami, M. S. A., Harbi, S. A., & Albekairy, A. M. (2023). Revolutionizing healthcare: the role of artificial intelligence in clinical practice. BMC Medical Education, 23(1). https://doi.org/10.1186/s12909-023-04698-z

Choung, H., David, P., & Ling, T. (2024). Acceptance of AI-Powered Facial Recognition Technology in Surveillance scenarios: Role of trust, security, and privacy perceptions. Technology in Society, 102721. https://doi.org/10.1016/j.techsoc.2024.102721

Fazelpour, S., & Danks, D. (2021). Algorithmic bias: Senses, sources, solutions. Philosophy Compass, 16(8). https://doi.org/10.1111/phc3.12760

Parasurama, P., & Sedoc, J. (2021, December 16). Degendering resumes for fair algorithmic resume screening. arXiv.org. https://arxiv.org/abs/2112.08910

Pisanelli, E. (2022). Your resume is your gatekeeper: Automated resume screening as a strategy to reduce gender gaps in hiring. Economics Letters, 221, 110892. https://doi.org/10.1016/j.econlet.2022.110892

Purificato, E., Lorenzo, F., Fallucchi, F., & De Luca, E. W. (2022). The use of responsible artificial intelligence techniques in the context of loan approval processes. International Journal of Human-Computer Interaction, 39(7), 1543–1562. https://doi.org/10.1080/10447318.2022.2081284

Filed Under: Computer Science and Tech Tagged With: AI, AI ethics, artificial intelligence, Ethics

‘The Scariest Deepfake of All’: AI-Generated Text & GPT-3

March 1, 2021 by Micaela Simeone '22

Recent advances in machine-learning systems have led to both exciting and unnerving technologies: personal assistance bots, email spam filtering, and search engine algorithms are just a few omnipresent examples of technology made possible through these systems. Deepfakes (deep-learning fakes), or algorithm-generated synthetic media, constitute one example of a still-emerging and tremendously consequential development in machine learning. WIRED recently called AI-generated text “the scariest deepfake of all,” turning heads to one of the most powerful text generators out there: artificial intelligence research lab OpenAI’s Generative Pre-trained Transformer 3 (GPT-3) language model.

GPT-3 is an autoregressive language model that uses its deep-learning experience to produce human-like text. Put simply, GPT-3 is directed to study the statistical patterns in a dataset of about a trillion words collected from the web and digitized books. GPT-3 then uses its digest of that massive corpus to respond to text prompts by generating new text with similar statistical patterns, endowing it with the ability to compose news articles, satire, and even poetry. 
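GPT-3 itself sits behind OpenAI’s API, but the autoregressive recipe described above, repeatedly predicting the next token from learned statistical patterns, can be demonstrated with the openly available GPT-2 model via the Hugging Face transformers library. This is a small-scale stand-in used for illustration, not GPT-3.

```python
# Sketch: autoregressive text generation with GPT-2 (an openly available,
# much smaller predecessor of GPT-3) via the Hugging Face transformers library.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "In a surprising turn of events, researchers announced that"
inputs = tokenizer(prompt, return_tensors="pt")

# The model repeatedly predicts a probability distribution over the next token
# and samples from it, extending the prompt one token at a time.
outputs = model.generate(
    **inputs,
    max_new_tokens=40,
    do_sample=True,     # sample instead of always taking the most likely token
    top_p=0.9,          # nucleus sampling over the most probable tokens
)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```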

GPT-3’s creators designed the AI to learn language patterns and immediately saw GPT-3 scoring exceptionally well on reading-comprehension tests. But when OpenAI researchers configured the system to generate strikingly human-like text, they began to imagine how these generative capabilities could be used for harmful purposes. Previously, OpenAI had often released full code with its publications on new models. This time, GPT-3’s creators decided to hide its underlying code from the public, not wanting to disseminate the full model or the millions of web pages used to train the system. In OpenAI’s research paper on GPT-3, the authors note that “any socially harmful activity that relies on generating text could be augmented by powerful language models,” and “the misuse potential of language models increases as the quality of text synthesis improves.”

Just like humans are prone to internalizing the belief systems “fed” to us, machine-learning systems mimic what’s in their training data. In GPT-3’s case, biases present in the vast training corpus of Internet text led the AI to generate stereotyped and prejudiced content. Preliminary testing at OpenAI has shown that GPT-3-generated content reflects gendered stereotypes and reproduces racial and religious biases. Because of already fragmented trust and pervasive polarization online, Internet users find it increasingly difficult to trust online content. GPT-3-generated text online would require us to be even more critical consumers of online content. The ability for GPT-3 to mirror societal biases and prejudices in its generated text means that GPT-3 online might only give more voice to our darkest emotional, civic, and social tendencies.

Because GPT-3’s underlying code remains in the hands of OpenAI and its API (the interface where users can partially work with and test out GPT-3) is not freely accessible to the public, many concerns over its implications steer our focus towards a possible future where its synthetic text becomes ubiquitous online. Due to GPT-3’s frighteningly successful “conception” of natural language as well as creative capabilities and bias-susceptible processes, many are worried that a GPT-3-populated Internet could do a lot of harm to our information ecosystem. However, GPT-3 exhibits powerful affordances as well as limitations, and experts are asking us not to project too many fears about human-level AI onto GPT-3 just yet.

GPT-3: Online Journalist

GPT-3-generated news article that research participants had the greatest difficulty distinguishing from a human-written article

Fundamentally, concerns about GPT-3-generated text online come from an awareness of just how different a threat synthetic text poses versus other forms of synthetic media. In a recent article, WIRED contributor Renee DiResta writes that, throughout the development of photoshop and other image-editing CGI tools, we learned to develop a healthy skepticism, though without fully disbelieving such photos, because “we understand that each picture is rooted in reality.” She points out that generated media, such as deepfaked video or GPT-3 output, is different because there is no unaltered original, and we will have to adjust to a new level of unreality. In addition, synthetic text “will be easy to generate in high volume, and with fewer tells to enable detection.” Right now, it is possible to detect repetitive or recycled comments that use the same snippets of text to flood a comment section or persuade audiences. However, if such comments had been generated independently by an AI, DiResta notes, these manipulation campaigns would have been much harder to detect:

“Undetectable textfakes—masked as regular chatter on Twitter, Facebook, Reddit, and the like—have the potential to be far more subtle, far more prevalent, and far more sinister … The ability to manufacture a majority opinion, or create a fake commenter arms race—with minimal potential for detection—would enable sophisticated, extensive influence campaigns.” – Renee DiResta, WIRED
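The kind of detection that works against today’s copy-paste campaigns, and that DiResta argues would fail against independently generated textfakes, can be as simple as normalizing comments and counting near-duplicates. A sketch with invented comments:

```python
# Sketch: spotting recycled comment snippets by normalizing text and counting
# repeats -- effective against copy-paste campaigns, useless against comments
# that an AI generates independently. The comments are invented.
import re
from collections import Counter

comments = [
    "This policy is a disaster for working families!!",
    "this policy is a DISASTER for working families",
    "This policy is a disaster for working families.",
    "I actually think the policy has some merit.",
]

def normalize(text: str) -> str:
    # lowercase and strip punctuation/extra spaces so trivial edits still collide
    return re.sub(r"[^a-z0-9 ]+", "", text.lower()).strip()

counts = Counter(normalize(c) for c in comments)
for text, n in counts.items():
    if n > 1:
        print(f"{n}x near-identical comment: {text!r}")
```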

In their paper “Language Models are Few-Shot Learners,” GPT-3’s developers discuss the potential for misuse and threat actors—those seeking to use GPT-3 for malicious or harmful purposes. The paper states that threat actors can be organized by skill and resource levels, “ranging from low or moderately skilled and resourced actors who may be able to build a malicious product to … highly skilled and well resourced (e.g. state-sponsored) groups with long-term agendas.” Interestingly, OpenAI researchers write that threat actor agendas are “influenced by economic factors like scalability and ease of deployment” and that ease of use is another significant incentive for malicious use of AI. It seems that the very principles that guide the development of many emerging AI models like GPT-3—scalability, accessibility, and stable infrastructure—could also be what position these models as perfect options for threat actors seeking to undermine personal and collective agency online.

Staying with the projected scenario of GPT-3 text becoming widespread online, it is useful to consider the already algorithmic nature of our interactions online. In her article, DiResta writes of the Internet that “algorithmically generated content receives algorithmically generated responses, which feeds into algorithmically mediated curation systems that surface information based on engagement.” Introducing an AI “voice” into this environment could make our online interactions even less human. One example of a possible algorithmic accomplice of GPT-3 is Google’s Autocomplete algorithm, which internalizes queries and often reflects “-ism” statements and biases when generating suggestions based on common searches. The presence of AI-generated text could populate Google’s algorithms with even more problematic content and further narrow our ability to control how we acquire neutral, unbiased knowledge.

An Emotional Problem

Talk of GPT-3 passing the Turing Test reflects many concerns about creating increasingly powerful AI. GPT-3 seems to hint at the possibility of a future where AI is able to replicate those attributes we might hope are exclusively human: traits like creativity, ingenuity, and, of course, understanding language. As Microsoft AI Blog contributor Jennifer Langston writes in a recent post, “designing AI models that one day understand the world more like people starts with language, a critical component to understanding human intent.”

Of course, as a machine-learning model, GPT-3 relies on a neural network (inspired by neural pathways in the human brain) that can process language. Importantly, GPT-3 represents a massive acceleration in scale and computing power (rather than novel machine-learning techniques), which gives it the ability to exhibit something eerily close to human intelligence. A recent Vox article on the subject asks, “is human-level intelligence something that will require a fundamentally new approach, or is it something that emerges of its own accord as we pump more and more computing power into simple machine learning models?” For some, the idea that the only thing distinguishing human intelligence from our algorithms is relative “computing power” is more than a little uncomfortable.

As mentioned earlier, GPT-3 has been able to exhibit creative and artistic qualities, generating a trove of literary content including poetry and satire. The attributes we’ve long understood to be distinctly human are now proving to be replicable by AI, raising new anxieties about humanity, identity, and the future.

GPT-3’s recreation of Allen Ginsberg’s “Howl”

GPT-3’s Limitations

While GPT-3 can generate impressively human-like text, most researchers maintain that this text is often “unmoored from reality,” and that even with GPT-3 we are still far from reaching artificial general intelligence. In a recent MIT Technology Review article, author Will Douglas Heaven points out that GPT-3 often returns contradictions or nonsense because its process is not guided by any true understanding of reality. Ultimately, researchers believe that GPT-3’s human-like output and versatility are the results of excellent engineering, not genuine intelligence. GPT-3 uses many of its parameters to memorize Internet text that doesn’t generalize easily, and essentially parrots back “some well-known facts, some half-truths, and some straight lies, strung together in what first looks like a smooth narrative,” according to Heaven. As it stands today, GPT-3 is just an early glimpse of AI’s world-altering potential, and remains a narrowly intelligent tool made by humans that reflects our conceptions of the world.

A final point of optimism is that the field around ethical AI is ever-expanding, and developers at OpenAI are looking into the possibility of automatic discriminators that may have greater success than human evaluators at detecting AI model-generated text. In their research paper, developers wrote that “automatic detection of these models may be a promising area of future research.” Improving our ability to detect AI-generated text might be one way to regain agency in a possible future with bias-reproducing AI “journalists” or undetectable deepfaked text spreading misinformation online.
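One family of automatic discriminators scores how predictable a passage is to a language model, since model-generated text tends to sit in higher-probability regions than human writing. The sketch below computes a passage’s perplexity under GPT-2 as such a heuristic; this is an illustrative assumption about how a detector might work, not OpenAI’s method, and it is far from a reliable test on its own.

```python
# Sketch: a perplexity heuristic for flagging possibly machine-generated text.
# Text that a language model finds unusually predictable (low perplexity) is
# more likely to be model-generated -- a weak signal, not a reliable detector.
import math

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # With labels provided, the model returns the average token loss.
        loss = model(**inputs, labels=inputs["input_ids"]).loss
    return math.exp(loss.item())

passage = "The quick brown fox jumps over the lazy dog near the river bank."
print(f"Perplexity under GPT-2: {perplexity(passage):.1f}")
```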

Ultimately, GPT-3 suggests that language is more predictable than many people assume, and it challenges common assumptions about what makes humans unique. Moreover, exactly what’s going on inside GPT-3 isn’t entirely clear, challenging us to continue thinking about the AI “black box” problem and about methods to figure out just how GPT-3 reproduces natural language after digesting millions of snippets of Internet text. However, perhaps GPT-3 gives us an opportunity to decide for ourselves whether even the most powerful of future text generators could undermine the distinctly human conception of the world and of poetry, language, and conversation. A tweet Heaven quotes in his article from user @mark_riedl offers one way to frame both our worries and hopes about technology like GPT-3: “Remember…the Turing Test is not for AI to pass, but for humans to fail.”

Filed Under: Computer Science and Tech, Science Tagged With: AI, AI ethics, artificial intelligence, GPT-3, online journalists, textfakes
