{"id":1700,"date":"2024-12-08T12:27:17","date_gmt":"2024-12-08T17:27:17","guid":{"rendered":"https:\/\/students.bowdoin.edu\/bowdoin-science-journal\/?p=1700"},"modified":"2024-12-08T12:27:17","modified_gmt":"2024-12-08T17:27:17","slug":"machine-learning-and-algorithmic-bias","status":"publish","type":"post","link":"https:\/\/students.bowdoin.edu\/bowdoin-science-journal\/csci-tech\/machine-learning-and-algorithmic-bias\/","title":{"rendered":"Machine learning and algorithmic bias"},"content":{"rendered":"<p><span style=\"font-weight: 400\">Algorithms permeate modern society, especially AI algorithms. Artificial intelligence (AI) is built with various techniques, such as machine learning, deep learning, and natural language processing, that train it to mimic humans at a given task. Healthcare, loan approval, and security surveillance are a few industries that have begun using AI (Alowais et al., 2023; Purificato et al., 2022; Choung et al., 2024). Most people now interact with AI daily, often without realizing it.<\/span><\/p>\n<p><span style=\"font-weight: 400\">However, what problems does an increasingly algorithmic society face? In their article, Sina Fazelpour and David Danks explore this question in the context of algorithmic bias. Indeed, the problem they identify is that AI perpetuates bias. At its most neutral, Fazelpour and Danks (2021) explain that algorithmic bias is some \u201csystematic deviation in algorithm output, performance, or impact, relative to some norm or standard,\u201d suggesting that algorithms can be biased against a moral, statistical, or social norm. Fazelpour and Danks use a running example of a university training an AI algorithm on past student data to predict future student success. Such an algorithm exhibits a statistical bias if its predictions of student success deviate from historical outcomes (the training data). 
Similarly, the algorithm exhibits a moral bias if it illegitimately depends on the student\u2019s gender to produce a prediction. This is already evident in facial recognition algorithms that \u201cperform worse for people with feminine features or darker skin\u201d and in recidivism prediction models that rate people of color as higher risk (Fazelpour &amp; Danks, 2021). Clearly, algorithmic biases have the potential to preserve or exacerbate existing injustices under the guise of being \u201cobjective.\u201d\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400\">Algorithmic bias manifests in different ways. As Fazelpour and Danks discuss, harmful bias can appear even before an algorithm is created if values and norms are not carefully considered. In the example of a student-success prediction model, universities must make value judgments, specifying which target variables define \u201cstudent success,\u201d whether that means grades, respect from peers, or post-graduation salary. The more complex the goal, the more difficult and contested the choice of target variables becomes. Indeed, choosing target variables is itself a source of algorithmic bias. As Fazelpour and Danks explain, enrollment or financial aid decisions based on predicted student success may discriminate against minority students if first-year performance is used in that prediction, since minority students may face additional challenges in their first year.<\/span><\/p>\n<p><span style=\"font-weight: 400\">Biased training data will also lead to bias in an AI algorithm. In other words, bias in the measured world will be reflected in AI algorithms that mimic our world. For example, recruiting AI that reviews resumes is often trained on the resumes of employees a company has already hired. 
In many cases, so-called gender-blind recruiting AI has discriminated against women by picking up on gendered information in resumes, information absent from the resumes of a majority-male workforce (Pisanelli, 2022; Parasurama &amp; Sedoc, 2021). Fazelpour and Danks also note that biased data can arise from limitations and biases in measurement methods. This is what happens when facial recognition systems are trained predominantly on white faces. Consequently, these systems are less effective for individuals who do not resemble the data they were trained on.<\/span><\/p>\n<p><span style=\"font-weight: 400\">Alternatively, users\u2019 misinterpretations of predictive algorithms may produce biased results, Fazelpour and Danks argue. An algorithm is optimized for one purpose, and users may unknowingly apply it to another. A user could misread predicted \u201cstudent success\u201d as a measure of grades when the algorithm is actually optimized to predict something else (e.g., likelihood of dropping out). Decisions stemming from such misinterpretations are prone to bias, and not only for the reasons above. Misunderstandings of algorithmic predictions also lead to poor decisions when the variables that predict an outcome are assumed to cause that outcome. Students in advanced courses may be predicted to have higher student success, but as Fazelpour and Danks put it, we shouldn\u2019t enroll every underachieving student in an advanced course. Such models should also be applied in contexts similar to those in which their historical data were collected, a constraint that grows more important the longer a model is used, as present data begins to differ from the historical training data. 
In other words, student success models created for a small private college should not be deployed at a large public university, nor relied on many years later.<\/span><\/p>\n<p><span style=\"font-weight: 400\">Fazelpour and Danks establish that algorithmic bias is nearly impossible to eliminate; solutions must often engage with the complexities of our society. The authors discuss several technical solutions, such as optimizing an algorithm with \u201cfairness\u201d as a constraint or training an algorithm on corrected historical data. These quickly reveal themselves to be problematic, as defining fairness is itself a difficult value judgment. Nonetheless, algorithms provide tremendous benefits, even moral and social ones: they can expose existing biases and serve as better alternatives to flawed human practices. Fazelpour and Danks conclude that algorithms should continue to be studied in order to identify, mitigate, and prevent bias.<\/span><\/p>\n<p><strong>References<\/strong><\/p>\n<div>\n<p>Alowais, S. A., Alghamdi, S. S., Alsuhebany, N., Alqahtani, T., Alshaya, A. I., Almohareb, S. N., Aldairem, A., Alrashed, M., Saleh, K. B., Badreldin, H. A., Yami, M. S. A., Harbi, S. A., &amp; Albekairy, A. M. (2023). Revolutionizing healthcare: The role of artificial intelligence in clinical practice. <i>BMC Medical Education<\/i>, <i>23<\/i>(1). <span class=\"url\">https:\/\/doi.org\/10.1186\/s12909-023-04698-z<\/span><\/p>\n<p>Choung, H., David, P., &amp; Ling, T. (2024). Acceptance of AI-powered facial recognition technology in surveillance scenarios: Role of trust, security, and privacy perceptions. <i>Technology in Society<\/i>, 102721. <span class=\"url\">https:\/\/doi.org\/10.1016\/j.techsoc.2024.102721<\/span><\/p>\n<p>Fazelpour, S., &amp; Danks, D. (2021). Algorithmic bias: Senses, sources, solutions. <i>Philosophy Compass<\/i>, <i>16<\/i>(8). <span class=\"url\">https:\/\/doi.org\/10.1111\/phc3.12760<\/span><\/p>\n<p>Parasurama, P., &amp; Sedoc, J. (2021, December 16). 
<i>Degendering resumes for fair algorithmic resume screening<\/i>. arXiv. <span class=\"url\">https:\/\/arxiv.org\/abs\/2112.08910<\/span><\/p>\n<p>Pisanelli, E. (2022). Your resume is your gatekeeper: Automated resume screening as a strategy to reduce gender gaps in hiring. <i>Economics Letters<\/i>, <i>221<\/i>, 110892. <span class=\"url\">https:\/\/doi.org\/10.1016\/j.econlet.2022.110892<\/span><\/p>\n<p>Purificato, E., Lorenzo, F., Fallucchi, F., &amp; De Luca, E. W. (2022). The use of responsible artificial intelligence techniques in the context of loan approval processes. <i>International Journal of Human-Computer Interaction<\/i>, <i>39<\/i>(7), 1543\u20131562. <span class=\"url\">https:\/\/doi.org\/10.1080\/10447318.2022.2081284<\/span><\/p>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>Algorithms permeate modern society, especially AI algorithms. Artificial intelligence (AI) is built with various techniques, such as machine learning, deep learning, and natural language processing, that train it to mimic humans at a given task. 
Healthcare, loan approval, and security surveillance are a few industries that have begun using AI (Alowais et al., 2023; Purificato et [&hellip;]<\/p>\n","protected":false},"author":716,"featured_media":1765,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_genesis_hide_title":false,"_genesis_hide_breadcrumbs":false,"_genesis_hide_singular_image":false,"_genesis_hide_footer_widgets":false,"_genesis_custom_body_class":"","_genesis_custom_post_class":"","_genesis_layout":"","footnotes":""},"categories":[65],"tags":[85,87,96,108],"class_list":{"0":"post-1700","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-csci-tech","8":"tag-ai","9":"tag-ai-ethics","10":"tag-artificial-intelligence","11":"tag-ethics","12":"entry"},"featured_image_src":"https:\/\/students.bowdoin.edu\/bowdoin-science-journal\/wp-content\/uploads\/sites\/35\/2024\/12\/ai-e1733689523841-572x400.png","featured_image_src_square":"https:\/\/students.bowdoin.edu\/bowdoin-science-journal\/wp-content\/uploads\/sites\/35\/2024\/12\/ai-600x600.png","author_info":{"display_name":"Mauricio Cuba Almeida 
'27","author_link":"https:\/\/students.bowdoin.edu\/bowdoin-science-journal\/author\/mcubaalmeida\/"},"_links":{"self":[{"href":"https:\/\/students.bowdoin.edu\/bowdoin-science-journal\/wp-json\/wp\/v2\/posts\/1700","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/students.bowdoin.edu\/bowdoin-science-journal\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/students.bowdoin.edu\/bowdoin-science-journal\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/students.bowdoin.edu\/bowdoin-science-journal\/wp-json\/wp\/v2\/users\/716"}],"replies":[{"embeddable":true,"href":"https:\/\/students.bowdoin.edu\/bowdoin-science-journal\/wp-json\/wp\/v2\/comments?post=1700"}],"version-history":[{"count":0,"href":"https:\/\/students.bowdoin.edu\/bowdoin-science-journal\/wp-json\/wp\/v2\/posts\/1700\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/students.bowdoin.edu\/bowdoin-science-journal\/wp-json\/wp\/v2\/media\/1765"}],"wp:attachment":[{"href":"https:\/\/students.bowdoin.edu\/bowdoin-science-journal\/wp-json\/wp\/v2\/media?parent=1700"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/students.bowdoin.edu\/bowdoin-science-journal\/wp-json\/wp\/v2\/categories?post=1700"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/students.bowdoin.edu\/bowdoin-science-journal\/wp-json\/wp\/v2\/tags?post=1700"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}