GW Journal of Ethics in Publishing

Bias, on Bias, on Bias: Concerns Surrounding Bias with ChatGPT

Shelby E. Jenkins, M.A.

American Psychological Association

Author Note

Shelby E. Jenkins, M.A. (0009-0001-3369-338X) is now at the APA Books and Journals Department, American Psychological Association. APA affiliation is listed for identification purposes only; the opinions expressed in this article are solely those of the author.

Correspondence concerning this commentary should be addressed to Shelby E. Jenkins, 750 First Street, NW, Washington, DC. Email: sjenkins@apa.org

Here we are again. There is little I can do to avoid it, so I make my peace with my anxiety and prepare myself for the inevitable. As I exit my stall in the restroom and walk over to the automated soap dispenser, faucet, and paper towel dispenser, I prepare to test as many as possible just to complete the act of washing my hands. This has been the case ever since this technology was made widely available in public restrooms. And while many may struggle with this technology because they do not understand how it works, it is literally because of how it works that I struggle to use it with my darker skin. So, as always, I start with the faucet; the others you can work around, but you need water. Eureka! The first faucet I try recognizes my hand; I am elated. I try the soap dispenser. Again, a win! It works with little effort on my part. Ah, but the paper towel dispenser has always been the worst of the three. More often than I’d like to admit, I’m seen waving my hand around the machine like I’m trying to invoke a spell. And yet, it works! With a slow, deliberate wave of my hand, a paper towel sheet slowly dispenses, and I am now beaming from ear to ear in the women’s public restroom at Reagan National Airport. The ladies waiting in line for an open stall see me break out into a large grin and are puzzled by my show of happiness, and I understand why. They were all White.

Fighting with the automated soap dispenser, faucet, and paper towel dispenser has been the story of my life since they were introduced into the workplace. I always assumed it was faulty technology, just one more machine that, if it worked correctly, could save water and reduce paper towel waste, but only when it worked. It wasn’t until articles and stories started being published about how the technology worked and how people with darker skin tones have issues using these devices (Goethe, 2019; Plenke, 2015) that I realized this wasn’t about faulty technology. It was about someone creating technology for a specific group of people without any thought given to others. In learning this, I thought back on my conversations with my White colleagues about my frustrations with these devices and realized they always seemed confused about how often I complained. This “fight” (as I call it) happened often, every time I went to the restroom. Then, as soon as the revelation of this information hit, I was reminded of the “veil.” In one of my favorite books, The Souls of Black Folk, W.E.B. DuBois (2005) defines the veil as a kind of second sight. In chapter one, it reads:

…the Negro is a sort of seventh son, born with a veil, and gifted with second-sight in the American world, –a world which yields him no true self-consciousness, but only lets him see himself through the revelation of the other world. It is a peculiar sensation, this double-consciousness, this sense of always looking at one’s self through the eyes of others, of measuring one’s soul by the tape of a world that looks on in amused contempt and pity. One ever feels his twoness, –an American, a Negro; two souls, two thoughts, two unreconciled strivings; two warring ideals in one dark body, whose dogged strength alone keeps it from being torn asunder.

In learning how these devices worked, I found I had foolishly allowed myself to believe that this technology was made with my dark skin in mind, that I was not forgotten, and that there was no difference between my world and my White colleagues’ when it came to technology of this nature. But once my eyes were opened to the neglectful ways technology ignored my existence, I found I could not unsee it.

Unconscious Bias

Countless articles began to be published outlining how BIPOC individuals struggled with recently released technology. When Microsoft unveiled its Xbox Kinect system, darker-skinned users reported that the system did not recognize their faces or movements (Ionescu, 2010). When Apple released facial recognition software to unlock the iPhone, a user in China found that a colleague, who was also of Chinese descent, could unlock her phone (Papenfuss, 2017). Both of these issues were quickly dismissed as nothing significant; the companies made it clear that the technology they were creating wasn’t biased and instead put the onus on how it was being used. And yet, when you look at the tech world and see articles about the lack of diversity in America’s technology hub, with Silicon Valley, for example, having less than 3% of its tech workers identify as Black (Ioannou, 2018; Noble, 2020), you can’t help but wonder if they, once again, created technology for a specific group of people without any thought given to others.

Worldwide Application

While I know bias exists in our society, what I find frustrating about bias in our technology is that it is often sold as an equal and universal tool (Noble, 2018), a tool all people can use to improve daily life (Broussard, 2023). Facial recognition, for example, is said to significantly improve efficiency, which is why the technology is everywhere (Klosowski, 2020). However, we also know this same technology is full of bias against people with darker skin (Broussard, 2023). And while strides have been made in this area, Black and Asian men are still 100 times more likely to be misidentified via facial recognition (Koenecke et al., 2020). Automated speech recognition is also everywhere: it’s in our phones, TVs, and gaming consoles, yet it too has been shown to hold bias (Lloreda, 2020). The application of innovative technology can be boundless. However, if that same “universal” technology is biased, it can undermine the very innovation it was meant for and instead reinforce bias against the most at-risk people (Abraham, 2023; Broussard, 2023).

Systematic Racism Disguised as Neutral

What I also find deceitful about this technology is the lack of full disclosure. There is no small print on the item when you purchase or use it that says, “have lighter skin for best use.” At least that way I would know what I was getting myself into. Instead, it is often packaged and sold as a way to reduce bias. For instance, hiring managers use software that weeds out applicants who are not suitable for the position. Forget the days of using initials in your first or middle name so it isn’t obvious you are a woman, or of removing affiliations known to be associated with specific BIPOC groups. The pitch is that using this software removes the conscious and/or unconscious bias of hiring managers and allows the best applicants to be highlighted. Being a hiring manager myself, I appreciate saving the time of reviewing an applicant who has zero experience for a position listed as needing five years’ worth. However, software like this has been found to be full of bias, whether in the algorithms used to find “relevant” job seekers or in the predictive models used to forecast whether an applicant will be successful in the position (Bogen, 2019). There is something to be said when one of the world’s biggest companies had to scrap its AI recruiting tool for being sexist (Dastin, 2018).
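To make this concrete, below is a minimal, hypothetical sketch in Python of the kind of screening logic such software might use. Every field name, weight, and rule here is invented for illustration; it is not drawn from any real hiring product. The point is that rules which never mention race or gender can still encode bias through the proxies they reward and penalize.

from dataclasses import dataclass

@dataclass
class Applicant:
    name: str
    years_experience: float
    employment_gap_months: int  # e.g., caregiving, illness, layoffs
    zip_code: str               # strongly correlated with race in the U.S.

REQUIRED_YEARS = 5  # the experience threshold from the job posting

def score(applicant: Applicant) -> float:
    # A "neutral-looking" score used to rank applicants.
    s = 0.0
    if applicant.years_experience >= REQUIRED_YEARS:
        s += 1.0
    # Seemingly objective rules act as proxies: penalizing employment gaps
    # disadvantages caregivers, and location-based "fit" bonuses can stand
    # in for race or class.
    if applicant.employment_gap_months > 6:
        s -= 0.5
    if applicant.zip_code.startswith("200"):  # arbitrary "preferred area" rule
        s += 0.5
    return s

applicants = [
    Applicant("A", years_experience=6, employment_gap_months=0, zip_code="20001"),
    Applicant("B", years_experience=7, employment_gap_months=12, zip_code="60621"),
]

# The ranking looks objective, yet it simply reflects the proxies baked into score().
for a in sorted(applicants, key=score, reverse=True):
    print(a.name, score(a))

Nothing in this filter names a protected group, which is exactly what makes software like this so difficult to audit from the outside.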

Lack of Diversity in the Development of Technology

I was speaking with a friend about my mistrust of new technology and how, even though this bias is constant and prevalent in almost every facet of life, I did not believe the lack of consideration of BIPOC individuals was malicious. She summed it up very succinctly: repeatedly ignoring a part of the population in the development of technology is, in fact, malicious. And in her words, I was reminded of an interview with Dr. Maya Angelou in which she said, “When people show you who they are, believe them” (Winfrey, 2000). So how can I, in good conscience, trust AI technology like ChatGPT, an AI built on the premise of being able to replicate human conversation? The same humans that remind me constantly that new technology is not made for me (Broussard, 2023; Noble, 2018). The same AI technology that has already been subject to questions about its bias (Gow, 2022). The same kind of system whose earlier iterations show evidence of being used against people who look and speak like me. What is most frustrating about this is that I can see how the application of technology like ChatGPT could revolutionize the work we do as a society if we can address the bias (Silberg & Manyika, 2019), just as I could see the impact an automated faucet and paper towel dispenser could have on the environment if it could recognize darker skin. People have already started using ChatGPT to help facilitate ideas, translate text, or write papers, and this is just the beginning. Once there is worldwide acceptance and daily use for AIs like ChatGPT, it will be everywhere and completely unavoidable.

Lack of Transparency

However, I already see signs that this technology might be full of bias. While there is a general idea of how ChatGPT was created and the algorithms it is built upon, there is no clear understanding of exactly what information went into building its response system (Ramponi, 2022). The biggest concern for me is the use of reinforcement learning from human feedback. For ChatGPT to respond in a way that is more in line with the questions it is being asked, the process used to develop it relies on human intervention to rank responses that align with the user’s request (Ramponi, 2022). I’m sure everyone has had an experience with a help-desk bot on a website that could not understand the question it was being asked. ChatGPT was built to eliminate that issue by using reinforcement learning; in doing so, it presents the user with a response that matches what they were looking for. And since the questions, the answers, and the rankings used to create ChatGPT are not public, one can only hope that they do not contain bias. In addition, even when using ChatGPT to build text for a paper, you are unable to identify the sources it used to create the text it provided when prompted. For example, suppose you ask ChatGPT to write a three-page paper on the Civil War. ChatGPT will generate three pages of text as you requested, but if you ask for the references it used to build the text, it does not provide accurate citations. In one recent case, a lawyer used ChatGPT to prepare a court filing; however, the citations used in the brief were nonexistent (Weiser, 2023). So not only do I not know what information ChatGPT used to build its responses, I also cannot confirm it is not using biased source material to build a prompted response. To address these concerns, there needs to be more transparency in the creation of, and the citations used by, AIs like ChatGPT. Allowing users to know the full scope of the reinforcement learning process would create a framework for all future AI tools and allow for a level of peer review outside the limited scope of the creators.
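As a rough illustration of the ranking step described above, here is a minimal sketch in Python of how human preference rankings can be turned into training pairs for a reward model. The names, the toy scoring rule, and the example responses are all hypothetical; OpenAI has not published the actual prompts, rankings, or code behind ChatGPT, which is exactly the transparency gap at issue.

from itertools import combinations

# Hypothetical example: one prompt and three candidate responses,
# ordered by a human labeler from most to least preferred.
prompt = "Explain the causes of the Civil War."
ranked_responses = [
    "Response A ...",  # labeler's top choice
    "Response B ...",
    "Response C ...",  # labeler's least preferred
]

def preference_pairs(ranked):
    # Each ranked list becomes (preferred, rejected) pairs used to train
    # a reward model that scores responses the way labelers did.
    return [(ranked[i], ranked[j]) for i, j in combinations(range(len(ranked)), 2)]

def toy_reward_model(prompt, response):
    # Stand-in for the learned reward model (a large neural network in
    # practice); here it just scores longer responses higher.
    return float(len(response))

pairs = preference_pairs(ranked_responses)

# The chat model is then fine-tuned to produce responses the reward model
# scores highly, so any bias in the hidden prompts, labeler guidelines, or
# rankings is carried silently into the final system.
best = max(ranked_responses, key=lambda r: toy_reward_model(prompt, r))
print(len(pairs), "training pairs; highest-scoring response:", best)

Because none of the real inputs to this process are public, there is no way for an outside reader to check whether the rankings themselves carry the kinds of bias described throughout this commentary.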

The fact of the matter is that AIs like ChatGPT will be widely adopted. The technology is far too innovative, and offers society too many benefits, to be ignored. The issue is the bias that is most likely integrated within the AI. There is no clear way to confirm that this bias, whether unconscious or not, does not exist in the creation of this technology as well as in the responses it provides. And if I’ve learned anything from my experiences with new technology and the research done on ChatGPT, it is most likely built with bias. And while I would love to believe that is not the case, the tech world has shown me time and time again who they are, so why wouldn’t I believe them?

References

Abraham, R. (2023, February 22). AI use by cops, child services in NYC is a mess: Report. Vice. https://www.vice.com/en/article/3adxak/nypd-child-services-ai-facial-recognition

Bogen, M. (2019, May 6). All the ways hiring algorithms can introduce bias. Harvard Business Review. https://hbr.org/2019/05/all-the-ways-hiring-algorithms-can-introduce-bias

Broussard, M. (2023). More than a glitch: Confronting race, gender, and ability bias in tech. The MIT Press.

Dastin, J. (2018, October 10). Amazon scraps secret AI recruiting tool that showed bias against women. Reuters. https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G

DuBois, W. E. B. (2005). The souls of Black folk. Bantam Classics. https://www.amazon.com/Souls-Black-Folk-Bantam-Classics-ebook/dp/B000FCKAVW

Goethe, T. S. (2019, March 2). Bigotry encoded: Racial bias in technology. Reporter. https://reporter.rit.edu/tech/bigotry-encoded-racial-bias-technology

Gow, G. (2022, July 17). How to use AI to eliminate bias. Forbes. https://www.forbes.com/sites/glenngow/2022/07/17/how-to-use-ai-to-eliminate-bias/?sh=40ee87701f1f

Ioannou, L. (2018, June 20). Silicon Valley’s Achilles’ heel threatens to topple its supremacy in innovation. CNBC. https://www.cnbc.com/2018/06/20/silicon-valleys-diversity-problem-is-its-achilles-heel.html

Ionescu, D. (2010, November 4). Is Microsoft’s Kinect racist? PCWorld. https://www.pcworld.com/article/504514/is_microsoft_kinect_racist.html

Klosowski, T. (2020, July 15). Facial recognition is everywhere. Here’s what we can do about it. Wirecutter, The New York Times. https://www.nytimes.com/wirecutter/blog/how-facial-recognition-works/

Koenecke, A., Nam, A., Lake, E., Nudell, J., Quartey, M., Mengesha, Z., Toups, C., Rickford, J. R., Jurafsky, D., & Goel, S. (2020, March 23). Racial disparities in automated speech recognition. PNAS. https://doi.org/10.1073/pnas.1915768117

Lloreda, C. L. (2020, July 5). Speech recognition tech is yet another example of bias. Scientific American. https://www.scientificamerican.com/article/speech-recognition-tech-is-yet-another-example-of-bias/

Noble, S. U. (2018). Algorithms of oppression: How search engines reinforce racism. New York University Press.

Noble, S. U. (2020, July 1). The loss of public goods to big tech. Noema. https://www.noemamag.com/the-loss-of-public-goods-to-big-tech/

Papenfuss, M. (2017, December 14). Woman in China says colleague’s face was able to unlock her iPhone X. HuffPost. https://www.huffpost.com/entry/iphone-face-recognition-double_n_5a332cbce4b0ff955ad17d50

Plenke, M. (2015, September 9). The reason this "racist soap dispenser" doesn't work on black skin. Mic. https://www.mic.com/articles/124899/the-reason-this-racist-soap-dispenser-doesn-t-work-on-black-skin

Ramponi, M. (2022, December 23). How ChatGPT actually works. AssemblyAI. https://www.assemblyai.com/blog/how-chatgpt-actually-works/

Silberg, J., & Manyika, J. (2019, June 6). Tackling bias in artificial intelligence (and in humans). McKinsey Global Institute. https://www.mckinsey.com/featured-insights/artificial-intelligence/tackling-bias-in-artificial-intelligence-and-in-humans

Weiser, B. (2023, May 27). Here’s what happens when your lawyer uses ChatGPT. The New York Times. https://www.nytimes.com/2023/05/27/nyregion/avianca-airline-lawsuit-chatgpt.html

Winfrey, O. (2000, December). Oprah talks to Maya Angelou. Oprah.com. https://www.oprah.com/omagazine/oprah-interviews-maya-angelou/2
