On Monday, Staff Writer Phoebe Lu attended Professor Ruha Benjamin's lecture, titled "The New Jim Code," and learned how machine learning algorithms contribute to systemic racism.
While humans may hold prejudices, robots should be totally impartial, right? At least, that’s what multiple corporations, such as Amazon, believed when they relied on AI in their hiring process.
Yet sociologist and Princeton professor Ruha Benjamin argued that even machines can harbor the same biases as humans. Her work focuses on the relationship between technology and injustice; she is the author of People's Science and Race After Technology and the founding director of the Ida B. Wells Just Data Lab. This Monday, as the guest lecturer for the Lewis-Ezekoye Distinguished Lecture in Africana Studies, Professor Benjamin explored how the systemic prejudices embedded in our algorithms actively endanger marginalized populations.
Though AI remains a relatively contemporary topic, Professor Benjamin began her lecture by tracing back to the 19th century and the French naturalist Georges Cuvier. Cuvier argued that "the White race" was superior based on its "oval face, straight hair and nose" and claimed that Black facial traits were associated with "barbarism." By mapping his racial biases onto physical features, Cuvier attempted to justify his racism by grounding it in human biology, making it seem "unimpeachable" or even "natural." Once such racist assertions come to be seen as "natural," people become less and less likely to question them.
Professor Benjamin’s lecture is thus a call to “denaturalize.”
Though the two are separated by roughly two centuries, Professor Benjamin's lecture revealed similarities between Cuvier's pseudoscience and modern biases in AI. She explored how machine-learning algorithms now play a major part in healthcare, in hiring procedures, and in medical devices. Because companies assume AI's objectivity is "unimpeachable" as a product of science, they fail to question whether such algorithms are only deepening the existing harms of systemic racism and sexism. When these algorithms run unchecked by the organizations that use them, they normalize racism across facets of daily life.
Professor Benjamin introduced numerous examples of how machine racism exists in our society. Amazon's AI-based hiring process quickly revealed a bias against women. Digital tools used to assess a patient's risk produce the same risk score for Black and White patients even when the Black patient is much sicker. The pulse oximeter, a device widely used to monitor blood oxygen levels in COVID-19 patients, produces less accurate readings for nonwhite people. As COVID numbers spike across the country, the potential ramifications of this error are life-threatening. Because AI algorithms are built off of real-world data, the prejudices of the real world translate into their functionality. As Professor Benjamin stated, "racism is constructed": human prejudice constructs machine prejudice, which then continues to build on existing inequality.
It's unrealistic to rely on supposedly impartial machine-learning algorithms, or what Professor Benjamin calls a "social justice bot," to save us all from systemic injustices. Instead, she believes that human imagination holds the potential to break the cycle of constructed racism.
She gave the example of the White Collar Crime Risk Zones app, which informs users if they're in an area where a white-collar crime is likely to occur. It also superimposes an enlarged photo of the most likely culprit (most often a white male) on the map of your region (imagine a Jared Kushner lookalike the size of your fist). The app closely mirrors Citizen, a crime-reporting app criticized for its propensity to encourage racial profiling of people of color. By employing traditional policing techniques to instead expose wealthy white men, WCCRZ flips the script on the over-policing of poor, predominantly Black and Brown communities. WCCRZ's website states that "unlike typical predictive policing apps which criminalize poverty, White Collar Crime Risk Zones criminalizes wealth."
Professor Benjamin cited WCCRZ as an example of how imagination can rectify digital bias. WCCRZ repurposed algorithms that have traditionally exacerbated racism to instead make a statement against racial profiling, showing that racial awareness can coexist with technology.
Yet some efforts to correct prejudice are prejudiced in themselves. Professor Benjamin introduced the example of Google, which, attempting to make its facial recognition software accurate across racial demographics, approached homeless people and offered them $5 gift cards in exchange for allowing the software to scan their faces. However, Google didn't inform participants about any specific details of the product, nor did participants sign any formal contracts. Google thus took an unethical route in an effort to make its software more ethical. Reflecting on Google's error, Professor Benjamin suggested that a more diverse or more racially aware team might have devised a better way to achieve representation.
Speaking further on the issue of representation during the Q&A, Professor Benjamin noted that while more marginalized voices in technology spaces can generate good ideas, such ideas might not always thrive in the marketplace. She cited the example of a developer who wanted to make a virtual assistant like Siri, but with the voice of a Black man, in order to disrupt the norm of white female voices as assistants. Ultimately, the developer decided against it, knowing the product wouldn't sell.
While the marketplace may be less receptive to change, Professor Benjamin offered alternative ways to combat discrimination in AI. She introduced Data for Black Lives, a movement using data to combat racial bias, and the Detroit Community Technology Project, which works to make technology more accessible to everyday citizens. She also advised the audience to read the "Advancing Racial Literacy in Tech" paper from the Data & Society Research Institute.
Ultimately, Professor Benjamin concluded her lecture by encouraging each member of the audience to recognize their role as "pattern-makers," critical in reinventing the inequity "woven into the very fabric of society." Her call to restore human imagination to the technological world finds an apt setting in the annual Lewis-Ezekoye Distinguished Lecture, which itself honors the imagination of Mrs. Denise Jackson-Lewis '66 and Adaeze Otue Ezekoye '66, sparked during late-night conversations in their dorm when they both attended Barnard.
“robots” via Creative Commons.