How does AI reap the rewards of human labor, even though it is primarily used to lessen our workloads?

Columbia University’s Center for Science and Society is currently hosting a series of workshops on AI. Today, artificially generated material is a pervasive part of our lives. From online content to academic research, it’s hard to do anything digitally without some form of AI popping up. The series sets out to demystify artificial intelligence and to invite members of both the Columbia community and the public at large to consider AI’s role in today’s society.

The second workshop in the series, Mapping AI: Labor, was held on March 4 over Zoom. The event focused on the many forms of exploitation AI relies on, as well as its growing influence on campus and beyond. While Large Language Models (LLMs) are what we typically think of as AI, other AI tools, such as facial recognition and medical AI, came up frequently over the course of the event.

Two speakers took part in the event: Alex Press, a staff writer at Jacobin who reports on union efforts, and Adrienne Williams, a researcher at the Distributed AI Research Institute who advocates for greater public understanding of technology. Both work closely with workers affected by AI and study its impact on labor as a whole.

For the first part of the event, we listened to the speakers’ own experiences with and opinions on AI today. Adrienne Williams started off with the bold claim that AI was grown in “toxic soil, thus growing toxic fruit.” She drew on her previous experience as an Amazon driver, which was her “Job A” but also entailed an invisible “Job B” that Amazon would illegally profit from. That “Job B” was data collection through Netradyne cameras, which provide a 360° view of the areas a delivery truck visits, gathering data on the neighborhoods along its route.

This data is shared with autonomous vehicle companies, which cannot afford to collect data at this scale on their own without partnering with Amazon and Netradyne. Williams argued that if more drivers knew how much they contribute to Amazon’s profits, they would demand fair compensation. She clarified, however, that this kind of exploitation happens to everyone: even the act of using a Microsoft product supplies data to the company.

In Williams’s view, AI is not necessarily taking jobs so much as taking “chopped-up high paying jobs,” with work “broken into piecework.” Billionaires search for the cheapest possible workforce to maximize profit; by breaking jobs down into “unskilled” tasks that AI can easily complete, they turn AI into the next force to exploit for profit. Williams likened AI to the shift toward assembly lines during the Industrial Revolution. The figures who led that revolution gave little thought to its negative consequences, and Williams claims we are heading toward a similar socioeconomic transformation. This time, according to Williams, we have an opportunity “right at the start of change” to “put people over money.”

The next speaker, Alex Press, took up Williams’s Amazon example, stating that it showed how the use of AI is not “just futuristic, but intrusive.” Press noted that Amazon has a hand in other AI-driven issues laborers are confronting, such as productivity monitoring and route optimization. Press claimed that AI has always been about power and how it gets used. Once this is understood, Press emphasized, “the ‘mystique’ about AI fades away.”

During the 2023 Hollywood strikes, artists demonstrated against the use of artificial intelligence in their respective fields. Studios were unprepared for how determined the strikers were, and eventually reached a tentative agreement with the artists. A similar situation occurred at Boston University, where a dean suggested AI could be used to replace striking graduate students.

But most industries have neither strong unions nor any preparation for what AI can bring to the table. Another problem with the conversation around AI is that it happens mainly after the technology is already available to the public. Striking workers, Press argued, function as the “early warning signs for society.” As Amazon drivers and grad students alike continue striking and adapting to changes brought by AI adoption, Press reminded us that we would be “wise to heed those warnings.”

The floor was then opened to questions from the audience. One audience member brought up our limited knowledge of the future and asked whether concerns about AI were simply “fear-mongering.” The speakers and several other audience members responded that there is more than enough research and evidence to show otherwise. Press stressed the importance of caution, especially given the speed at which AI develops. Another audience member asked why, if AI takes on our workload, we should view it as a bad thing. Williams responded that this logic sounded a lot like trickle-down economics, while another attendee observed that the “price of AI is socialized but profits are privatized.”

One argument raised in favor of AI was that it can be used for good just as easily as for ill. But it’s difficult to weigh its good qualities when it carries such disastrous environmental drawbacks. It’s worth noting that earlier AI, such as simple machine learning algorithms and expert systems, did not share the energy consumption problems of modern AI.

The main issue we found is that AI today is treated as if it were real intelligence, with little human input on where it is used and how it is produced. A balance needs to be struck between AI and human labor, and right now the imbalance is going unchecked.

After an intermission, we moved on to the workshop portion of the event, where we were given time to discuss what kinds of AI are active on campus.

Our group was a mix of students, faculty, and community members, each of us bringing different experiences with AI to the table. One Barnard professor shared her experience with Millie, an AI tool available to Barnard faculty and staff that is meant to answer questions about employee benefits. She was suspicious of the disclaimer it generates after every response, questioning why staff are encouraged to use Millie for assistance if the tool itself claims to be unreliable. Another attendee recalled that at her job at IBM, layoffs are becoming increasingly common as the company leans further into AI. The NYPD’s use of facial recognition software to identify pro-Palestinian Columbia students was also brought up. Several other students and I talked about how we’ve seen our peers use AI for everything from job applications to class assignments, which in turn may be examined by AI again. We circled back to the idea of universities as places of higher learning, and to the fact that some part of that learning is now being handed off to AI. We wondered: could this change how students learn in the future?

We were unable to reach a solution that fully addresses the challenges AI brings to the classroom and the workforce. But solving these issues was never the primary goal of the workshop. Instead, our discussions made it clear that many people feel uneasy about the direction AI is heading. The future may seem bleak, but as this workshop showed, the only way through is together.

The next workshop in this series can be accessed here: Mapping AI: Power.

Images via Author