On Wednesday, Columbia University’s Institute for Religion, Culture, and Public Life facilitated a conversation featuring Lydia Chilton of Columbia University, Philip Butler of the Iliff School of Theology, and Timothy Beal of Case Western Reserve University on the ever-evolving role of AI in daily life as it compares to “monsters.”

On Wednesday, December 6, Columbia University’s Institute for Religion, Culture, and Public Life sponsored a panel discussion entitled “Monstrous AI,” exploring comparisons between artificial intelligence and monsters in the modern imagination. The event was moderated by Professor Lydia Liu of the Department of East Asian Languages and Cultures and featured presentations from three panelists followed by a larger discussion.

Lydia Chilton, a professor and researcher in computational design at the Columbia University School of Engineering and Applied Science, began the panel with an overview of modern artificial intelligence. Between generating text, images, and code, AI has been taking the world by storm, leaving many to ask, “What can’t it do?”

While a general fear exists in the population that AI will replace many creative jobs, the technology has some fundamental limitations. For one, AI output is decidedly “average.” “I’m sure AI could write a Hallmark movie,” Chilton said, commenting on what she perceived as the overwhelming mediocrity of AI’s creative work.

For all of their shortcomings, generative text and image technologies like GPT and DALL-E have far-reaching predictive capabilities. Both learn to predict an image or the next words in a sentence from existing information, drawing on the data that already exists on the internet. Particularly impressive is their versatility: GPT and DALL-E do not need to be trained for each particular task. This is a phenomenon known as emergent behavior, meaning that the output of an AI system cannot be predicted simply by analyzing its components.

When Professor Liu asked Chilton about her views on the AI “hype” fueled by prominent CEOs, Chilton noted that fear-mongering is rampant in conversations about the technology. Much of the fear surrounding artificial intelligence is based on its future capacities, not on the machine learning that exists now. On the potential of AI to “steal” jobs, she responded that jobs can evolve to make space for both humans and technology.

However, putting too much trust in AI may also be detrimental to human beings. If mental health service operators are replaced with machines, for example, patient outcomes could suffer. Ultimately, as creators of these machines, humans are able to control their capacities and applications. As AI development progresses, Chilton argued, regulations on AI technology and the resources it depends on should be enforced more fairly.

The next panelist, Philip Butler, is a professor at the Iliff School of Theology and the author of “Black Transhuman Liberation Theology.” His presentation focused primarily on understanding monsters and fear from a more philosophical perspective. A monster, according to Butler, simultaneously has “superhuman abilities” and a “subhuman classification.” In many ways, artificial intelligence aligns with this definition. People can use AI to complete tasks they are not quite capable of themselves, and yet they consider AI a sort of “disembodied thing” that exists below them.

In the discourse on monsters, there is also an overwhelming sense that creators may fear what they have created, said Butler. What does it mean for something as powerful and different as AI to exist in our social spheres? In many ways, AI speaks to a human fascination with change and progress simply for their own sake. Now that the technology exists on a wider scale, the true extent of its capabilities is inciting fear among humans.

Butler also argued that we may have to reevaluate our relationship with AI as it currently exists. To some extent, both human beings and artificial intelligence are a “compilation of ingredients” in a type of “lab,” and the line between man-made and organic may very well be blurry. If we recognize that “we are all made of real and artificial things,” we may be able to reconstruct our outlook on modern technology, said Butler.

The final panelist, Timothy Beal, Distinguished Professor of Religion at Case Western Reserve University and the director of h.lab, focuses on integrating humanities and technology in his work. He has been learning Python, a programming language, while experimenting with machine learning and natural language processing. 

Beal defines the monstrous as that which “seems to blur the line between self and other,” representing an “otherness within sameness.” In his presentation, Beal connected this definition back to the creation story in Genesis 1:26–27, which states that humankind was created in God’s image. In this story, Beal argued, God worries that the “creature” is going to become “one of us,” a striking parallel to the lack of control many people currently feel about the capabilities of AI.

Beal was also quick to comment on the way in which humanity’s coupled fear of and fascination with AI can act as a kind of mystification, distracting us from the planetary costs of AI and masking the AI industry’s global processes of extraction. In mentioning this, Beal touched on the inevitable inequities that will accompany the permeation of AI into society, stating that “social effect is the real message of a technology.” In other words, the ways in which AI will shape us, and already does, are the true mark of its effect.

These inequities largely stem from existing societal privileges relating to gender, race, and income, among other social classifications, a point echoed by Butler. Both panelists also mentioned the related issue of bias in AI training data, resurfacing the question of how much control we have over the technologies we create.

In all, the panel was a fascinating combination of many seemingly unrelated fields, revealing just how interdisciplinary the issue of AI truly is. The contributions of professionals in computational design, religion, philosophy, and cultural studies alike illustrated the value of cross-disciplinary research, meaningful discussion about major issues, and the participation of leaders in diverse fields in tackling complex problems.

As Butler emphasized, if one is not actively involved in the creation and deployment of technology, one will always be at the whims of those who are. In bringing these academic leaders together to discuss such difficult questions, the panel made clear that these fields are deeply intertwined and will rely on one another to pave a way forward.

Image via Bwarchives