On Tuesday, January 24, the student group Columbia Policy Institute (CPI) discussed ChatGPT and its ramifications in the university setting for their first meeting of the new semester!
I didn’t know what ChatGPT stood for; as it turns out, it means Chat Generative Pre-Transformer. But ever since OpenAI’s new chatbot entered the scene in November 2022, the “generative” aspect—aka ChatGPT’s ability to do most anything from writing essays and code to solving problem sets for students—has been raising questions in academic circles about the ethics, logistics, and future implications of using ChatGPT in a school setting.
Last week, the Columbia Policy Institute, an undergraduate-run student group, met for the first time of the Spring semester for a highly attended program called “Demystifying ChatGPT: A Guide to Surviving Generative AI.” Soham Mehta (CC ’24), CPI’s Technology Policy Center Director, started out the meeting by passing out a guide with background on what Generative AI is and how it works, as well as information on ChatGPT in particular and its possibilities for use ranging from the innocuous to the malicious.
As opposed to other forms of AI which classify existing data, generative AI is uniquely able to create brand new content, including text, images, and music, according to the fact sheet that CPI provided. In order to be able to generate content, generative AI works by training a model on a very large dataset of examples. (ChatGPT was trained on a dataset with 45 TB of text and 175 billion parameters). Most of this data that OpenAI used to train the bot came from Common Crawl, a nonprofit that extracts information from web pages to create publicly available datasets in an online repository. In order to generate new content, ChatGPT uses what is known as the Generative Pre-Trained Transformer 3 language model (GPT-3), which creates text by “reading” examples and “paying attention” to factors like the structure of language.
ChatGPT’s unique generative abilities allow it to create “original” essays, articles, stories, poems, songs, skits, and more. It can also translate text, summarize articles, edit writing, and produce social media content, as well as writing and debugging code. However, even though ChatGPT was trained on a dataset with ten times the parameters of any previous model, making it theoretically the most advanced chatbot of its kind, no chatbot is without its errors, and ChatGPT is still susceptible to the kinds of serious blunders that have affected previous bots of its kind.
For example, in 2016, after trolls started inundating Microsoft’s “Tay’s Chatbot” with adversarial examples, trying to provoke the AI to say something controversial, the chatbot began expressing sympathy for Nazis and became a 9/11 truther conspiracy theorist. Meanwhile, in 2017, Facebook’s translating algorithm mistranslated a Palestinian man’s tweet that said “Good morning” to “Attach them,” leading to his arrest. (Facebook later attributed the issue to its AI’s lack of exposure to Arabic dialects.) Most recently, in November 2022, Meta’s chatbot “Galatica” came under fire when it was tasked with producing scientific articles and frequently “hallucinated,” coming up with obviously fake articles, including one about bears in space, at a user’s command. (“Hallucination,” it turns out, refers to when an AI encounters data significantly different from its training data and, unable to know that it doesn’t know something, just makes something up). At the same time, an overactive safety filter in Galactica led the bot to refuse to generate articles if the prompt included certain phrases, including “queer theory,” “racism,” or “AIDS.”
ChatGPT has a built-in safety filter that rejects inappropriate requests by blacklisting a set request list and including a constantly fine-tuned bias filter. Even so, as Mehta explained in the meeting, ChatGPT has been goaded into writing songs making fun of women scientists and pricing human brains based on race and gender upon user requests through a process called “prompt injection.” ChatGPT also frequently hallucinates information.
Much of the meeting (and the provided fact sheet) centered on the possible malicious uses of generative AI and ChatGPT’s ability to mislead, misinform, and manipulate. For instance, ChatGPT’s particular talent for writing well in multiple languages could improve the work of non-English-speaking scammers as they write phishing emails. Meanwhile, since artificial intelligence is not self-aware and is willing to speak on even blatantly erroneous topics, the South China Morning Post was able to get ChatGPT to write a scientific article mimicking those in medical journals despite lacking any data. Alarmingly, a study from Northwestern and UChicago found that experienced scientific researchers mistakenly identified AI-generated articles as real articles. In the few short months since its launch, ChatGPT has already been used to create a cryptocurrency scam using Elon Musk’s face, produce pornographic images using publicly available images of women, and impersonate Volodymyr Zelensky in a speech asking his troops to surrender to Russia.
Finally, CPI’s discussion came full circle to the topic of ChatGPT use in academia, one of the major goals of the meeting and the main draw for all the student attendees. Unsurprisingly, high school teachers and college professors over the last couple of months have predicted that ChatGPT will lead to “The End of the Essay,” considering how teachers will have no way of knowing if a student writes their essay by simply giving the bot the assignment instructions as a prompt. Besides creating a brand new essay from the ground up, however, ChatGPT can give feedback and revision suggestions for pre-written student essays with prompts like “What can I do to make this essay better?” (Fun fact: I have a friend who got ChatGPT to edit his Art Hum final essay last semester.)
New York City public schools have already banned ChatGPT on student devices. After ChatGPT’s emergence this past fall, meanwhile, some universities are reducing take-home essay exams introducing more in-person midterms and finals for the current Spring semester. Not all instructors, however, are decrying ChatGPT as merely a tool to help students cheat. Some college professors have pointed out that artificial intelligence belongs in classrooms and that learning to use generative AI is the way of the future. Wharton professor Ethan Mollick has written in his Spring 2023 syllabi that he expects his students to use ChatGPT, as “learning to use AI is an emerging skill.”
After a review of the fact sheet and opening remarks, CPI opened up the floor for a discussion with the students in attendance. One man, a political science major, said that he used ChatGPT in December to write a final essay… but only after writing a very detailed outline and using ChatGPT to turn his original ideas into essay format. ChatGPT, he said, cannot do research or cite sources, forcing students to do research for themselves and generate their own ideas and original thinking about the material since ChatGPT can’t do everything.
Several students at the meeting expressed concern about ChatGPT’s malicious uses and their possibility for spouting racist and sexism. As one person pointed out, many of the examples of racist output in ChatGPT’s current version, ChatGPT-3, have been so explicit as to be almost comical, and people hardly need a reminder to take whatever a generative AI chatbot says with a grain of salt. However, as generative AI continues to get better and better, will the problematic ideologies reinforced by ChatGPT-4, or ChatGPT-5, be more hidden and therefore more insidious? Only time will tell.
As is CPI’s custom, the meeting ended with everyone there sharing their final thoughts in one-sentence “Tweet form.” My favorite was from a woman who said that learning to write is an important skill that we continue to develop throughout our lives. “Don’t let ChatGPT deprive you of that.”
The Columbia Policy Institute meets every Tuesday at 9 pm in Lerner 569 with discussions open to the whole Columbia community. For past Bwog coverage of CPI events, please check out Charlotte Slovin’s lovely post from last semester!
Poster via Columbia Policy Institute
Meeting via CPI Outreach Director Helen Hung
2 Comments
@Anonymous Chat is the end of google. The university should use this to enhance learning.
@Anonymous Instead of complaining about ChatGPT or Wolfram, these products should be used to get students to do more advanced work at an earlier age. This is the Proactionary Imperative.