Students at Columbia University are using AI to make their workflow more efficient, and their professors are both denouncing and encouraging it.

Since generative artificial intelligence (AI) rose in popularity, students across the United States have begun using tools like ChatGPT, Gemini, and Claude to assist with their assignments. With the ability to comprehend written prompts and generate text responses, these large language models (LLMs) have transformed the landscape of higher education. Granted, these popular AI tools are not the first conversational programs to exist. In 1966, MIT computer science professor Joseph Weizenbaum developed ELIZA, the first chatbot able to mimic human conversation. Though rudimentary compared to today’s LLMs, ELIZA laid the groundwork for the AI programs populating students’ computers today. Since then, a plethora of language-processing chatbots have been developed, including OpenAI’s ChatGPT, which was released for public use in November 2022. And with its prevalence, there’s no doubt that AI has changed the way both students and professors approach education.

Columbia University in particular has an interesting relationship with its students’ use of AI. On one hand, Columbia’s generative AI policy states that the university is “dedicated to advancing knowledge and learning, and embraces generative AI tools.” A prime example of this is the university’s recent announcement of a new AI minor for undergraduate students. But in order to preserve academic integrity, Columbia’s policy also states that “the use of Generative AI tools to complete an assignment or exam is prohibited.” Clearly, Columbia must strike a difficult balance between the pursuit of technological advancement and the integrity of students’ education. Even across Columbia’s fields of study, there seem to be different views on what this balance should be and how heavily each side of the equation should be weighed.

In 2024, a survey conducted by the Digital Education Council found that 86% of students pursuing higher education worldwide used AI in their studies. Students are using AI for a variety of tasks, ranging from summarizing key information to generating whole drafts of essays. “I’ll put a PDF of my notes into an LLM, and then I’ll have it make practice problems for me,” one Columbia College student said. Another Columbia College student uses ChatGPT to correct grammatical errors for short response assignments: “I’ll write an answer to something and then I’ll put it [into ChatGPT] and I’ll say bold what you change, and then I’ll see what changes and I’ll copy and paste, because going through one by one is a lot of work.” While these two students aren’t using LLMs to generate whole essays, they certainly agree that AI can be used to streamline assignments and improve efficiency. Instead of doing hours of research on their own, students are using LLMs to speed up the work process—for example, by asking LLMs to help them locate sources. With AI, students can cut down on the amount of time they’re spending on assignments. And with the intense workload that university brings, that’s a valuable asset. “I mean, people are getting their sleep back,” one of the students said.

Columbia recommends that professors include an AI policy in their syllabuses and that they discuss appropriate use of AI tools in class. Professors across the board adhere to this; what differs, however, is how much AI use they tolerate. “Some professors are really strict on it, like my University Writing professor,” a student said. Their Literature Humanities professor has a similar mentality with regard to AI. Last semester, this student recalls, their Lit Hum professor’s assignments were mostly take-home. But recently, the course structure has changed. “He was cautious when people were like, can we do take-home stuff?” the student said. Now, their Lit Hum professor doesn’t allow any take-home assignments, with the exception of one essay. Each Lit Hum section’s course structure differs based on the professor. In other Lit Hum classes, essays are a core component of students’ grades; some students write up to four over the course of the year-long class. But in this particular section, students write only one essay and take in-class close reading quizzes instead. Why does this student’s professor conduct their class in this way? The student’s conclusion is simple: “Because he’s worried about us using AI.”

In order to curb the use of LLMs in writing assignments, professors like this one have put preventative measures in place so that students never get the chance to use LLMs at all. Some professors favor in-class assignments, where there is little risk of AI use because they can monitor students as they work. Their concerns aren’t unfounded: a 2024 survey by the American Association of Colleges and Universities (AAC&U), conducted in conjunction with Elon University, reported that 59% of higher education leaders have seen increased cheating since generative AI became widely available. In an environment where AI is so easily accessible and simultaneously difficult to monitor, it makes sense that some professors have taken steps to avoid it entirely. But interestingly, other professors have taken the opposite approach: they are encouraging AI use.

One student recalls an interaction they had with their chemistry professor last semester: “If you asked him a question that he thought was stupid, he would be like, you should know that if you’d asked ChatGPT.” In this case, the use of AI was actually expected of students. It was presented as a tool for them to turn to in order to get their questions answered, instead of something to avoid. This student also notes that there seems to be a difference in the way humanities professors and STEM professors at Columbia approach AI use. “You throw a rock in Havemeyer,” they said, “and you will seldom hit someone who does not use AI.” This student’s anecdotes provide a deeper insight into educators’ views on AI. Their experiences suggest that professors who teach subjects related to English or writing overwhelmingly denounce the use of LLMs for academic assignments, while professors teaching in non-writing or STEM-focused fields may not have that same inclination. Either way, it’s clear that AI use among professors has been increasing along with students’. A separate 2025 survey from the Digital Education Council found that 61% of faculty at higher education institutions across the globe have used AI in their teaching. Indeed, as the prevalence of LLMs grows, higher education is growing with it, either by incorporating AI into classrooms or by attempting to remove it entirely.

Which of these two paths is the correct one? That’s hard to say. One student proposed that both STEM and humanities fields must adapt and integrate AI into their subjects before it replaces them. In some ways, AI could have a really positive impact on these fields, but it could also destroy them, they said. Among educators, views of the future differ, too. AAC&U and Elon’s survey also asked higher education leaders about the impact generative AI will have on their institutions in the next five years: 45% of these leaders thought the impact would be more positive, 27% thought it would be equally positive and negative, and 17% thought it would be more negative. As for these predictions, we will have to wait and see.

It’s true that AI’s capabilities allow for quicker, more efficient work and can enhance students’ learning. At the same time, it can do the opposite when used in the wrong way: AI can hinder students’ education by doing all their thinking for them, stunting their intellectual growth. Even Weizenbaum, the mind behind ELIZA, warned that “there are certain tasks which computers ought not to be made to do, independent of whether computers can be made to do them.” How much can AI be integrated into higher education? At what point must we stop? These are questions students and educators alike must keep in mind as we move forward. After all, there’s no question that AI has played and will continue to play an influential role in the future of higher education; it’s only a matter of how big that role will be.

Image via Bwog Archives