Columbia affiliates and professors met with a representative from OpenAI on Friday to offer perspectives and discuss the future of artificial intelligence.

Artificial intelligence is here, and it shows no sign of going away. It is still in its early stages, of course, but new technology has a habit of improving exponentially. To confront AI and learn how to use it well, developers, subject experts, and everyday users need to keep talking to one another.

On Friday, Columbia held a webinar bringing together various Columbia affiliates and professors, as well as a representative from OpenAI, the company behind ChatGPT. Gaspare LoDuca, vice president and CIO of CUIT, opened the session by emphasizing the importance of the transition to artificial intelligence. Describing it as the most disruptive technology in history, LoDuca argued that the movement spearheaded by OpenAI is unique. Artificial intelligence is powerful in unprecedented ways because it touches practically every industry: almost all markets stand to benefit from AI in some form, whether through improved efficiency or reduced costs. That effect is already visible, and if advancements such as AI models training other models materialize, it will only accelerate.

But there is more than one way this can go. Frantz Merine, CIO of the law school’s information department, stressed that the future could take several directions. Will AI lead to a utopia, with humans living in luxury and all unnecessary work eliminated, or to the doomsday scenario so often described in science fiction? In reality, the outcome will likely fall somewhere in between, but that does not diminish AI’s potential to both help and harm us. OpenAI’s ChatGPT embodies this tension. Merine cited its description as a “muse, not oracle”: the software already provides useful summarization and proofreading functions at the law school, but it can also “hallucinate,” presenting false information as fact. He recommended expanding education around ChatGPT, teaching prompt engineering to ensure the best output quality.

Luis Bello, the deputy CIO at the law school, seconded this notion and posed some important questions. AI clearly can, and already does, play a valuable role in pedagogy, but how far should this be taken? In law schools specifically, how can AI improve legal research? These questions, and many subsidiary ones, may still lack concrete answers, but they are essential to keep in mind. Bello also noted that instructors at Columbia are already using the software in novel ways: as a legal assistant, for conducting mock trials and interviews, and to facilitate metacognition (reflecting on one’s own thinking).

During the session, Seth Cluett, a professor in Columbia’s music department and director of the Computer Music Center, offered a perspective on one of AI’s more specialized applications. His area of expertise, which includes composition and audio-focused visual art, has been deeply affected by the rise of artificial intelligence. Synthesizers and data sonification (representing data as sound or music) already rely on complex technology and programs, but AI pushes the field a step further. It has shown itself capable of writing code in Max 8 (a visual programming environment for music), creating models of musical circuits, and even composing music itself. For now, human intervention is still required to turn the raw material provided by AI into quality work, but there is no telling where the future may lead.

Finally, Lama Ahmad, the representative from OpenAI and a member of its policy research team, spoke about where she sees AI heading, focusing on its implications for education. She argued that educators, policymakers, and technology companies must work together to construct standards for AI usage: the potential of artificial intelligence is high, but it is easy to lose control of. Teachers, especially in middle and high schools, are already using AI and ChatGPT, often to great effect, to draft lesson plans, design test questions, and handle many other tasks. Students are highly aware of the software as well, and Ahmad advocated better education to help them use it more effectively. Larger education companies have seen ChatGPT’s potential too: Khan Academy’s Khanmigo and Canva’s Magic Write are just two AI programs built for widespread educational use. The transition is unlikely to slow down. Ahmad cited surveys showing that 51% of teachers and 33% of students aged 12–17 use AI in their education, and the actual numbers may be even higher. OpenAI is aware of this, and future plugins aim to build on these forms of implementation.

However, artificial intelligence should not be used carelessly. Ahmad noted that OpenAI’s software is imperfect and that a number of limitations and ethical considerations still need to be assessed. Programs like ChatGPT can produce harmful content, including output that reflects various biases. Consequently, Ahmad suggested that AI should not yet be used on its own to evaluate student ability or to detect plagiarism; overreliance must be avoided. As AI improves, people can improve their own skills in parallel, learning to judge the quality of AI’s sources and to verify its information.

The panel then engaged in an open discussion, taking questions from the online audience. The speakers, joined by Robert Maniker, a professor of anesthesiology at the medical school, closed out the event by summarizing their conclusions and looking toward the future. AI models undoubtedly need further improvement, but society must prepare itself at the same time, and while OpenAI pledges to support the world with its technology, other companies and countries may have different intentions. The panel concluded that a “pause” on AI is not the right answer; instead, we need to face the future head-on and move forward with artificial intelligence while evaluating it ethically and logically. AI will likely play a significant role here at Columbia, helping to digest information and, ideally, to broaden equitable access to resources. For that future to be desirable, biases in both AI models and human-built plugins will need to be eliminated. If the necessary steps are taken, artificial intelligence may make education more personalized than ever, allowing each student to learn in the way that suits them best.

Header via Bwog Archives