On September 12th, Staff Writer Emily Yi and Guest Writers Sylvia Chen and Spencer Davis attended a talk hosted by the Mortimer B. Zuckerman Mind Brain Behavior Institute. The event, featuring Dr. Zenna Tavares and Dr. Kim Stachenfeld, centered on the question of whether AI can learn like humans. 

In the last year, advances in ChatGPT have heralded an unprecedented surge in the artificial intelligence industry. The development of AI, its uses, and its pitfalls have become more relevant than ever to our everyday lives. As we look to the frontier of AI technology, the question of whether machines can learn like humans is key to imagining the future capabilities of AI. 

On Tuesday, the first event in the Zuckerman Institute’s Stavros Niarchos Foundation Brain Insight lecture series addressed this crucial question by looking at the interdisciplinary connections between cognitive science, data science, neuroscience, and AI research. The event, moderated by Dr. Emily Mackevicius, a postdoctoral researcher in the Aronov Lab and at the Center for Theoretical Neuroscience at the Zuckerman Institute, featured data science researcher Dr. Zenna Tavares and Google DeepMind researcher Dr. Kim Stachenfeld. 

The event began with a short lecture from Dr. Tavares, a researcher at both the Data Science Institute and the Zuckerman Institute. Tavares is the first joint hire across two university-level Columbia institutes and works on connecting the world of probabilistic programming with explorations of human decision-making. 

In his lecture, Dr. Tavares explored how humans analyze the world by simulating reality and creating mental models: more specifically, we use information about reality to infer general patterns, abstracting a framework that allows us to address specific problems. This pattern of human learning forms a basis for AI training through what are known as causal probabilistic programming languages. 

Probabilistic models use data on a set scenario to predict the outcome when an element of the scenario is changed. Dr. Tavares illustrated one application of such a program by describing a scenario that concerns every New York pedestrian: getting hit by a car. 

Let’s say a pedestrian emerges from behind a barrier and is hit by a speeding car. How can we tell who is at fault? A probabilistic model could consider factors such as the speed of the car and the distance of the barrier, then predict what might happen if they were different: would there still be a collision if there were no barrier, or if the car were going slower? 

Results of Probabilistic Modeling of Car Crash via Dr. Zenna Tavares
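To make the counterfactual reasoning behind this example concrete, here is a minimal Monte Carlo sketch in Python. It is not Dr. Tavares’s model or a causal probabilistic programming language; the speed and visibility ranges, the braking physics, and the two interventions are all illustrative assumptions. The idea is simply to sample many concrete versions of the scenario, then re-ask the collision question with one element changed.

```python
import random

def hits_pedestrian(speed, sight_distance, reaction_time=1.0, braking=7.0):
    """Return True if the car cannot stop before reaching the pedestrian.

    The driver first sees the pedestrian `sight_distance` metres away,
    reacts after `reaction_time` seconds, then brakes at `braking` m/s^2.
    All numbers are illustrative assumptions, not real traffic data.
    """
    stopping_distance = speed * reaction_time + speed ** 2 / (2 * braking)
    return stopping_distance > sight_distance

def collision_rates(n=10_000):
    observed = no_barrier = slower_car = 0
    for _ in range(n):
        # Sample one concrete version of the scenario.
        speed = random.uniform(8, 20)   # car speed in m/s
        sight = random.uniform(5, 15)   # visibility past the barrier, in metres

        observed += hits_pedestrian(speed, sight)
        # Intervention 1: remove the barrier, so the driver sees 10 m further.
        no_barrier += hits_pedestrian(speed, sight + 10)
        # Intervention 2: the same car, but held to a lower speed.
        slower_car += hits_pedestrian(min(speed, 10), sight)
    return observed / n, no_barrier / n, slower_car / n

if __name__ == "__main__":
    obs, no_bar, slow = collision_rates()
    print(f"as observed:        {obs:.2f}")
    print(f"with no barrier:    {no_bar:.2f}")
    print(f"car held to 10 m/s: {slow:.2f}")
```

Because each sampled scenario is re-evaluated under every intervention, comparing the three rates gives a rough, simulation-based answer to “would the collision still have happened if…?”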

Beyond the implications for legal decision-making, Dr. Tavares thinks that probabilistic models can be expanded into more complex models of the human mind. A key challenge in constructing models, however, is that a model will never be able to replicate reality in its entirety. As the famous saying by the statistician George Box goes, “all models are wrong, but some are useful.” 

To illustrate the complexities of this question, Dr. Tavares described an experiment in which human participants were shown an image of a small cube at the top of a ramp, with a black line drawn some distance away from the bottom of the ramp. The participants were then asked to predict whether the cube, after sliding down the ramp, would pass the black line. 

After a number of trials, a pattern would emerge: when the cube was red, it would fall short of the line; when it was black, it would slide past. As one might expect, the more times a participant was asked to predict whether the cube would get past the line, the more reliably they could do so. 

But when participants were asked a surprise final question, to predict exactly where the cube would land, their success at answering it actually decreased the longer they had spent with the original model. In other words, by learning to adhere strictly to the rules of the model, the participants lost sight of real-world factors, perhaps a cautionary tale about the dangers of letting AI models take precedence over human common sense. 

Another drawback of AI problem-solving, as explored by Dr. Kim Stachenfeld, is its inability to solve completely novel problems. Using human problem-solving strategies to inform AI training is a key interest for Dr. Stachenfeld, a senior research scientist at Google DeepMind and Columbia affiliate faculty member whose research sits at the intersection of neuroscience and AI.

According to Stachenfeld, “Machines have gotten pretty good at doing things, but they may fall short of understanding the squishy world that we live in.” For example, ChatGPT can execute complex tasks with a certain level of novelty, such as producing an image of a “wizard wearing a purple hat and pink robes riding an elephant to battle with the abstract concept of love” by drawing on a database of information about wizards, elephants, and past visual depictions of love. However, AI has yet to prove itself capable of generating solutions at a higher level of complexity than the data on which it has been trained. 

Stable Diffusion Web Depiction of an Elephant-Riding Love Wizard, via Dr. Kim Stachenfeld

Despite this limitation, machine learning strategies based on human psychology are currently yielding promising results. One such strategy is based on breaking complex problems down into familiar parts, a process drawn from our understanding of how humans solve new problems.

To demonstrate this principle, Dr. Stachenfeld showed an AI-generated model of water flowing either to the right or to the left. By training the AI to predict how water behaves when it interacts with small pieces of the system, rather than going straight from design to solution, researchers can get the system to generate flows much closer to how liquid behaves in reality. 

Water Flow Model via Dr. Kim Stachenfeld
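To show the structure of this divide-and-predict approach in miniature, here is a toy Python sketch. In the learned simulators Dr. Stachenfeld described, the update for each small piece of the system would be a trained model; here a hand-written, diffusion-like rule stands in for it, and the one-dimensional grid and its numbers are assumptions made purely for illustration.

```python
import numpy as np

def local_step(left, centre, right, diffusion=0.25):
    """Predict a cell's next value from itself and its two neighbours.

    In a learned simulator this local update would be a trained network;
    here it is a hand-written smoothing rule, used only to show the
    structure of the computation.
    """
    return centre + diffusion * (left - 2 * centre + right)

def rollout(heights, steps):
    """Simulate by repeatedly applying the local rule to every cell."""
    state = np.asarray(heights, dtype=float)
    for _ in range(steps):
        left = np.roll(state, 1)    # each cell's left neighbour
        right = np.roll(state, -1)  # each cell's right neighbour
        state = local_step(left, state, right)
    return state

if __name__ == "__main__":
    # A narrow "column" of water in the middle of a channel gradually spreads out.
    initial = np.zeros(20)
    initial[9:11] = 1.0
    print(rollout(initial, steps=50).round(3))
```

The point is the composition: one small rule that only looks at a cell and its immediate neighbours, applied over and over, rather than a single model asked to predict the final flow in one shot.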

In real-world applications, the solutions AI generates after breaking the whole into parts are often more accurate and come closer to emulating reality. Nevertheless, reaching levels of complexity beyond that of the training data, and making sure that AI solutions connect with human goals in the real world, remain areas for further research. 

Although the tone of the evening was hopeful about the future of machine learning, notes of caution were not absent. For example, Dr. Stachenfeld emphasized the importance of thinking critically about AI models and continually trying to understand how they work. If we neglect to do so, AI may do nothing but reinforce our existing thoughts, ideas, and delusions, and both machines and humans may stop “trying to think new thoughts.” 

An audience question also prompted a discussion of human-related pitfalls in AI training: after all, AI relies on biased human trainers, whose subjective judgments shape large language models such as ChatGPT. Can human trainers truly create intelligent, unbiased AI that will offer us unprecedented solutions and advance our understanding of the world beyond human limits? Or will we do nothing but recreate human biases and further confine ourselves within the limits of human imagination? 

Explore future events from the Zuckerman Mind Brain Behavior Institute here. The entire series of Stavros Niarchos Foundation Brain Insight Lectures is available on the Zuckerman Institute’s YouTube channel, here.

Drs. Zenna Tavares, Kimberly Stachenfeld, and Emily Mackevicius via Zuckerman Institute