On Friday, the Institute for the Study of Human Rights hosted “Neurorights: Human Rights Guidelines for Neurotechnology and Artificial Intelligence” as part of its Technology and Human Rights Series. Featuring Rafael Yuste, the talk discussed the rapidly advancing field of neurotechnology and the need to expand the explicit legal definitions of human rights.

Starting from the building blocks of neurotechnology, Yuste took us back to Cajal and Sherrington’s neuron doctrine, which suggested that by studying individual neurons one at a time, we could build a complete understanding of the brain. However, Yuste analogized this to trying to watch a movie by looking at the TV screen one pixel at a time, pointing out that emergent properties, or properties that arise from interactions between individual units, are the key to truly understanding how the brain works. The brain, he said, is the “mother of all emergent properties,” and to understand the brain and all its emergent properties would mean understanding how our thoughts and actions are generated. Being able to understand, read, and manipulate neuronal activity would be a huge step for humanity, reshaping how we define ourselves and the course of our history. It would open a new world of possibilities for people with mental illness or neurological diseases and revolutionize technology and the economy.

While such technology may feel distant, the stuff of science fiction, Yuste argued that it is already here. Using the Human Genome Project as a model, the BRAIN Initiative was proposed to President Obama in 2013 with the goals of understanding, reading, and manipulating neuronal activity. The initiative has since grown to include over 500 labs and has been emulated by countries all over the world in a race to build neurotechnology.

So far, fMRI scans of the cerebral cortex have allowed scientists to create computational maps of brain activity as it corresponds to the visualization of specific images. With those maps, scientists have built algorithms that decode neuronal activity in real time, reading what people are thinking as they watch scenes from movies (the words “man,” “woman,” and “talk” appear on the screen as the algorithm reads the neuronal activity of someone watching a scene of a man and woman talking). With innovations in brain-computer interfaces (BCIs), paralyzed people can train robotic arms to move with their thoughts, and current research at Columbia opens the possibility of BCIs made into small, wireless chips that can be implanted over a blind person’s visual cortex and connected to a camera to provide low-grade vision.
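For readers curious about the mechanics, the decoding step can be thought of as pattern classification: a model learns which spatial patterns of brain activity accompany which concepts, then labels new activity it has never seen. Below is a minimal sketch of that idea, with everything assumed for illustration: the “voxel” data is synthetic, and the simple scikit-learn linear classifier is a stand-in for the far richer models used in the actual research.

```python
# A toy illustration of brain decoding as pattern classification.
# All data here is simulated; real decoders are trained on fMRI
# recordings, not random numbers.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

labels = ["man", "woman", "talk"]
n_voxels = 200           # simulated voxels in a region of interest
n_trials_per_label = 50  # simulated presentations of each concept

# Assume each concept evokes a characteristic, noisy activity pattern.
prototypes = rng.normal(size=(len(labels), n_voxels))
X = np.vstack([
    prototypes[i] + rng.normal(scale=1.5, size=(n_trials_per_label, n_voxels))
    for i in range(len(labels))
])
y = np.repeat(labels, n_trials_per_label)

# Fit a linear decoder: one weight map over the voxels per concept.
decoder = LogisticRegression(max_iter=1000).fit(X, y)

# "Real-time" readout: decode a new, unseen activity pattern.
new_scan = prototypes[2] + rng.normal(scale=1.5, size=n_voxels)
print(decoder.predict(new_scan.reshape(1, -1)))  # should print ['talk']
```

The same logic, scaled up to whole-cortex recordings and vocabularies of thousands of concepts, is what lets the words “man,” “woman,” and “talk” appear on screen as a viewer watches a scene.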

With these advances, however, come dangers. By manipulating the neuronal activity of mice, researchers found that the animals were unable to realize they were being controlled by an external source. “The animal interprets this activation of its neurons as its own perception, and we’re controlling the animal by controlling its perception, playing the animal like a puppet,” Yuste said. “The critical conclusion of the experiment is that the mouse interprets this external manipulation as internal. People have been trying to control each other’s behavior forever, but when you look at your newsfeed or Facebook, you know it’s external. Whereas in this case, you’re going to be dealing with internal decision-making.”

To combat the ethical issues that arise from this, Yuste and a group of colleagues convened to create “neurorights,” ethical guidelines and priorities for neurotechnology and AI. With Pupin Hall, where early research for the atomic bomb was conducted, in full view of their meeting space, Yuste reflected on the group’s collective sense of responsibility and duty in laying the guardrails for a new technology that they believed would, like the atomic bomb, change the course of history and humanity. The resulting framework emphasized five main goals: protection of mental privacy, identity (self), agency (free will), fair access to cognitive augmentation, and protection from bias and discrimination. They emphasized the need to follow the medical model of ethical guidelines, proposing a “technocratic oath” to serve as the Hippocratic oath of neurotechnology; medicine, they noted, is at its core a technology that manipulates the human body, regulated by the deontology embedded in its practice and training. Because of the intimate way in which data collected from our brains is tied to our bodies and identities, they argued that the buying and selling of neurodata should be treated like the buying and selling of organs. “We think this affects the heart of the human condition, this, more than anything I know. The brain is yourself. In fact, the way you talk about yourself in Basque, you say neure burua, ‘my head.’ So for the Basque already, in the ancient language, the head is you. You are your head. So now this is technology that gets into your head, so we think it’s a human rights issue.”

In Chile, a constitutional amendment implementing neurorights has already passed the Senate with unanimous support. If approved by the lower house, Chile will be the first country with a neurorights amendment. Spain is following suit: its national AI council has announced a human rights chapter protecting neurorights that will soon be sent to parliament for approval. On a larger scale, Yuste urges that neurorights be incorporated into the UN’s definition of human rights. As the technology continues to advance, he argues, restrictions must be imposed before it is too late. “It’s going to be very hard to regulate privacy if all these things are already out there,” Yuste said. “And as of now, these things don’t have any regulations on them; they’re treated as consumer electronics. Some people already say it’s too late. But I’m more optimistic.” As we edge toward “a new renaissance,” Yuste urges interdisciplinary conversation and legislation to ensure that our rights and dignities are protected when we inevitably find ourselves in a new era of technology.

Image via Institute for the Study of Human Rights Twitter