On Friday, November 4, Desirable AI kicked off its lecture series by hosting Dr. Olivia Erdélyi for a talk about the importance of interdisciplinary and multi-stakeholder cooperation in AI regulation.
Artificial intelligence plays an increasingly prominent role in our lives, and many scientists have warned that its development could get out of hand if left unchecked. Consequently, it is critical that we set rules and regulations. Dr. Erdélyi, the first Mercator Visiting Professor for AI in the Human Context at the University of Bonn, offers her insight into the legal, economic, social, and anthropological contexts of this problem.
The concept of regulation, Dr. Erdélyi points out, is murky. Unlike state-driven law, regulation is decentered and less binding. Moreover, it is often hard to define the goals of regulation. And even if we have a good idea of what to do, there is still no guarantee that regulators will execute it as intended or that society will accept it.
AI regulation is even more complicated. So far, we have only a limited understanding of AI’s potential political and economic impacts, let alone of how to optimize the outcomes. Implementation will be more challenging still. Ideally, successful regulation requires people from all walks of life to work in unison: lawyers, economists, politicians, regulators, and the general public. Sadly, each group’s distinct ways of thinking and diverging interests often hinder progress.
Nevertheless, Dr. Erdélyi reassures us that AI regulation is not a lost cause. At its root, AI development is not so different from other technological innovations, which we have a good deal of experience dealing with. We know our objectives: encouraging innovation while ensuring safety, and developing an environment flexible enough to adapt to new challenges and transparent enough to foster trust. To that end, communication is key. Decision-makers should strengthen dialogue with the technical community to make more informed decisions, as well as with other stakeholders to ensure all voices are heard.
Dr. Erdélyi gives one example of successfully regulating technological innovation: when the United States declared warrantless wiretapping a federal crime. At first, people had little awareness of the threats wiretapping could pose. In Olmstead v. United States, the Supreme Court decided that wiretapping did not constitute a “search” within the meaning of the Fourth Amendment and thus did not require a warrant. But after learning more about the power of these electronic devices, the country changed its initial view and extended search-and-seizure protections to wiretapping in order to protect people’s privacy.
There are still many questions unique to artificial intelligence for us to think about. For instance, AI technology runs on big data: should privacy laws apply to all data or only to personal data? Moreover, many AI systems rely on neural networks whose inner workings are opaque, which makes harms hard to foresee and liability hard to assign. Who should we hold accountable when things go wrong?
Standards bodies and international organizations around the world, such as the IEEE, ISO, and the World Economic Forum, are working tirelessly to answer these questions. But that is not enough; regulating AI calls for collaboration from every one of us. Dr. Erdélyi aptly cites the famous line from Spider-Man: “With great power comes great responsibility.” Whether we harness AI’s power to create a utopian society or misuse it and plunge the world into ruin, the future is in our hands.
Robot via Wikimedia