Yesterday, new Bwoggers Solomia Dzhaman and Chloe Gong attended a lecture given by Dr. Ronald Baecker (UToronto) on what society must require of artificial intelligence and the issues at stake.
Imagine that you’re sick. You’re presented with two options – either visit a human doctor for a diagnosis, or turn to Siri. The clear choice for most of us is the human doctor. But why? In his lecture “What Society Must Require of AI,” Dr. Ronald Baecker describes how far we still have to go before AI (artificial intelligence) can match the abilities of human professionals.
Dr. Baecker, an Emeritus Professor of Computer Science at the University of Toronto, founded projects such as the Technologies for Aging Gracefully lab (TAGlab) and has made great contributions to Canada’s aging network AGE-WELL. Throughout his lecture, Dr. Baecker demonstrated his passion for using AI to make the world a better place for everyone.
We’ve all heard about, seen, or experienced AI in some shape or form in our lives. It doesn’t just exist in sci-fi movies like Star Wars or The Matrix; it’s all around us. From the Siri in our phones to the spam filters in our email inboxes, AI truly plays an indispensable role in modern society.
However, what happens when AI errs? In the case of Siri, it’s not that big of a deal. It might be frustrating to hear “I don’t understand (insert common object),” but in the grand scheme of things, it won’t really affect your life. Yet, in other cases — such as medical diagnoses, self-driving cars, and autonomous weapons — a single glitch in the code could lead to serious consequences. Dr. Baecker calls artificial intelligence used in situations like these “consequential AI.” Right now, consequential AI has not been widely adopted by the public because, frankly, these systems are still untrustworthy. Most people would rather see a human doctor than an AI robot for a checkup, because robots (as of now) lack common sense and can severely misjudge a situation that differs even slightly from the models they were programmed to handle.
So how should we move towards a society that is compatible with and welcoming of consequential AI technology?
Dr. Baecker argues that there are six requirements AI must fulfill before it can be considered a replacement for intelligent human work:
Dr. Baecker ended the lecture with some parting thoughts. Building AI that satisfies all of these requirements would be incredibly difficult (and some would say impossible); the result would be a machine far beyond the capabilities of a human. If we want to make AI a viable option soon, we need to carefully consider which of these elements to put at the forefront of research (and which can stay on the back-burner). He also mentioned the often-forgotten need for human overseers in automated systems – even the best AI isn’t perfect and needs human intervention.
The lecture was extremely informative and enlightening, but it also left us with more questions than answers. To what extent should humans rely on AI? How do we know when to trust a robot? Is it even possible to have a completely fair machine? These are problems we need to continue pondering before we can put our lives in the hands of artificial intelligence.
image via flickr.com