Yesterday, new Bwoggers Solomia Dzhaman and Chloe Gong attended a lecture given by Dr. Ronald Baecker (UToronto) concerning the responsibilities surrounding artificial intelligence and the issues at stake.

Imagine that you’re sick. You’re presented with two options: either visit a human doctor for a diagnosis, or turn to Siri. The clear choice for most of us is the human doctor. But why? In his lecture “What Society Must Require of AI,” Dr. Ronald Baecker describes how far we still have to go before AI (artificial intelligence) achieves abilities comparable to those of human professionals.

Dr. Baecker, an Emeritus Professor of Computer Science at the University of Toronto, founded projects such as the Technologies for Aging Gracefully lab (TAGlab) and has made great contributions to Canada’s technology and aging network, AGE-WELL. Throughout his lecture, Dr. Baecker demonstrated his passion for using AI to make the world a better place for everyone to live in.

We’ve all heard about, seen, or experienced AI in some shape or form in our lives. It doesn’t just exist in sci-fi movies like Star Wars or The Matrix; it’s all around us. From the Siri in our phones to the spam filters in our email inboxes, AI truly plays an indispensable role in modern society.

However, what happens when AI errs? In the case of Siri, it’s not that big of a deal. It might be frustrating to hear “I don’t understand (insert common object),” but in the grand scheme of things, it won’t really affect your life. Yet in other cases, such as medical diagnoses, self-driving cars, and autonomous weapons, a single glitch in the code could lead to serious consequences. Dr. Baecker calls artificial intelligence used in situations like these “consequential AI.” Right now, consequential AI has not been widely adopted by the public because, frankly, it is still untrustworthy. Most people would prefer to see a human doctor over an AI robot for a checkup, because robots (as of now) lack common sense and can severely misjudge a situation if it differs even slightly from the scenarios they were programmed to handle.

So how should we move towards a society that is compatible with and welcoming of consequential AI technology?

Dr. Baecker argues that there are six requirements AI must fulfill before it can be considered a replacement for intelligent human work:

  • Competence, dependability, reliability: In essence, we need AI that works well, without fault or glitch. In addition, we need adaptable AI that possesses some sort of “common sense”, so that when it encounters new, undefined problems, it can reach good resolutions.
  • Openness, transparency, explainability: The user has to understand not only the solutions they are being given but also how the machine came to those solutions, so that AI is not just a mysterious “black box”.
  • Trustworthiness: Just like any human caretaker or provider, AI has to be trustworthy. You wouldn’t go to a doctor operating out of the back of a van, and you shouldn’t trust a glitchy robot to cut open your body. AI needs to perform to a set of standards that would be accepted by the general population.
  • Responsibility, accountability: When AI eventually messes up, we need to clearly establish who is responsible: the programmer? the data scientist who taught the machine? the company providing the service? the consumer themselves? Dr. Baecker proposed that soon, those in the AI sector will have to be held to accreditation standards similar to those for doctors, lawyers, and engineers.
  • Sensitivity, empathy, compassion: Dr. Baecker says that although we want companionship from AI, it should not strive to replace humans. When it tries to, it enters the uncanny valley: anthropomorphic robots that strike us as uncomfortable, disconcerting, or just plain creepy. As an example, he showed a video of Sophia, the robot recently granted citizenship in Saudi Arabia. Instead, because humans love to love whatever we deem “cute”, making robotic versions of companion animals may be the better solution.
  • Fairness, justice, ethical behavior: The people teaching the machine have an inherent responsibility to make it as fair as possible. There are many examples of teaching biases influencing the behavior of machines, from chatbots learning how to send racist tweets to racial profiling within the criminal justice system; small mistakes and biases add up quickly and can have disastrous consequences. AI must be fair and as far removed from bias as possible.

Dr. Baecker ended the lecture with some parting thoughts. Achieving all of these essential components of AI would be incredibly difficult (and some would say impossible); the result would be a machine far beyond the capabilities of a human. If we want to make AI a viable option soon, we need to carefully consider which of these elements to put at the forefront of research (and which can stay on the back burner). In addition, he mentioned the often-forgotten need for human overseers in automated systems: even the best AI isn’t perfect and needs human intervention.

The lecture was extremely informative and enlightening, but it also left us with more questions than answers. To what extent should humans rely on AI? How do we know when to trust a robot? Is it even possible to build a completely fair machine? These are questions we need to keep pondering before we can put our lives in the hands of artificial intelligence.

image via flickr.com