
Solving Ethical Dilemmas in AI

Published on Monday, 21 January 2019

If the Las Vegas Consumer Electronics Show taught us one thing, it’s that artificial intelligence is still capturing the public imagination and leading headlines. Continuing developments in fields such as deep learning ensured that all eyes were on the numerous social robots, driverless cars and smart fridges on display. But behind the obvious technical strides that have made these products possible, ethical issues still present scientists with some of the most significant research challenges yet to be addressed in AI.

“As machines work with ever greater independence, making choices without supervision, we need to be sure that they are capable of making ethical choices,” says Dr. Zohreh Baniasadi. Indeed, Baniasadi recently received a Best Paper Award at the Human Computer Interaction Conference 2018, also taking place in Las Vegas, for her work in machine ethics.

The challenge is to bridge the gap between how humans and machines reason and communicate; to translate human ethical values into a set of rules a machine can understand and follow.

“There are two major ethical approaches that have the potential to be mechanised and formalised,” continues Baniasadi. “Deontology looks at whether the actions themselves are good or bad. Utilitarianism, meanwhile, looks at the outcome, so the action that causes the least harm is the right action. Our model looks at how a machine can combine procedures from both approaches when faced with an impasse.”

Imagine a social robot faced with the decision of whether to give urgent medicine to a patient or respond to a fire alarm. “If there’s no obvious sign of fire, a deontological approach would see the robot deliver the medication first before investigating the fire alarm,” says Baniasadi. “The utilitarian approach, however, might prioritise responding to the fire alarm, given the potential for greater overall suffering.”

Zohreh Baniasadi with Pepper the robot

Baniasadi’s paper, A Model for Regulating of Ethical Preferences in Machine Ethics, proposes a framework for dealing with such dilemmas. “People tend to be dogmatic in following one approach or another, but in machine ethics there’s no single catch-all solution, because any individual ethical theory might result in an action that isn’t justifiable by human values. For example, we wouldn’t want a machine to remain inactive when faced with two bad decisions,” says Baniasadi.

“Our model ranks a set of possible actions according to both ethical approaches – it considers one of these approaches principal, and the machine will resolve dilemmas according to that approach wherever possible. However, if the machine encounters a situation where that ethical approach regards two or more actions as equally bad, the model will use the second approach to choose which action will cause the least harm.

“Taking a popular example, a runaway train approaches a fork in the line. If it goes left, it faces a head-on collision with one person; if it goes right, it will collide with ten people. Here a deontological approach might see the accident as inevitable, effectively blocking the train from taking any decision. A utilitarian approach, however, would allow the train to kill one person, saving ten. So although the deontological approach might be principal, here the machine would act according to utilitarianism.”
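The two-level procedure Baniasadi describes can be sketched in a few lines of code. The scoring scheme below is an illustrative assumption, not the paper's actual formalisation: each action carries a deontological rank (0 = permissible, higher = worse rule violation) and a utilitarian harm estimate, and the secondary approach only decides when the principal approach ties.

```python
# Minimal sketch of the two-level decision model described above.
# Action scores are illustrative assumptions: a deontological rank
# (0 = permissible, higher = worse) and a utilitarian harm estimate.

def choose_action(actions, principal="deontological"):
    """Pick the best action under the principal approach; if two or more
    actions tie, fall back to the secondary approach to break the tie."""
    secondary = "utilitarian" if principal == "deontological" else "deontological"
    # Rank all actions under the principal approach first.
    best = min(a[principal] for a in actions.values())
    tied = {name: a for name, a in actions.items() if a[principal] == best}
    if len(tied) == 1:
        return next(iter(tied))
    # Impasse: the principal approach regards the tied actions as equally
    # good or bad, so the secondary approach chooses the least harmful one.
    return min(tied, key=lambda name: tied[name][secondary])

# Trolley-style dilemma: both branches violate "do not harm" equally
# under deontology, so the utilitarian tie-breaker decides.
trolley = {
    "steer_left":  {"deontological": 1, "utilitarian": 1},   # one casualty
    "steer_right": {"deontological": 1, "utilitarian": 10},  # ten casualties
}
print(choose_action(trolley))  # -> steer_left
```

In the social-robot scenario from earlier, delivering the medicine would score as deontologically permissible while ignoring it would not, so the principal approach decides outright and the tie-breaker is never consulted.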

Thankfully, such extreme situations will be incredibly rare, with artificial intelligence expected to reduce accidents in a range of application areas. But the interdisciplinary work done by Baniasadi and her colleagues ensures that machine ethics, and the debate around it, keep pace with the ongoing rapid advances in artificial intelligence.

Reference: Baniasadi, Z., Parent, X., Max, C., & Creamer, M. (2018). A Model for Regulating of Ethical Preferences in Machine Ethics. In Proceedings of the International Conference on Human-Computer Interaction (pp. 481–506).