
VR Can Be Used to Help Automated Driving Systems Make Ethical Traffic Decisions

The ability to engineer decision-making using machine-learning algorithms could offer an acceptable path toward handing over the car keys to self-driving systems.

Now, for the first time, researchers using virtual reality have shown that computer models are effective at predicting human moral decisions in traffic, suggesting a transparent and broadly acceptable way to engineer that decision-making into self-driving systems.

“We’re at a point where we need to find an answer that is at least acceptable to a majority, so that we can have an ethical decision-making system in a car,” said Leon Sütfeld, a research assistant and Ph.D. candidate at the Institute of Cognitive Science at Osnabrück University in Niedersachsen, Germany.

Sütfeld and his colleagues Gordon Pipa, Richard Gast, and Peter König, also of the Institute of Cognitive Science at Osnabrück University, published their study on Wednesday in the journal Frontiers in Behavioral Neuroscience.

RELATED: Intel Just Made a $15B Play for Self-Driving Car Technology

In conducting the research, Sütfeld and Pipa asked 105 people to wear a virtual reality headset and then respond to a variety of traffic scenarios on a simulated suburban street. As participants drove through the virtual world, two obstacles would appear onscreen, blocking the lanes ahead; the driver had one to four seconds to choose a lane and thereby decide which object to hit. There were 17 different obstacles drawn from three categories: humans (children and adults), animals (e.g., a dog), and inanimate objects.

As the research subjects responded to the obstacles, their reactions, meaning who or what they hit and the amount of time they had to decide, were collected as data. The researchers then used the dataset from the virtual reality experiences to train three different computer models to make such decisions the way a human would. Once trained, the models were evaluated using novel traffic scenarios that weren’t part of the original dataset.

Sütfeld and Pipa found that a simple “value-of-life” model, which in this study assigned a single value to each type of obstacle, worked best. It found a moral middle ground among all of the decisions made by the study participants, in effect deriving an average value for each obstacle.
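
As a rough illustration, a value-of-life model of this kind can be cast as a simple pairwise-choice model and fit with off-the-shelf tools, with part of the data held out for evaluation as described above. The sketch below is a toy under stated assumptions, not the study’s actual code: the obstacle set, the “true” values, the logistic (Bradley-Terry-style) formulation, and the simulated choices are all invented for illustration.

```python
# Toy value-of-life model: each obstacle gets one scalar value, and the
# probability of sparing obstacle A (hitting B) is assumed to be a logistic
# function of the value difference. Everything here is invented for
# illustration; it is not the study's implementation.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

OBSTACLES = ["goat", "deer", "dog", "adult", "child"]
TRUE_VALUE = {"goat": 0.0, "deer": 0.4, "dog": 1.0, "adult": 3.0, "child": 3.2}

rng = np.random.default_rng(0)
X, y = [], []
for _ in range(1000):
    a, b = rng.choice(OBSTACLES, size=2, replace=False)
    # Simulated participant: spare `a` with probability sigmoid(v_a - v_b).
    p_spare_a = 1.0 / (1.0 + np.exp(TRUE_VALUE[b] - TRUE_VALUE[a]))
    spared_a = rng.random() < p_spare_a
    # Feature vector: +1 for one candidate obstacle, -1 for the other.
    x = np.zeros(len(OBSTACLES))
    x[OBSTACLES.index(a)] += 1.0
    x[OBSTACLES.index(b)] -= 1.0
    X.append(x)
    y.append(int(spared_a))

# Fit on one portion of the trials and evaluate on held-out trials,
# mirroring the train/test setup described above in spirit.
X_tr, X_te, y_tr, y_te = train_test_split(np.array(X), np.array(y), random_state=1)
model = LogisticRegression(fit_intercept=False).fit(X_tr, y_tr)

print(f"held-out accuracy: {model.score(X_te, y_te):.2f}")
for name, value in zip(OBSTACLES, model.coef_[0]):
    # Learned values are identified only up to a constant shift.
    print(f"value of {name}: {value:+.2f}")
```

The learned coefficients are exactly the kind of per-obstacle values the article describes, which is what makes such a model transparent: the numbers can be read off and inspected directly.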

For example, this group deemed deer more valuable than goats. All other factors being equal, a self-driving car using this model would run over a goat to spare a deer. Dogs ranked above deer, and humans above all animals. Children were also valued more highly than adults, although the difference was marginal and not statistically significant.

Such a model would also be capable of taking other, more nuanced factors into consideration. If society deemed a deer only slightly more valuable than a goat, but the car had to switch lanes in order to hit the goat, the model could compare the moral costs of the two decisions: passively running over the slightly more valuable animal, or running over the less valuable one at the cost of taking a deliberate action to do so.
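
A minimal sketch of that trade-off, assuming a fixed moral penalty for the deliberate act of switching lanes; the function, values, and penalty here are hypothetical numbers, not figures from the study:

```python
# Lane-choice rule: pick whichever option carries the lower moral cost.
# The switch penalty models the cost of deliberately steering into an
# obstacle rather than staying on course; all numbers are invented.
def choose_lane(value_in_lane, value_other_lane, switch_penalty=0.2):
    """Return "stay" or "switch", whichever has the lower moral cost."""
    cost_stay = value_in_lane                        # passively hit the obstacle ahead
    cost_switch = value_other_lane + switch_penalty  # actively steer into the other
    return "stay" if cost_stay <= cost_switch else "switch"

# Deer ahead, goat in the other lane: when the value gap is small, the
# penalty for deliberately swerving can tip the decision toward staying.
print(choose_lane(value_in_lane=0.4, value_other_lane=0.3))  # -> "stay"
print(choose_lane(value_in_lane=3.0, value_other_lane=0.0))  # -> "switch"
```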

“In principle, other factors, such as different probabilities of injury or death, could also be included in the model, but that was not within the scope of this study,” Sütfeld explained.

An alternative approach might use a more sophisticated form of machine learning that relies on neural networks to arrive at a decision. These more complex algorithms, loosely modeled on the structure of biological brains, have recently seen great success in many areas of artificial intelligence, such as object recognition and game playing. However, these systems do have a downside.

RELATED: When Self-Driving Cars Crash: Who Lives?

“Neural networks are still mostly black boxes for us,” said Sütfeld. “We can see what we put into them and we can see what comes out, but we cannot really grasp what happens in between.”

A less complex algorithm could be almost as accurate as a neural network but offer far more transparency. In the case of a “value-of-life” model, for example, scientists know the value assigned to each object and how the algorithm arrives at a decision. Such models can also factor in other variables, such as whether someone ran into the street or made an illegal turn.

“This kind of transparency may be very important when it comes to public acceptance of these models,” Sütfeld pointed out.

Overall, machine-learning algorithms, whether complex or simple, are probabilistic models, and there is some debate about whether they can reflect societal values more accurately than categorical rules can. Such rules were recently laid out by the German Federal Ministry of Transport and Digital Infrastructure, which last month issued explicit ethical guidelines for automated and connected cars that take into account safety, human dignity, individual self-determination, and data autonomy.

RELATED: Two Autonomous Passenger Drones Could Make Flying Taxis Possible

Although the guidelines are a step in the right direction, Sütfeld said there is some disconnect between what the committee considers morally correct and how a person would actually behave. For example, one guideline states that computer algorithms may not factor in a potential victim’s age when deciding whom to endanger.

If a human were put in the unavoidable situation of having to run over either an elderly person or a child, what choice would they make? It’s the kind of unfortunate decision that self-driving vehicles will have to confront.

“Do we want them to behave as humans would, or adhere to categorical rules?” asked Sütfeld.

Of course, no system, however reasonable or moral, will be able to avoid every harmful outcome on the road. But building models based on the most realistic scenarios available is important.

“What we can say for now,” Sütfeld noted, “is that VR is in our opinion a logical starting point and a viable solution that should be considered.”