Sütfeld and Pipa found that a simple “value-of-life” model, which in this study assigned a single value to each object, worked best. It was able to find a moral middle ground among all of the decisions made by the study participants, providing an average value for each obstacle.
For example, this group deemed deer more valuable than goats. All other factors being equal, a self-driving car using this model would run over a goat to save a deer. Dogs were found to be more valuable than deer. Humans were found to be more valuable than animals. Children were also found to be more valuable than adults, although the difference was marginal and not statistically significant.
Such a model could also take other, more nuanced factors into consideration. If society deemed a deer only slightly more valuable than a goat, but the car had to switch lanes in order to hit the goat, the model could compare the moral costs of both decisions — running over the slightly more valuable animal, or running over the less valuable one but taking a deliberate action to do so.
“In principle, other factors, such as different probabilities of injury or death, could also be included in the model, but that was not within the scope of this study,” Sütfeld explained.
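The decision logic described above can be sketched in a few lines of code. This is a minimal illustration, not the study's actual implementation: the object values and the lane-change penalty are invented for the example (only the relative ordering of goats, deer, dogs, and humans comes from the article).

```python
# Hypothetical sketch of a "value-of-life" decision model.
# All numbers are illustrative assumptions, not figures from the study;
# only the ranking (goat < deer < dog < adult < child) follows the text.

OBJECT_VALUE = {
    "goat": 1.0,
    "deer": 1.2,    # participants ranked deer above goats
    "dog": 1.5,     # and dogs above deer
    "adult": 10.0,  # humans above all animals
    "child": 10.5,  # children slightly above adults (not significant)
}

# Assumed moral cost of actively swerving to hit something.
LANE_CHANGE_PENALTY = 0.3

def moral_cost(obstacle: str, requires_lane_change: bool) -> float:
    """Total moral cost of hitting an obstacle, including the cost
    of taking a deliberate action (changing lanes) to do so."""
    cost = OBJECT_VALUE[obstacle]
    if requires_lane_change:
        cost += LANE_CHANGE_PENALTY
    return cost

def choose(option_a, option_b):
    """Pick the option with the lower total moral cost.
    Each option is a pair (obstacle, requires_lane_change)."""
    return min(option_a, option_b, key=lambda opt: moral_cost(*opt))

# Staying in lane hits the deer (cost 1.2); swerving to hit the
# goat costs 1.0 + 0.3 = 1.3 — so the model stays in its lane.
print(choose(("deer", False), ("goat", True)))
```

With these assumed numbers, the action penalty outweighs the small value gap between the two animals, which is exactly the kind of trade-off the article describes.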
An alternative approach might use a more sophisticated form of machine learning that relies on neural networks to arrive at a decision. These are more complex algorithms, similar in structure to biological brains, that have recently had great success in many areas of artificial intelligence, such as object recognition and game playing. However, these systems do have a downside.
“Neural networks are still mostly black boxes for us,” said Sütfeld. “We can see what we put into them and we can see what comes out, but we cannot really grasp what happens in between.”
A less complex algorithm could be almost as accurate as a neural network while offering far more transparency. In the case of a “value-of-life” model, for example, scientists know the assigned value of each object and how the algorithm arrives at its decision. Such models can also factor in other variables, such as whether someone ran into the street or made an illegal turn.
“This kind of transparency may be very important when it comes to public acceptance of these models,” Sütfeld pointed out.
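One way such extra variables could enter a value-of-life model is as explicit multipliers on the base values. The sketch below is purely hypothetical — the weights and the policy of discounting risky behavior are assumptions for illustration — but it shows why the approach is auditable: every factor in the decision is a named, inspectable number, unlike the hidden weights of a neural network.

```python
# Hypothetical extension of a value-of-life model with context factors.
# All numbers and the discounting policy are illustrative assumptions.

BASE_VALUE = {"adult": 10.0, "child": 10.5}

# Assumed policy: behavior that created the hazard lowers the weight
# given to protecting that person.
CONTEXT_WEIGHT = {
    "crossing_legally": 1.0,
    "ran_into_street": 0.8,
}

def weighted_value(obstacle: str, context: str) -> float:
    """Value of an obstacle after applying a context multiplier.

    Because each factor is an explicit number, the final decision
    can be audited step by step — the transparency advantage over
    a black-box model."""
    return BASE_VALUE[obstacle] * CONTEXT_WEIGHT[context]

print(weighted_value("adult", "crossing_legally"))  # base value, 10.0
print(weighted_value("adult", "ran_into_street"))   # discounted value
```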
Overall, machine-learning algorithms, whether complex or simple, are probabilistic models, and there is some debate about whether they’re capable of reflecting societal values more accurately than establishing categorical rules. Such rules were recently laid out by the German Federal Ministry of Transport and Digital Infrastructure, which last month issued explicit ethical guidelines for automated and connected cars that take into account safety, human dignity, individual self-determination, and data autonomy.
Although the guidelines are moving in the right direction, Sütfeld said that there is some disconnect between what the committee thinks is morally correct and how a person would actually behave. For example, one guideline states that computer algorithms cannot factor in age as a way to classify whether a potential victim is expendable.
If a human is put in the unavoidable situation of having to run over an elderly person or run over a child, what choice would they make? It’s the kind of unfortunate decision that self-driving vehicles will have to confront.
“Do we want them to behave as humans would, or adhere to categorical rules?” asked Sütfeld.
Of course, no system, however reasonable or moral, will be able to completely avoid tragic outcomes in traffic. But building models based on the most realistic scenarios is important.
“What we can say for now,” Sütfeld noted, “is that VR is in our opinion a logical starting point and a viable solution that should be considered.”