Drivers Prefer Autonomous Cars That Don't Kill Them - InformationWeek


News  |  6/25/2016 12:06 PM

Drivers Prefer Autonomous Cars That Don't Kill Them

A new study shows that most people prefer that self-driving cars be programmed to save the most people in the event of an accident, even if it kills the driver. Unless they are the drivers.


A car is about to hit a dozen pedestrians. Is it better for the car to veer off the road and kill the driver but save the pedestrians? Or is it better to save the driver and kill all those other people? That's the thorny philosophical question that the makers of autonomous vehicles -- self-driving cars -- are grappling with these days, and a new study sheds some light on what people actually want that car to do.

It turns out that the answer depends on whether you are the driver of the car or not. People generally think it's a good idea for autonomous vehicles to save the most lives. The study calls this a utilitarian autonomous vehicle, which fits the utilitarian moral doctrine of the greatest good for the greatest number.
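To make the contrast concrete, here is a purely illustrative toy sketch of the two crash-response policies the study compares. Nothing below comes from the study or from any real autonomous-vehicle software; the function name, the two-option "swerve or stay" framing, and the casualty-counting logic are all assumptions made for illustration.

```python
# Toy sketch (illustrative only): a dilemma reduced to two options.
# "swerve" sacrifices the car's occupants; "stay" hits the pedestrians.

def choose_action(occupants: int, pedestrians: int, policy: str) -> str:
    """Return 'swerve' or 'stay' for a hypothetical two-option dilemma."""
    if policy == "utilitarian":
        # Minimize total casualties, even at the occupants' expense.
        return "swerve" if pedestrians > occupants else "stay"
    elif policy == "self-protective":
        # Always protect the vehicle's occupants.
        return "stay"
    raise ValueError(f"unknown policy: {policy}")

print(choose_action(occupants=1, pedestrians=10, policy="utilitarian"))      # swerve
print(choose_action(occupants=1, pedestrians=10, policy="self-protective"))  # stay
```

The study's tension lives in that one branch: respondents endorsed the "utilitarian" branch for everyone else's car while preferring the "self-protective" branch in their own.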

Generally, people would like it if everyone would use cars that saved the most lives, even if they had to sacrifice the life of the driver to do so. With one exception.

They don't want to have to buy this kind of car for themselves.

Generally, for themselves, people would prefer to buy the vehicle that would ensure the greatest safety for the driver and passengers.

"Even though participants still agreed that utilitarian AVs were the most moral, they preferred the self-protective model for themselves," the authors of the study wrote.

These results have significant implications for whether manufacturers can be commercially successful with these utilitarian algorithms, and whether regulating autonomous vehicles to require these algorithms will be successful or not.

[Like the idea of self-driving cars? What about flying ones? Read Google's Larry Page Investing Millions in Flying Cars.]

"The study participants disapprove of enforcing utilitarian regulations for [autonomous vehicles] and would be less willing to buy such an AV," the study's authors wrote. "Accordingly, regulating for utilitarian algorithms may paradoxically increase casualties by postponing the adoption of safer technology."

This new study is authored by academics from three universities across the disciplines of economics, psychology, and social science: Jean-François Bonnefon of the Toulouse School of Economics, Azim Shariff of the University of Oregon department of psychology, and Iyad Rahwan of the media lab at MIT.

The researchers conducted six separate online surveys with 1,928 total participants in 2015. Each survey asked a different set of questions, progressively getting closer to the moral dilemma at hand.

The first survey, which included 182 participants, and the second survey, which had 451 respondents, presented a dilemma that varied the number of pedestrian lives that could be saved.

The third survey, which had 259 participants, introduced for the first time the idea of buying an autonomous vehicle that would be programmed to minimize casualties, even if that meant sacrificing the driver and passengers to save the lives of more pedestrians. This survey also produced the first evidence that participants would prefer the self-protective model for themselves.

(Image: Henrik5000/iStockphoto)

Survey number four, which had 267 respondents, added more detail to that moral dilemma with a rating system for various algorithms. It produced a result similar to the third survey's: "It appears that people praise utilitarian, self-sacrificing AVs and welcome them on the road, without actually wanting to buy one for themselves."

Report authors say that this is the "classic signature of a social dilemma, in which everyone has a temptation to free-ride instead of adopting the behavior that would lead to the best global outcome. One typical solution in this case is for regulators to enforce the behavior leading to the best global outcome."

That happens in some cases. For instance, some citizens object to regulations that require the immunization of children before they start school, the study's authors note.

The authors further tested this idea of regulating these utilitarian autonomous vehicles in the fifth survey, which included 376 participants. Results varied depending upon how many pedestrians would actually be saved; participants were given scenarios in which between one and ten pedestrians could be spared.

In the sixth survey, the authors asked if the 393 participants would buy vehicles with utilitarian algorithms mandated by the government. Participants said they were less likely to buy such a vehicle that came with this mandated algorithm than to buy a self-driving car that didn't have the algorithm.

"If both self-protective and utilitarian AVs were allowed on the market, few people would be willing to ride in utilitarian AVs, even though they would prefer others to do so," the authors concluded. "… Our results suggest that such regulation could substantially delay the adoption of AVs, which means that the lives saved by making AVs utilitarian may be outnumbered by the deaths caused by delaying the adoption of AVs altogether."

The study's authors said that figuring out how to build ethical autonomous machines is one of the thorniest challenges in artificial intelligence today, and right now there's no simple answer.

Today's self-driving car regulations are really a patchwork of state regulations. In April, a handful of technology and automotive companies announced the formation of the Self-Driving Coalition for Safer Streets to accelerate federal regulations around the move to driverless cars.

Several carmakers and technology companies are working on making autonomous vehicles, including Toyota, Google, Acura, BMW, and many others.

Jessica Davis has spent a career covering the intersection of business and technology at titles including IDG's Infoworld, Ziff Davis Enterprise's eWeek and Channel Insider, and Penton Technology's MSPmentor.

Comments
JasonC810  |  6/27/2016 1:53:18 AM
Occupant of the car should be protected preferentially by the car
If I could count on an autonomous car saving a group of pedestrians over a single occupant in the car, I could get away with murder. If there were someone I wanted to kill, I would simply get a group of people together to jump out in front of that person's car, and the car would do it for me. It would all be chalked up to an accident and the car doing its job.
PhaetonR917  |  6/26/2016 11:31:36 PM
How is it either ethical or legal to program machines to choose people and kill them?
So if you program a device to kill people, that is not murder?

Who are these ethics experts who are advising the developers?

Are neither the ethics experts nor the developers subject to the law?
ftglfv  |  6/25/2016 10:48:50 PM
Another zombie story
Why does this story keep getting republished every few months? I'm not sure if it's fearmongering or stupidity by the authors. The fact is that HUMAN DRIVERS don't make decisions like this. There are no split-second thoughts of "who should I kill."

As for automated vehicles, since they will drive more cautiously by default and will be constantly aware of their surroundings, it is extremely unlikely that this type of logic needs to be built in. They'll just stop, because they will have time to stop: they are constantly aware of their surroundings, and they will be driving at distances that allow them to stop safely.