Driverless Cars Present Ethical Challenges

How will a driverless car make life or death decisions, and whose life takes priority? It's an ethical question that will have to be addressed as self-driving cars take to the road.

Ariella Brown

August 18, 2015


Driverless cars are said to be safer because they reduce accidents that occur as a result of human error. However, accidents will still happen, and in a situation in which a collision cannot be avoided, what will direct the driverless car?

Say you’re driving at 30 miles an hour when a child suddenly chases a ball right into the path of your car. You would brake if you could stop in time. If you couldn’t brake, you’d swerve to avoid hitting the child. But what if swerving forces you either to hit another car with passengers in it or a truck that would cause harm to those in your car? Does self-preservation override all other considerations? Would we be driven by the emotional pull of saving a child over all else? Or would we be paralyzed into doing nothing because we can’t bring ourselves to take part in any action that causes harm?

These are the types of questions that bring ethics specialists and engineers together in addressing the challenge of directing driverless cars. Two names that come up repeatedly in articles concerned with ethically directed algorithms for such scenarios are Chris Gerdes, professor of mechanical engineering at Stanford University, and Patrick Lin, professor of philosophy at California Polytechnic State University and director of the university’s Ethics + Emerging Sciences Group.

Essentially, it boils down to a variation on the decades-old Trolley Problem, which asked what to do about a runaway trolley. Instead of a trolley, there may be a bus full of school children. While the driverless car can be programmed to avoid hitting the bus, the question arises: what if avoiding it causes the car to hit someone or something else?

As Gerdes has observed, “These are very tough decisions that those who design control algorithms for automated vehicles face every day.” In such scenarios, there are no choices that avoid all harm, only ways to mitigate it: say, hitting the car with fewer people in it, or hitting the bus because it can better survive the impact. Or perhaps your car would drive itself off a cliff, committing suicide to avoid harming anyone else.

That’s what the article Driverless Cars -- Pandora's Box Now Has Wheels illustrated with its “Unavoidable Death Warning” image.

That picture posits that the choice will be given to a human rider; however, part of the problem anticipated for a driverless car in that situation is that the outcome will have to be determined ahead of time by the human who sets up the algorithm that directs it.

In fact, the outcome would have to be planned in advance by someone who is not directly affected by the autonomous car’s choice. That gives rise to a complaint many readers post in comments on articles raising the question of algorithms making such life-and-death decisions. Some say that, no matter what, they’d want to save their own or their kids’ lives and would not want any solution involving crashing their car in a way that threatens their safety.


Objectively speaking, though, saving one’s own family is not necessarily the best moral choice when securing the safety of a few results in a larger number of deaths or injuries. Another thing to consider is that any time a person agrees to be driven by another, whether that person is a taxi driver, bus driver, train conductor, or airline pilot, one is subject to another’s decisions about what to do in such situations. Certainly, anyone driving others should follow certain rules, and those rules could be implemented in programming.

What I would suggest is a consistent code that is fully transparent, with an opt-in agreement required for all who ride in the car. That means riders have to be aware of the autonomous car’s programming and the possible risks they assume in riding in it.
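To make that concrete, here is a minimal sketch in Python of what an opt-in check against a published policy might look like. Every name in it (the POLICY record, the begin_ride function, the version string) is a hypothetical illustration, not the interface of any real vehicle system.

    # Hypothetical sketch: the car's harm-mitigation policy is published in
    # full, and a ride cannot begin until every rider has opted in to it.

    POLICY = {
        "version": "2015-08",
        "rule": "minimize total expected harm; every life weighted equally",
        "disclosed_risks": [
            "the car may accept harm to its occupants if doing so minimizes overall harm",
        ],
    }

    def begin_ride(riders, acknowledged_versions):
        # Refuse to start unless every rider has opted in to the current policy.
        for rider in riders:
            if acknowledged_versions.get(rider) != POLICY["version"]:
                raise PermissionError(
                    f"{rider} has not opted in to policy {POLICY['version']}"
                )
        return "ride started"

    # Example: both riders have acknowledged the current policy version.
    print(begin_ride(["Alice", "Bob"], {"Alice": "2015-08", "Bob": "2015-08"}))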

As for the decisions themselves, the cars would have to follow Mr. Spock’s dictum: “The needs of the many outweigh…the needs of the few…or the one.” The autonomous car would have to assess what would cause the least harm to life overall without assigning any different weights to one life over another. The people in the driverless car would not be considered of greater value than the people outside of it who might be struck, as each human life would be considered equal.
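Here, too, a minimal Python sketch may help illustrate the rule, assuming invented maneuver names and harm estimates: it computes the expected number of people harmed under each option, counting every life equally, and picks the option with the lowest total. Nothing in it comes from an actual vehicle’s software.

    # Hypothetical sketch of the equal-weight, least-harm rule described above.
    # Maneuver names and harm figures are invented for illustration.

    def expected_harm(maneuver):
        # Sum over predicted outcomes: probability of each outcome times the
        # number of people harmed in it; occupants and bystanders count equally.
        return sum(prob * harmed for prob, harmed in maneuver["outcomes"])

    def choose_maneuver(maneuvers):
        # Pick the option with the lowest expected harm overall.
        return min(maneuvers, key=expected_harm)

    # Every option carries some risk; none avoids all harm.
    options = [
        {"name": "brake_straight", "outcomes": [(0.9, 1), (0.1, 0)]},  # 0.9 expected
        {"name": "swerve_left",    "outcomes": [(0.5, 2), (0.5, 0)]},  # 1.0 expected
        {"name": "swerve_right",   "outcomes": [(0.3, 4), (0.7, 0)]},  # 1.2 expected
    ]

    print(choose_maneuver(options)["name"])  # -> brake_straight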

A Vulcan solution is not necessarily a human solution, but the argument that driverless cars offer better safety overall is predicated on the premise that a machine will act more logically, and therefore more safely, than a person.


About the Author

Ariella Brown

Ariella holds a PhD in English and has taught writing to college and graduate students. Since 2005 she has served as a scorer for the SAT essay. She is the owner of Write Way Productions, which publishes Kallah Magazine. Her freelance writing services include articles, press releases, letters, blogs, Web content development, editing, and ad copy, as well as ad design.
