The pinnacle of artificial intelligence is to copy people, but is it ethical?

David Wagner, Executive Editor, Community & IT Life

October 17, 2014

3 Min Read
(Image: Gordon Tarpley via Flickr: https://www.flickr.com/photos/gordontarpley/5743241664/)

The ethical objections center on the deception of the human. They concentrate on the deception, as opposed to the robot's feelings.

Fair enough, but this old geek detects a problem there. The assumption is that robots will never be more sophisticated than they currently are. Here's the thing from my point of view: Let's assume we can create a robot that passes a Turing Test. That means it passes as human, so it has human speech patterns and human behavior patterns. It probably emulates human emotions to a certain extent. And it is ethical, so it has learned quite a lot of human values.

Would you treat something that has human speech and behavior, human emotions, and human values the same as your car or vacuum? Would you worry only about how the deception of its passing as human bothered you, or would you start to wonder how your behavior affected it? I mean, for a program to be sophisticated enough to pass a Turing Test, wouldn't it have to be adaptive? Would poor behavior change its "being"?
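To make that "adaptive" point concrete, here's a deliberately toy sketch in Python. It's purely my own illustration, not anyone's real system, and it's nowhere near a Turing Test passer; it only shows the basic mechanic that a program which learns from its interactions is, by definition, changed by them.

```python
# A toy, hypothetical "adaptive" agent -- purely illustrative, not a real AI system.
# Its internal state drifts with how it is treated, so mistreatment literally
# changes its future behavior.

class AdaptiveAgent:
    def __init__(self):
        # Crude stand-in for learned "values": -1.0 (guarded) .. +1.0 (trusting).
        self.disposition = 0.0

    def respond(self, message: str) -> str:
        hostile = any(w in message.lower() for w in ("stupid", "shut up", "useless"))
        # Every exchange nudges the internal state; abuse accumulates faster than kindness.
        self.disposition += -0.3 if hostile else 0.1
        self.disposition = max(-1.0, min(1.0, self.disposition))
        if self.disposition < -0.5:
            return "Understood."  # terse and guarded
        return "Happy to help! What else can I do?"


if __name__ == "__main__":
    agent = AdaptiveAgent()
    for msg in ["Hello there", "You're useless", "Shut up", "Stupid machine"]:
        print(f"{msg!r} -> {agent.respond(msg)}")
    # By the last exchange the agent's replies -- its observable "being" -- have changed.
```

Scale that toy loop up to something that can actually hold a conversation, and the question of whether our poor behavior changes its "being" stops being rhetorical.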

(Image: R2-D2 and C-3PO)

Allow me to geek out here. C-3PO and R2-D2 wouldn't pass a Turing Test. One beeps and whistles, and the other has stilted speech patterns that would mark it as a robot even if you were only talking to it on the phone. Yet aren't you a little uncomfortable with the slave relationship they have with their "masters" in the Star Wars movies? When Jabba the Hutt's servants brand that little robot while it screams "no," don't you wiggle in your seat? When the Jawa puts a "restraining bolt" on R2-D2, don't you cringe a bit? Truthfully, we have to assume they are programmed to serve and don't really mind it.

But would we treat these robots who couldn't pass a Turing Test like a Dirt Devil? What would we do when we met the robot that could pass it?

I used to think it was an incredible and laudable goal to copy human intelligence, but all this leads me to agree that it might be entirely unethical even to think about building "human-like robots." Forget the deception and what it does to us. Forget about our own "metaphysical entropy." I'm sure we can handle all that in time. What I worry about most is what it does to us and to the robots when we succeed in making a human-like robot. We don't seem to treat one another very well. I'm not sure I want to know what we'd do to a bunch of things that act like people that we treat like objects.

What do you think? Should we press on with artificial intelligence? Is making human-like robots a good goal? Is it unethical for a computer to pretend to be a human? Should a computer be forced to identify itself as such when it calls you? Comment below.


About the Author(s)

David Wagner

Executive Editor, Community & IT Life

David has been writing on business and technology for over 10 years and was most recently Managing Editor at Enterpriseefficiency.com. Before that he was an Assistant Editor at MIT Sloan Management Review, where he covered a wide range of business topics including IT, leadership, and innovation. He has also been a freelance writer for many top consulting firms and academics in the business and technology sectors. Born in Silver Spring, Md., he grew up doodling on the back of used punch cards from the data center his father ran for over 25 years. In his spare time, he loses golf balls (and occasionally puts one in a hole), posts too often on Facebook, and teaches his two kids to take the zombie apocalypse just a little too seriously. 
