Will Our Love Of 'Imperfect' Robots Harm Us? - InformationWeek


IoT | IT Life | Commentary
10/16/2015 12:06 PM
David Wagner

Will Our Love Of 'Imperfect' Robots Harm Us?

Flawed robots make people more comfortable in certain settings, which is fine. But what happens when we need robots to be perfect?

These 8 Technologies Could Make Robots Better

We are drawn to robots that have the same kind of cognitive biases and flaws that we do, according to a report from researchers at the University of Lincoln in the UK. Because of this, we may need to consider making robots less perfect in order to build positive, long-term relationships between humans and robots.

This is an especially important finding considering Gartner recently predicted that by the end of 2018, 3 million people worldwide will have a robot for a boss. If we will soon be interacting with robots at work, even having some of them ordering us around, is it a good idea to make them less perfect to make us comfortable? 

The University of Lincoln researchers, who presented their findings at the International Conference on Intelligent Robots and Systems (IROS) in Hamburg earlier this month, didn't tackle that specific question. Instead, they focused on robots used in education for children on the autism spectrum and those that support caregivers for the elderly.

The researchers introduced the cognitive biases of forgetfulness and "empathy gap" into two different robots: the ERWIN (Emotional Robot with Intelligent Network), which can express five basic emotions, and the small yellow robot toy Keepon that's been used to study child social development. In both instances, half the interactions with these robots included cognitive biases and half of them did not.

Overwhelmingly, human subjects said they enjoyed a more meaningful interaction with the robots when machines made mistakes.

(Image: CBS Television via Wikipedia)

"The cognitive biases we introduced led to a more humanlike interaction process," Mriganka Biswas, the lead researcher, explained in a press release. "We monitored how the participants responded to the robots and overwhelmingly found that they paid attention for longer and actually enjoyed the fact that a robot could make common mistakes, forget facts and express more extreme emotions, just as humans can."

He went on to say something a little more controversial, in my mind: "As long as a robot can show imperfections which are similar to those of humans during their interactions, we are confident that long-term human-robot relations can be developed."

Granted, this study was on children and the elderly. The needs of these groups are clearly different from those of people in an office setting. At the same time, the notion that humans enjoy seeing flaws and biases in robots because it makes them seem more like us is worrisome.

Some humans have a bias toward racism. No doubt a racist robot would be pleasing to those people. Sure, that's an extreme example. Cognitive biases take all forms, but we try to train ourselves out of as many as possible in a business setting. For instance, many people have decision-making cognitive biases like those that cause us to go with heuristic shortcuts (or gut feelings) that lead to fast, but not always accurate, decisions. Do we want robots that shoot from the hip (or look like they do)? Aren't we trying to run data-driven businesses?

For most people, exposure to robots has been limited to science fiction. We're willing to accept the android Lieutenant Commander Data from Star Trek, because he has no emotions. We're OK with him remembering everything and being faster and stronger because he's lacking something essentially human. We can handle C-3PO from the Star Wars movie franchise, because he's a coward and a bumbling fool, even though he is fluent in more than 6 million forms of communication and can calculate probability faster than humans. These flaws allow us to accept our weaknesses in front of machines that are potentially superior to us.

[What's wrong with robot masters anyway? Read 10 Reasons Why Robots Should Rule the World.]

What happens in a business setting? Do we keep the flaws in robots to make people happy or do we learn to accept our own inadequacies in the name of better business? We're not there yet. Robots aren't superior to humans.

But if Gartner is right, it won't be long until a robot gives you an order. Will you trust the order? Will you take its judgment over your own? Will it need to pretend to forget things just so you can accept its orders? Long before we have to worry about robots being our new masters, we need to think about how we will work together, side-by-side with companion robots. Daryl Plummer, a Gartner vice president and Fellow, said, "In the next few years, relationships between people and machines will go from cooperative, to co-dependent to competitive."

If we can't handle being cooperative without having to dumb down robots, how are we going to handle being competitive with them?

David has been writing on business and technology for over 10 years and was most recently Managing Editor at Enterpriseefficiency.com. Before that he was an Assistant Editor at MIT Sloan Management Review, where he covered a wide range of business topics including IT, ...
Comments

Michelle, User Rank: Ninja
11/1/2015 | 10:58:51 AM
Re: Roboboss
@SachinEE What could happen while the boss is being updated? Are there backup human bosses to take over while the robot is updated? I don't see robot bosses working out very well in the future...

SachinEE, User Rank: Ninja
10/31/2015 | 11:18:46 PM
Re: Roboboss
@Michelle True. It would be even more alarming if those software patches took weeks to formulate. Then we'd have to put up with a defective boss, at great risk.

SachinEE, User Rank: Ninja
10/31/2015 | 11:16:59 PM
Re: Roboboss
@David: Robotic managers? I think that is a long way off. Moreover, how do you think people would react to that? All the employees who were working less under the nose of a human manager would now have to work more within the same time constraints.

Michelle, User Rank: Ninja
10/27/2015 | 5:41:23 PM
Re: Roboboss
Hmm... I think we're due for a long wait on that.

David Wagner, User Rank: Strategist
10/19/2015 | 12:51:00 PM
Re: Roboboss
@Technocrati- There is a potential upside. A robot can be trained to think "humans should only work 40 hours." A human manager might say, "Why are these people leaving? They're weak."

A robot manager won't view you with bias if you work within its expectations. A robot manager will see your work for what it really is and won't dislike it because of the joke you made about its haircut last week.

A robot manager can be taught to treat each person with respect.

The interesting thing is that we might program our better natures into our robots more than we instill them in real people.

David Wagner, User Rank: Strategist
10/19/2015 | 12:47:24 PM
Re: oxymoron
@pedrogonzlez- It isn't about that kind of perfection. Sure, robots will malfunction. But it is about known human biases. We know that, for the sake of our brains, we take shortcuts in our thinking. We can only juggle so much information. We can only include so much data in our thinking. This saves energy, and for the most part it works for people. Think of it like "just good enough" product creation.

Humans can untrain themselves from these biases in short bursts and think very deeply, but for most functions it isn't necessary.

However, a robot or AI doesn't have these energy issues. A robot or AI doesn't need biases or shortcuts. So we can, in fact, program an AI to think more clearly than a person does on most decisions.

Think about it this way: If I asked you to pick between two cars you were thinking of buying, what kind of information would you look at? You might look at the price, the color, the size of the seats. Research says you'd also look at the cupholders. You'd see how comfortable it was. You'd basically pick 8 or 10 things that were important to you in a car.

What would happen if I knew you were buying a car and dumped all that information on you, plus the blueprints of the car, the names of the people who designed it, the names of the factory workers, the detailed specs of the factory machines that built it, the molecular structure of the paint on the car, the source code for the infotainment system, etc.? You would likely ignore this information. It would be too much for your brain to handle.

It would not be too much at all for an AI to handle. But if a robot said to you, "Don't buy car A, because I saw a flaw in the molecular structure of the paint," you'd laugh at it. That's the kind of imperfection/perfection we're talking about.

David Wagner, User Rank: Strategist
10/19/2015 | 12:39:42 PM
Re: Roboboss
@Michelle- I don't know. I'm still waiting for most people to be programmed with compassion. :)

progman2000, User Rank: Ninja
10/19/2015 | 5:25:19 AM
Re: Roboboss
Or, to more closely mimic the office environment, sell the boss robots in sets of two or three. Some have compassion, some don't. But they all bug you about unrealistic expectations anyway.

I'm not bitter...

Technocrati, User Rank: Ninja
10/18/2015 | 11:20:25 PM
Re: Roboboss
@Michelle So true. I can hear it now: future updates on the way for issues of compassion and common sense.

Michelle, User Rank: Ninja
10/18/2015 | 9:15:51 PM
Re: Roboboss
We have plenty of trouble with software bugs in our computers and devices. We don't need more in our future bosses too!