Affective computing systems influence human emotions. They can't yet recognize or respond to emotions well, but that will change.

Lisa Morgan, Freelance Writer

July 18, 2019

7 Min Read

Affective computing systems, including care robots and virtual assistants, can facilitate more intimate human-machine relationships. Already, systems have been designed to treat post-traumatic stress disorder (PTSD), depression and dementia. Meanwhile, individuals are being nudged in ways that impact their consumption and political choices, whether they realize it or not.

Evoking emotion is the easiest problem to solve, as evidenced by the 2016 U.S. election tampering. Recognizing emotion and responding appropriately to it are more difficult problems, let alone creating AI systems that actually experience emotion. Nevertheless, humans want AI to at least sense emotion now because they're tired of screaming at interactive voice response (IVR) systems, chatbots and virtual assistants out of frustration.

"It’s easy for people to take for granted what a person does and just [assume one can] build a machine to do that," said Phillip Alvelda, CEO of big data recruiting company Brainworks. "We're starting to peel apart what's happening in human brains and every year we have a new machine that mimics a new part of the brain."


When Alvelda was a program manager at the Defense Advanced Research Projects Agency (DARPA), the organization funded research at the University of California, Berkeley and Carnegie Mellon University that used MRI to understand which parts of the brain are involved in humans' emotional responses to spoken information. The researchers were able to map how different concepts are represented in the brain. An interesting finding was that those maps demonstrated a 95% similarity across research subjects.

"By exploiting some of these new neuroscience discoveries, we can begin to imagine new generations of machines that are vastly more capable," said Alvelda.

Nudging

Algorithmic nudging evokes human emotion with the goal of influencing behavior, whether for commercial or political gain or to improve individual and societal well-being. Although nudging is becoming more common, bias remains an outstanding issue, even in systems that are designed to improve well-being.


For example, a person might be nudged to become more social because that person recently withdrew from their usual level of social interaction. Sudden social withdrawal can indicate depression and suicidal ideation. Theoretically, if a nudge can intervene at an appropriate point, then those negative emotional states might be avoided or minimized.

Evoking emotion is the easiest problem to solve in the synthetic emotion realm because it doesn't require emotional recognition. It simply requires a basic understanding of the stimuli that are most likely to produce the desired outcome.

Recognizing emotion

Recognizing emotion should not be confused with sentiment analysis, which is less nuanced. Positive and negative sentiment are less precise than the six basic emotions identified in research at the University of Glasgow: anger, disgust, fear, happiness, sadness and surprise. A more recent study by the Greater Good Science Center identified 27 emotions.

"You need to train an emotional recognition algorithm on really subtle details, such as how people’s faces look [when they respond to a stimulus] which is really challenging," said Briana Brownell, founder and CEO of cognitive platform provider PureStrategy. "We still have a way to go to truly understanding human emotion."


Identifying emotion can be done in a number of ways, such as analyzing written or spoken words, intonation, facial expressions, body language and changes in the body's physical state (e.g., increased heart rate or pupil dilation). Combining various sensory inputs can enable more accurate results.
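To make the idea of combining inputs concrete, here is a minimal sketch of late fusion across modalities. The modality names, weights and emotion labels are illustrative assumptions, not taken from any system described in this article.

```python
# Minimal sketch of late-fusion emotion recognition across modalities.
# Modality names, weights and labels are illustrative assumptions.

EMOTIONS = ["anger", "disgust", "fear", "happiness", "sadness", "surprise"]

def fuse_modalities(scores_by_modality, weights):
    """Combine per-modality emotion probabilities with a weighted average."""
    fused = {emotion: 0.0 for emotion in EMOTIONS}
    total_weight = sum(weights[m] for m in scores_by_modality)
    for modality, scores in scores_by_modality.items():
        for emotion in EMOTIONS:
            fused[emotion] += weights[modality] * scores.get(emotion, 0.0)
    return {e: v / total_weight for e, v in fused.items()}

# Example: face analysis and voice intonation disagree; fusion arbitrates.
scores = {
    "face":  {"happiness": 0.6, "surprise": 0.3, "fear": 0.1},
    "voice": {"fear": 0.5, "surprise": 0.4, "happiness": 0.1},
}
weights = {"face": 0.6, "voice": 0.4}
print(max(fuse_modalities(scores, weights).items(), key=lambda kv: kv[1]))
```

The weighting is the design choice that matters here: when one channel (say, a noisy microphone) is less trustworthy, its weight can be lowered without retraining the per-modality recognizers.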

Justin Richie, director of data science at digital business consulting company Nerdery, has approached emotion recognition from an app monitoring point of view, but human behavior can differ from human opinion, even when both involve the same individual.

"We'd track people's behaviors in apps and then tune the app to that behavior. Then when we'd talk to that person directly with surveys they'd say, 'I don't care about that," said Richie. “So we'd say, ‘[Your in-app behavior] indicates you care,’ but they said, 'That's not how we really feel.' [So you have to] converge how people act online and how they say they're going to act online.”

Responding to emotion

Gawain Morrison, CEO and co-founder of empathic AI company Sensum, produced the world's first emotional response horror film, which premiered at SXSW in 2011. Audience members were outfitted with wearables that monitored their responses to the content. They were also given an earpiece so, for example, the person whose heart rate was most elevated at a particular point in the movie might receive a personalized message, such as a whisper that said, "Look behind you." The goal was to deliver personalized experiences within a group experience.


"The futures of entertainment and gaming have this kind of personalized feedback and the ability to interact. You’re trying to push people into an emotional state and a journey," said Morrison. "You’re really looking for aggregates of movements and plot points, so if going down a dark alley elevates the heart rate by 30%, we've won."

While some forms of AI have been designed to deal with particular emotions, AI systems generally are not yet capable of responding appropriately to the emotions they detect.

"Even if Siri understood your emotional state, it couldn’t respond appropriately to it,"  said Yan Fossat, VP and principal investigator of the labs team at health agency Klick Health. "Right now, in the lab or in research environments, you can make a machine that reads emotion pretty accurately but having a computer deal with emotion effectively is not there yet."


Dr. Hossein Rahnama is working on a framework with MIT that allows individuals to create a digital version of themselves; one can interact with one's own avatar using a chatbot or a voice assistant. Rahnama, co-founder and CEO of Flybits, professor of machine intelligence at Ryerson University and visiting professor at the MIT Media Lab, said an aging financial institution CEO is having a digital version of himself created to ensure his values outlive him. To that end, the CEO is working with Rahnama's team, which passively mines the semantics of his interactions with his direct reports to understand his reactions. Eventually, people in the organization will be able to converse with the digital representation of the CEO long after he's gone.

Part of the secret sauce is a context engine into which four or five data streams flow. The context engine integrates with Slack and other corporate communication channels, which collectively enable Flybits to map keywords. More importantly, all of the data points and semantics feed an ontology map that has enabled the creation of templates, or themes, of communication derived from grammar, tone and email length.
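A minimal sketch of that idea: pull messages from a few communication streams, extract coarse features such as length and tone words, and bucket each message into a communication theme. The channel names, tone lists and theme rules below are illustrative assumptions, not Flybits' actual design.

```python
# Minimal sketch of a context engine's front end: coarse features from a
# few message streams, mapped to communication "themes". Channels, tone
# vocabularies and theme rules are illustrative assumptions.

TONE_WORDS = {
    "positive": {"great", "thanks", "appreciate", "excellent"},
    "negative": {"concern", "issue", "problem", "disappointed"},
}

def extract_features(message):
    """Return very coarse features: word count and a simple tone label."""
    words = [w.strip(".,!?").lower() for w in message.split()]
    tone = "neutral"
    for label, vocab in TONE_WORDS.items():
        if any(w in vocab for w in words):
            tone = label
            break
    return {"length": len(words), "tone": tone}

def theme_for(features):
    """Bucket a message into a hypothetical communication theme."""
    if features["tone"] == "negative":
        return "course-correction"
    if features["length"] <= 12 and features["tone"] == "positive":
        return "brief-encouragement"
    return "general-update"

streams = {
    "slack": ["Thanks, great progress on the rollout."],
    "email": ["I have a concern about the vendor contract terms we discussed."],
}
for channel, messages in streams.items():
    for msg in messages:
        print(channel, theme_for(extract_features(msg)))
```

In a real system the hand-written rules would give way to models trained on the ontology map the article describes, but the flow (streams in, features extracted, themes out) is the same.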


"If you look at the advancements in the AI space, the problem they have is they don’t understand context," said Rahnama. "I think we’re finally realizing that instead of building massive AI labs, we need to bring four or five complementary skillsets together that can help understand context when building these systems."

In addition to the need to understand situational context, there is the unique context of the individual that includes a person’s experiences, biases and beliefs, among other things. While categorizing emotions and labeling emotional responses helps advance emotion recognition, more research is necessary to enable more natural and more meaningful human-machine relationships.

Sensum's Morrison is attempting to advance the state of synthetic emotion as vice chair of the new IEEE P7014 emulated empathy standard. The standard will define a model for ethical considerations and practices in the design, creation and use of empathic technology. It specifically targets affective systems "that have the capacity to identify, quantify, respond to or simulate affective states."

"I think anyone working in the affective computing and even AI areas realizes [these technologies] require greater responsibility," said Morrison. "We've been approached by organizations looking to use CCTVs to do motion response measurement or using cameras and microphones in the prison system to prevent self-harm. You can see potential value in this stuff, but you may not always trust the infrastructure of the people who are going to be deploying them necessarily." 

About the Author(s)

Lisa Morgan

Freelance Writer

Lisa Morgan is a freelance writer who covers big data and BI for InformationWeek. She has contributed articles, reports, and other types of content to various publications and sites ranging from SD Times to the Economist Intelligence Unit. Frequent areas of coverage include big data, mobility, enterprise software, the cloud, software development, and emerging cultural issues affecting the C-suite.

