Neil Sahota, lead artificial intelligence advisor to the United Nations, shares his perspective on major UN AI projects and the biggest challenges facing AI today.

Jessica Davis, Senior Editor

February 22, 2021

8 Min Read

Once just science fiction, artificial intelligence is taking hold across industries and governments in a multitude of use cases. But these days, we're not so much worried about being overthrown by robot minions, as in so many movies.

Rather, there are bigger questions that impact our lives today. For instance, how and when should we share data for the greater good and when should we keep it for proprietary uses? Are particular artificial intelligence use cases, like facial recognition, ethical? How do we know we can trust the results of artificial intelligence? How do we know when AI is biased, and how can we fix that?

These issues are front and center as a new and very different administration takes over the executive branch of the US government. But there are plenty of projects underway in both government and business that will be impacted by these issues in the years to come.

To get a perspective on the state of AI's big issues, InformationWeek spoke with Neil Sahota, lead artificial intelligence advisor to the United Nations. He also works with the UN on their AI for Good Global Summit, and is the author of Own the AI Revolution.

Here are excerpts of that conversation, edited for this format.

What's the most interesting place that AI is being used right now?

Actually, it's an area called artificial empathy. While machines don't feel emotions, we've been able to teach them how to recognize emotions in people. Body language, tone of voice, even word choice or a hand gesture are data points that can help machines learn to decipher a person's emotional state in real time.

There's so much focus on using this to give people an outlet for mental health issues. There's an organization in the UK called the Rainbow Project that's providing an outlet for victims of domestic violence [using a chatbot to help people recognize whether or not they are victims of abuse].

It's not really a replacement for human relationships, but a safe space people can always engage with, no matter the time of day.

This is leading edge. There's a lot of work being done -- probably 25 solutions out there that people can use today. Those working on these projects need to know psychology and linguistics, but therapists generally haven't pursued AI, so it's very much an emerging space. Still, there are some basic tools out there that people are trying to use.

On the other hand, I think this technology got held up because a lot of people didn't think it was possible -- that there was no way a machine could understand a person's emotion.

Now that we've actually seen it, that’s no longer the case. I think that will open up a lot more doors.


How are the machines trained?

It's a combination of psychology -- what the different emotional states are and some of the things associated with them -- as well as kinesiology, body language, and linguistics: essentially, the ability to decode language. So even something that seems subtle to us, like using the word "buddy" instead of "friend," actually conveys a lot of meaning.
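To make that concrete, here's a minimal, hypothetical sketch of the text side of the problem -- word choice as features for an emotion classifier. The tiny labeled data set is invented for illustration, and none of this is the actual code behind the systems Sahota describes; real artificial-empathy systems would fuse this with tone-of-voice and body-language signals.

```python
# A minimal sketch of text-based emotion recognition: word choice
# (e.g., "buddy" vs. "friend") becomes a feature the model can weigh.
# The labeled examples below are invented for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

utterances = [
    "I can't take this anymore, everything is falling apart",
    "Thanks so much, buddy, that really made my day",
    "Leave me alone, I don't want to talk about it",
    "I'm so excited to see my friend this weekend",
]
emotions = ["distress", "joy", "anger", "joy"]  # hand-labeled states

# TF-IDF turns word choice into numeric features; logistic regression
# learns which words and phrases signal which emotional state.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(utterances, emotions)

print(model.predict(["I really appreciate you, buddy"]))
```

A production system would train on far more examples -- which is where the social media data discussed below comes in -- and add audio and video features alongside the text.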

What kind of training data is used to teach human emotion to the machine? Is it crisis center calls or customer service calls?

It could be those things. But what we've found is that social media is actually really good.

You could use, say, an essay someone wrote for college or for a job interview, but they may not actually be using their real voice. The more real-world, real-voice moments we can get, the better. Social media is a great source for that.

Are there open-source data libraries available to help with that?

Not so much. Data is a challenge in itself. Data is the new oil, and people are not so willing to share, at least not without monetization, which has become a holdup. There's some data, obviously, that we should not share. But the fact that organizations don't want to give their data away, even when they can't use it themselves, is causing other challenges, especially in healthcare.

We don't have enough data to do some things that we'd like to do to advance medical research.

What are some of the challenges with medical research? How are organizations overcoming those challenges?

The machine unfortunately doesn't know anything by itself. We need data to teach it. If you want to ask it to look for possible findings or diseases, that means you have to have a lot of data.

For lungs specifically, you need X-rays of healthy lungs, lungs with stage one cancer, stage two cancer, emphysema, and so on. You need lots and lots of those data sets.
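As a rough illustration of why those labeled sets matter, here's a hypothetical sketch of the standard way such images get organized and fed to a model -- one folder per diagnosis, each holding many examples. The folder names and the pipeline itself are illustrative assumptions, not any clinic's actual system.

```python
# Hypothetical layout: lung_xrays/healthy/*.png, lung_xrays/stage1_cancer/*.png,
# lung_xrays/stage2_cancer/*.png, lung_xrays/emphysema/*.png
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),  # X-rays are single-channel
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
dataset = datasets.ImageFolder("lung_xrays", transform=transform)
loader = DataLoader(dataset, batch_size=32, shuffle=True)

# A standard CNN; the final layer is sized to the number of diagnoses.
model = models.resnet18(weights=None)
model.fc = nn.Linear(model.fc.in_features, len(dataset.classes))
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

for images, labels in loader:  # one pass; real training runs many epochs
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
```

The point of the sketch is the data appetite: every diagnosis the model should recognize needs its own large folder of labeled examples, which is exactly what most single institutions lack.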

A lot of places like the Cleveland Clinic or the Mayo Clinic may not have enough data across the board to actually do this, so they're either trying to get it by seeing more and more patients, or they're trying to license data from others, like research centers and universities that might have it.

The United Nations is trying to create a healthcare data repository stripped of PHI (protected health information). We could create large sets of data for researchers to actually use.

That's one of the projects you have worked on at the UN, right? That sounds like a federated data project where organizations can share data but not lose out on their proprietary benefits?

That's exactly it. Basically, everyone's agreeing to put their data in there for general use but stripped of identifying information. It gives everyone bigger data sets to work with.  
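As a minimal sketch of what "stripped of identifying information" can mean in practice, the snippet below drops direct identifiers from a record before it is pooled. The field names are hypothetical, and real de-identification standards (such as HIPAA's Safe Harbor rule) cover many more identifiers and quasi-identifiers than this.

```python
# Drop direct identifiers so only clinically useful fields are shared.
# Field names are hypothetical; real de-identification is far stricter.
PHI_FIELDS = {"name", "address", "phone", "email", "ssn", "birth_date"}

def strip_phi(record: dict) -> dict:
    """Return a copy of the record with direct identifiers removed."""
    return {k: v for k, v in record.items() if k not in PHI_FIELDS}

patient = {
    "name": "Jane Doe",
    "birth_date": "1970-04-12",
    "diagnosis": "stage1_lung_cancer",
    "xray_id": "img_00321",
}
print(strip_phi(patient))
# {'diagnosis': 'stage1_lung_cancer', 'xray_id': 'img_00321'}
```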

In your work with this project, you are basically going to health organizations and saying, "Hey, wouldn't it be great if we could share this data? Wouldn't it be valuable for you?"

I do. I'm a big advocate for it, because we're intentionally slowing ourselves down by not sharing. We've seen instances where one organization embarks down a research path that another organization tried seven years ago and found to be a dead end. We know people's time, effort, and money are being wasted as a result.

You mentioned lung cancer. Is that an area in development?

There are actually lung cancer solutions out there already. Sloan Kettering has one; they're using it to detect lung cancer from X-rays. I think the system is about 90% accurate now. But all the machine can do is look for cancer. It can't look for anything else.

Would Sloan Kettering then license that to other organizations?

At the moment? No. They're keeping it in-house for themselves because they believe it gives them an advantage over their competitors. We are not used to coopetition. We are not used to social enterprise. You make money or you do good as a nonprofit. But you can do both!

Getting back to your UN work, tell me some of the challenges when you go into these organizations and say, "Hey, let's share your data. It could be good."

Even if they don't know how to use their data, or just aren't able to use it, they don't want someone else to get rich from what they have. We don't really know how to value data right now.

There are more open-minded or forward-thinking -- I'm not sure what the right phrase might be here -- companies that realize sharing some of these things actually increases opportunities for everybody. A rising tide lifts all boats. But again, it's a different mindset than most organizations are traditionally used to.

What's your best pitch then when you go into these places?

You look for the win-win opportunities. You look for something that shows both sides will benefit in some way, or you try to appeal to the altruistic side of the company.

Or, one of the things I've found more effective now: more companies are realizing that Generation Z is very passionate about values and about working for companies that genuinely believe in trying to do good. These companies are starting to see Gen Z turning down jobs with big-name companies, nice titles, and big salaries because, they say, we don't believe in the same things. So there's a business reason driving some of the changes.

What are some of the big issues that AI is facing right now in the world?

There are really two core ones. One is the whole question of ethical, responsible AI: Just because we can do something doesn't mean that we should. The other big one is truth and trust in technology. We have an expectation that AI should be perfect. It never will be.

It's going to make mistakes. There will be flaws in the training, because people have implicit biases.

These are really the two big challenges that we're facing right now.

 

For more on AI, read these articles:

How Data, Analytics & AI Shaped 2020, and Will Impact 2021

A Question for 2021: Where’s My Data?

How to Create a Successful AI Program

Analytics Salaries Steady Amid COVID Crisis

 

About the Author(s)

Jessica Davis

Senior Editor

Jessica Davis is a Senior Editor at InformationWeek. She covers enterprise IT leadership, careers, artificial intelligence, data and analytics, and enterprise software. She has spent her career covering the intersection of business and technology. Follow her on Twitter: @jessicadavis.
