Elon Musk puts his money where his mouth is, helping to fund 37 research projects aimed at making AI safer and more useful to humans.

David Wagner, Executive Editor, Community & IT Life

July 6, 2015

4 Min Read
(Image: Paramount)


Through a $10 million grant from Elon Musk, the Future of Life Institute is awarding 37 grants to fund research it believes will keep AI "robust and beneficial."

Even if you aren't in the alarmist camp of Musk, Bill Gates, and Stephen Hawking, who believe AI is a danger to humanity, the grants represent the sort of basic, foundational research we need to improve AI.

The Future of Life Institute was cofounded by MIT cosmologist Max Tegmark and Skype cofounder Jaan Tallinn. It includes such big-name advisors as Musk, Hawking, Alan Alda, and Morgan Freeman. It was founded with the mission of saving humanity from the existential threats its founders perceive in AI.

To make its point, the institute's website opens with the ominous phrase: "Technology has given life the opportunity to flourish like never before … or to self-destruct."

If it all sounds a little Hollywood, maybe that's on purpose. The press release for the new grants mentions the new Terminator movie.

Still, this isn't some Hollywood movie in which a benevolent organization is out to stop what it perceives as an evil idea. The goal seems to be to do AI right, and to do it with good science. This is the institute's stated mission: "FLI catalyzes and supports research and initiatives for safeguarding life and developing optimistic visions of the future, including positive ways for humanity to steer its own course considering new technologies and challenges."

[So will AI really kill us all? No, AI Won't Kill Us All.]

So what are the 37 projects they funded? You can check out the full list on the institute's site.

One of the most interesting could be colloquially described as "What would John Doe do?"

Paul Christiano of UC Berkeley is researching ways to teach autonomous AI to respond to situations it doesn't understand the way a human would, without direct intervention. One of the biggest fears among those who consider AI a danger is what an AI might do when it encounters a situation it doesn't understand. Christiano hopes to create efficient mechanisms for providing human oversight. Two similar projects revolve around letting AIs observe humans to help them understand what humans want from them.
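As a rough illustration of the oversight idea (a toy sketch only, not Christiano's actual approach; the function names and confidence threshold here are invented for the example), an agent might act on its own when it's confident and hand control to a person when it isn't:

```python
# Toy sketch of confidence-gated human oversight (illustrative only).
import random

CONFIDENCE_THRESHOLD = 0.9  # hypothetical cutoff; a real system would have to calibrate this


def propose_action(observation):
    """Stand-in for a trained policy: returns (action, confidence)."""
    known_cases = {"red_light": ("stop", 0.99), "green_light": ("go", 0.97)}
    # Anything outside the known cases gets a low-confidence guess.
    return known_cases.get(observation, ("stop", random.uniform(0.1, 0.5)))


def ask_human(observation):
    """Stand-in for human oversight, e.g. a remote operator or labeled feedback."""
    print(f"Unfamiliar situation {observation!r}: deferring to a human.")
    return "wait_for_instructions"


def act(observation):
    action, confidence = propose_action(observation)
    if confidence >= CONFIDENCE_THRESHOLD:
        return action               # familiar situation: act autonomously
    return ask_human(observation)   # unfamiliar situation: ask for oversight


print(act("green_light"))   # go
print(act("flooded_road"))  # low confidence, so it defers
```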

Manuela Veloso of Carnegie Mellon was given a grant to study how to make AIs explain their actions so we can better understand why they are doing something and take corrective action. If an autonomous car, for example, took a right turn when you expected a left, you could ask it why in order to make sure that the decision made sense.
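One way to picture that (again, just an illustrative sketch, not Veloso's actual system; the planner and its reasons are made up for the example) is a planner that records human-readable reasons alongside each action, so you can ask it why afterward:

```python
# Toy sketch of a decision that carries its own explanation (illustrative only).
from dataclasses import dataclass, field
from typing import List


@dataclass
class Decision:
    action: str
    reasons: List[str] = field(default_factory=list)  # human-readable factors behind the choice


def choose_turn(left_route_blocked: bool) -> Decision:
    """Stand-in for a route planner that records why it picked an action."""
    if left_route_blocked:
        return Decision("turn_right", ["left route is blocked", "right route is the next-shortest option"])
    return Decision("turn_left", ["left route is the shortest"])


decision = choose_turn(left_route_blocked=True)
print(decision.action)                        # turn_right
print("Why? " + "; ".join(decision.reasons))  # the "ask it why" part
```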

Michael Webb of Stanford University is being a bit more practical. He's studying the economic and social impact of AI eventually replacing us all. How do you build an economy where most of us don't have to work to keep it running? How do you distribute wealth and other resources? Most importantly, how do you make the transition to an economy like that?

There are other studies, including one on what happens if an AI breaks the law, another on the ethical implications of an AI that judges all potential outcomes of a situation with no regard to ethics, and many on how to teach ethics to AI.

While some of these may seem a little silly at first, they are necessary steps in the programming of intelligence.

As Tom Dietterich, president of the Association for the Advancement of Artificial Intelligence, says in the press release:

"In its early days, AI research focused on the 'known knowns' by working on problems such as chess and blocks world planning, where everything about the world was known exactly. Starting in the 1980s, AI research began studying the 'known unknowns' by using probability distributions to represent and quantify the likelihood of alternative possible worlds. The FLI grant will launch work on the 'unknown unknowns': How can an AI system behave carefully and conservatively in a world populated by unknown unknowns -- aspects that the designers of the AI system have not anticipated at all?"

This and other research, if successful, should make AI safer and more effective.

About the Author(s)

David Wagner

Executive Editor, Community & IT Life

David has been writing on business and technology for over 10 years and was most recently Managing Editor at Enterpriseefficiency.com. Before that he was an Assistant Editor at MIT Sloan Management Review, where he covered a wide range of business topics including IT, leadership, and innovation. He has also been a freelance writer for many top consulting firms and academics in the business and technology sectors. Born in Silver Spring, Md., he grew up doodling on the back of used punch cards from the data center his father ran for over 25 years. In his spare time, he loses golf balls (and occasionally puts one in a hole), posts too often on Facebook, and teaches his two kids to take the zombie apocalypse just a little too seriously. 
