Monday, 6 July 2015

Elon Musk Gives $10 Million In Grants To Study Safe AI

Elon Musk puts his money where his mouth is, helping fund 37 projects that researchers hope will make AI safer and more useful to humans.


With $10 million from Elon Musk, the Future of Life Institute is awarding 37 grants to fund research it believes will keep AI "robust and beneficial."
Even if you aren't in the alarmist camp of Musk, Bill Gates, and Stephen Hawking, who believe that AI is a danger to humanity, the grants represent the sort of basic, foundational research we need to improve AI.
The Future of Life Institute was cofounded by MIT cosmologist Max Tegmark and Skype cofounder Jaan Tallinn. It includes such big-name advisors as Musk, Hawking, Alan Alda, and Morgan Freeman. It was founded with the mission of saving humanity from the existential threats its founders perceive in AI.
To drive the point home, the institute's website opens with the ominous phrase: "Technology has given life the opportunity to flourish like never before … or to self-destruct."
If it all sounds a little Hollywood, maybe that's on purpose. The press release for the new grants mentions the new Terminator movie.
Still, this isn't some Hollywood movie where a benevolent organization is out to stop what it perceives to be an evil idea. The goal seems to be to do AI right and to do it with good science. This is the institute's stated mission: "FLI catalyzes and supports research and initiatives for safeguarding life and developing optimistic visions of the future, including positive ways for humanity to steer its own course considering new technologies and challenges."
So what are the 37 projects they funded? You can check out the full list on the institute's site.
One of the most interesting could be colloquially described as "What would John Doe do?"
Paul Christiano from UC Berkeley is researching ways to teach an autonomous AI to respond to situations it doesn't understand the way a human would, without a human having to intervene each time. One of the biggest fears among those who think AI is a danger is what an AI might do when it encounters a situation it doesn't understand. Christiano is hoping to create efficient mechanisms for providing human oversight. Two similar projects revolve around the idea of allowing AIs to observe humans to help them understand what humans want from them.
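To picture what an oversight mechanism like that might look like, here is a minimal sketch (my illustration, not Christiano's actual proposal): an agent that acts on its own only when its model is confident, and otherwise escalates the decision to a person. The model_predict stand-in, the confidence threshold, and the example situation are all invented for the example.

```python
import random

# Below this confidence the agent defers to a person (the threshold is an assumed value).
CONFIDENCE_THRESHOLD = 0.9

def model_predict(situation):
    """Stand-in for a learned policy: returns (proposed_action, confidence).

    A real system would query a trained model; this toy version guesses randomly.
    """
    action = random.choice(["brake", "steer_left", "steer_right"])
    confidence = random.random()
    return action, confidence

def ask_human(situation):
    """Escalate to a human overseer (the expensive, rarely used fallback)."""
    return input(f"Unfamiliar situation {situation!r}. What should I do? ")

def choose_action(situation):
    action, confidence = model_predict(situation)
    if confidence >= CONFIDENCE_THRESHOLD:
        return action               # confident enough to act autonomously
    return ask_human(situation)     # uncertain, so defer to the human

if __name__ == "__main__":
    print(choose_action("pedestrian standing near the crosswalk"))
```

The research question is how to make that fallback cheap and rare enough to be practical; the toy threshold above just shows where the human fits in the loop.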
Manuela Veloso of Carnegie Mellon was given a grant to study how to make AIs explain their actions so we can better understand why they are doing something and take corrective action. If an autonomous car, for example, took a right turn when you expected a left, you could ask it why in order to make sure that the decision made sense.
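As a toy illustration of the explanation idea (not Veloso's actual system), the sketch below has a planner record a human-readable reason alongside each decision, so a later "why?" question can be answered. The class name, the turning rules, and the road-state flags are all hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class ExplainablePlanner:
    """Toy planner that keeps a human-readable reason for every decision."""
    log: list = field(default_factory=list)

    def decide_turn(self, road_state):
        # Each branch records *why* it was taken, not just the action itself.
        if road_state.get("left_lane_closed"):
            decision, reason = "right", "the left lane was closed for construction"
        elif road_state.get("right_exit_faster"):
            decision, reason = "right", "routing estimated the right exit as faster"
        else:
            decision, reason = "left", "the default route preferred the left turn"
        self.log.append((decision, reason))
        return decision

    def why(self):
        """Answer the rider's 'why did you do that?' question about the last decision."""
        decision, reason = self.log[-1]
        return f"I turned {decision} because {reason}."

planner = ExplainablePlanner()
planner.decide_turn({"right_exit_faster": True})
print(planner.why())  # "I turned right because routing estimated the right exit as faster."
```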
Michael Webb of Stanford University is being a bit more practical. He's studying the economic and social impact of AI eventually replacing us all. How do you build an economy where most of us don't have to work to keep it running? How do you distribute wealth and other resources? Most importantly, how do you make the transition to an economy like that?
There are other studies too, including one on what happens if an AI breaks the law, another on the ethical implications of an AI that judges all potential outcomes of a situation with no regard for ethics, and many on how to teach ethics to AI.
While some of these may seem a little silly at first, they are necessary steps in the programming of intelligence.
As Tom Dietterich, president of the Association for the Advancement of Artificial Intelligence, says in the press release:
"In its early days, AI research focused on the 'known knowns' by working on problems such as chess and blocks world planning, where everything about the world was known exactly. Starting in the 1980s, AI research began studying the 'known unknowns' by using probability distributions to represent and quantify the likelihood of alternative possible worlds. The FLI grant will launch work on the 'unknown unknowns': How can an AI system behave carefully and conservatively in a world populated by unknown unknowns -- aspects that the designers of the AI system have not anticipated at all?"
This and other research, if successful, should make AI safer and more effective.
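To make Dietterich's closing question a little more concrete, here is a purely invented sketch (not anything FLI or the grantees have specified) of one classic way to behave conservatively under uncertainty: instead of picking the action with the best average payoff across the worlds the designers modeled, pick the action whose worst modeled outcome is least bad. The actions, worlds, and payoff numbers are all made up.

```python
# Toy illustration of "careful and conservative" action selection: rather than
# maximizing the average payoff over the worlds the designers modeled, pick the
# action whose *worst* modeled outcome is least bad (a maximin rule).
# The actions, worlds, and payoffs below are invented for the example.

payoffs = {
    # action: payoff in each modeled "possible world"
    "proceed":   [10, 9, -5],   # best on average, but bad if the third world is real
    "slow_down": [4, 4, 2],     # modest payoff, small downside
    "stop":      [0, 0, 0],     # safe but achieves nothing
}

def expected_value_choice(payoffs):
    """Pick the action with the highest average payoff across modeled worlds."""
    return max(payoffs, key=lambda a: sum(payoffs[a]) / len(payoffs[a]))

def conservative_choice(payoffs):
    """Maximin: pick the action whose worst modeled outcome is least bad."""
    return max(payoffs, key=lambda a: min(payoffs[a]))

print(expected_value_choice(payoffs))  # "proceed"
print(conservative_choice(payoffs))    # "slow_down"
```

A maximin rule like this is only a crude stand-in for the research the quote describes, since the harder problem is handling worlds the designers never enumerated at all.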