The Threat of Artificial Intelligence
At the end of June, a group of computer scientists gathered at the Information Technology and Innovation Foundation in Washington, D.C., to debate whether super-intelligent computers are really a threat to humanity.
The discussion followed reports a few days earlier of two self-driving cars that, according to Reuters, almost collided. Near-misses on a road aren't normally news, but when a Google self-driving car comes close to a Delphi self-driving car and prompts it to change course, that gets coverage.
To hear Google tell it, the two automated cars performed as they should have. "The headline here is that two self-driving cars did what they were supposed to do in an ordinary everyday driving scenario," a Google spokesperson told Ars Technica.
Ostensibly benevolent artificial intelligence, in rudimentary form, is already here, but we don't trust it. Two cars driven by AI navigated around each other without incident -- that gets characterized as a near-miss. No wonder technical luminaries who muse about the future worry that ongoing advances in AI have the potential to threaten humanity. Bill Gates, Stephen Hawking, and Elon Musk have suggested as much.
The panelists at the ITIF event more or less agreed that it could take anywhere from five to 150 years before super-human intelligence emerges. But really, no one knows. Humans have a bad track record for predicting such things.
But before our machines achieve brilliance, we will need half a dozen technological breakthroughs, each comparable to the development of nuclear weapons, according to Stuart Russell, an AI professor at UC Berkeley.
Russell took issue with the framing of the question, "Are super-intelligent computers really a threat to humanity?"
AI, said Russell, is "not like [the weather]. We choose what it's going to be. So whether or not AI is a threat to the human race depends on whether or not we make it a threat to the human race."
Problem solved. Computer researchers can simply follow Google's example: Don't be evil.
However, Russell didn't sound convinced that we could simply do the right thing. "At the moment, there is not nearly enough work on making sure that [AI] isn't a threat to the human race," he said.
Ronald Arkin, a computing professor at Georgia Tech, suggested humanity has more immediate concerns. "I'm glad people are worrying about super-intelligence, don't get me wrong," he said. "But there are many, many threats on the path to super-intelligence."
Arkin pointed to lethal autonomous weapon systems, an ongoing concern for military planners, policymakers, and people around the world.
What's more, robots without much intelligence can be deadly, as the death of a Volkswagen contractor in Germany the day before the ITIF talk made clear. The 21-year-old technician was installing an industrial robot with a co-worker when the machine struck and crushed him, according to The Financial Times. He was working inside the safety cage that normally keeps people at a distance from the robot.
An investigation into the accident has begun. But the cause isn't likely to be malevolent machine intelligence. Human error would be a safer bet. And that's really something to worry about.