Sunday 9 August 2015

(Image: jlmaral via Flickr under CC By 2.0)

Ban AI Weapons, Scientists Demand

Roboticists and experts in artificial intelligence want to prohibit offensive autonomous weapons.


Theoretical physicist Stephen Hawking, Tesla CEO Elon Musk, and Apple co-founder Steve Wozniak are among the hundreds of prominent academic and industry experts who have signed an open letter opposing offensive autonomous weapons.
The letter, published by the Future of Life Institute in conjunction with the opening of the 2015 International Joint Conference on Artificial Intelligence (IJCAI) on July 28, warns that an arms race to develop military AI systems will harm humanity.
"If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow," the letter states.
Because of their affordability, such systems would inevitably become ubiquitous and be used for assassinations, destabilizing nations, ethnic killings, and terrorism, the letter asserts.
Hawking and Musk serve as advisors for the Future of Life Institute, an organization founded by MIT cosmologist Max Tegmark and Skype co-founder Jaan Tallinn to educate people about the ostensible risk that would follow from the development of human-level AI. Both have previously spoken out about the potential danger of super-intelligent AI. Musk has suggested advanced AI is probably "our biggest existential threat."
The potential danger posed by AI has become a common topic of discussion among technologists and policymakers. A month ago, the Information Technology and Innovation Foundation in Washington, D.C., held a debate with several prominent computer scientists about whether super-intelligent computers really represent a threat to humanity.
Stuart Russell, an AI professor at UC Berkeley who participated in the debate and is also a signatory of the letter, observed, "[W]hether or not AI is a threat to the human race depends on whether or not we make it a threat to the human race." And he argued that we need to do more to ensure that we don't make it a threat.
The U.S. military presently insists that autonomous systems be subordinate to people. A 2012 Department of Defense policy directive states, "Autonomous and semi-autonomous weapon systems shall be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force."
Yet human control over these systems remains imperfect. In 2014, human rights group Reprieve claimed that U.S. drone strikes had killed 28 unknown individuals for every intended target.
The DoD policy on autonomous weapons must be recertified within five years of its publication date or it will expire in 2022. And it's not obvious that political or military leaders will want to maintain that policy if other nations continue to pursue the development of autonomous systems.
In a 2014 report, the Center for a New American Security (CNAS), a Washington, D.C.-based defense policy group, claimed that at least 75 other nations are investing in autonomous military systems and that the United States will be "driven to these systems out of operational necessity and also because the costs of personnel and the development of traditional crewed combat platforms are increasing at an unsustainable pace."
If CNAS is right and the economics of autonomous systems are compelling, a ban on offensive autonomous weapons may not work.
Economic Appeal
Economics play an obvious role in the appeal of weapon systems. The Kalashnikov rifle owes much of its popularity to affordability, availability, and simplicity. Or consider the landmine, an ostensibly defensive autonomous weapon that's not covered by the letter's proposed ban on "offensive autonomous weapons."
Landmines cost somewhere between $3 and $75 to produce, according to the United Nations. The agency claims that as many as 110 million landmines have been deployed across 70 countries since the 1960s. In addition, undiscovered landmines from earlier wars may still be operational.
Banning landmines is having an effect: since the Mine Ban Treaty took effect in 1999, landmine casualties have declined from an average of 25 per day to nine per day. But the ban on mines is not respected everywhere or by everyone.
Better AI might actually help here. The basic landmine algorithm -- if triggered, explode -- could be made far more discriminating about when to detonate, whether the mine's mechanism is mechanical or electronic. The inclusion of an expiration timer, for example, could prevent many accidental deaths, particularly after a conflict has concluded. And more sophisticated systems could be even more discriminating with regard to valid targets.
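To make the contrast concrete, here is a minimal sketch of that logic in Python. It is purely illustrative: the names, the one-year window, and the should_detonate function are assumptions for the sake of the example, not any real fuse design. The point is only that an electronic trigger can consult an expiration clock where a purely mechanical one cannot.

    from datetime import datetime, timedelta

    # Illustrative sketch only. ARMED_AT and EXPIRY are assumed values; the
    # idea is that an electronic fuse can refuse to fire once the device has
    # expired, unlike the bare "if triggered: explode" rule.
    ARMED_AT = datetime(2015, 8, 9)
    EXPIRY = timedelta(days=365)

    def should_detonate(triggered: bool, now: datetime) -> bool:
        # The mechanical rule: fire whenever the trigger is tripped.
        if not triggered:
            return False
        # The added check: after the expiration window, stay inert.
        if now - ARMED_AT > EXPIRY:
            return False
        return True

    # Example: a trigger event two years after arming does not detonate.
    print(should_detonate(True, datetime(2017, 8, 9)))  # -> False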
Offensive autonomous weapons already exist. Beyond landmines, there are autonomous cyber weapons. Stuxnet, for example, has been characterized as AI. Rather than banning autonomous weapon systems, it may be more realistic and more effective to pursue a regime to govern them.