Government // Leadership
News
7/27/2015 06:06 PM

Ban AI Weapons, Scientists Demand

Roboticists and experts in artificial intelligence want to prohibit offensive autonomous weapons.


Theoretical physicist Stephen Hawking, Tesla CEO Elon Musk, and Apple co-founder Steve Wozniak are among the hundreds of prominent academic and industry experts who have signed an open letter opposing offensive autonomous weapons.

The letter, published by the Future of Life Institute in conjunction with the opening of the 2015 International Joint Conference on Artificial Intelligence (IJCAI) on July 28, warns that an arms race to develop military AI systems will harm humanity.

"If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow," the letter states.

Such systems, by virtue of their affordability, would inevitably come to be ubiquitous and would be used for assassinations, destabilizing nations, ethnic killings, and terrorism, the letter asserts.

Hawking and Musk serve as advisors for the Future of Life Institute, an organization founded by MIT cosmologist Max Tegmark and Skype co-founder Jaan Tallinn to educate people about the ostensible risk that would follow from the development of human-level AI. Both have previously spoken out about the potential danger of super-intelligent AI. Musk has suggested advanced AI is probably "our biggest existential threat."

(Image: jlmaral via Flickr under CC By 2.0)

The potential danger posed by AI has become a common topic of discussion among technologists and policymakers. A month ago, the Information Technology and Innovation Foundation in Washington, D.C., held a debate with several prominent computer scientists about whether super-intelligent computers really represent a threat to humanity.

Stuart Russell, an AI professor at UC Berkeley who participated in the debate and is also a signatory of the letter, observed, "[W]hether or not AI is a threat to the human race depends on whether or not we make it a threat to the human race." And he argued that we need to do more to ensure that we don't make it a threat.

The U.S. military presently insists that autonomous systems be subordinate to people. A 2012 Department of Defense policy directive states, "Autonomous and semi-autonomous weapon systems shall be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force."

Yet human control over these systems remains imperfect. In 2014, human rights group Reprieve claimed that U.S. drone strikes had killed 28 unknown individuals for every intended target.

The DoD policy on autonomous weapons must be recertified within five years of its publication date or it will expire in 2022. And it's not obvious that political or military leaders will want to maintain that policy if other nations continue to pursue the development of autonomous systems.

In a 2014 report, the Center for a New American Security (CNAS), a Washington, D.C.-based defense policy group, claimed that at least 75 other nations are investing in autonomous military systems and that the United States will be "driven to these systems out of operational necessity and also because the costs of personnel and the development of traditional crewed combat platforms are increasing at an unsustainable pace."

If CNAS is right and the economics of autonomous systems are compelling, a ban on offensive autonomous weapons may not work.

Economic Appeal

Economics play an obvious role in the appeal of weapon systems. The Kalashnikov rifle owes much of its popularity to affordability, availability, and simplicity. Or consider the landmine, an ostensibly defensive autonomous weapon that's not covered by the letter's proposed ban on "offensive autonomous weapons."

Landmines cost somewhere between $3 and $75 to produce, according to the United Nations. The agency estimates that as many as 110 million landmines have been deployed across 70 countries since the 1960s, and undiscovered mines from earlier wars may still be operational.

Banning landmines is having an effect: Since the Mine Ban Treaty took effect in 1999, landmine casualties have declined from an average of 25 per day to nine. But the ban is not respected everywhere or by everyone.

Better AI might actually help here. The basic landmine algorithm -- if triggered, explode -- could be made far more discriminating about when to detonate, whether the mine's mechanism is mechanical or electronic. An expiration timer, for example, could prevent many accidental deaths, particularly after a conflict has ended. And more sophisticated systems could be more discriminating still about what counts as a valid target.
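To make the contrast concrete, here is a minimal sketch in Python. It is purely illustrative -- no real munition works this way -- and every name in it (DiscriminatingMine, target_is_valid, and so on) is invented for the example:

from datetime import datetime, timedelta

def dumb_mine(triggered: bool) -> bool:
    # The basic landmine algorithm described above: if triggered, explode.
    return triggered

class DiscriminatingMine:
    # Hypothetical design adding the two checks suggested in the article:
    # an expiration timer and a validity test on the target.
    def __init__(self, armed_at: datetime, lifetime_days: int):
        self.expires_at = armed_at + timedelta(days=lifetime_days)

    def should_detonate(self, triggered: bool, now: datetime,
                        target_is_valid: bool) -> bool:
        if now >= self.expires_at:
            return False  # conflict presumably over: stay permanently inert
        if not target_is_valid:
            return False  # sensor says this is not a legitimate target
        return triggered

The hard part, of course, is not the control flow but who gets to define target_is_valid, and how reliably a cheap sensor can evaluate it in the field.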

Offensive autonomous weapons already exist. Beyond landmines, there are autonomous cyber weapons. Stuxnet, for example, has been characterized as AI. Rather than banning autonomous weapon systems, it may be more realistic and more effective to pursue a regime to govern them.

Thomas Claburn has been writing about business and technology since 1996, for publications such as New Architect, PC Computing, InformationWeek, Salon, Wired, and Ziff Davis Smart Business. Before that, he worked in film and television, having earned a not particularly useful ...

Comments
Gary_EL,
User Rank: Ninja
7/28/2015 | 2:00:55 AM
The genie can't be put back into the bottle
Whether Stephen Hawking likes it or not, these weapons will be developed. If we don't, China will. If we do, China will. And there is always collateral damage, whether it's a "smart" weapon or a "dumb" weapon. The trick is to build these weapons to tighter specifications, so that more of the correct targets and fewer of the incorrect targets get hit. Humans will never stop fighting. One can hope that in the decades to come there will be more and more military weapons and targets in space, so that more of the fighting takes place there.
Whoopty,
User Rank: Ninja
7/28/2015 | 7:44:29 AM
Re: The genie can't be put back into the bottle
Personally, I'd much prefer a robotic assassin that had the potential to take out a couple of high-profile targets without collateral damage, instead of us bombing the hell out of a region and killing multitudes of innocents in the process.

That said, the arms race would work defensively too, with robots protecting us against robots. I would much rather we worked to develop safeguards internationally than put a ban in place, only for a rogue nation to figure it out on its own.
Susan Fourtané,
User Rank: Author
7/28/2015 | 8:02:49 AM
Re: The genie can't be put back into the bottle
Gary, I agree. Unfortunately, humans will never stop fighting for one reason or another. Even though I support AI for the countless benefits and possibilities that will prove to be good and useful for humanity, I positively know there are going to be evil minds behind some developments that will only contribute to destruction. -Susan
Susan Fourtané,
User Rank: Author
7/28/2015 | 8:22:49 AM
Blaming AI for human destructive behavior
"Such systems, by virtue of their affordability, would inevitably come to be ubiquitous and would be used for assassinations, destabilizing nations, ethnic killings, and terrorism, the letter asserts." --- I wonder, all that hasn't existed without the existence of AI? -Susan
SunitaT0,
User Rank: Ninja
7/29/2015 | 12:36:46 AM
Re: Blaming AI for human destructive behavior
Controlling AI is not difficult. However, it is the concept of a self-aware AI that disturbs me. Stephen Hawking and Bill Gates agree that a future that depends on robots and artificial intelligence needs more safety and insurance. A wrongly programmed drone could kill hundreds, and if hacked, AI would be difficult to manage.
SunitaT0,
User Rank: Ninja
7/29/2015 | 12:39:42 AM
Re: The genie can't be put back into the bottle
Maybe people should make robots to solve problems like disability and overpopulation, among other things. They should make robots to build colonies on habitable planets and moons in the solar system. They should make robots to facilitate space exploration.
SunitaT0,
User Rank: Ninja
7/29/2015 | 12:41:46 AM
Re: The genie can't be put back into the bottle
@gary: True. The Chinese government is strong, and it has conducted various weapons tests over the years. I think China is a cold threat to other countries.
Gary_EL,
User Rank: Ninja
7/29/2015 | 1:03:35 AM
Re: The genie can't be put back into the bottle
I don't think building robots for war precludes building robots for peace. History suggests the opposite: war technology often gets used for peaceful purposes – look at semiconductors. A robot that can outthink a human is certainly possible, but despite the popular TV show, a human author cannot create a conscious, self-aware being. The notion is silly. And yes, since 1945 the major instigator of technological change worldwide has been the USA. But there's a new actor on stage – China. From now on, people need to disabuse themselves of the idea that if we don't do it, no one else will.
Susan Fourtané,
User Rank: Author
7/29/2015 | 3:31:36 AM
Re: Blaming AI for human destructive behavior
Sunita, what solution do you propose? -Susan
driverlesssam,
User Rank: Strategist
7/30/2015 | 11:05:18 AM
Too late to ban AI weapons
It's too late to ban AI weapons! "Fire and forget" weapons have long been used in war. Maybe you think that an air-to-air missile seeking and destroying the hottest target it "sees" is not AI. Well, that's another argument to have someday, isn't it?

I have always liked Prof. Alan Perlis' definition of AI from the 1960s: "A system is intelligent if you think it is."