Will there be a ban on killer robots?
Without a nonproliferation agreement, some diplomats fear the world will plunge into an algorithm-driven arms race.

“A lot of A.I. technologies are being developed outside of government and released to the public. These technologies have generic capabilities that can be applied in many different domains, including in weaponization.”

— Jack Clark, spokesman for OpenAI, an A.I. research group that advocates for more measured adoption of artificial intelligence



An autonomous missile under development by the Pentagon uses software to choose between targets. An artificially intelligent drone from the British military identifies firing points on its own. Russia showcases tanks that don’t need soldiers inside for combat.

A.I. technology has for years led military leaders to ponder a future of warfare that needs little human involvement. But as capabilities have advanced, the prospect of autonomous weapons reaching the battlefield is becoming less hypothetical.

The possibility of software and algorithms making life-or-death decisions has added new urgency to the efforts of the Campaign to Stop Killer Robots, a coalition of arms control advocates, human rights groups and technologists urging the United Nations to craft a global treaty banning weapons that operate without people at the controls. As in cyberspace, where there are no clear rules of engagement for online attacks, no red lines have been drawn around the use of automated weaponry.

Full story at the New York Times

More stories about Computing and AI


Suggested links

MIT College of Computing FAQ

MIT College of Computing coverage at NYT