Rise of the machines

In April, the US christened the Sea Hunter, a prototype autonomous submarine-hunting vessel that will be able to stay at sea for months without a crew or even remote control. Such systems, attractive to governments looking to reduce the cost of military operations, represent a distinct step up from the now-ubiquitous drones, which are remotely piloted by a human being.

“There’s no reason to be afraid of a ship like this,” Robert Work, US deputy defence secretary, said at the vessel’s launch. Others may disagree. A few days later, a summit in Geneva convened experts and policymakers to discuss the possibility of banning or limiting the development of autonomous weapons systems.

Opposition to ‘killer robots’, as distinct from simpler drones, centres on the question of meaningful human control: responsibility for actions taken by an autonomous system operating outside a human chain of command would fall into a legal and moral grey area.

“We don’t want to be in a situation where we’re arguing in favour of armed drones, but it’s a different order of problem, in a way, to the reduction of human control in the action to fire a missile or a gun,” says Thomas Nash, the director of Article 36, a campaign group focused on arms control.

Nash was part of the successful campaign that led to a ban on cluster munitions in 2008. Then, however, campaigners were spurred on by a huge body of evidence of the damage done by the weapons; this time around, they are trying to get parties to agree pre-emptively to a moratorium on something that has not yet been deployed in combat. There are precedents, such as blinding laser weapons, which were banned pre-emptively in the mid-1990s, but they are rare.

Activists fear that the slow process of getting governments to agree on a set of rules will be overtaken by the pace of technological change. Some parties may be deliberately stalling.

In response to this disconnect, the Campaign to Stop Killer Robots has targeted the companies and researchers working on machine learning and artificial intelligence. The tactic has paid off: more than 3,000 computer scientists, including the chief executive of Google’s DeepMind subsidiary, have signed an open letter calling for a ban on autonomous weapons systems.

“We don’t have to wait — perhaps we can’t wait — for governments to agree before we start to engage with the companies whose work is being used to drive this,” Nash says.