We’re running out of time to stop killer robot weapons
It’s five years this month since the launch of the Campaign to Stop Killer Robots, a global coalition of non-governmental groups calling for a ban on fully autonomous weapons. This month also marks the fifth time that countries have convened at the United Nations in Geneva to address the problems these weapons would pose if they were developed and put into use.
The countries meeting in Geneva this week are party to a major disarmament treaty called the Convention on Certain Conventional Weapons. While some diplomatic progress has been made under that treaty’s auspices since 2013, the pace needs to pick up dramatically. Countries that recognise the dangers of fully autonomous weapons cannot wait another five years if they are to prevent the weapons from becoming a reality.
Fully autonomous weapons, which would select and engage targets without meaningful human control, do not yet exist, but scientists have warned they soon could. Precursors have already been developed or deployed as autonomy has become increasingly common on the battlefield. Hi-tech military powers, including China, Israel, Russia, South Korea, the UK and the US, have invested heavily in the development of autonomous weapons. So far there is no specific international law to halt this trend.
Experts have sounded the alarm, emphasising that fully autonomous weapons raise a host of concerns. For many people, allowing machines that cannot appreciate the value of human life to make life-and-death decisions crosses a moral red line.
Legally, the so-called “killer robots” would lack human judgment, making it very difficult to ensure that their decisions complied with international humanitarian and human rights law. For example, a robot could not be preprogrammed to assess the proportionality of using force in every situation, and it could not reliably judge whether expected civilian harm outweighed anticipated military advantage in any particular instance.
Fully autonomous weapons also raise the question: who would be responsible for attacks that violate these laws if a human did not make the decision to fire on a specific target? In fact, it would be legally difficult and potentially unfair to hold anyone responsible for unforeseeable harm to civilians.
There are also security concerns. Without any legal restraints on fully autonomous weapons, militaries could engage in an arms race, vying to develop deadly technology that may reduce the need to deploy soldiers – while possibly lowering the threshold for armed conflict.