One Woman’s Battle to Stop a Killer Robot Army from Inciting an International Arms Race

Published on December 29, 2019

“Killer robot” isn’t a buzzword created to horrify the technologically challenged. The term refers to autonomous weaponry, a very real and growing sector of the defense industry. Armed drones, unmanned aerial surveillance systems, and now lethal autonomous weapons (LAWs) are being developed in secret across the globe in a brand-new arms race.

World War A.I.

If these robots could prevent young men and women from enduring the horrors of war, why are the world’s leading technologists urging national governments to sign a treaty banning their development? Stephen Hawking, Elon Musk, Steve Wozniak, and thousands of others have signed an open letter calling for a ban on offensive autonomous weapons beyond meaningful human control.

To learn more about the real and perceived dangers of killer robots, we spoke with Mary Wareham, global coordinator of the Campaign to Stop Killer Robots. Wareham worked on the International Campaign to Ban Landmines, which received the 1997 Nobel Peace Prize, and serves as advocacy director of the arms division of Human Rights Watch.

Innovation & Tech Today: Why would people support banning a technology that has yet to cause any harm?

Mary Wareham: I hope people realize that we could prevent an avoidable tragedy…

These are weapon systems that would be able to select, identify, and engage targets without any meaningful human control. Once activated, the machine or robot would be in control of undertaking those functions which, for the campaign, crosses a moral line…

We are in this hard situation now where hardly any countries will admit to developing killer robots, yet we still see massive investment in weapons technology that is going in this direction, which underscores the need for regulation…

Today’s armed drones are just the beginning of a revolution in military warfighting, which is going to change everything about how war is conducted in the future, and that’s why there’s an urgent need to lay down some rules of the road.

Needless to say, the campaign is not only concerned about potential use of killer robots in warfare, but we’re equally concerned about potential use in law enforcement, policing, crowd control, and border control and enforcement. There are many other scenarios in which killer robots may be used and lots of people realize this could be on the streets, eventually, and that this is not a path the world should be going down…

The killer robots challenge is very real and it needs a home in the form of new international law. That’s why we’re calling for the treaty.

I&T Today: Based on your experience with the landmine treaty, how would the killer robots treaty be implemented and enforced?

MW: Of all the different weapons that have been prohibited to date, only the Chemical Weapons Convention has an intrusive verification and compliance regime. All of the other instruments I’ve been involved in, including the landmine treaty and the Convention on Cluster Munitions, are more a combination of human rights, humanitarian, and disarmament provisions based on the notion that if a state signs up, it is willingly going to abide by the treaty.

We don’t come at this from the assumption that everybody is going to cheat and therefore there will need to be lots of means to verify and enforce compliance, though there will be compliance and verification provisions in the eventual treaty.

Nobody is talking about how to stop artificial intelligence or prevent militaries from incorporating autonomy into their operations. Where we’re drawing the line is incorporating A.I. into weapon systems to the point that they are no longer under meaningful human control.

The whole reason the AI experts, roboticists, computer scientists, and others have been supporting this effort is that they compare themselves to the nuclear scientists and chemists of the past, who were concerned about the weaponization of their technology and who helped create new international law that allowed the field of chemistry to be pursued, studied, and expanded for peaceful uses, but prohibited the use of chemicals as weapons of warfare.

I think it’s a similar thing we’re looking to do here on killer robots and that’s definitely not impossible.

I&T Today: How does the unpredictability of artificial intelligence add to your concerns about its use in warfare?

MW: What it all comes down to is the fact that machine learning and AI can be completely unpredictable, especially when it is employed in what I’ve heard called “cluttered environments.”

These are environments that are not static, environments that are constantly changing, and that have lots of different factors involved. A cluttered environment is a city, for example, with lots of people and movement happening around it.

Most fighting now takes place on foot in urban areas, so the concern is, if you put killer robots into the mix, a weapon system that might be able to change its mission parameters or change its target on the way to its destination, there are many different things the experts tell us could go wrong. This lack of predictability has been a big question and a big driver for everybody to knuckle down and try to determine how to deal with this.

I&T Today: If killer robots could be programmed with morals, would that make them more appealing?

MW: It’s currently not possible to program the laws of war into a machine. The laws of war were written for humans, so to try to program them into a machine and have it make these binary decisions based on some very complex context is really… most people doubt that it is possible right now and have serious reservations about whether it would be possible in the near or distant future.

Meanwhile, we are going to see fully autonomous weapons on the battlefield long before then, and the concern is that you’re going to see the stupid autonomous weapons used before the smart ones that can do all these fancy new things and distinguish civilian from combatant. Very few roboticists will make the case that it is possible to create an ethical robot.

It’s not possible now, and what people are realizing is that humans do the programming, humans are biased, and that bias can be programmed in. The systems may be very unreliable and unable to make the determinations that need to be made if you are going to avoid killing civilians in warfare. So this technical fix is something people want to know about, but I think the concern that we’re crossing a moral line is far more prevalent.

Two years ago, we commissioned a market research company to poll 23 countries on killer robots, and we thought it would be really interesting to run that survey again with the exact same question: “How do you feel about these autonomous weapon systems being used in war?”

It was a very basic question, but it was nicely translated and easily understood. When the poll was first run at the end of 2016, it found that 56 percent of respondents, more than half, were opposed to killer robots.

The second survey, which came out at the beginning of 2019, went back to those same 23 countries plus three more and found that opposition had risen to 61 percent, roughly three in every five people.

The survey showed public opinion hardening against such weapon systems. The second survey also asked a follow-up question of those who were concerned, inviting them to elaborate: was it because such weapons might be unlawful, or because of technical issues? The top concern expressed in almost every country was this notion that, by allowing killer robots on the battlefield, we’re outsourcing killing to machines. We’re permitting machines rather than humans to kill, and that’s a line too far.

The other top concern was accountability and the very likely accountability gap if a fully autonomous weapon is deployed and a war crime is committed. If innocent people were killed, it would be very challenging, if not impossible, to hold anyone responsible, whether the designer, the programmer, the manufacturer, or even the commander who sends the weapon off on its operation.

Human Rights Watch actually just published a report on this called “Mind the Gap,” so those are some of the reasons I keep coming back to this notion of programming ethics into machines. It’s a very hot-button subject in other areas as well, but in this one it’s steaming hot.

We’re talking about people’s lives here.

The article One Woman’s Battle to Stop a Killer Robot Army from Inciting an International Arms Race first appeared on Innovation & Tech Today.

