Killer robots will leave humans ‘utterly defenceless’ warns professor
Robots, called LAWS – lethal autonomous weapons systems – will be able to kill without human intervention
Killer robots being developed by the US military ‘will leave humans utterly defenceless’, an academic has warned.
Two programmes commissioned by the US Defense Advanced Research Projects Agency (DARPA) are seeking to create drones which can track and kill targets even when out of contact with their handlers.
Writing in the journal Nature, Stuart Russell, Professor of Computer Science at the University of California, Berkeley, said the research could breach the Geneva Convention and leave humanity in the hands of amoral machines.
“Autonomous weapons systems select and engage targets without human intervention; they become lethal when those targets include humans,” he said.
“Existing AI and robotics components can provide physical platforms, perception, motor control, navigation, mapping, tactical decision-making and long-term planning. They just need to be combined.
“In my view, the overriding concern should be the probable endpoint of this technological trajectory.
“Despite the limits imposed by physics, one can expect platforms deployed in the millions, the agility and lethality of which will leave humans utterly defenceless. This is not a desirable future.”
Some experts say armed killer robots are just a ‘small step’ away
The robots, called LAWS – lethal autonomous weapons systems – are likely to be armed quadcopters or mini-tanks that can decide without human intervention who should live or die.
DARPA is currently working on two projects which could lead to killer robots. One, Fast Lightweight Autonomy (FLA), is designing a tiny rotorcraft to manoeuvre unaided at high speed in urban areas and inside buildings. The other, Collaborative Operations in Denied Environment (CODE), aims to develop teams of autonomous aerial vehicles carrying out “all steps of a strike mission — find, fix, track, target, engage, assess” in situations in which enemy signal-jamming makes communication with a human commander impossible.
Last year Angela Kane, the UN’s high representative for disarmament, said killer robots were just a ‘small step’ away and called for a worldwide ban. But the Foreign Office has said that while the technology has potentially “terrifying” implications, Britain “reserves the right” to develop it to protect troops.
Professor Russell said: “LAWS could violate fundamental principles of human dignity by allowing machines to choose whom to kill — for example, they might be tasked to eliminate anyone exhibiting ‘threatening behaviour’.
“Debates should be organized at scientific meetings; arguments studied by ethics committees. Doing nothing is a vote in favour of continued development and deployment.”
However Dr Sabine Hauert, a lecturer in robotics at the University of Bristol, said that the public did not need to fear the developments in artificial intelligence.
“My colleagues and I spend dinner parties explaining that we are not evil but instead have been working for years to develop systems that could help the elderly, improve health care, make jobs safer and more efficient, and allow us to explore space or beneath the ocean,” she said.