The Rise of Killer Robots

Disclaimer: The views expressed within this article are entirely the author’s own and are not attributable to Wessex Scene as a whole.

Facial recognition technology is on the rise, and it is already being used by police in the public sphere to make targeted arrests and to deter protesters and activists. But law enforcement is not the only sector with a vested interest in the development of such technologies: tech companies are gradually working autonomous systems into our lives, often through features that simplify daily tasks. Isn’t it great that Google can save you time by predicting what you’re typing? Or that your phone unlocks itself by recognising your face or fingerprint?

But the very same AI-driven systems that make our lives easier are increasingly being weaponised. Computer systems can now perform tasks that normally require human intelligence, such as visual perception, speech recognition and decision-making, and weapons are edging towards full autonomy as humans fade out of the loop for certain military operations. The weapons manufacturer Kalashnikov, for example, has equipped a self-orienting machine gun with the same kind of recognition software that unlocks our phones; it can identify and fire at objects and faces without any human intervention. The Russian military is trialling driverless tanks that use the same technology as Tesla’s autonomous cars. Fully autonomous weapons could locate, select and engage targets based on sensory rather than human inputs. So-called killer robots don’t yet exist, but all the building blocks are in place.

Defence companies argue that this technology could make war safer and more humane. Machines lack the emotional and physical limitations of human soldiers: they do not suffer from war-associated trauma and they can react far more quickly. The truth is that autonomous weapons are likely to magnify the impact of war on those who do not possess this technology. A new robotic arms race looms as the US, China, Israel, Russia, South Korea and the UK increase the autonomy of their weapons. There are serious concerns that this technology crosses a moral threshold and will have unanticipated, potentially tragic consequences. Human judgement isn’t perfect, and the machines we create inherit our flaws. Take Amazon’s facial recognition software, for example: according to The New York Times, it misidentified women as men 19% of the time, and for dark-skinned women that figure rose to 31%.

On top of this, it is unclear who would be held responsible for unlawful acts committed by autonomous weapons. The designer, programmer, manufacturer, commander or government? The machine itself?

The solution seems clear: fully autonomous weapons should be banned before they ever go into production, and meaningful human control over military decisions must be maintained. Tech companies and individuals working in AI and robotics should pledge not to contribute to this lethal technology. Universities, which are likely to be at the forefront of research into autonomous systems, must safeguard their innovations to prevent them from being used in killer robots.
