Killer Robots Technically Infeasible… For Now.

Disclaimer: The views expressed within this article are entirely the author’s own and are not attributable to Wessex Scene as a whole.

Although the age of killer robots may seem far off, it is closer than we’d like. Can we afford to give machines the power to kill humans?

UK Student/Young Pugwash (SYP) is holding its first Festival of Ethical Science, a month-long event targeted at STEM students. From the climate emergency to open-source research, the series of webinars tackles some of the ethical and political challenges faced by science and technology in the 21st century. I attended the talk ‘Code for Coders – Killer Robots, Ethics and the Law’, which gave comforting reasons as to why fully autonomous weapons seem technically infeasible, at least for now.

Lethal autonomous weapon systems (LAWS), also known as killer robots, are weapon systems equipped to select, engage and attack targets without meaningful human control. In effect, this technology delegates the decision to deploy lethal force to robots. To read more about the arguments for pre-emptively banning LAWS, see my article ‘The Rise of Killer Robots’. On top of the legal, security and ethical concerns underlined by academics and roboticists worldwide, LAWS pose a severe technical conundrum. As Taniel Yusef from the UK Women’s International League for Peace and Freedom (WILPF) argued in ‘Code for Coders’, the fog of data in warfare would inevitably confuse robots.

Imagine this: an autonomous weapon is trained by developers to recognize a battleship in the lab. A few million images are fed into the system and, after many tests and trials, the robot eventually recognizes ships as long, 3-dimensional objects with tall masts against a palette of blue and grey backgrounds. Now imagine the diversity of shapes, sizes, mast heights, weather conditions and environments a ship might take on or find itself in. Under infinite and unpredictable maritime warfare scenarios, could a robot distinguish between enemy battleships and one of its own? Could it reliably recognize an aircraft carrier as a ship? How much uncertainty are we comfortable with?
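To see why that last question matters, consider a toy sketch in Python. Everything here, from the function name to the 95% threshold, is invented for illustration; no real targeting system is this simple. The point is that a classifier only ever returns a confidence score, and someone must decide in advance how sure is sure enough:

```python
# Toy illustration of the uncertainty problem -- not any real system.
# A classifier never says "that is a battleship"; it reports a score,
# and somebody must pick the threshold at which scoring becomes firing.

def decide(confidence: float, threshold: float = 0.95) -> str:
    """Turn the model's confidence that the object is an enemy
    battleship into an action. No threshold eliminates mistakes:
    raise it and the weapon hesitates, lower it and it misfires."""
    return "ENGAGE" if confidence >= threshold else "HOLD"

# The same ship, seen through fog, at a strange angle, or with an
# unusual mast, can land anywhere on this scale.
for score in (0.99, 0.96, 0.94, 0.52):
    print(f"confidence {score:.2f} -> {decide(score)}")
```

Wherever the threshold is set, some real ships fall just below it and some misidentified objects land just above it; the choice only moves the mistakes around, it doesn’t remove them.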

Even the most basic target-selection functions are built from countless software components, refined through successive updates and reinforced through machine learning. Upon deployment in the field, robots would invariably encounter and perceive incomplete or obscured objects. Machines are expected to operate reliably in unknown circumstances and despite data ‘noise’. To overcome irregularities and anticipate adversary actions, they must learn from previous encounters. This could lead to many unnecessary deaths. Sure, the machine wouldn’t make the same mistake again, but are lives expendable in the meantime?
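As a rough illustration of that ‘noise’ problem, here is another purely hypothetical sketch (the function and numbers are made up for this article): the more of a target is obscured, the less the score feeding the decision logic can be trusted, yet the decision still gets made.

```python
import random

# Purely illustrative sketch of 'data noise': the more of the target
# is hidden (smoke, spray, battle damage), the less the classifier's
# score can be trusted -- yet the downstream decision still gets made.

def observed_confidence(clear_view_score: float, occlusion: float) -> float:
    """Knock the score down by a random amount that grows with how
    much of the object is obscured."""
    noise = random.uniform(0.0, occlusion)
    return max(0.0, clear_view_score - noise)

random.seed(1)  # fixed seed so the demo is repeatable
for occlusion in (0.0, 0.3, 0.6):
    score = observed_confidence(clear_view_score=0.97, occlusion=occlusion)
    print(f"target {occlusion:.0%} obscured -> confidence {score:.2f}")

# A system that 'learns from previous encounters' only corrects such
# errors after acting on them -- after the mistake has already been made.
```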

We cannot account for, let alone control, the decisions made by a self-learning algorithm. There is also the undesirable matter of hacking and machines falling into the wrong hands. The consequences could be Frankensteinian.

Despite these technical impracticalities, I believe several weapon systems already demonstrate a worrying trend towards increasing autonomy. Unless constraints are put in place now, lethal autonomous weapons may be deployed in the coming years rather than decades.
