
The Terminator scenario: Perhaps not so fictional after all

Written by Larry Dignan, Contributor

Sometime in the future--Military robots have joined forces and are targeting humans using Google Latitude. Tribes of robots have banded together in various metropolitan areas. Early reports indicate a programming malfunction.  

Sounds like Terminator, doesn't it? It could be real life someday if we don't get our robot programming ducks in a row. Military robots need to be taught a warrior code and ethics, or we could be in for a world of hurt, according to a report.

That conclusion was included in a big report by Cal Poly researchers for the U.S. Department of the Navy's Office of Naval Research. The report, which was detailed in a Times Online story on Feb. 16, contains a few interesting passages to ponder ahead of the weekend.

Also see: Gallery: Armies of combat robots

According to the report:

The use of military robots represents a new era in warfare, perhaps more so than crossbows, airplanes, nuclear weapons, and other innovations have previously. Robots are not merely another asset in the military toolbox, but they are meant to also replace human soldiers, especially in ‘dull, dirty, and dangerous’ jobs. As such, they raise novel ethical and social questions that we should confront as far in advance as possible—particularly before irrational public fears or accidents arising from military robotics derail research progress and national security interests.

On the bright side, autonomous military robots would save lives, since human soldiers wouldn't be put at risk. The problem: all of the software, hardware and other odds and ends used to create our future soldiers may not mesh so well in the field.

Here's a hierarchy of a robot system today:
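The report's own diagram of that hierarchy isn't reproduced here, but the gist is that a robot system stacks several layers, from hardware up through mission-level software, and a fault in any one of them can surface as bad behavior in the field. As a purely illustrative sketch (the layer names and the check below are hypothetical, not drawn from the report), you could model it like this:

    # Hypothetical sketch of a layered robot system; layer names are
    # illustrative only and not taken from the Cal Poly report.
    from dataclasses import dataclass

    @dataclass
    class Layer:
        name: str
        healthy: bool = True

    stack = [
        Layer("sensors and actuators"),
        Layer("firmware / device drivers"),
        Layer("operating system"),
        Layer("control and navigation software"),
        Layer("mission logic and rules of engagement"),
    ]

    def system_trustworthy(layers):
        """The robot is only as dependable as its weakest layer."""
        return all(layer.healthy for layer in layers)

    stack[2].healthy = False          # e.g., an operating-system-level malfunction
    print(system_trustworthy(stack))  # False: one bad layer compromises the whole system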

Any of those layers could lead to problems. As for me, I picture these robots going rogue and tapping into Google (Skynet in this scenario). In any case, it can't be good. Here are the risk-reward scenarios outlined in the report:

Imagine the face of warfare with autonomous robotics: Instead of our soldiers returning home in flag-draped caskets to heartbroken families, autonomous robots—mobile machines that can make decisions, such as to fire upon a target, without human intervention—can replace the human soldier in an increasing range of dangerous missions: from tunneling through dark caves in search of terrorists, to securing urban streets rife with sniper fire, to patrolling the skies and waterways where there is little cover from attacks, to clearing roads and seas of improvised explosive devices (IEDs), to surveying damage from biochemical weapons, to guarding borders and buildings, to controlling potentially hostile crowds, and even as the infantry frontlines.

These robots would be ‘smart’ enough to make decisions that only humans now can; and as conflicts increase in tempo and require much quicker information processing and responses, robots have a distinct advantage over the limited and fallible cognitive capabilities that we Homo sapiens have. Not only would robots expand the battlespace over difficult, larger areas of terrain, but they also represent a significant force multiplier—each effectively doing the work of many human soldiers, while immune to sleep deprivation, fatigue, low morale, perceptual and communication challenges in the ‘fog of war’, and other performance-hindering conditions.

But the presumptive case for deploying robots on the battlefield is more than about saving human lives or superior efficiency and effectiveness, though saving lives and clearheaded action during frenetic conflicts are significant issues. Robots, further, would be unaffected by the emotions, adrenaline, and stress that cause soldiers to overreact or deliberately overstep the Rules of Engagement and commit atrocities, that is to say, war crimes. We would no longer read (as many) news reports about our own soldiers brutalizing enemy combatants or foreign civilians to avenge the deaths of their brothers in arms—unlawful actions that carry a significant political cost. Indeed, robots may act as objective, unblinking observers on the battlefield, reporting any unethical behavior back to command; their mere presence as such would discourage all-too-human atrocities in the first place.

Technology, however, is a double-edge sword with both benefits and risks, critics and advocates; and autonomous military robotics is no exception, no matter how compelling the case may be to pursue such research. The worries include: where responsibility would fall in cases of unintended or unlawful harm, which could range from the manufacturer to the field commander to even the machine itself; the possibility of serious malfunction and robots gone wild; capturing and hacking of military robots that are then unleashed against us; lowering the threshold for entering conflicts and wars, since fewer US military lives would then be at stake; the effect of such robots on squad cohesion, e.g., if robots recorded and reported back the soldier’s every action; refusing an otherwise legitimate order; and other possible harms.

Creating autonomous military robots that can act at least as ethically as human soldiers appears to be a sensible goal, at least for the foreseeable future and in contrast to a greater demand of a perfectly ethical robot. However, there are still daunting challenges in meeting even this relatively low standard, such as the key difficulty of programming a robot to reliably distinguish enemy combatants from non-combatants, as required by the Laws of War and most Rules of Engagement.

As I leaf through this report, I can't help but think of malfunctions and robots turning against us.

How do we avoid this potential Terminator scenario? New programming, of course:

Serious conceptual challenges exist with the two primary programming approaches today: top down (e.g., rule following) and bottom up (e.g., machine learning). Thus a hybrid approach should be considered in creating a behavioral framework. To this end, we need a clear understanding of what a ‘warrior code of ethics’ might entail, if we take a virtue ethics approach in programming.
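To make the top-down/bottom-up distinction concrete, here's a minimal, hypothetical sketch of how a hybrid controller might be wired up: a 'bottom-up' learned model proposes an action, and 'top-down' hard-coded rules of engagement can veto it. The rules, function names, and threat scorer are placeholders for illustration; nothing here comes from the report:

    # Minimal hypothetical sketch of a hybrid top-down / bottom-up controller.
    # The rules and the "learned" scorer below are illustrative placeholders.

    RULES_OF_ENGAGEMENT = {
        "never_fire_on_noncombatants": lambda target: target["combatant"],
        "require_positive_identification": lambda target: target["identified"],
    }

    def learned_threat_score(target):
        # Stand-in for a bottom-up, machine-learned threat model.
        return 0.9 if target["combatant"] else 0.1

    def decide(target, threshold=0.8):
        # Bottom-up: the learned component proposes engaging high-threat targets.
        proposal = "engage" if learned_threat_score(target) >= threshold else "hold"
        # Top-down: explicit rules can veto the proposal, but never override a "hold".
        if proposal == "engage":
            for name, rule in RULES_OF_ENGAGEMENT.items():
                if not rule(target):
                    return f"hold (vetoed by rule: {name})"
        return proposal

    print(decide({"combatant": True, "identified": False}))
    # -> hold (vetoed by rule: require_positive_identification)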

And you thought multi-core programming was going to be tricky. Let's hope programmers aren't as stupid as some people think.
