The Terminator scenario: Perhaps not so fictional after all

Sometime in the future: Military robots have joined forces and are targeting humans using Google Latitude. Tribes of robots have banded together in various metropolitan areas. Early reports indicate a programming malfunction.

Sounds like Terminator, doesn't it? It could be real life some day if we don't get our robot-programming ducks in a row. Military robots need to be taught a warrior code and ethics, or we could all be in for a world of hurt, according to a report.

That conclusion comes from a lengthy report by Cal Poly researchers for the U.S. Department of the Navy's Office of Naval Research. The report, which was detailed in a Times Online story on Feb. 16, contains a few interesting passages to ponder ahead of the weekend.

Also see: Gallery: Armies of combat robots

According to the report:

The use of military robots represents a new era in warfare, perhaps more so than crossbows, airplanes, nuclear weapons, and other innovations have previously. Robots are not merely another asset in the military toolbox, but they are meant to also replace human soldiers, especially in ‘dull, dirty, and dangerous’ jobs. As such, they raise novel ethical and social questions that we should confront as far in advance as possible—particularly before irrational public fears or accidents arising from military robotics derail research progress and national security interests.

On the bright side, autonomous military robots would save lives, since human soldiers would be kept out of harm's way. The problem: all of the software, hardware and other odds and ends used to create our future soldiers may not mesh so well in the field.

Here's a hierarchy of a robot system today:
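
Roughly speaking, it's a stack: hardware and sensors at the bottom, perception and decision-making software in the middle, and actuators (including weapons) at the end of the chain. Here's a minimal sketch of that kind of sense-think-act stack in Python; the layer names and interfaces are my own illustration, not the report's:

    # Illustrative layered robot-control stack. Layer names and
    # interfaces are assumptions for illustration only.

    class SensorLayer:
        """Raw hardware I/O: cameras, acoustic sensors, GPS."""
        def read(self):
            return {"muzzle_flash": False, "position": (0.0, 0.0)}

    class PerceptionLayer:
        """Turns raw sensor data into a world model."""
        def interpret(self, raw):
            return {"threat_detected": raw["muzzle_flash"]}

    class DecisionLayer:
        """Applies the rules of engagement to the world model."""
        def decide(self, world):
            return "engage" if world["threat_detected"] else "hold"

    class ActuationLayer:
        """Drives the motors and weapons: the end of the chain."""
        def act(self, command):
            print("executing:", command)

    def control_loop():
        sensors, perception = SensorLayer(), PerceptionLayer()
        decision, actuation = DecisionLayer(), ActuationLayer()
        raw = sensors.read()                # a bug here...
        world = perception.interpret(raw)   # ...or here...
        command = decision.decide(world)    # ...or here...
        actuation.act(command)              # ...ends up here.

    control_loop()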

Any of those layers could lead to problems. As for me, I picture these robots going rogue and tapping into Google (Skynet in this scenario). In any case, it can't be good. Here are the risk-reward scenarios outlined in the report:

Imagine the face of warfare with autonomous robotics: Instead of our soldiers returning home in flag-draped caskets to heartbroken families, autonomous robots—mobile machines that can make decisions, such as to fire upon a target, without human intervention—can replace the human soldier in an increasing range of dangerous missions: from tunneling through dark caves in search of terrorists, to securing urban streets rife with sniper fire, to patrolling the skies and waterways where there is little cover from attacks, to clearing roads and seas of improvised explosive devices (IEDs), to surveying damage from biochemical weapons, to guarding borders and buildings, to controlling potentially hostile crowds, and even as the infantry frontlines.

These robots would be ‘smart’ enough to make decisions that only humans now can; and as conflicts increase in tempo and require much quicker information processing and responses, robots have a distinct advantage over the limited and fallible cognitive capabilities that we Homo sapiens have. Not only would robots expand the battlespace over difficult, larger areas of terrain, but they also represent a significant force multiplier—each effectively doing the work of many human soldiers, while immune to sleep deprivation, fatigue, low morale, perceptual and communication challenges in the ‘fog of war’, and other performance-hindering conditions.

But the presumptive case for deploying robots on the battlefield is more than about saving human lives or superior efficiency and effectiveness, though saving lives and clearheaded action during frenetic conflicts are significant issues. Robots, further, would be unaffected by the emotions, adrenaline, and stress that cause soldiers to overreact or deliberately overstep the Rules of Engagement and commit atrocities, that is to say, war crimes. We would no longer read (as many) news reports about our own soldiers brutalizing enemy combatants or foreign civilians to avenge the deaths of their brothers in arms—unlawful actions that carry a significant political cost. Indeed, robots may act as objective, unblinking observers on the battlefield, reporting any unethical behavior back to command; their mere presence as such would discourage all-too-human atrocities in the first place.

Technology, however, is a double-edged sword with both benefits and risks, critics and advocates; and autonomous military robotics is no exception, no matter how compelling the case may be to pursue such research. The worries include: where responsibility would fall in cases of unintended or unlawful harm, which could range from the manufacturer to the field commander to even the machine itself; the possibility of serious malfunction and robots gone wild; capturing and hacking of military robots that are then unleashed against us; lowering the threshold for entering conflicts and wars, since fewer US military lives would then be at stake; the effect of such robots on squad cohesion, e.g., if robots recorded and reported back the soldier’s every action; refusing an otherwise legitimate order; and other possible harms.

Creating autonomous military robots that can act at least as ethically as human soldiers appears to be a sensible goal, at least for the foreseeable future and in contrast to a greater demand of a perfectly ethical robot. However, there are still daunting challenges in meeting even this relatively low standard, such as the key difficulty of programming a robot to reliably distinguish enemy combatants from non-combatants, as required by the Laws of War and most Rules of Engagement.

As I leaf through this report, I can't help but think of malfunctions and robots turning against us.

How do we avoid this potential Terminator scenario? New programming, of course:

Serious conceptual challenges exist with the two primary programming approaches today: top down (e.g., rule following) and bottom up (e.g., machine learning). Thus a hybrid approach should be considered in creating a behavioral framework. To this end, we need a clear understanding of what a ‘warrior code of ethics’ might entail, if we take a virtue ethics approach in programming.
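
To make that concrete, here's a minimal sketch of what a hybrid framework might look like: a bottom-up learned classifier proposes an action, and top-down hard rules hold a veto. Everything here, from the function names to the threshold, is my own illustration rather than anything the report specifies:

    # Toy hybrid behavioral framework: a learned policy proposes,
    # hard-coded rules veto. All names and thresholds are illustrative
    # assumptions, not anything specified in the report.

    def learned_threat_score(obs):
        """Stand-in for a bottom-up, machine-learned classifier."""
        return 0.97 if obs.get("muzzle_flash") else 0.02

    # Top-down rules: inviolable constraints from a 'warrior code'.
    HARD_RULES = [
        lambda obs: not obs.get("target_is_surrendering", False),
        lambda obs: not obs.get("protected_site_nearby", False),
    ]

    def decide(obs, threshold=0.9):
        if learned_threat_score(obs) < threshold:      # bottom-up judgment
            return "hold"
        if not all(rule(obs) for rule in HARD_RULES):  # top-down veto
            return "hold"
        return "escalate to human operator"            # final call stays human

    print(decide({"muzzle_flash": True, "protected_site_nearby": True}))
    # -> hold: the learned layer says 'threat', but a hard rule vetoes

The point of the hybrid split is that the learned layer can be wrong in novel situations, while the rule layer encodes the non-negotiable parts of a warrior code.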

And you thought multi-core programming was going to be tricky. Let's hope programmers aren't as stupid as some people think.

Talkback

  • Terminator scenario unlikely, but misuse is a definite.

    I don't buy into the Terminator scenario very much. The idea of robots
    rising up in a coordinated effort is incredibly unrealistic at this point.
    Though I will admit it's possible, it is unlikely.

    What I don't doubt, however, is that these robots will be misused.
    There are unpopular wars going on now, and a lot of that unpopularity
    comes from the deaths of soldiers, and for good reason. What happens
    when the government can guarantee that there will be no human
    losses? More war.

    You say that robots can be more humane because they do not suffer
    from stress and fear. While that is a good thing, it is also dangerous,
    because they do not care. They will kill, when ordered, no
    questions asked. If anyone seriously thinks that the military will
    acquire robots with a programmed conscience that might disobey if
    given unlawful orders, then you are fooling yourself. They will do
    whatever is commanded of them.

    You say that since robots are uncaring observers, they will be
    calling back to base what is truly going on, without human bias. That
    is a good thing for sure, but do you really think that the command
    center is then going to release unedited tapes about any incidents? Or,
    for that matter, even admit that they happened?
    Having human beings involved is a good thing because they have
    guilt. If something horrible does happen, it eventually comes out. If all
    you have are centrally controlled data files, then good luck ever
    getting the truth.

    Hacking/viruses/bugs: These robots will be built with software, and
    human error or intentional cracking will occur. I find this aspect of
    robots the most frightening. There will have to be a way for the
    robots to call home, and this point of entry will be exploited.

    This technology will not stay in the US. If you think that the US is
    going to rest on its robot laurels, then you are in a fantasy land. The
    last thing we need is a cold robot war.
    I don't know about you, but any unmanned weapons platform should
    be opposed. I don't care whether it's controlled by a human or not; having
    these weapons platforms in any form opens up a very steep slippery
    slope. They should be universally banned.
    ChrisOPeterson
  • Now(ish)

    The SWORDS platform pictured above is effectively an automatic target acquisition and dispatch device. Its "Warrior Code" is rather simple: shoot the area around the muzzle flash. Presumably it can in the near future trace ballistics on incoming fire to its source when flash suppression becomes effective in a target. What more do you need? If it shoots, kill it. Everything else is recon, demolitions, or a remote.

    The point of the robot soldier is not to make decisions but to complete tasks. You can make a machine that shoots anything that moves, but that should be clearly wrong even to an eleven-year-old. You can make something that kills everything humanoid, or humanoid and above a certain height (or below, if you're a real sicko, not that above ain't sick). You could also make it kill everything humanoid and carrying a metal object, but there are strategic disadvantages, even if the ethics are "good enough" for war 1.0. Or you can shoot back from an armored platform with the kind of reflexes that bull's-eye double digits (and even triple digits) each second. It's not rocket science, it's target filtration.
    thomasmarshall3@...
    • And...

      You can program the logic at the chip level, and program another robot to shoot anything that tries to access the building the chip is in. There are less ethical options that possess tactical advantages, such as shooting anything that moves and is not carrying an appropriately encoded transponder device, which can then be sold to an enemy to allow passage through something like the Korean DMZ, affording tremendous intel-gathering options. Shoot 'em till they die, then shoot 'em till they spy.
      thomasmarshall3@...
  • The AI has a long way to go before this is an issue

    Ethics are a lot of fun to discuss, but given the level of AI we are at, these systems are not so autonomous that having them go out of control is an issue, yet.
    happyharry_z
  • Would future wars ever end if no human soldiers are harmed?

    It's more likely that the military would lose sight of the original objective(s) because they enjoy winning so much. Or, put another way, there's not much scope for losing.

    Could you imagine this technology in the hands of a military dictatorship? Or an attempted military coup?

    Yes, it's at an early stage, but it will happen one day. And they won't bat an eyelid at selling this technology to the other side.
    Custard_over_2x_Pie
    • Short answer: yes

      The side with the larger industrial capacity would destroy robots faster
      than the other side could replace them. Then they would start killing
      people. Then the other side would surrender and the war would be over.
      frgough
      • An EMP could take them out

        simultaneously!

        Also, I was thinking back to an article about how movie restoration still requires human intervention, because there are parts of an image that a computer can't fathom. In The Wizard of Oz, for example, Dorothy's sparkly shoes were different in every frame and the computer got confused.

        So... just wear a sparkly suit, and you'll be able to get a few shots off before the robot works out what the heck is standing in front of it.

        'Don't shoot me I'm only the piano player'
        :p
        Custard_over_2x_Pie
      • The side...

        ..with the larger industrial capacity isn't automatically guaranteed a win. It could never be that simple. Having a larger industrial capacity doesn't imply that the industry is not dependent on foreign resources to maintain its productive capacity. If, for instance, critical microprocessor technology is being outsourced, the enemy need only eliminate that outside source to cripple productive capacity.

        It's also likely that cyber warfare would play a major role in determining the winner of such a war. If the supposedly weaker country hacks the stronger country's robotic arsenal, they could use that country's weaponry against itself. Could you imagine what would happen if robots were ever given the responsibility of handling nukes? Such robots could be just one hack away from launching Armageddon on their masters.
        eMJayy
        • RE: Cyber war: So that's a Bot botnet then?

          Ultimately, any sort of hi-tech solution is at the mercy of the tiny details that contribute to the whole endeavour, any one of which could scupper the effort if not taken account of at an early stage in development.

          Not unlike huge software project failures of recent times.
          Custard_over_2x_Pie
      • Except

        Except they would not need to kill people to make them submit. They would all get tased or incapacitated in some less harmful manner.
        albeit
    • Captain Kirk could save us...

      ...like he did Vendikar and Eminiar VII in "A Taste of Armageddon"... it was kinda the same thing.

      All that would be missing would be the disintegration chambers.
      Hallowed are the Ori
      • Umm... I can't remember many of those early Trek episodes

        They were memorable for all the wrong reasons.

        Fighting almost every alien civilisation they encountered, or baking hot/frigidly cold alien planets.

        Could you explain the episode? :)

        Custard_over_2x_Pie
        • Sure...

          Courtesy of IMDB:
          ********
          On a mission to establish diplomatic relations, Kirk and Spock beam down to the planet to learn that its inhabitants have been at war with a neighboring planet for over 500 years. They can find no damage however and no evidence of destruction. They soon learn that the war is essentially a war game where each planet attacks the other in a computer simulation with the victims voluntarily surrendering themselves for execution after the fact. When the Enterprise becomes a victim of the computer simulation and ordered destroyed, Kirk decides it's time to show them exactly what war means.
          ******

          He essentially destroys the war computers on Eminiar VII, abrogating the agreement with Vendikar and forcing each planet to either wage [b]real[/b] war or talk peace.

          As I said, it's [i]kinda[/i] the same thing... machines are doing the fighting... but people are still dying.
          Hallowed are the Ori
          • Thanks.

            I seem to recall a similar kind of scenario used in Stargate SG1.

            Anyway, the Trek episode didn't jog my memory. But I get where you're coming from with your comment. :)
            Custard_over_2x_Pie
          • SG:A, actually

            The episode is called 'The Game', IIRC, and features McKay and Sheppard playing an RTS instead of chess or some other competitive game.

            Only problem is, the Ancient idiots who built the thing actually meant the interface to 'guide' two separate civilisations, and what John and Rodney thought was a game was being enacted in real life, with themselves as demigods of a sort.

            <sigh> Looking forward to the movie and SG:U.
            Da-G
  • RE: The Terminator scenario: Perhaps not so fictional after all

    Hell, while we are at it, we should create our own Cylons. Let's see what they can do :)
    The one and only, Cylon Centurion
    • Careful what you wish for

      http://www.msnbc.msn.com/id/20249628/

      eMJayy
      • Creepy

        :S
        The one and only, Cylon Centurion
  • RE: The Terminator scenario: Perhaps not so fictional after all

    I think the threat of humans using robots to abuse and enslave other humans is more likely than the robots becoming independent and doing it on their own. I believe it is inevitable, though. If we have second thoughts about developing these robots, others will not:
    China, Russia, maybe even Iran. Welcome to the Robo-Wars.
    Potatochip2001
  • RE: The Terminator scenario: Perhaps not so fictional after all

    Far more likely is robot-executed genocide.
    albeit