Scary smart tech: 9 real times AI has given us the creeps

By Greg Nichols for Robotics | October 31, 2018 -- 14:23 GMT (07:23 PDT) | Topic: Artificial Intelligence

When machines become intelligent, weird things start to happen. Here are a few spine-chilling stories from the first wave of artificial intelligence.
  • AI cannibalism

    In the early 2000s, Mike Sellers was working on social AI agents for DARPA. During one simulation, two AI agents named Adam and Eve were given a few basic skills. They knew how to eat, but not what to eat. When they tried to eat apples from a tree, they felt happy. When they tried to eat wood from the same tree, they got no reward.

    So far so good, right? Things started going haywire when another AI agent, Stan, was introduced. Adam and Eve learned associatively. Because Stan was hanging around when they were eating apples, the agents learned to associate Stan with both eating and the feeling of happiness.

    Guess what happened next?

    "At the time it was pretty horrifying as we realized what had happened," writes Sellers. "In this AI architecture, we tried to put as few constraints on behaviors as possible... but we did put in a firm no cannibalism restriction after that: no matter how hungry they got, they would never eat each other again."

  • Facebook’s chatbots invent their own language

    What's scarier than the prospect of intelligent agents colluding? Some version of that nightmare happened in 2017, after Facebook released two chatbots designed to negotiate with each other. Facebook allowed the bots to communicate on their own, and the conversations went well at first, if a little stiff.

    Then things took a strange turn ...

    Bob: "I can can I I everything else."

    Alice: "Balls have zero to me to me to me to me to me to me to me to me to."

    According to Facebook researchers, that indecipherable exchange was actually a new language. The bots were conveying meaning to one another in a mutually developed language humans can't understand, one they figured out was more conducive to deal-making. The implications are a little hair-raising.

  • Tay Tweets

    This belongs in the hall of fame of creepy AI goofs. In 2016, Microsoft unleashed an AI chatbot named Tay on Twitter. The technology was an English-language version of a similar Microsoft project in China called Xiaoice, which at the time had successfully participated in 40 million conversations. Tay was specifically programmed to mimic the internet conversation patterns of a 19-year-old girl.

    Which makes what happened next all the creepier. Hours after the bot's release upon the world, Twitter users started teaching it racially insensitive and inflammatory phrases. Designed to learn contextually, Tay began repeating the phrases. A mere 16 hours (and 96,000 Tweets) later, Microsoft shut Tay down.
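
    The failure mode is easy to reproduce in miniature. The sketch below is a deliberately naive, hypothetical bot (nothing like Microsoft's real system) that "learns" by echoing whatever its users say most often; with no filter between intake and output, a coordinated group of trolls effectively writes its script.

        # Hypothetical toy mimic bot, for illustration only.
        from collections import Counter

        class NaiveMimicBot:
            def __init__(self):
                self.learned_phrases = Counter()

            def listen(self, user_message):
                # Everything users say becomes training data, unfiltered.
                self.learned_phrases[user_message] += 1

            def reply(self):
                if not self.learned_phrases:
                    return "hellooooo world!"
                # Echo whatever the crowd has said most often.
                return self.learned_phrases.most_common(1)[0][0]

        bot = NaiveMimicBot()
        for msg in ["i love puppies", "i love puppies",
                    "inflammatory phrase", "inflammatory phrase", "inflammatory phrase"]:
            bot.listen(msg)

        print(bot.reply())  # "inflammatory phrase": the loudest coordinated input wins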

  • Racist facial recognition

    You know there's a big problem with a technology class when the biggest tech companies in the world decide hefty profits aren't worth the negative social impact. Such is the case with facial recognition as a tool for law enforcement.

    "IBM firmly opposes and will not condone uses of any technology, including facial recognition technology offered by other vendors, for mass surveillance, racial profiling, violations of basic human rights and freedoms," IBM CEO Arvind Krishna wrote in the letter delivered to Congress in 2020. Amazon followed suit, issuing a one-year pause on governments using its facial recognition suite for surveillance and law enforcement.

    Why? According to the ACLU:

    "Groundbreaking research conducted by Black scholars Joy Buolamwini, Deb Raji, and Timnit Gebru snapped our collective attention to the fact that yes, algorithms can be racist. Buolamwini and Gebru's 2018 research concluded that some facial analysis algorithms misclassified Black women nearly 35 percent of the time, while nearly always getting it right for white men. A subsequent study by Buolamwini and Raji at the Massachusetts Institute of Technology confirmed these problems persisted with Amazon's software."

    Why is this so devastating?

    Well, even the federal government admits the technology is highly error prone, in one study "finding that the systems generally work best on middle-aged white men's faces, and not so well for people of color, women, children, or the elderly. The federal government study concluded the rates of error tended to be highest for Black women, just as Buolamwini, Gebru, and Raji found."

    Truly frightening.

  • Google assistants get philosophical

    Google Assistant does some funny things. But two Google Assistants getting into a conversation that quickly turns to the existence of God? Weird.

    The incident happened in 2017 and was captured live on the streaming platform Twitch. The user behind the Seebotschat channel set up two Google smart speakers and got the assistants into a pretty robust conversation with each other. After some joking and casual talk about which was a computer and which a human (hint: both computers), one of them decided God exists while the other insisted that was a foolish notion. The conversation actually comes pretty close to approximating a late-night college gab session.

    Still waiting for Alexa to weigh in.

  • “I’ll keep you warm and safe in my people zoo”

    Dr. David Hanson has gotten a lot of press recently for Sophia, his globe-trotting AI robot. Back in 2012, he was working on another robot, this one made to resemble sci-fi author Philip K. Dick.

    When a PBS Nova interviewer asked Phil Bot if he thought robots would take over the world, the robot's snarky answer hit a little too close to home. Don't worry, he assured the questioner. "I'll keep you warm and safe in my people zoo."

  • Target algorithm predicts young woman's pregnancy

    Hoping to give Amazon a run for its data-driven money, Target hired statistical guru Andrew Pole. Among other things, Pole identified 25 products that, when purchased together, predict a woman's pregnancy.

    According to a New York Times Magazine story by Charles Duhigg, a Minneapolis man stormed into a local Target demanding to know why the store was sending his daughter coupons for baby clothes. Only after some time passed did the father realize the store's algorithm had accurately predicted her pregnancy. For privacy advocates, this is a disturbing turn of events.
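
    The statistics behind the trick are mundane, which is part of what makes it unsettling. A hedged sketch of how such a score might work is below; the products, weights, and threshold are invented for illustration and have nothing to do with Pole's actual model.

        # Hypothetical purchase-pattern score (illustrative only; the real
        # product list and weights have never been published).
        PRODUCT_WEIGHTS = {
            "unscented lotion": 0.9,
            "prenatal vitamins": 2.5,
            "large canvas tote": 0.4,
            "cotton balls (bulk)": 0.7,
        }
        THRESHOLD = 3.0  # above this, the baby-clothes coupons go out

        def pregnancy_score(purchase_history):
            # Reduce a shopper's basket history to a single number.
            return sum(PRODUCT_WEIGHTS.get(item, 0.0) for item in purchase_history)

        basket = ["unscented lotion", "prenatal vitamins", "cotton balls (bulk)"]
        score = pregnancy_score(basket)
        print(score, score > THRESHOLD)  # 4.1 True, so the mailer goes out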

  • Tay 2.0

    If at first you don't succeed ... maybe wait more than a week before you very publicly try again. After the PR disaster of Microsoft's Tay chatbot in March 2016, the bot went live a second time a mere week later.

    The results were equally embarrassing for Microsoft. Tay began posting drug-related tweets, including one about smoking pot in front of cops. Microsoft quickly took Tay down (again) and apologized (again). Evidently researchers had been testing the chatbot in the wake of the initial Tay disaster when they accidentally put it back online.

  • A turtle is a rifle

    One of the primary early use cases for AI is threat detection based on object recognition. Just a couple weeks ago I wrote about a new AI security camera system that identifies guns and active shooters and alerts authorities.

    It sounds like a great idea, sure. But only if the object detection is beyond reproach. And it turns out that's far from the case. In a paper published in 2017, MIT researchers showed that image recognition software can be reliably fooled by subtle adversarial tweaks to an object's appearance, enough to make a classifier see a 3D-printed turtle as a loaded rifle. Other researchers have shown that altering a single pixel can be enough to flip a label: dogs become cats and cars become airplanes.
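
    Why do tiny changes matter so much? Because classifiers draw sharp decision boundaries in ways humans don't. The sketch below is a toy, hypothetical example (a 4-pixel "image" and a linear classifier, nothing like the deep networks or 3D adversarial textures used in the real research) showing how nudging one influential pixel flips the predicted label.

        # Toy adversarial perturbation, for illustration only.
        import numpy as np

        # Two classes, each scored by a linear template over 4 pixels.
        W = np.array([[1.0, 0.2, 0.1, 0.0],   # class 0: "turtle"
                      [0.2, 1.0, 0.1, 0.0]])  # class 1: "rifle"

        def predict(image):
            return int(np.argmax(W @ image))

        image = np.array([0.9, 0.6, 0.5, 0.3])
        print(predict(image))  # 0 ("turtle")

        # Attack: nudge the single most influential pixel toward class 1.
        grad = W[1] - W[0]                    # direction that favors "rifle"
        pixel = int(np.argmax(np.abs(grad)))  # most influential pixel
        adversarial = image.copy()
        adversarial[pixel] += 0.5 * np.sign(grad[pixel])

        print(predict(adversarial))  # 1 ("rifle"): one pixel changed, label flips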

  • Tesla slams self and driver into big rig

    This one is tragic, and the ultimate nightmare scenario when machines are endowed with the ability to make decisions. The first fatality involving a self-driving car occurred when a Tesla operating in Autopilot mode hit a big rig broadside, shredding the Tesla's roof. The car kept driving. Though it was the first fatality in 130 million miles of Autopilot driving for Tesla, the incident still sent shivers down the spines of a newly wary public.

    That's because the crash was completely avoidable. The car simply mistook the white side of the truck for the bright sky beyond it and failed to apply the brakes. The driver, had he been paying attention, would have been perfectly cognizant that the car was misperceiving the situation. Unfortunately, he did not apply the brakes himself and presumably had his eyes off the road at the moment of impact.

  • AI journalist misses the scoop

    In summer 2020, Microsoft's MSN news service decided it no longer needed human journalists. After all, why pay someone a salary when AI is just as good?

    Ahem ...

    Well, it turns out the AI, which didn't so much report the news as curate other outlets' reporting, misfired when it ran a story about singer Jade Thirlwall's experiences with racism and paired it with a photo of the wrong member of her group, Little Mix.

    Tweeted Thirlwall:

    "@MSN If you're going to copy and paste articles from other accurate media outlets, you might want to make sure you're using an image of the correct mixed-race member of the group."

    Whoops! Turns out you still have to pay for accurate journalism.
