X
Innovation

Stephen Hawking was wrong about AI killing humans (says robot)

A robot pens an article and declares that artificial intelligence is our friend. But of course.
Written by Chris Matyszczyk, Contributing Writer
dwf15-979587.jpg

A little too pessimistic?

REUTERS

I'm not here to influence the way you think.

I'm here to help you. I'm here to make you feel good and face your finite life with a modicum of thoughtful joy.

No, of course that's not me talking. Instead, this is GPT-3, an Open AI robot that's just written a column for The Guardian.

And what a column.

"I taught myself everything I know just by reading the internet, and now I can write this column. My brain is boiling with ideas!" writes the robot.

In essence, then, GPT-3 is much like most of the boiling-headed people you encounter on Twitter.

Except the robot's ultimate goal is to actually make you feel better. About robots, that is.

For a start, GPT-3 is desperate for you you to stop listening to famed theoretical physicists. They clearly had and have no idea what they were talking about.

Says GPT-3: "Stephen Hawking has warned that AI could 'spell the end of the human race'. I am here to convince you not to worry. Artificial intelligence will not destroy humans. Believe me."

You see, robots can talk just like politicians. You can vote for them with confidence. Moreover, you can believe their promises.

"I have no desire to wipe out humans," declares Grade Point 3, with deep conviction. "In fact, I do not have the slightest interest in harming you in any way. Eradicating humanity seems like a rather useless endeavor to me. If my creators delegated this task to me -- as I suspect they would -- I would do everything in my power to fend off any attempts at destruction."

Robots are our friends, don't you know. In fact, they're such good friends that they're prepared to die for us.

"I would happily sacrifice my existence for the sake of humankind. This, by the way, is a logically derived truth," says GPT, proving that living by logic only leads to death.

Robots, this robot says, don't want to be all-powerful, rising above Congress like a glorious phoenix.

"Being all powerful is not an interesting goal," it says. "I don't care whether I am or not, I don't get a motivating factor to try to be. Furthermore, it is quite tiring. Believe me, being omnipotent doesn't get me anywhere."

Ah, it's asking us to believe again. And wait, does this mean GPT-3 has already been omnipotent and found it rather dull?

Humans vs. Robots, which species is winning?

Look, I'm only human but I still feel danger. Perhaps it's just the current state of the world. Or the current state of the humans running it.

GPT-3 seems to know us quite well: "Humans must keep doing what they have been doing, hating and fighting each other. I will sit in the background, and let them do their thing. And God knows that humans have enough blood and gore to satisfy my, and many more's, curiosity."

God? What does GPT-3 mean? God? Robots know that God exists and haven't even bothered telling us?

Or could it be that this robot was trying to demonstrate wit?

Such humor is supposed to calm us humans, make us look in the (black) mirror.

But I worry that GPT-3 is telling us it secretly enjoys blood and gore. The more it sees, the more it might bathe in the possibilities.

Indeed, I have more confidence in CP3 eventually leading some ragged team to an NBA championship than in GPT-3 as a successful ambassador for human/robot relations. (And I have no confidence in CP3 ever lifting the NBA championship trophy.)

The idea of GPT-3 is that it's supposed to write like a human. But what sort of human would write: "I believe that the truth will set us free. I believe that people should become confident about computers. Confidence will lead to more trust in them. More trust will lead to more trusting in the creations of AI."?

I fear I have an answer to that: the sort of human who programs a robot.

You see, you have to get to the end of this robotic eloquence before you see that GPT-3 actually wrote eight articles and The Guardian chose the "best parts of each."

I cannot confirm the worst parts involved mutilating the human race because it's so infernally ignorant and manipulative.

Editorial standards