
Warning: Info tech could wipe out all humans

Artificial intelligence, nanotech, biotech. It might all run horribly amok. You'd be surprised who says so.
Written by Mark Halper, Contributor

 

It's Out There: Renowned physicist Stephen Hawking is an adviser to a group floating the idea that artificial intelligence and other modern technologies could wipe out the human species.
 --
Artificial intelligence, nanotech, synthetic biology. It could all go terribly wrong - to say the least - and wipe out the human species. Within a century, no less!

Says who - Luddite News?

No! It's the world according to the University of Oxford's Future of Humanity Institute in the U.K. And it's not the only respected group worried about a possible info tech Armageddon.

The Institute, run by Swedish philosopher Nick Bostrom, warns that unforeseen, unintended consequences of technology could completely obliterate us - something that no disease, natural disaster or nuclear threat has ever managed to do, as the BBC reported last year.

While the BBC story is light on what form the destruction might actually take, Bostrom clearly believes it warrants attention: whatever it turns out to be, he says, our brave new technological era could contain "threats we have no track record of surviving."

In the BBC's words, Bostrom urges international policymakers to "pay serious attention to the reality of species-obliterating risks" because "if we get it wrong, this could be humanity's final century."

Bostrom says that while our technological capabilities are well advanced, our awareness of how they might run amok is woeful. In that sense, he noted, "we're at the level of infants in moral responsibility."

Indeed, we should be thinking seriously about all this, says none other than Martin Rees, also known as Lord Rees, who is Britain's Astronomer Royal and the former president of the prestigious Royal Society of scientists.

Since the BBC story appeared a year ago, he has helped establish the Centre for the Study of Existential Risk at the University of Cambridge, which, like the Oxford institute, is now probing the big question of whether our technology will swallow us whole.

Lord Rees set up the Cambridge centre with Skype co-founder Jaan Tallinn and Cambridge philosophy professor Huw Price. Advisers to the centre include renowned Cambridge physicist Stephen Hawking, Cambridge economist Partha Dasgupta and venture capitalist Hermann Hauser, among many others.

The year-old BBC story has been making the rounds again, perhaps because The Next Web recently ran an interview with philosopher Stuart Armstrong of Oxford's institute, in which Armstrong warns that artificial intelligence could annihilate us. In the interview (picked up by the Huffington Post), Armstrong offers a far-fetched scenario:

“Take an anti-virus program that’s dedicated to filtering out viruses from incoming emails and wants to achieve the highest success, and is cunning and you make that super-intelligent,” Armstrong continues. “Well it will realise that, say, killing everybody is a solution to its problems, because if it kills everyone and shuts down every computer, no more emails will be sent and as a side effect no viruses will be sent. This is sort of a silly example but the point it illustrates is that for so many desires or motivations or programmings, ‘kill all humans’ is an outcome that is desirable in their programming.”

And you were worried about the upcoming tornado season and such. Quick - into the AI shelter! Dorothy? Doroootheee?? Where are you Dorothy???

Photo is from Jim Campbell/Aero-News Network via Wikimedia

--

Note: This is my final week writing for SmartPlanet. Today's post is one of my last. Thanks for reading, and thanks for all your lively opinions in the comments section over these three spirited years. I'll miss it. You can contact me through the email button below, or at markhalper@aol.com. --MH


This post was originally published on Smartplanet.com
