AI 'more dangerous than nukes': Elon Musk still firm on regulatory oversight

The man building a spaceship to send people to Mars has used his South by Southwest appearance to reaffirm his belief that the danger of artificial intelligence is much greater than the danger of nuclear warheads.


Entrepreneur Elon Musk has long held the position that innovators need to be aware of the social risk artificial intelligence (AI) presents to the future, and at South by Southwest (SXSW) on Sunday, the SpaceX founder laid out his plan for surviving a second Dark Ages, noting that AI "scares the hell" out of him.


Appearing on a couch alongside his friend Jonathan Nolan, co-creator of the science fiction western series Westworld, Musk said that although he's not usually an advocate of regulation and oversight, AI is a case where he makes an exception.


"This is a case where you have a very serious danger to the public, therefore there needs to be a public body that has insight and then oversight to confirm that everyone is developing AI safely -- this is extremely important," he said.

"Some AI experts think they know more than they do and they think they're smarter than they are ... this tends to plague smart people, they define themselves by their intelligence and they don't like the idea that a machine can be way smarter than them so they just discount the idea, which is fundamentally flawed.

"I'm very close to the cutting edge in AI and it scares the hell out of me."

Pointing to AlphaGo and its predecessor AlphaGo Zero, Musk said AI is capable of vastly more than anyone knows, and said the rate of improvement is also exponential.

In the span of six to nine months, AlphaGo went from being unable to beat a reasonably good Go player to beating a string of current and former world champions. AlphaGo Zero then crushed AlphaGo 100-0, having learned purely by playing against itself, and it can play essentially any game once it's fed the rules.

"No one predicted that rate of improvement," Musk said.

Predicting a similar trajectory for self-driving vehicles, Musk expects that by the end of next year self-driving will encompass essentially all modes of driving and be at least 100 to 200 percent safer than a human driver.

"The rate of improvement is really dramatic, but we have to figure out some way to ensure that the advent of digital super intelligence is one which is symbiotic with humanity. I think that's the single biggest existential crisis that we face, and the most pressing one," he said.

"The danger of AI is much greater than the danger of nuclear warheads, by a lot and nobody would suggest that we allow anyone to just build nuclear warheads if they want -- that would be insane.

"Mark my words: AI is far more dangerous than nukes, by far, so why do we have no regulatory oversight, this is insane."

The most important thing that needs to happen, Musk believes, is laying the framework for creating digital super intelligence, should humanity collectively decide that is the right move.

"We're already a cyborg in the sense that your phone and your computer are kind of an extension of you ... a low-bandwidth [extension]. I think we've got to build an interface -- we didn't evolve to have a communications jack -- there's got to be essentially a vast number of tiny electrodes that are able to read right from your brain," he explained.

"A digital extension of you, that is an AI, the AI extension of you, a tertiary layer of intelligence, so you've got your limbic system, your cortex, and the tertiary layer which is a digital AI extension of you, and high bandwidth connection is what achieves a tight symbiosis.

"I think that's the best outcome -- I hope so, if anyone's got better ideas, I'd love to hear it."

Musk previously stated that the race for AI could start World War III. Coupled with his SpaceX venture aimed at sending people into outer space, Musk said he's preparing for the possibility that another Dark Ages will hit the Earth.


"If there's likely to be another Dark Ages, which my guess is that there probably will be at some point ... particularly if there is a third world war, then we want to make sure that there's enough of a seed of human civilisation somewhere else to bring civilisation back and perhaps shorten the length of the Dark Ages," Musk said.

"A self-sustaining base ... ideally on Mars ... it's more likely the Mars base will survive than a Moon base. I think a Mars base and a Moon base that could help regenerate life back on Earth would be really important and to get that done before a possible World War III.

"Last century we had two massive world wars, three if you count the Cold War, I think it's unlikely that we'll never have a world war again ... this has been our pattern in the past."

Calling AI one of the two most stressful things in his life at present, Musk said the other, also keeping him up at night, is the production of the Tesla Model 3.
