
Artificial intelligence: Should we be as terrified as Elon Musk and Bill Gates?

Artificial intelligence will power the next wave of IT automation, but the time is now for the tech industry to put guidelines in place to guard against its long-term dangers.
Written by Jason Hiner, Editor in Chief
Elon Musk (left) and Bill Gates (right) have both raised concerns about artificial intelligence.
Images: CNET

Elon Musk and Bill Gates have been as fearless as any entrepreneurs and innovators of the past half century. They have eaten big risks for breakfast and burped out billions of dollars afterward.

But today, both are terrified of the same thing: artificial intelligence.

In a February 2015 Reddit AMA, Gates said, "First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though the intelligence is strong enough to be a concern ... and [I] don't understand why some people are not concerned."

In a September 2015 CNN interview, Musk went even further. He said, "AI is much more advanced than people realize. It would be fairly obvious if you saw a robot walking around talking and behaving like a person... What's not obvious is a huge server bank in a vault somewhere with an intelligence that's potentially vastly greater than what a human mind can do. And its eyes and ears will be everywhere, every camera, every device that's network accessible... Humanity's position on this planet depends on its intelligence, so if our intelligence is exceeded, it's unlikely that we will remain in charge of the planet."

Gates and Musk are two of the world's most credible thinkers, who have not only put forward powerful new ideas about how technology can benefit humanity, but have also put those ideas into practice with products that make things better.

And still, their comments about AI tend to sound a bit fanciful and paranoid.

Are they ahead of the curve and able to understand things that the rest of us haven't caught up with yet? Or, are they simply getting older and unable to fit new innovations into the old tech paradigms that they grew up with?

To be fair, others such as Stephen Hawking and Steve Wozniak have expressed similar fears, which lends credibility to the position that Gates and Musk have staked out.

What this really boils down to is that it's time for the tech industry to put guidelines in place to govern the development of AI. They're needed because the technology could be developed with altruistic intentions but eventually be co-opted for destructive purposes--in the same way that nuclear technology became weaponized and spread rapidly before it could be properly checked.

In fact, Musk has drawn that comparison directly. In 2014, he tweeted, "We need to be super careful with AI. [It's] potentially more dangerous than nukes."


AI is already creeping into military use with the rise of armed drone aircraft that carry out attacks against enemy targets with no pilot on board. For now, they are remotely controlled by soldiers. But the question has been raised: how long until the machines are given specific humans or groups of humans--enemies in uniform--to target, along with the autonomy to shoot to kill once they acquire those targets? Should it ever be ethical for a machine to make the judgment call to take a human life?

These are the kinds of conversations that need to happen more broadly as AI technology continues its rapid development. Governments will certainly want to get involved with laws and regulations, but the tech industry can pre-empt and shape that response by putting together its own standards of conduct and ethical guidelines before nations and regulatory bodies harden the lines.

Stuart Russell, a computer science professor at the University of California, Berkeley, has also compared the development of AI to that of nuclear weapons. Speaking to the United Nations in Geneva in April about these concerns, Russell said, "The basic scenario is explicit or implicit value misalignment--AI systems [that are] given objectives that don't take into account all the elements that humans care about. The routes could be varied and complex--corporations seeking a supertechnological advantage, countries trying to build [AI systems] before their enemies."

Russell recommended putting guidelines in place for students and researchers to keep human values at the center of all AI research.

Private-sector giant Google--which has long explored AI and dove even deeper with its 2014 acquisition of DeepMind--has set up an ethics review board to oversee the safety of the AI technologies it develops.

All of this calls for a public-private partnership to turn up the volume on these conversations and put well-thought-out frameworks in place.

Let's do it before AI has its Hiroshima.

For more on how businesses are going to use AI, see our ZDNet-TechRepublic special feature AI and the Future of Business.

