Elon Musk has called on lawmakers to regulate companies building artificial intelligence (AI) before the technology becomes a risk to civilization, at which point it would be too late.
Musk's cautious stance on AI is well known, and he has backed it up by co-founding OpenAI to develop safer AI. Speaking at the US National Governors Association meeting, the boss of Tesla and SpaceX told delegates that AI is a "fundamental risk to the existence of human civilization" and needs proactive regulatory oversight.
"I have exposure to the very most cutting-edge AI and I think people should be really concerned about it. I keep sounding like an alarm bell, but until people see robots going down the street killing people, they don't know how to react because it seems so ethereal," he told governors.
"AI is a rare case where I think we need to be proactive in regulation instead of reactive, because I think by the time we are reactive, regulation is too late."
AI also differs from the dangers of car accidents, airplane crashes, faulty drugs, or bad food, all of which are regulated, Musk continued: "They were harmful to a set of individuals in society of course, but they were not harmful to society as a whole," he said.
His comments appeared to awaken some governors to the threat of AI once it surpasses human intelligence.
Musk was responding to a question about what impact AI would have on jobs and the workforce. He eventually did address the question of jobs, saying "robots will be able to do everything better than all of us".
He said a regulatory agency should be established to ensure the public good is served in a competitive environment that currently compels companies to race ahead with AI development unchecked.
"You've got companies that kind of have to race to build AI to remain competitive. If your competitor is racing to build AI and you don't, they will crush you," he said.
"That's where you need the regulators to come in and say, 'Hey guys, you all need to pause and make sure this is safe, and when it's cool and the regulator is convinced this is safe to proceed, then you can go, but otherwise slow down'.
"You need regulators to do that for all the teams in the game, otherwise the shareholders will be going, 'Why aren't you building AI faster, because your competitor is?'"
He said later that the government "doesn't even have insight" into AI development, but "once there is awareness people will be extremely afraid, as they should be".