With artificial intelligence, the fast pace of research and the scale of its potential benefits mean that tech companies are increasingly moving full steam ahead without considering the risks.
At the beginning of this year, Elon Musk, the entrepreneur behind Tesla and SpaceX, donated $10m to the Future of Life Institute to fund a global research program aimed at making sure AI benefits the human race. The institute is a volunteer-run research and outreach organization co-founded in March last year by Jaan Tallinn, one of Estonia's best-known tech entrepreneurs, famous for his role as a founding engineer of Skype and Kazaa.
The Boston-area institute focuses on researching potential risks from the development of human-level artificial intelligence, and its scientific advisory board includes Stephen Hawking, Elon Musk, and professors from universities including MIT, Oxford, the University of California at Berkeley, and Cambridge, among other experts.
"Building advanced AI is like launching a rocket. The first challenge is to maximize acceleration, but once it starts picking up speed, you also need to focus on steering," Tallinn said at the time of Musk's investment.
Tallinn told ZDNet that, thanks to almost a decade of experience in the gaming industry, he has always been interested in artificial intelligence, but his interest in ensuring the safety of the technology is a more recent development.
In 1986, at just 16, he started his career writing software for a local company assembling eight-bit computers for use in public schools. Three years later, Tallinn became involved in the creation of Kosmonaut, the first Estonian computer game to be published outside the country, and in 1993 he co-founded Bluemoon Software, which developed the FastTrack P2P protocol and Kazaa, the well-known music-sharing application built on it.
After that, along with his fellow Bluemoon co-founders Ahti Heinla and Priit Kasesalu, Tallinn helped create the software for Skype, using Kazaa's backend.
He has also co-founded two organizations studying AI's risk to humans: besides the Future of Life Institute, he helped establish The Cambridge Centre for the Study of Existential Risk (CSER) with Cambridge professors Huw Price and Martin Rees.
Technology and its impact on the future of humankind have always fascinated him.
"For many years now, I've been following and supporting the so-called 'X-risk ecosystem'," a network of organizations trying to lower existential risks from technology. "The Future of Life Institute started after [cosmologist] Max Tegmark finished his book and said that he wanted to do something important with the time that was freed up as a result. Since I knew Max as a very capable person, I immediately decided to join and support his project," said Tallinn.
Tallinn is mostly interested in the "control problem": figuring out how to predict and/or robustly constrain smart autonomous systems. "It's both one of the most challenging problems in the field as well as quite under-appreciated," he says.
According to Tallinn, humankind faces a number of different risks derived from the rapid development of AI, and it's not just the technological singularity that should concern us.
"If we count the potential trillions of lives of people who are yet unborn (as we should, I believe), then existential risks completely dominate everything else. If we only count the lives of people already alive, then there are other challenges, such as smart systems disrupting the job market," said Tallinn.
He believes that although we still cannot predict exactly when a breakthrough toward the singularity will take place, we should nevertheless be ready if and when it does happen.
"Several surveys of AI experts indicate a 50 percent probability of human-level or superhuman AI by the middle of this century. Of course, such surveys aren't reliable predictors, but being uncertain about an important future event is not grounds for complacency either," he said.
Tallinn believes that the rules and agreements regarding the development of AI should emerge from the industry rather than from international political institutions.
"I haven't seen any indication of the policy makers understanding AI safety problems, so regulation will almost certainly do more harm than good at this stage. I think it's important to first arrive at a consensus within the AI industry about potential policies and regulations," he said.
Tallinn expects the next big AI development that consumers will experience in their everyday lives to be self-driving cars.
"It will be a huge change once they arrive, because of the cascading effects they will have on the economy. Also - perhaps a bit further along - AI-assisted augmented reality will probably be huge," he said.