
The ghost in the machine: Vicarious and the search for AI that can rival the human brain

Startup Vicarious is aiming to build the first general artificial intelligence system -- just don't expect it any time soon.
Written by Jo Best, Contributor

Vicarious cofounder D. Scott Phoenix: "Artificial intelligence is the next major fundamental technology that will empower the world."

Vicarious doesn't plan to have a product out next year. Or the year after. Or even five years from now. By the time Vicarious launches its first full product, it could be 2031.

And that far-distant timeline doesn't seem to be putting off investors: tech luminaries including Mark Zuckerberg, Elon Musk, and Jeff Bezos have already put $70m into the company, despite the fact that it isn't registered as a for-profit.

Taking 15 or 20 years to launch a proper product sounds like an eternity in a Silicon Valley where fortunes rise and fall in the three months between quarterly results, but Vicarious isn't working on just any old product.

"The people who invested in Vicarious, and are investing in Vicarious, understand that artificial intelligence is the next major fundamental technology that will empower the world. If you look at technological evolutions like the automobile or the telephone or the personal computer, it's on that level, and I would say even far bigger than that level, in terms of the impact it will have on the world," D. Scott Phoenix, cofounder of the company, told ZDNet.

Artificial intelligence isn't a new phenomenon: in a world where IBM's Watson can win Jeopardy! and cars can drive themselves, AI is increasingly common. Such systems, however, are narrow AIs -- systems that are smart enough to do one complex task, but just that one task.

Vicarious is aiming to build a general AI -- that is, a different type of artificial intelligence that can adapt to whatever task it's confronted with, in much the same way as a human can.

"If they wanted Siri to play Jeopardy!, they would basically have to start over. Even though it seems it's kind of similar, the difference between Siri and Watson is essentially the entire codebase of Siri and the entire codebase of Watson -- it's generality 'that solves that problem'. If you could have one system that is able to accomplish the full range of tasks that humans are able to do, using the same training data that humans get, you achieve something that can help us to solve really hard problems and especially problems that it hasn't encountered before," Phoenix said.

General AI, or strong AI as it's sometimes known, is a far broader concept. It indicates a system that can work much like the human brain, understanding new concepts, grasping feedback from several senses at once, and finding creative solutions to unfamiliar problems, all with minimal training data. For some, it's seen as trying to recreate the human brain itself.

Vicarious is using the brain as its inspiration, taking a computational approach to the brain's information processing abilities.

"We're not implementing a virtual set of dendrites and neurons and trying to understand AI, we're trying to understand what is the lift, thrust, and friction of intelligence and codify those algorithmically and use those fundamental principles to implement them on a computer," Phoenix said.

According to Phoenix, much of the AI in use at companies such as Facebook or Google today has its roots in 1970s neuroscience.

"We've learned a lot about the brain since those early experiments, and that knowledge hasn't really diffused very rapidly into the machine learning community, so if anything, you could think of Vicarious as arbitrage between what has been discovered about the brain and what exists in modern machine learning algorithms," he added.

Vicarious, for example, is prioritising feedback systems over feedforward ones, an approach more in keeping with human cognition. Similarly, Vicarious is putting a larger emphasis on the role of time and embodiment in intelligence -- that is, having systems that learn by doing and interacting with the world, rather than learn by watching sets of tagged-up photos pass them by.
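For readers unfamiliar with the distinction, the following minimal sketch (purely illustrative, and in no way Vicarious' actual code) contrasts the two ideas: a feedforward unit maps input straight to output with no memory, while a feedback (recurrent) unit folds its own previous state back into each new computation, so earlier observations shape later processing. The weights and functions here are hypothetical.

```python
def feedforward(x, w_in, w_out):
    # Information flows one way: input -> hidden -> output.
    # Nothing is retained between calls.
    hidden = max(0.0, x * w_in)      # ReLU-style hidden unit
    return hidden * w_out

def feedback(inputs, w_in, w_rec):
    # Each new input is combined with the unit's own previous
    # state, so the past influences how the present is processed.
    state = 0.0
    for x in inputs:
        state = max(0.0, x * w_in + state * w_rec)
    return state

# The feedforward unit gives the same answer for the same input every
# time; the feedback unit's answer depends on what it has already seen.
print(feedforward(2.0, 0.5, 3.0))        # 3.0
print(feedback([1.0, 1.0], 1.0, 0.5))    # 1.5: the second input is
                                         # boosted by the carried state
```

This is the core of why feedback architectures suit learning-by-interaction: the loop gives the system a notion of time and history, which a pure feedforward classifier of static, labelled photos lacks.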

Eventually, the aim is for Vicarious' AI to be able to take on tasks that may be too dirty or too dangerous for human workers: cleaning up after a nuclear reactor spill or treating patients with highly infectious, and highly dangerous, diseases, for instance.

The company is using its own in-house data sets and metrics to measure progress towards those ultimate goals, as well as those more commonly used by the AI community -- for example, how well the system can cope with motor movements or recognise objects in a scene, or combinations of such elements.

"We're not trying to tests for everything all at once. We have a regimen of tests that check for different things and a combined regimen that tests for everything.

"There are specific qualities we're looking for: is this a system that can efficiently represent the full range of experiences a human can have, or are there some fundamental limitations? Are we making assumptions that are too strong or too weak when we're designing difference aspects of the system?" said Phoenix.

While the company doesn't routinely disseminate details of how far its technology development has reached, there are occasional milestones made public: back in 2013, for example, Vicarious announced its algorithms could solve captchas, including Google's, in a similar way to humans.

Captcha solving abilities may not be general intelligence, but they're a step on the way. Such spinoffs from Vicarious' main research are likely to make their way into the world before its fully-fledged general AI does.

"Anything that we ship as a product or technology will be a side effect of the research being done here. As we fundamentally work on making ground-breaking progress on general artificial intelligence, there will inevitably be pieces of it that we can slice off and license and utilize in a product, but that doesn't drive the research that we do here," Phoenix said.

But with corporate investors like Samsung, Wipro, and ABB already onboard, it's easy to see that components of Vicarious' work could end up in our phones or industrial robots before too long.

"There are a lot of ways we can help make robots, devices, and computers smarter, that are much more imminent [than general AI], and that's the reason why it makes sense to have partners like Samsung and ABB and so on.

"Each company has a different way they see their technology interfacing with ours and it's something we're actively exploring and we'll see where things go," Phoenix added. "As we enter the age of intelligent devices, the applications for AI systems like the ones we're working on at Vicarious are going to grow exponentially."

While for now, artificial intelligence is very much on the narrow side -- more likely to be sorting spam from email than taking over the world -- many high-profile figures in the tech world are already considering whether artificial intelligence poses a risk to humanity (and have invested $10m in an institute to consider the problem).

As artificial intelligence moves from narrow to general, those concerns are only likely to increase. What's at the root of this fear? Is it just the standard-issue hand-wringing that accompanies the rise of any new technology, or something new? If AI systems are capable of matching and exceeding the capabilities of the human brain, do they represent a challenge to our understanding of what it means to be human?

"If you look back at the early nanotechnology and DNA sequencing press, there's lots of scare stories about that, and even going back to the first steam engines, there were scare stories about them coming to life and taking over towns.

"At the same time, every time with every new technology, there really are risks and there really are downsides, from nuclear power to DNA sequencing to organic chemistry, so you have to makes sure that as you're developing a new technology, you're testing it to make sure it's safe and it behaves as you expect it to... At some time in the future, it may be that artificial intelligence is something we need to spend more time ensuring the safety of, but the research is at a really early stage now," Phoenix said.

Perhaps it's that fear of being superseded that drives our AI worries. But to be truly intelligent and perfectly adaptable, does a general AI need to be self-aware? Does artificial intelligence go hand-in-hand with machine consciousness?

"I think," says Phoenix, "I'll leave that particular question to the philosophers."
