AI is artificial, no doubt. But it isn't very intelligent, which is one reason training requires enormous datasets. What if you could rent an artificial rat brain, or even better, an artificial human brain that could learn much faster? A startup is working to make that happen within 5 years.
At last week's Flash Memory Summit I sat down with Tachyum founder and CEO Rado Danilak to learn more about the company's plans. Rado is a PhD with a track record of successful startups, including SandForce and Skyera.
Rado has been a leader in extending the life of flash storage. The key immediate selling point for Tachyum's Prodigy chip is that it makes all-flash hyperscale datacenters practical: it extends the life of quad-level flash (the cheapest kind) by 300 times, making its endurance and cost competitive with high-capacity disks.
That's pretty cool (and probably worth a post itself), but that's not all the chip does. It's also an I/O monster, with 32 PCIe 5.0 lanes, 8 DDR5 host memory links, and two 400Gb/s Ethernet ports. This enables Tachyum-equipped servers, in concert with a 12.8Tb/s switch chip under development by Innovium, to vastly simplify current hyperscale network infrastructure.
Dr. Danilak believes that combining flash performance with a much cheaper, lower-latency hyperscale LAN will enable the real-time simulation of large-scale neural networks. Large scale as in rat brains and, eventually, human brains. In real time.
Simulating the human brain
Research has been nibbling at this problem for more than a decade. In the US there's the BRAIN Initiative, while the EU is funding the billion-Euro Human Brain Project. One of the latest advances in computational neuroscience is SpiNNaker (Spiking Neural Network Architecture), an ARM-based neuromorphic processor.
Connectivity is the central problem. The human brain has 100 billion neurons, and each one has roughly 10,000 synaptic connections. To manage this, the SpiNNaker project developed a multicast packet-switched network optimized for large numbers of very small packets.
Dr. Danilak focuses on what the Tachyum chip will do, rather than how, but it's clear that the chip has borrowed from the SpiNNaker research. The current estimate is that it would take more than 500,000,000 processors to simulate the human brain, something not possible today even with GPU-based supercomputers.
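The scale behind these numbers is worth making explicit. Here is a back-of-envelope sketch using only the figures cited above (100 billion neurons, roughly 10,000 synapses each, and the 500-million-processor estimate):

```python
# Back-of-envelope arithmetic from the figures cited in the article.
neurons = 100e9             # neurons in the human brain
synapses_per_neuron = 1e4   # rough synaptic connections per neuron

total_synapses = neurons * synapses_per_neuron
print(f"Total synaptic connections: {total_synapses:.0e}")   # ~1e+15

processors = 500e6          # cited estimate for a full-brain simulation
neurons_per_processor = neurons / processors
print(f"Neurons per processor: {neurons_per_processor:.0f}")  # ~200
```

A quadrillion connections, each carrying tiny spike messages, is why SpiNNaker's multicast small-packet network matters: every neuron's output has to fan out to thousands of destinations, most of them on other processors.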
Enter the cloud
The gist of Tachyum's pitch to the hyperscale firms - Google, Amazon, Microsoft, Alibaba, Facebook, etc. - is:
"You've got huge infrastructures that get, maybe, 40% utilization. Use my chip to move to all-flash storage, paying for it with power and space savings. Once my chips are on all your servers you can, along with the Innovium switches, then use the on-chip SpiNNaker cores to build real time rat and, someday soon, human brain equivalent AI infrastructure. You can sell that capability - one few others can replicate - and it only costs you the additional power needed to keep your infrastructure busy."
Assuming Tachyum can execute, that is an attractive proposition.
The Storage Bits take
Of course, there's the rub. While I believe Tachyum can achieve a 300x life improvement in quad-level flash, the AI part of the plan depends on a number of moving parts that are not under Tachyum's control, such as the Innovium switches and the success of the SpiNNaker architecture.
It's a stretch goal, but at least Tachyum has a good business case. If only the technology cooperates.
Courteous comments welcome, of course.