The rise of deep learning forms of AI, and the abundance of real-world data feeding it, is intensifying the need for a new kind of computer to replace the typical Von Neumann machines embodied in processors from Intel and AMD, and graphics chips from Nvidia.
But that AI revolution in hardware, as big as it is, may be just the tip of the iceberg.
So says Rodrigo Liang, chief executive of startup SambaNova Systems, which has raised over $450 million in venture capital funding to build that new kind of computer.
"We are at the cusp of a fairly large shift in the computer industry," Liang told ZDNet this week in an interview by phone. "It's been driven by AI, but at a macro level, over the next twenty to thirty years, the change is going to be bigger than AI and machine learning."
The last thirty years in computing, said Liang, have been "focused on instructions and operations, in terms of what you optimize for."
"The next five, to ten, to twenty years, large amounts of data and how it flows through a system is really what's going to drive performance."
It's not just a novel computer chip, said Liang; rather, "we are focused on the complete system," he said. "To really provide a fundamental shift in computing, you have to obviously provide a new piece of silicon at the core, but you have to build the entire system, to integrate across several layers of hardware and software."
Liang's remarks about a complete computing system place SambaNova in the same camp as startups such as Cerebras Systems and Graphcore, which are selling appliances rather than just chips the way Mobileye and Habana Labs do.
The details of SambaNova's efforts are under wraps, and Liang's comments come across as somewhat cryptic. But a glance at SambaNova's pedigree helps to illuminate things.
Liang's co-founders include Stanford professor Kunle Olukotun, a pioneer in the design of multi-core processors, who founded a startup twenty years ago, called Afara Websystems. There he developed a relationship with Liang, who was head of engineering. Afara was subsequently sold to workstation maker Sun Microsystems.
Another SambaNova co-founder is Stanford machine learning professor Christopher Ré, whose work includes how to develop neural networks trained with very little labeled data, known as "weak supervision."
These scientists' research provides clues to what the new systems might do. Olukotun's work has included "Spatial," a computing language that can take programs and decompose them into operations that can be run in parallel, for the purpose of making chips that are "reconfigurable," able to change their circuitry on the fly.
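The general idea behind that decomposition, independent operations in a program being peeled apart so they can run side by side, can be illustrated in ordinary Python. This is generic task parallelism, not Spatial's actual syntax, which targets reconfigurable hardware; the functions here are invented for illustration:

```python
from concurrent.futures import ThreadPoolExecutor

# A toy program: y = f(x) + g(x), where f and g do not depend on each
# other. A compiler like Spatial looks for exactly this kind of
# independence and maps each operation onto its own region of the chip.
def f(x):
    return sum(i * i for i in range(x))      # independent operation 1

def g(x):
    return sum(i * 3 for i in range(x))      # independent operation 2

def run_parallel(x):
    with ThreadPoolExecutor() as pool:
        fut_f = pool.submit(f, x)            # the two operations
        fut_g = pool.submit(g, x)            # run concurrently
        return fut_f.result() + fut_g.result()

print(run_parallel(10))                      # same answer as f(10) + g(10) -> 420
```

The point is that the parallelism is recovered from the structure of the program itself, which is what lets a compiler map it onto hardware rather than onto a fixed instruction stream.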
Ré and others have developed an industrialized version of his weak supervision approach, called Snorkel DryBell, about which ZDNet wrote a year ago.
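The core idea of weak supervision can be sketched in plain Python, with no Snorkel dependency. Several noisy heuristic "labeling functions" vote on each example, and their votes are combined into a training label; the heuristics and the majority-vote combiner below are illustrative stand-ins, not Snorkel's real API:

```python
# Weak supervision in miniature: heuristics vote, abstaining when they
# have no opinion, and the votes are merged into one label -- no
# hand-labeled data required.
ABSTAIN, SPAM, HAM = -1, 1, 0

def lf_contains_link(text):        # heuristic 1: links suggest spam
    return SPAM if "http" in text else ABSTAIN

def lf_short_message(text):        # heuristic 2: very short texts look benign
    return HAM if len(text) < 20 else ABSTAIN

def lf_all_caps(text):             # heuristic 3: shouting suggests spam
    return SPAM if text.isupper() else ABSTAIN

def majority_label(text, lfs):
    """Combine labeling-function votes by majority, ignoring abstentions."""
    votes = [v for v in (lf(text) for lf in lfs) if v != ABSTAIN]
    if not votes:
        return ABSTAIN
    return max(set(votes), key=votes.count)

lfs = [lf_contains_link, lf_short_message, lf_all_caps]
print(majority_label("WIN A PRIZE NOW http://spam.example.com", lfs))  # -> 1 (spam)
```

Snorkel's actual label model is more sophisticated than a majority vote, learning how accurate and correlated the labeling functions are, but the shape of the workflow is the same.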
In a keynote address in December of 2018 at the NeurIPS conference on AI, Olukotun tied the two areas together. Snorkel is part of the shift of computer programming from hard-coded to differentiable, in which code is learned on the fly, commonly referred to as "software 2.0."
A programmable logic device, said Olukotun, similar to a field-programmable gate array (FPGA), could change its shape over and over to align its circuitry with that differentiated program, with the help of a smart compiler such as Spatial.
In an interview in his office last spring, Olukotun laid out a sketch of how all that might come together. In what he refers to as a "data flow," the computing paradigm is turned inside-out. Rather than stuffing a program's instructions into a fixed set of logic gates permanently etched into the processor, the processor re-arranges its circuits, perhaps every clock cycle, to variably manipulate large amounts of data that "flows" through the chip.
"You can map most of the operations in a TensorFlow graph to a set of primitives that look like map and reduce," he said, referring to the MapReduce programming model for distributed systems. "You say, hey, I want to take this graph of operations and map it in hardware, and then I'm going to flow the data through the graph, and then every cycle, I am going to get a new result."
Today's chips execute instructions in an instruction "pipeline" that is fixed, he observed, "whereas in this reconfigurable data-flow architecture, it's not instructions that are flowing down the pipeline, it's data that's flowing down the pipeline, and the instructions are the configuration of the hardware that exists in place, kind of like an assembly line: Here come the cars, and then at every station, something happens."
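Olukotun's assembly-line analogy can be modeled in a few lines of Python: the stages are fixed in place, like configured hardware, and every simulated clock cycle each in-flight value moves one station forward, so that once the pipeline fills, a finished result emerges every cycle. The stage functions are invented for illustration; a real reconfigurable dataflow chip does this in silicon, not in an interpreter:

```python
def scale(x):  return x * 2        # station 1
def offset(x): return x + 1        # station 2
def square(x): return x * x        # station 3

def pipeline_cycles(inputs, stages):
    """Simulate a hardware pipeline: the stages stay in place and the
    data flows through them, one station per clock cycle."""
    regs = [None] * len(stages)    # pipeline registers after each stage
    feed = list(inputs)
    results = []
    while feed or any(r is not None for r in regs):
        if regs[-1] is not None:           # the last station emits a result
            results.append(regs[-1])
        for i in range(len(stages) - 1, 0, -1):   # shift everything forward
            regs[i] = stages[i](regs[i - 1]) if regs[i - 1] is not None else None
        regs[0] = stages[0](feed.pop(0)) if feed else None
    return results

print(pipeline_cycles([1, 2, 3], [scale, offset, square]))  # [9, 25, 49]
```

Note that no "instruction" ever moves: the configuration of the stages is the program, and throughput comes from keeping every station busy at once, which is the property the dataflow architecture is designed to exploit.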
Something like that vision is exemplified in a 2017 research project by Olukotun called "Plasticine."
Such an exotic approach would have to contend with a massive, deeply entrenched market for x86 CPUs, GPUs, and all the systems technology and tools built around them. CEO Liang is mindful of the obstacles to a new form of computing. After Afara was sold to Sun, Liang spent fourteen years running hardware for first Sun, and then Oracle after Oracle acquired Sun. The high-performance computing empire that Sun built in the nineteen-nineties was ultimately marginalized by the expansion of standard x86-based processing.
"We had a great run," he reflected. The standardization of x86, and then the GPU wave following it, have limited what people can do, he said. "If you look at the last ten to twenty years, the silicon portion has been commoditized, and then the software standardized, so there has not been a lot of degrees of freedom in between to bring innovation to the end user."
To try and change that, Liang and partners have assembled quite a war chest. "For this type of technology, you need to be well-funded," Liang observed.
"We're in a very strong funding position now," he said, after the latest round, a C round consisting of $250 million from a group of investors composed of accounts managed by private equity giant BlackRock, along with existing investors including Google Ventures, Intel Capital, Walden International, WRVI Capital and Redline Capital. A source close to SambaNova told ZDNet the company's post-money valuation is north of $2 billion, but Liang declined to comment on valuation and ZDNet was unable to confirm the amount.
More than the money, Liang sees the confluence of AI, masses of data, the arrival of industry-standard frameworks such as TensorFlow and PyTorch, and the death of Moore's Law as a potent mix that cannot fail to open doors for new approaches.
"This is a change in the entire computing industry," said Liang. "It's not just about AI, it's not just a niche."
"You need to have solutions end to end, from algorithms down to the silicon, and it's about making that available to everyone who's trying to come to grips with this transition."