
IBM is using light, instead of electricity, to create ultra-fast computing

The company's researchers built a light-based tensor core that could be used, among other applications, for autonomous vehicles.
Written by Daphne Leprince-Ringuet, Contributor

To quench algorithms' seemingly limitless thirst for processing power, IBM researchers have unveiled a new approach that could mean big changes for deep-learning applications: processors that perform computations entirely with light, rather than electricity.  

The researchers have created a photonic tensor core that exploits the properties of light to process data at unprecedented speed, delivering AI applications with ultra-low latency.

Although the device has only been tested at a small scale, the report suggests that, as the processor develops, it could achieve one thousand trillion multiply-accumulate (MAC) operations per second per square millimeter – according to the scientists, that is two to three orders of magnitude more than "state-of-the-art AI processors" that rely on electrical signals.
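
Put another way, that comparison places today's electrical AI processors at roughly one to ten trillion MAC operations per second over the same chip area.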

IBM has been working on novel approaches to processing units for a number of years now. Part of the company's research has focused on developing in-memory computing technologies, in which memory and processing are co-located rather than kept apart. This avoids shuttling data between the processor and a separate RAM unit, saving energy and reducing latency.

Last year, the company's researchers revealed that they had successfully developed an all-optical approach to in-memory processing: they integrated in-memory computing on a photonic chip that used light to carry out computational tasks. As part of the experiment, the team demonstrated that a basic scalar multiplication could effectively be carried out using the technology.

In a new blog post, IBM Research staff member Abu Sebastian shared a new milestone that has now been achieved using light-based in-memory processors. Taking the technology to the next stage, the team has built a photonic tensor core, a type of processing core that performs sophisticated matrix math and is particularly suitable for deep-learning applications. The light-based tensor core was used to carry out an operation called convolution, which is useful for processing visual data such as images.
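
To make that concrete, here is a minimal sketch of a convolution written in plain Python with NumPy. It is purely illustrative – the image and filter values are invented, and this is software, not IBM's hardware – but it shows that every output pixel of a convolution is built from the same multiply-accumulate (MAC) operations the photonic tensor core carries out in the optical domain.

import numpy as np

image = np.random.rand(6, 6)              # a tiny stand-in for a grayscale image
kernel = np.array([[1, 0, -1],
                   [2, 0, -2],
                   [1, 0, -1]], float)    # a classic 3x3 edge-detection filter

out_h, out_w = image.shape[0] - 2, image.shape[1] - 2
feature_map = np.zeros((out_h, out_w))

# Each output pixel is a sum of element-wise products: the multiply-accumulate
# operations that the photonic core performs in light rather than in software.
for i in range(out_h):
    for j in range(out_w):
        patch = image[i:i + 3, j:j + 3]
        feature_map[i, j] = np.sum(patch * kernel)

print(feature_map.shape)  # a 4x4 feature map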

"Our experiments in 2019 were mostly about showing the potential of the technology. A scalar multiplication is very far from any real-life application," Abu Sebastian, research staff member at IBM Research, tells ZDNet. "But now, we have an entire convolution processor, which you could maybe use as part of a deep neural network. That convolution is a killer application for optical processing. In that sense, it's quite a big step." 

The most significant advantage that light-based circuits have over their electronic counterparts is sheer speed. Leveraging optical physics, the technology developed by IBM can run complex operations in parallel in a single core, using a different optical wavelength for each calculation. Combined with in-memory computing, this approach allowed IBM's scientists to achieve ultra-low latency that electrical circuits have yet to match. For applications that require very low latency, therefore, the speed of photonic processing could make a big difference.
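
The parallelism is easiest to picture as a dot product whose terms are all evaluated at once. The short Python sketch below is an analogy rather than a description of IBM's chip (the numbers are invented): each optical wavelength carries one input-weight product, and a detector reads out the combined sum in a single step instead of accumulating the products one by one.

import numpy as np

inputs = np.array([0.2, 0.7, 0.5, 0.9])    # signals, one per optical wavelength
weights = np.array([0.1, 0.4, 0.8, 0.3])   # transmission levels set by the memory cells

# Electrical analogy: the products are accumulated one after another.
sequential_mac = 0.0
for x, w in zip(inputs, weights):
    sequential_mac += x * w

# Photonic analogy: every product travels on its own wavelength at the same
# time, and a single detector reads out their combined sum in one step.
parallel_mac = float(np.dot(inputs, weights))

assert np.isclose(sequential_mac, parallel_mac)
print(parallel_mac)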

Sebastian puts forward the example of self-driving cars, where speed of detection could have life-saving implications. "If you're driving on the highway at 100 miles-per-hour, and you need to detect something within a certain distance – there are some cases where the existing technology doesn't allow you to do that. But the kind of speed that you get with photonic-based systems is several orders of magnitude better than electrical approaches." 

With its ability to perform several operations simultaneously, the light-based processor developed by IBM also delivers much higher compute density. According to Sebastian, this could be another key differentiator: there will come a point, says the scientist, where loading car trunks with rows of conventional GPUs to support ever-more sophisticated AI systems won't cut it anymore.

With most large car companies now opening their own AI research centers, Sebastian sees autonomous vehicles as a key application for light-based processors. "There is a real need for low latency inference in the domain of autonomous driving, and no technology that can meet it as of now. That is a unique opportunity." 

Although IBM's team has successfully designed and tested a powerful core, it still needs to extend its trials to show that the technology can be integrated at the system level and deliver end-to-end performance. "We need to do much more there," says Sebastian; but according to the scientist, work is already underway, and as research continues, more applications are likely to emerge. In computing, trading electricity for light certainly makes this a space to watch.
