Intel has announced a neuromorphic artificial intelligence (AI) test chip named Loihi, which it said is aimed at mimicking brain functions by learning from data gained from its environment.
According to Intel, its efforts in "comparing machines with the human brain" have resulted in a self-learning, energy-efficient AI chip that uses asynchronous spiking to draw inferences from its environment and grow constantly smarter.
Loihi contains digital circuits that mimic the basic mechanics of the brain, requiring less compute power while making machine learning more efficient, corporate vice president and managing director of Intel Labs Dr Michael Mayberry said in a blog post on Tuesday.
"Neuromorphic chip models draw inspiration from how neurons communicate and learn, using spikes and plastic synapses that can be modulated based on timing. This could help computers self-organise and make decisions based on patterns and associations," Mayberry explained.
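The mechanism Mayberry describes -- neurons that communicate via spikes over synapses whose strength is modulated by timing -- can be illustrated with a textbook leaky integrate-and-fire model. This is a generic sketch of the concept, not Intel's Loihi implementation; all parameter values are illustrative.

```python
import math

THRESHOLD = 1.0    # membrane potential at which the neuron fires
DECAY = 0.9        # leak factor applied each timestep
LEARN_RATE = 0.05  # how strongly spike timing adjusts the synapse

def simulate(input_spikes, weight):
    """Run a leaky integrate-and-fire neuron over a 0/1 input spike train.

    Returns the output spike train and the adapted synaptic weight.
    """
    potential = 0.0
    last_input_spike = None
    output = []
    for t, spike in enumerate(input_spikes):
        potential = potential * DECAY + weight * spike
        if spike:
            last_input_spike = t
        if potential >= THRESHOLD:
            output.append(1)
            potential = 0.0  # reset after firing
            # Simplified spike-timing-dependent plasticity: an input spike
            # arriving shortly before an output spike strengthens the synapse.
            if last_input_spike is not None:
                weight += LEARN_RATE * math.exp(-(t - last_input_spike))
        else:
            output.append(0)
    return output, weight

spikes, w = simulate([1, 1, 1, 0, 1, 1, 0, 0, 1, 1], weight=0.5)
```

Because the weight update depends only on locally observed spike timing, learning happens on the neuron itself rather than through a separate offline training pass -- the property that lets such chips "self-organise".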
Such chips could be used to speed up complex decision making, with the ability to autonomously solve "societal and industrial problems" using learned experiences adapted over time, Mayberry said.
Key features of the test chip include:

- Fabrication on Intel's 14nm process technology
- 130,000 neurons and 139 million synapses
- A fully asynchronous neuromorphic many-core mesh supporting sparse, hierarchical, and recurrent neural network topologies, with neurons capable of communicating with each other
- A programmable learning engine for each neuromorphic core
- Development and testing of several algorithms for path planning, sparse coding, dictionary learning, constraint satisfaction, and dynamic pattern learning and adaptation
Applications could include the use of facial recognition via streetlight cameras for solving missing person reports; stoplights automatically adjusting to the flow of traffic; and robots gaining greater autonomy and efficiency, Mayberry said.
"The Loihi test chip offers highly flexible on-chip learning and combines training and inference on a single chip. This allows machines to be autonomous and to adapt in real time instead of waiting for the next update from the cloud," he said.
"The self-learning capabilities prototyped by this test chip have enormous potential to improve automotive and industrial applications as well as personal robotics -- any application that would benefit from autonomous operation and continuous learning in an unstructured environment. For example, recognising the movement of a car or bike."
Teaching the chip what is normal under various circumstances would allow it to detect abnormalities, Mayberry added, enabling applications such as heart monitoring, as well as cybersecurity services that flag anomalies in a data stream as a potential breach or hack.
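The "learn the norm, then flag deviations" idea Mayberry describes is the essence of anomaly detection. A spiking chip would learn the norm in hardware; this stand-in sketch shows the principle with a simple z-score detector over illustrative heart-rate readings.

```python
import statistics

def fit_norm(normal_samples):
    """Learn the mean and spread of normal behaviour from sample data."""
    return statistics.mean(normal_samples), statistics.stdev(normal_samples)

def is_anomalous(value, mean, stdev, threshold=3.0):
    """Flag readings more than `threshold` standard deviations from normal."""
    return abs(value - mean) > threshold * stdev

# Learn the norm from typical heart-rate readings, then screen new ones.
mean, stdev = fit_norm([72, 75, 70, 74, 73, 71, 76, 72])
print(is_anomalous(74, mean, stdev))   # -> False (typical reading)
print(is_anomalous(130, mean, stdev))  # -> True (abnormal reading)
```

The same shape of logic applies to the cybersecurity case: model normal traffic, then treat large deviations in the data stream as a potential breach.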
Intel researchers have demonstrated a learning rate 1 million times better than that of typical spiking neural networks, he claimed, with 1,000 times greater energy efficiency than conventional computing used for training systems.
Loihi will be shared with AI-focused university and research institutions in the first half of 2018.
CTO of Intel Artificial Intelligence Product Group Amir Khosrowshahi -- who co-founded Nervana Systems, which was purchased by the chip giant in August last year as the central part of Intel's plans for AI -- had in April told ZDNet that the industry needs new architecture for neural networks.
Before being bought out by Intel, Nervana had developed Lake Crest, its own silicon targeting neural network training, along with software, as it found traditional GPUs to be unsuitable for neural networking.
"Neural networks are a series of predetermined operations; it's not like a user interacting with a system, it's a set of instructions that can be described as a data flow graph," Khosrowshahi told ZDNet.
"There is so much circuitry in a GPU that is not necessary for machine learning ... you don't need the circuitry which is quite a large proportion of the chip and also high cost in energy utilisation.
"Neural networks are quite simple, they are little matrix multiplications and non-linearities, you can directly build silicon to do that. You can build silicon that is very faithful to the architecture of neural networks, which GPUs are not."
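Khosrowshahi's point -- that a neural network layer reduces to matrix multiplications and non-linearities -- fits in a few lines. This plain-Python sketch shows one layer's forward pass; purpose-built silicon like Lake Crest aims to implement exactly these two operations, without the extra GPU circuitry he mentions.

```python
def matmul(matrix, vector):
    """Multiply a weight matrix by an input vector."""
    return [sum(w * x for w, x in zip(row, vector)) for row in matrix]

def relu(values):
    """Apply a non-linearity (rectified linear unit) element-wise."""
    return [max(0.0, v) for v in values]

def layer(weights, inputs):
    """One neural network layer: matrix multiply, then non-linearity."""
    return relu(matmul(weights, inputs))

weights = [[0.5, -1.0],
           [1.5,  0.25]]
out = layer(weights, [2.0, 4.0])  # -> [0.0, 4.0]
```

Because the whole computation is a fixed data-flow graph of such layers, it can be laid out directly in hardware rather than scheduled dynamically as a GPU does.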
Intel last month announced its AI-focused Movidius vision processing unit (VPU) chip with processing capabilities for edge devices.
Movidius, acquired by the chip giant a year ago, develops systems on a chip that are equipped with dedicated neural compute engines to support deep learning inferences at the edge.
"We're on the cusp of computer vision and deep learning becoming standard requirements for the billions of devices surrounding us every day," Intel VP Remi El-Ouazzane said in August.
"Enabling devices with human-like visual intelligence represents the next leap forward in computing."
Intel CEO Brian Krzanich last week said that his company has invested more than $1 billion in AI companies through its capital investment fund, including acquiring Movidius and Nervana Systems, as well as financing AI startups such as DataRobot, Mighty AI, and Lumiata.
"Intel is making strategic investments spanning technology, R&D and partnerships with business, government, academia, and community groups," Krzanich said.
"We are deeply committed to unlocking the promise of AI: conducting research on neuromorphic computing, exploring new architectures and learning paradigms."
Also focusing on the consumer segment, Intel on Monday took the wraps off its eighth-generation Intel Core desktop processors, led by the $359 Core i7-8700K.
Part of Intel's K-series Core chips, which are unlocked for overclocking, the Core i7-8700K can reach a frequency of 4.7GHz using Intel's Turbo Boost Technology 2.0. The chips can also be expanded with up to 40 platform PCIe 3.0 lanes.
The company also unveiled its Z370 chipset-based motherboards, the Core i5-8600K six-thread chip with a base clock speed of 3.6GHz, the Core i5-8400 with a base clock speed of 2.8GHz, the 4GHz Core i3-8350K, and the 3.6GHz Core i3-8100.