As the march of Moore's Law slows, chip designers have had to find new ways of delivering the regular leaps in computer processing power we've become accustomed to.
One of the more recent trends is a fresh push towards using specialized chips, whose design is optimized for carrying out a specific task.
One example is Google's Tensor Processing Units (TPUs), the bespoke processors that excel at the mathematical operations required for machine learning; another is the wave of Bitcoin mining chips released when the cryptocurrency was at its peak.
These chips differ from the traditional CPU (Central Processing Unit) found inside computers, which handles a very broad range of tasks and can be considered a jack of all trades but master of none.
But the problem with increasingly relying on Application Specific Integrated Circuits (ASICs) like TPUs to handle different computational tasks is the prohibitive cost of developing such chips, says Bapi Vinnakota, director for silicon architecture program management at fabless semiconductor company Netronome.
ASICs today are typically system-on-a-chip (SoC) devices. As the name suggests, an SoC is a single monolithic chip split into multiple parts, each handling a different task -- everything from central processing to USB interfaces and memory controllers. As more functionality is packed onto an SoC and its surface area grows, the chance of a defect somewhere on the chip increases, which reduces manufacturing yields and raises costs.
Rather than building these monolithic ASICs, a new move is afoot to break them into smaller "chiplets", with each task -- both general and domain-specific -- handled by a smaller chip that specializes in it.
Chiplets, which have better manufacturing yields than monolithic SoCs thanks to their smaller surface area, are becoming more viable than they once were due to the improved performance of the interconnects that share data between chiplets.
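The yield advantage can be sketched with the standard Poisson yield approximation, in which the fraction of defect-free dies falls exponentially with die area times defect density. The model is a textbook approximation, not from the article, and the defect density and die areas below are assumed illustrative figures.

```python
import math

def poisson_yield(die_area_mm2, defects_per_mm2):
    """Expected fraction of defect-free dies under the Poisson yield model."""
    return math.exp(-die_area_mm2 * defects_per_mm2)

# Hypothetical defect density for illustration only
DEFECT_DENSITY = 0.001  # defects per mm^2

# Compare one monolithic 600 mm^2 SoC with a 150 mm^2 chiplet
# (four of which could together cover the same silicon area)
monolithic = poisson_yield(600, DEFECT_DENSITY)
chiplet = poisson_yield(150, DEFECT_DENSITY)

print(f"monolithic yield: {monolithic:.1%}")   # ~54.9%
print(f"per-chiplet yield: {chiplet:.1%}")     # ~86.1%
```

Because the exponent scales with area, splitting a large die into smaller chiplets discards far less silicon per defect, which is the economic argument behind the chiplet approach.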
Vinnakota says that using chiplets has the potential to make domain-specific accelerators, such as Google's TPUs, easier and cheaper to develop.
"When you're building an accelerator the only portion that you would build is the domain chiplet," he says.
"Now you can choose best-in-class [chiplet] for each other function. You'll buy an I/O chiplet, you'll buy a fabric chiplet.
"So your development and verification costs go down because in terms of the actual new silicon you're producing, it's area and complexity have gone down in a big way."
There's also scope to reduce the cost of chips by mixing and matching newer and older generations of chiplets, even those manufactured on different process nodes -- a term relating to the size of a chip's transistors, and therefore how many can be packed onto that chip's surface.
Vinnakota says that using chiplets to build ASICs will significantly increase the performance per watt of these processors and, along with the reduced cost and better yields, should help make it viable for ASICs to be developed to handle a wider range of tasks than is possible today.
However, the problem with this vision of chip designers mixing and matching chiplets from different manufacturers is the lack of a common hardware interface to allow these chiplets to interoperate.
To overcome this issue, the Open Compute Project and Netronome have launched an Open Domain-Specific Architecture (ODSA) sub-project, whose goal is to define an open interface and architecture to allow chiplets made by different manufacturers to work together.
Vinnakota hopes this common specification will eventually lead to the development of a wide range of specialized chiplets, allowing chip designers to choose from chiplets that are the best in the class for specific tasks.
"Our end goal is to create a marketplace of chiplets. Our focus is how do we create an open standard so these chiplets can talk to one another," says Vinnakota, who is also Open Domain-Specific Architecture (ODSA) lead at Netronome.
"You need a logical interface between the chiplet and the package for them to work as one chip."
Vinnakota says the goal of the ODSA project is to roll out a specific interface proposal towards the beginning of the fourth quarter of this year and then fabricate test chips based on that interface, with the hope that companies will be inspired to make design proposals for chiplet-based products utilizing this new interface.
"The design cycle for silicon is around 12 - 18 months, so if all goes according to plan somewhere in early 2021 is when you see chiplets with the new interfaces that we have in mind," he says.