Why doesn't Intel put DRAM on the CPU?

Summary: Chip vendors keep cramming more transistors onto chips every year while performance gains have slowed to a crawl. Why not put DRAM on-chip to speed up CPUs?

The System-On-a-Chip (SOC) market has been around for years. Isn't it obvious that on-chip DRAM would reduce access times and increase performance? I asked my friend and fellow analyst Jim Handy of Objective Analysis, who's been following semiconductors for decades, why chip vendors don't. 

I summarize his reasoning here. The answer is simple but the reasons are illuminating.

Process

DRAM is commonly built on a process that is customized for its special requirements - not the logic process used for CPUs. For example, DRAM needs a good capacitor whose leakage is characterized so designers know how often the cells need refreshing.

That's a lengthy and expensive process. It's easier to put Static RAM (SRAM) on the CPU instead of DRAM because SRAM doesn't need finicky capacitors.


Logic processes - those used for CPUs - are also more expensive. A logic wafer might cost $3,500 vs $1,600 for a DRAM wafer, and Intel's logic wafers may cost as much as $5,000. That's costly real estate.

Size

Another cost difference is that the cell (bit) size will be larger if you don't use a process customized for DRAM. So putting DRAM on a logic chip is a double-whammy: larger cell sizes on more expensive wafers. Not a winning combination.
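To make the double-whammy concrete, here's a back-of-the-envelope sketch. The wafer prices come from the figures above; the 2x cell-size penalty for DRAM built on a logic process is an assumption for illustration only, not a measured number.

```python
# Illustrative cost-per-bit comparison: DRAM on its native process vs
# DRAM squeezed onto a logic process. Wafer prices are the article's
# figures; the cell-size penalty is an assumed round number.
logic_wafer_cost = 3500    # $ per logic wafer
dram_wafer_cost = 1600     # $ per DRAM wafer

cell_size_penalty = 2.0    # assumed: DRAM cells ~2x larger on a logic process
bits_per_dram_wafer = 1.0  # normalized capacity on a native DRAM process
bits_per_logic_wafer = bits_per_dram_wafer / cell_size_penalty

cost_per_bit_dram = dram_wafer_cost / bits_per_dram_wafer
cost_per_bit_logic = logic_wafer_cost / bits_per_logic_wafer

print(cost_per_bit_logic / cost_per_bit_dram)  # → 4.375
```

Under these assumed numbers, each bit of on-logic DRAM costs over 4x what a bit on a commodity DRAM wafer costs - which is the economic heart of the argument.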

SRAM actually requires more transistors per bit than DRAM: four to six transistors per cell, vs one transistor and a capacitor for DRAM. But on a chip that already carries a couple of billion transistors, that isn't much of a problem. And SRAM has another huge advantage.
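For a sense of scale, here's a rough count of the transistors a modest on-chip cache consumes. The 8 MB cache size is a hypothetical example; the 6-transistor SRAM cell and 1T1C DRAM cell are the standard textbook structures.

```python
# Rough transistor budget for an on-chip memory array.
# Assumptions: hypothetical 8 MB cache, classic 6T SRAM cell,
# 1T1C DRAM cell (one transistor plus one capacitor per bit).
BITS_PER_BYTE = 8
SRAM_TRANSISTORS_PER_BIT = 6
DRAM_TRANSISTORS_PER_BIT = 1

cache_bytes = 8 * 2**20  # hypothetical 8 MB cache

sram_t = cache_bytes * BITS_PER_BYTE * SRAM_TRANSISTORS_PER_BIT
dram_t = cache_bytes * BITS_PER_BYTE * DRAM_TRANSISTORS_PER_BIT

print(f"SRAM: {sram_t / 1e6:.0f}M transistors")  # → SRAM: 403M transistors
print(f"DRAM: {dram_t / 1e6:.0f}M transistors")  # → DRAM: 67M transistors
```

Roughly 400 million transistors for 8 MB of SRAM is a manageable fraction of a multi-billion-transistor die - which is why CPU vendors happily spend transistors on SRAM caches rather than wrestle DRAM onto a logic process.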

Speed

Besides being easier to build on a logic wafer, SRAM is fast: access times are roughly 10 to 20 times shorter than DRAM's.

Jim summarized the answer very simply:

There's a reason people don't put main memory onto their chips, and that's because it's always significantly cheaper to use separate memory chips.

The Storage Bits take

Cost pressures are ferocious throughout the storage hierarchy - which is why we have a storage hierarchy. If the fastest and most reliable storage were also the cheapest, the "hierarchy" would be only one layer deep.

Flash memory had no impact on computer storage for decades until it got cheaper than DRAM. Tape - which once dominated computer storage - hangs on because it is cheap.

As long as the speed and cost correlation continues, we'll have a storage hierarchy. And no DRAM on CPUs.

Comments welcome, as always.


About

Harris has been working with computers for over 35 years and selling and marketing data storage for over 30 in companies large and small. He introduced a couple of multi-billion dollar storage products (DLT, the first Fibre Channel array) to market, as well as many smaller ones. Earlier he spent 10 years marketing servers and networks.
