Intel: Non-volatile memory shift means chips need an overhaul

Summary: Processor architectures and filing systems will need dramatic redesigns to take advantage of upcoming non-volatile memory technologies, Intel has said.

Current processor and filing system designs must be revamped to get the most out of non-volatile memory technologies, Intel has said.

Though we are still a few years away from a replacement for flash that will dominate the market, when one arrives chipmakers and filing system designers will need to alter their technologies to take advantage of the low latencies afforded by this new class of memory, Intel's chief technology officer, Justin Rattner, told ZDNet on Tuesday at the Intel Developer Forum in San Francisco.

"I'm reasonably confident that... non-volatile technologies will replace flash and bring non-volatile memory very close [to compute] with dramatic improvements in latency," he said. "Architectures will clearly have to react and respond to that."

Memory hierarchy

The jury is still out on exactly which technology will come to replace flash. It could be phase-change memory (PCM), which is currently being developed by IBM; memristors, which are being worked on by HP and Hynix; or spin-transfer torque memory, Rattner said.

Non-volatile memory can retain information without power — unlike RAM — and has fast access times, providing both huge power savings and the potential for much faster data transfer.

"Within probably the next three, four or five years we're going to have that memory, and we need to start now to look at the operating system issues and file system issues to take advantage of it," Rattner said.

When you change the memory hierarchy, it has huge knock-on effects on how computation works, he explained.

"Through most of the history of computing, we've assumed a persistent storage system — including the file system and virtual memory system — based on the characteristics of moving-head disks — devices with a very high access latency organised into fixed-sized blocks of thousands of bytes," Hank Levy, a professor of computer science and engineering at the University of Washington and non-volatile memory chip design researcher, told ZDNet on Thursday.

"These characteristics run very deep at every level of the software stack. These new memory technologies are fundamentally different both in their low access time and their fine-grained (byte-level) access."

If chip and filing system designers do not make fundamental changes to take advantage of new memory, "then we'll be missing an opportunity to really benefit from what these technologies can provide", Levy said.

Intel Labs research

To that end, Intel Labs is currently researching the implications of main memory becoming non-volatile, Rattner said.

"These new memory technologies are fundamentally different both in their low access time and their fine-grained (byte-level) access" — Hank Levy

"Right now all extant architectures assume all [directly CPU-accessible] memory is volatile," he said, noting that Intel is looking at adding instructions to processors to help them correctly move data between cache and into persistent memory and back.

Along with processors, Intel is thinking about filing systems as well, since it would be "ridiculous" to use conventional techniques on top of non-volatile memory.

"Those [filing systems] are all optimised for when access times are in the tens of milliseconds, but [with non-volatile] now they're in the tens of nanoseconds.," Rattner said.

Ultimately, architectures will have to change because widespread use of non-volatile memory will make the "distinction between main memory and bulk memory... begin to disappear", he added.
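
As a purely illustrative sketch of that collapse, here is what updating a "file" could look like if a filing system exposed data held in non-volatile memory directly in a process's address space. The mount point /mnt/nvm is invented, and the conventional msync() call stands in for whatever durability mechanism such a filing system would actually provide.

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        /* Hypothetical file on a filing system backed by byte-addressable
         * non-volatile memory. */
        int fd = open("/mnt/nvm/note.txt", O_RDWR | O_CREAT, 0644);
        if (fd < 0) { perror("open"); return 1; }
        if (ftruncate(fd, 4096) < 0) { perror("ftruncate"); return 1; }

        /* Map the file into the address space; on an NVM-aware filing system
         * this mapping could point straight at the persistent media rather
         * than at a volatile page cache. */
        char *data = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (data == MAP_FAILED) { perror("mmap"); return 1; }

        /* "Writing the file" is now just a store through a pointer... */
        strcpy(data, "hello, persistent world");

        /* ...plus whatever the platform requires to make it durable. */
        if (msync(data, 4096, MS_SYNC) < 0) perror("msync");

        munmap(data, 4096);
        close(fd);
        return 0;
    }

Everything that normally sits underneath such a call (block layers, page caches, journalling) was designed around millisecond devices, and that is the layer Rattner says will have to change.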

Race to replace flash

Richard Coulson, director of the storage technologies group within Intel's technology and manufacturing group, pointed out that there is currently "a race" between spin-transfer torque, memristors and PCM to see which technology can be mass manufactured at a low enough price to be viable.

"We don't know which one of those or others will ultimately be cost effective," he told ZDNet on Wednesday. But when one of these comes in, "it changes the whole memory storage hierarchy".

Unfortunately, a lot of further development work depends on which non-volatile technology comes to dominate, and that is as yet unknown. "The base memory technology is the biggest wild card at the moment," Coulson said.

Topics: Emerging Tech, Intel, Processors, Storage

About Jack Clark

Currently a reporter for ZDNet UK, I previously worked as a technology researcher and reporter for a London-based news agency.

Talkback

  • Nobody is designing for Intel any more...

    ...so who cares what Intel is doing? Now back to our regularly scheduled ARM briefing...
    Tony Burzio
    • Following up with AMD & ARM

      Hey Tony
      Thanks for commenting. However, all processor companies will need to change their chips to take advantage of this tech, so Intel is representative of a broader industry trend.
      JC
      Jack Clark
      • Could be the biggest thing since...

        Well, anything in computing in my lifetime.

        To explain, let's look at my first home computer. A hard drive was an expensive external expansion to the system. The way it worked was that the essential firmware lived in a ROM chip, and the rest of the OS was on a floppy disk - you put in the floppy and booted the machine. This copied the OS from the floppy into the RAM and the computer was started. To do anything you then had to load a floppy (or floppies) containing the software you wanted to use into RAM. Once in the RAM, the computer could use it, but it couldn't be run by the processor direct from the floppy - the magnetic disk and drive were just far too slow in processor terms. It'd be like you ordering a large fries only to see the guys start peeling a potato - you need the potato to be prepared and put in the fryer before you can have it.

        Not too much later we got internal hard drives for the masses; they were huge - 40MB comes to mind - and the piles and piles of floppies started to diminish; you could now have the OS and the software inside the machine and use floppies to transfer files and install new software. But things hadn't really changed; when you turned on the PC, firmware loaded a bootloader that copied the OS to the RAM and the machine started. Then you'd start a program, and that program would be copied to RAM and then it would start.

        All these years later, with all our ports, buses, north/south bridge variations, even CDs and SSDs, how your PC works is exactly the same; so is your phone, games console and wireless router: all data is stored on a storage medium and run from the RAM.

        This all goes back way further than my lifetime - back to the early days of modern computing. Linux is referenced below; both it and OS X are derived from UNIX (even NT is designed to be more like UNIX). UNIX goes back to 1969 and was designed around the assumption that this is how a computer works.

        What we're now talking about is a type of memory (like RAM) that doesn't lose all its data when the power is removed; think of an SSD that runs at the speed of RAM. Going back to our takeout example: they've now worked out how to grow steaming hot, pre-packaged fries on a tree - you want fries? All the fries in the building are ready to go.

        There are a lot of advantages; the ultimate goal of the technology is that your RAM and storage would be the same thing. Imagine a PC that never boots - you just turn it on and it's running. Or you know how when you first open Office, it takes a few seconds, but a new document is almost instant? Imagine it's, for all intents and purposes, already running. Then there's data - ever forgotten to save something? Imagine you forgot to save and a power cut hits... You turn your machine back on and it instantly pops back up exactly where you were. Nice, eh?

        Now it's unlikely we'll get there straight away, but that's the potential. So whichever chip architecture you wish to use (even if I don't personally agree about Intel), chip design and operating system design will have to adapt to this kind of tech.

        Right now the only OSes that load themselves fully into RAM are Puppy-like Linux distributions and install/repair CDs - everything else assumes it's too big (it usually is) to fit in the RAM, and loads as much as it needs, keeping the rest 'on disk' until it is needed.

        So as you can see, it's not just what Intel are doing, it's what's around the corner for computers in general that we're excited about.
        MarknWill
        • exactly!

          It boggles my mind when I start thinking about it... There won't be any need to boot... No saving... The architecture of general purpose computing will completely change.
          killermilind@...
  • Bet the Linux kernel will be first to have

    an I/O scheduler and a file system ready for those new challenges ;)

    Anyone wanna comment on (ex)FAT on such memories? :P
    przemoli
  • Non-volatile memory

    It will take longer and cost more to develop. Murphy rules.
    hayneiii@...
  • New infrastructure? Here's the challenge:

    How do you go from making "memory", which used to be thought of as something different from "storage", to the two being the same thing?

    Can you wrap your head around that for a second?

    Think about your program files sitting on your hard drive. When you launch a program, some of the program code gets put into RAM for processing.

    What happens if you merge the two, and you no longer have to do any actual copying? Instead, you'd just have program files ready to be processed and other program files that are sitting dormant. Files being processed just flip to an "active state" but don't actually get moved around. This is not the same as the way a RAM drive works, either, because there the RAM is still partitioned off into a separate storage area.

    How could you do that without needing to create a second temporary copy of a program file? Would you need to virtualize memory address space for such an endeavour? How do you distinguish processed output from "files" at that point? Or a file system from memory pages/addresses?

    It certainly will take a whole new way of thinking to come up with answers to this.
    Joe_Raby
    • ... and you no longer have to do any actual copying?

      On Linux systems, we haven't needed to do that for decades. You see, we have this file format called “ELF” which we use for both executables and shared libraries. Loading an executable or loading a shared library is essentially the same operation: we use a system call named “mmap” to map the relevant parts of the file into the memory space, then just page it in on demand—no headaches with the peculiarities and limitations of those antiquated “DLL” thingies, no explicit copying necessary at all.

      And if the persistent storage that holds the file is also directly accessible as part of the machine’s physical address space, then “paging” just means “adjusting some memory-management registers to point to a different place”, without any need to physically copy any part of the file. And this all takes place in the kernel; the userland apps don’t even need to notice that anything different is happening, and they continue to work the same way as before.
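
      As a minimal userland-side sketch of the mechanism described above (assuming a typical glibc/Linux system and building with -ldl): dlopen() asks the dynamic loader for a shared library, and underneath, the loader mmap()s the ELF file's segments and pages them in on demand rather than copying the file.

          #include <dlfcn.h>   /* dlopen, dlsym, dlclose */
          #include <stdio.h>

          int main(void)
          {
              /* Ask the dynamic loader for the maths library; the loader maps
               * the ELF file's segments into our address space and pages them
               * in on demand, with no explicit copy of the file. */
              void *handle = dlopen("libm.so.6", RTLD_NOW);
              if (!handle) { fprintf(stderr, "%s\n", dlerror()); return 1; }

              /* Resolve one symbol and call it through a function pointer. */
              double (*cosine)(double) = (double (*)(double)) dlsym(handle, "cos");
              if (!cosine) { fprintf(stderr, "%s\n", dlerror()); return 1; }

              printf("cos(0) = %f\n", cosine(0.0));
              dlclose(handle);
              return 0;
          }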

      Isn’t it nice to have a platform that already takes the future into account?
      ldo17
      • Biggest change will be non-volatility!

        Now your computer will not have to dump RAM onto the HD any longer, so start-up will be much faster.

        But there will still be a separation into RAM (memory on the bus) and HD (memory behind SATA or another interface), simply because data takes space and not all of it will make it onto RAM chips.

        And be realistic: hybrid solutions will be there first (for casual users; servers may see full solutions), like hybrid SSD + HDD, just to lower costs.
        przemoli
        • answering,

          Yes, hybrids will come first, but technology will find a way to make the metamorphosis complete, and some of the techniques we depend on today will go the way of slide rules.
          RayInLV
      • DLLs antiquated? You've got that backwards

        ELF dates back to Unix System V Release 4, from the late 80s. Any disk cache can do what you propose.

        Also, you're talking about a file format, not a replacement for the conceptual file system, which is what Intel is obviously talking about here. Also, the concept of "paging" was only one example I gave, but it too is antiquated and needs to be revised for non-volatile memory. If anything, paging should be something done transparently by the memory controller, much like a hard drive's motor control.
        Joe_Raby
    • exactly!

      I was wondering how the operating system kernel would permanently reside in NVRAM... And so, in case of corruption, you would have to specifically wipe it... Or perhaps have a switch to drain the charge... The concept of installing a program and then running it would be doubly indirect then...

      I really don't understand why software needs installation... Why can't the installation folder contain all it needs, and only configure the system settings? The 'portable' versions of various software today should be the norm...

      In the same way, what's the difference between an open file and a closed file in an NV system RAM setup? No write-back, much faster paging... A brave new world.
      killermilind@...
  • We're getting there

    Computing units will have the ultimate power conservation available... Since recovery will be instantaneous, even five seconds of inactivity before sleep will become viable, and in many instances that will have a huge impact on electricity usage, allowing sensors to keep track of when capacity is needed rather than leaving the whole unit running continuously.

    Computers use huge amounts of electricity now, and this could have a direct impact on at least the usage growth.
    RayInLV