Taming shingled drives: A hack for doubling disk density

Summary: The disk drive industry faces expensive technical challenges as it tries to return to 40 percent annual capacity growth. Shingled drives could double disk density, if the performance kinks can be worked out. This firmware hack could do it.

TOPICS: Storage, Hardware
Graphic courtesy of the authors.

The disk industry's fight with flash storage is hampered by disk areal density flatlining over the past five years. Instead of disk capacity doubling every 18-24 months as it did in the 2000s, it's creeping up at approximately 20 percent per year.

The long-term answer is twofold: patterned media and heat-assisted magnetic recording (HAMR). But scaling up lab demos of patterned media to economically producing several billion platters a year is expensive and years away. HAMR faces similar challenges.

Thus the interest in shingled magnetic recording (SMR).

SMR for the curious

Disk heads write a wide track, but only need a narrow track for reading. Thus shingled recording: write the wide track; then overwrite most of it on the next pass, leaving a narrow track for reading.

In theory this could double today's disk capacities. But there is a hard problem: updating.
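The "double the capacity" claim follows from simple geometry: conventional track pitch is set by the write head's width, while shingling shrinks the pitch to the width a read head needs. A minimal sketch, with made-up widths (the article gives no actual numbers):

```python
# Rough geometry behind the "double today's capacities" claim.
# The widths below are hypothetical illustration values, not real specs.
write_width_nm = 70.0   # width of the track as written (sets conventional pitch)
read_width_nm = 35.0    # width the read head actually needs

# Shingling packs tracks at the read pitch instead of the write pitch.
density_gain = write_width_nm / read_width_nm
print(density_gain)  # 2.0 -> tracks packed twice as densely
```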

As with flash memory, an update requires reading the existing data, merging in the new data, and then writing everything back to the disk. Slow and costly: the same write amplification problem found in flash.
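The cost grows with position in the band: because writing track i clobbers part of track i+1, updating an early track forces a read-modify-write of every later track in the band. A toy cost model (the 4-track band size is an assumption for illustration):

```python
# Hypothetical model of an in-place update in a shingled band.
# Writing track i partially overwrites track i+1, so updating track i
# forces rewriting track i plus every later track in the band.

def rmw_cost(band_size: int, track: int) -> int:
    """Number of tracks that must be rewritten to update `track`
    (0-based) in a band of `band_size` shingled tracks."""
    assert 0 <= track < band_size
    return band_size - track  # the track itself plus all downstream tracks

# Updating the first track of a 4-track band rewrites all 4 tracks;
# updating the last track rewrites only itself.
print([rmw_cost(4, t) for t in range(4)])  # [4, 3, 2, 1]
```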

Great for archive disks — where SMR is used today — but not so great for frequently updated data. How to fix?

Novel SMR address mapping

In a recent paper presented at the USENIX HotStorage 2014 workshop, Weiping He and Prof. David Hung-Chang Du of the University of Minnesota's Center for Research in Intelligent Storage presented their take on the problem.

Novel Address Mappings for Shingled Write Disks offers a partial solution to the problem. Instead of writing each track in order — track 1, 2, 3, 4 — write them out of order: 4, 1, 2, 3.

When the disk — or the band that contains these tracks — is less than half full, tracks 4 and 1 can be freely updated, as there is no shingling. Until 75 percent of the space is used, tracks 2 and 4 — though no longer track 1, whose shingle now overlaps track 2 — can still be freely updated.
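The idea can be checked with a few lines of code. This is a sketch of the scheme using the article's 4-track example, not the paper's implementation; it assumes a track is freely updatable when the physically next track holds no data (the last track of a band shingles into an inter-band guard gap):

```python
# Out-of-order allocation for a 4-track shingled band, per the article's
# example: tracks are handed out in the order 4, 1, 2, 3 (1-based).
# Writing track t overwrites part of track t+1; the band's last track
# shingles into a guard gap, so it is always safe to rewrite.

BAND = 4
ORDER = [4, 1, 2, 3]  # order in which tracks are allocated

def updatable(used: int) -> list[int]:
    """Tracks that can be rewritten in place after `used` tracks
    have been allocated."""
    allocated = set(ORDER[:used])
    return sorted(t for t in allocated
                  if t == BAND or (t + 1) not in allocated)

print(updatable(2))  # 50% full -> [1, 4]: both freely updatable
print(updatable(3))  # 75% full -> [2, 4]: track 1 now shingles onto track 2
```

Run with in-order allocation (`ORDER = [1, 2, 3, 4]`) instead, and only the most recently written track is ever freely updatable — which is exactly the problem the paper's mapping avoids.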

The paper explores various permutations on this theme and concludes:

"By appropriately changing the order of space allocation, the new mapping schemes can improve the write amplification overhead significantly. [...] [N]ew mapping schemes provide comparable performance to that of regular HDDs when SWD space usage is less than 75 percent.

The Storage Bits take

Doubling the nominal density of hard drives with SMR and using 75 percent of it yields a 50 percent boost in effective capacity. Not bad for firmware tinkering.
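The back-of-the-envelope arithmetic:

```python
# SMR doubles nominal density; the mapping keeps HDD-like performance
# up to 75 percent space usage. Net effective gain over a plain drive:
baseline = 1.0
smr_nominal = 2.0 * baseline
usable = 0.75 * smr_nominal
print(usable / baseline)  # 1.5 -> a 50 percent effective capacity boost
```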

Integrating this concept with a non-volatile RAM buffer would enable even higher capacity utilization of the SMR drive.

We haven't seen the last of the hard drive.

Comments welcome, of course. How full are your drives on average?


Comments
  • Or just learn to make smaller write heads...

    Works better as it won't cause errors in the shingled tracks.

    The major problem with shingling is the repositioning. Moving a head back to the same track never quite gets the same physical position... The wide write path covers this imprecision with redundancy. Shingling removes the redundancy - and introduces potential read problems due to positioning errors.
    • Head positioning

      >>>> Shingling removes the redundancy - and introduces potential read problems due to positioning errors.

      I may be wrong, but I suspect that modern disk drives use some kind of "track on data" servo system, where the position of the heads during a read operation can be adjusted on-the-fly to achieve the optimum read signal from the data track. So, the radial position of the heads during a read operation may not be the _exact_ same radial position as when the data was written. Older disk drives used open-loop positioners, which relied on mechanical repeatability.
      • Quite - but to identify where that position is requires some fringe

        to be able to "track on data" for the adjustment.

        If it doesn't miss the track...
    • Now it makes sense! Thanks for your explanation, I agree with you.

      I was wondering about "Disk heads write a wide track, but only need a narrow track for reading."

      Why, then, write a wide track if you only need a narrow track to read?

      So if you need a wide track to read, "overwriting, leaving a narrow track for reading" will not work; and if you don't, why write it in the first place?
      • track widths

        I suspect that the same head is used for both writing and reading. The heads are made as narrow as possible. The write process may have unavoidable fringing effects which in effect write a magnetic image that is wider than the core of the physical head. For the read-back process, the heads are sensitive enough that they do not require the "extra wide" track that was written. Thus, if you initially write a track that is wider than you need for read-back, you can afford to over-write some of the extra-wide track without significantly affecting the integrity of the read-back signal.
        • Write vs read track width

          This situation is not new.

          Back in the days of analog (audio) tape it was not uncommon for erase heads to have wider tracks than the record or play heads just to make sure that no 'garbage' is left behind.
        • No, they use different technologies

          No, the write and read heads use different technologies. Write heads still essentially use a coil to generate the magnetic field. But read heads use tunneling magnetoresistance (TMR), a purely quantum mechanical effect where the resistance of a material is changed by the polarity of a magnetic field. See https://en.wikipedia.org/wiki/Disk_read-and-write_head .
    • That's not the reason

      It's not that they can't make write heads smaller or position the head more precisely to do the write. The reason is that modern drives use perpendicular recording. It turns out that the depth of the bit is related to the width of the write head. At the present time write heads are so narrow that the bit only penetrates about halfway into the magnetic coating. If they make the write head any more narrow, the bits simply won't reach deep enough into the magnetic media to stay permanent over time.

      My understanding of how HAMR will work is that a new magnetic coating is used that can only be overwritten at very high temperatures. During writes a laser will be used to preheat a thin strip of the magnetic media. Then when the wider write head does its write, only the magnetic media that has been preheated will actually change, at the depth corresponding to the width of the write head. The rest of the media will not, even though it will be hit by the same magnetic field. Thus the width of the write will be controlled by the laser (which can be very narrow) and no longer by the width of the write head.
  • There has been a lot of work on handling SMR drives

    The paper mentioned in the article describes one scheme to deal with SMR drives, but the industry has been thinking along similar lines for a couple of years now. There's currently work being done in the ATA and SCSI committees on commands that allow the host to tell the drive which regions of the drive (i.e., essentially parts of the filesystem) will be used as read-only, write-seldom, random writes, etc. This will aid the drive in setting itself up to minimize the hit caused by SMR and also aid in handling hybrid drives.

    Fortunately, when it comes to using addressing schemes that are non-linear, the industry has a great deal of experience because that's how SSDs do their addressing internally. The twist is that with SSDs there's little penalty for mapping contiguous parts of the address space to physically separate parts of the flash. With HDDs, there's a huge penalty to randomly mapping the address space unless a lot of thought goes into the mapping beforehand.