Mac ZFS dead, again. Thanks, Apple!

Summary: File corruption is a fact of life for Mac users, thanks to the ancient HFS+ file system. And now a heroic effort to bring a new primary, 21st century file system - ZFS - to the Mac has been shelved.

It's been a long strange trip, but ZFS, the most advanced file system in production use today, is dead on the Mac as a consumer option. Which is more than a little odd, since it was Apple itself that originally planned to add it to Snow Leopard Server. What happened?

When Tens Complement started work on Mac ZFS they didn't foresee any major problems. Founded by Don Brady, a former Apple engineer who worked on the ZFS/Mac integration project, they had already solved many of the integration issues.

What TC didn't expect, though, was that Apple's kernel engineers would hardwire HFS+ into core technologies and applications such as Versions, File Sharing and Time Machine in Lion, making it impractical to fully replace HFS+. ZFS (or any other file system) is now relegated to the narrower task of secondary storage.

This past weekend, Tens Complement admitted defeat, turned the code over to deduplication storage appliance vendor GreenBytes, and ceased operations. This is doubly sad because Microsoft is resurgent in file systems: its new ReFS is a major overhaul of 1990s-vintage NTFS.

The Storage Bits take

Apple designs fabulous hardware, but their reluctance to invest in a modern file system bodes ill for Mac power users. Adopting ZFS would have put them ahead of even ReFS, but that isn't in the cards anymore. The best we can hope for is that someone - GreenBytes? - offers ZFS as secondary storage for the Mac. Then at least our data at rest would be safe.

But is it too much to ask for Apple software that is as modern and well-designed as their hardware? Evidently. Comments welcome, of course. I'm starting to consider putting Linux on my MacBook Air.

About

Robin Harris has been a computer buff for over 35 years and selling and marketing data storage for over 30 years in companies large and small.

Talkback

27 comments
  • Seriously?

    If this is that big of an issue, why do people use these machines?
    slickjim
    • It isn't. This guy has beaten this

      strawman to death with doom and gloom predictions of massive data corruption that never happen because he happily pretends that checksums and error correction algorithms don't exist. All he would have to do to prove his rate of data corruption is simply set up a computer to do read/writes for a set amount of time and demonstrate the corruption taking place. For someone who knows how to code and knows some statistics, it's a trivial test to set up. Why doesn't anyone do it? Because the people who know how already know about error correction and so know the author is full of it.
      baggins_z
      • How would you know?

        Windows and OS X cannot detect file corruption.

        But somehow, you're quite sure that none is taking place?!?!

        LOL! Suddenly I have the image of Pee Wee Herman putting his fingers in his ears and singing "La La La! I can't hear you!"

        Foolish...

        I manage nearly 1,000 Windows machines. I noticed about two years ago that we were having issues with many of the machines that we ordered with WD Raptors***[1]. For some reason the Windows install on them would crash and restart on boot. Furthermore, they'd fail to re-image. What was odd, however, is that when I pulled the drives and looked at them on my test station, they appeared perfectly functional. Initially, we simply warranty replaced them.

        To investigate the issue further, I used a utility named MHDD. (Warning: This tool is not user friendly, and can damage or destroy drives if used improperly!) MHDD is a DOS based bare-metal tool that communicates directly with hard drives using raw ATA commands. It can scan every physical sector of a drive reporting latency as well as error messages. It also can be used to brute force a drive into remapping sectors which report Unrecoverable Read Errors. UREs are a source of data corruption. All drives have the ability to remap bad sectors, however the mechanism is FAR less perfect than I once believed***[2].
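        MHDD itself speaks raw ATA to the drive, which userspace code can't do portably. Purely as an illustrative sketch of the scanning half of the job (the function name and sector size below are my own, not MHDD's), a naive sector-by-sector read that flags unreadable offsets on a device or disk image might look like this:

        ```python
        import os

        SECTOR = 512  # bytes; many modern drives actually expose 4K sectors

        def scan_device(path, sector_size=SECTOR):
            """Read a block device (or disk image) sector by sector and
            return the byte offsets that fail to read -- a crude userspace
            analogue of an MHDD surface scan (no latency reporting, no
            remapping, no raw ATA error codes)."""
            bad = []
            fd = os.open(path, os.O_RDONLY)
            try:
                size = os.lseek(fd, 0, os.SEEK_END)
                for offset in range(0, size, sector_size):
                    os.lseek(fd, offset, os.SEEK_SET)
                    try:
                        os.read(fd, sector_size)
                    except OSError:  # an unrecoverable read error (URE)
                        bad.append(offset)
            finally:
                os.close(fd)
            return bad
        ```

        On a healthy disk (or an ordinary file) this returns an empty list; on a drive with unmapped UREs, the kernel surfaces the failed reads as I/O errors and their offsets land in the list.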

        In two computer labs of 32 machines each, I had 7 machines go down in rapid succession. MHDD found exactly 1 bad sector out of ~156 million in all 7 machines. While MHDD remapped the bad sectors, rendering the drives serviceable***[3], I found this deeply disturbing.

        So... I modified a copy of The Ultimate Boot CD (which contains MHDD) to network boot***[4] machines on the network into it.

        Next, I scanned every machine in both labs (64 machines). MHDD found 7 more with a single bad sector which were still working, and one with 25 bad sectors. On the one with 25 UREs, I set the scan to loop and let it run for 12-16 hours to rule out impending failure or intermittent drive head errors (it was fine)... The machine with 25 bad sectors was working fine except for Flash whose files happened to be written across some of the bad sectors.

        While the 8 additional machines were booting Windows fine, failure was simply a "Windows background defrag of Windows system files to the wrong sector" away.

        [1] - I mention the WD Raptor since roughly 90% of the issues I've had with unmapped UREs has been on these drives. I've experienced this issue with other vendor's drives, but in far fewer numbers.

        [2] - The mechanisms built into the drives are supposed to detect and remap these bad sectors automatically. Clearly, they're failing to do so enough to cause me issues. Clearly Windows is failing to detect or deal with these issues. Since HFS+ contains no way of detecting corruption, the same would be true of OS X.

        [3] - I only use these drives in out-of-warranty machines where the user cannot save their files to the local drive. I have 40 drives or more in service, some in excess of a year without additional issues. So far, I estimate saving my employer at least $2000.

        [4] - Specifically, PXE Boot via Altiris Deployment Solution. The PXE boot can be scheduled or selected via a PXE boot option. Furthermore, I've password protected the Ultimate Boot CD Menus to prevent tampering.


        Note: My home file server runs OpenSolaris hosting a ZFS v22 pool. It's fixed minor corruption issues 3 times in as many years. Instead of a multi-day long rebuild, a la RAID 5, it's never taken more than 15 minutes to sync up.

        PS: If you're not finding people talking about this, it's because you're either reading articles by those who are less savvy than you think, or those whose data integrity requirements are not absolute.
        Kevin Trumbull
    • I know. Evidently, ZFS isn't as good as many thought.

      Even though they solved many of the issues, it sounds like Apple's engineers had little faith in it. I wonder if there are some serious flaws in ZFS?

      But I don't think people are upset that Apple products won't have ZFS, since sales look to be moving along Ok.
      Challenger R/T
      • ZFS is actually great

        I used to use RAID 5 + hot spare.
        ZFS decreased my rebuild time and I've been happy with it for almost 2 years now.
        Anthony E
      • I would ask a different question:

        Why would Apple's kernel engineers hardwire HFS+ into their core technologies and applications, such as Versions, File Sharing and Time Machine?

        That makes little sense.
        rhonin
        • It may actually make sense

          The engineering resources needed to develop, test and support interfaces that allow file-related features to interoperate with diverse file systems are almost certainly greater than the resources needed to make those features work with a single, well understood file system.

          The questions are: (1) Does supporting arbitrary file systems sell more Macs? (2) If it does, is the increase in sales sufficient to cover the associated development, testing and support costs? I suspect the answer to the first is 'probably not many', and the answer to the second is a clear 'no'.

          It's a bit like PC hardware. It used to be much more modular than it is now, but once de facto standards caught on, the relevant hardware was added directly to motherboards or integrated into other components (e.g. CPUs). When there's very little demand for component swapping, the costs of supporting it simply outweigh the benefits.
          WilErz
          • "Does supporting arbitrary file systems sell more Macs?"

            Yes, and in some very high end markets - workstations (think $3,000 Mac towers).
            vgrig
        • I agree

          that just seems retarded - forget ZFS, but why would you paint yourself into the corner as far as filesystem choice?
          vgrig
  • Only Linux Offers Filesystem Modularity

    I thought it was only Windows that did this stupid thing, of hardwiring NTFS so heavily into its filesystem features that it cannot easily be replaced by anything else. I am disappointed to hear that Apple has made exactly the same mistake.

    This means the only OS in common use today with a proper pluggable virtual filesystem layer is Linux.
    ldo17
    • BSD

      The BSDs count as in common use in my book.

      Now, since Microsoft and Apple are heavily invested in their entire stack, they may feel that the advantages gained from optimizing outweigh the disadvantages of denying super-users the ability to change the underlying filesystem. Thing is, those super-users probably already had sufficient reason to be dismissive of Windows and OS X.

      Me, I'm not in that category, and so the Apple / ZFS story interests me: was it a technical or licensing issue, or did Apple hope that Sun would do the heavy lifting and pay the bulk of the fare, only to have the sale to Oracle (and its prelude) derail those hopes? Clearly, Apple ultimately decided that flogging HFS+ for another few rounds was less expensive than moving the entire filesystem to something new. I hope I'm not wrong in not caring.
      DannyO_0x98
  • ZFS overhead?

    ZFS brings with it a significant processor and memory overhead. Perhaps Apple did not want to further burden their already memory hogging and overblown OS?

    More likely - OSX is up for retirement in the near future?
    12312332123
    • Not really

      Depends on the use of ZFS.
      OS X is currently 64-bit, which is recommended for ZFS.
      If using it on a desktop with RAID, desktop configs will usually have plenty of memory.
      CPU overhead is nominal, again depending on the use and number of drives.
      Anthony E
  • ZFS and Macs

    While Apple indeed "hardwires" some upper-level filesystem functions with their lower-level HFS+ filesystem, this has nothing to do with ZFS.

    ZFS is not just a filesystem. Its primary advantages are the volume manager and safe storage that uses redundancy and strong (optionally cryptographic) checksums to identify and automatically correct bad data.

    Currently, ZFS implements two things:

    - zvol - which is block storage, a virtual "disk" device that can have any filesystem on top of it, including HFS+;
    - zfs - which is a POSIX filesystem, in all the good UNIX tradition. One feature this filesystem has is NFSv4-compatible ACLs, which by the way happen to be comparable/compatible with NTFS ACLs.
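    The detect-on-read behavior behind that "safe storage" can be sketched in a few lines (everything here is illustrative: the function names and the dict standing in for a disk are mine, and real ZFS keeps the checksum in the parent block pointer, using fletcher4 by default and SHA-256 optionally):

    ```python
    import hashlib

    def checksum(block: bytes) -> bytes:
        # SHA-256 stands in for ZFS's configurable block checksum.
        return hashlib.sha256(block).digest()

    def write_block(store: dict, key, data: bytes):
        """Store data together with its checksum, as ZFS stores the
        checksum of a block in its parent block pointer."""
        store[key] = (checksum(data), data)

    def read_block(store: dict, key) -> bytes:
        """Verify on every read. On a mirror, ZFS would fetch the
        other copy and repair; here we just report the corruption."""
        digest, data = store[key]
        if checksum(data) != digest:
            raise IOError("checksum mismatch: silent corruption detected")
        return data
    ```

    A filesystem without this (HFS+, NTFS before ReFS) returns whatever bytes the disk hands it; here, a single flipped bit in a stored block makes the next read raise instead of silently returning bad data.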

    The most valuable safety features of ZFS were introduced because today's disks have become larger and larger. Say, consumer disks are today up to 4TB in size and "guarantee" a 10^-14 error rate. That is one wrong bit (unnoticed and uncorrected by the disk) every 12.5 TB read. If you have a busy multi-terabyte array, chances are you will read (and process) wrong data often...
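    The 12.5 TB figure is simple arithmetic from the quoted spec; assuming independent bit errors (which real drives only approximate), a quick back-of-the-envelope check:

    ```python
    # Consumer drives quote an unrecoverable read error (URE) rate of
    # one bad bit per 10^14 bits read.
    ure_rate = 1e-14                           # errors per bit read
    bits_per_error = 1 / ure_rate              # 1e14 bits
    tb_per_error = bits_per_error / 8 / 1e12   # decimal terabytes
    # -> 12.5 TB read, on average, between silently bad bits

    # Probability that one full read of a 4 TB drive hits at least
    # one URE, under the independence assumption above:
    drive_bits = 4e12 * 8
    p_at_least_one = 1 - (1 - ure_rate) ** drive_bits
    # -> roughly 0.27, i.e. about a one-in-four chance per full pass
    ```

    So even a single full scrub of a large consumer drive has a non-trivial chance of tripping over a bad bit the drive never reports.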

    Thing is, none of Apple's computers have anything like this size of storage. This is why Apple is not overly concerned about this, yet. Then, ZFS traditionally has significant memory requirements, and making the Macintosh ZFS-only would mean equipping all computers with more memory, just to keep ZFS happy.

    Also, if you happen to have ZFS storage accessible over iSCSI (trivial, with plenty of free and commercial offerings today) you can use HFS+ on that virtual disk and enjoy all the benefits of end-to-end data verification.

    Further, it seems Apple "abandoned" ZFS when it became clear that Oracle was buying Sun. It is still uncertain whether Apple could have properly licensed ZFS from Oracle.

    In summary, Apple would only be really interested in ZFS if they intended to sell huge storage-oriented servers. Since that is a niche market, I don't see Apple being interested in this business at all.
    danbi
    • "Thing is, none of Apple's computers have anything like this size "

      Think again - video and photo editing workstations (a heavily Mac segment) often have FC or SAS bricks attached to them. Since Final Cut Pro is terrible at working with files that sit on a NAS, a lot of video editing outfits just patch FC bricks in directly.
      vgrig
  • ZFS is saving my data, quite frequently

    I have two different machines, each with 4 pairs of mirrored drives yielding 6-8TB each. Each quarter of the year, I replace at least one of those disks after it fails. ZFS detects and repairs error sectors on some, but many of them just flat out die. If I take them out of those (Illumos) machines, put them in a SATA cradle hooked up to a Windows machine, select "format", and turn off quick format, they will fault, and Windows will remove them from view.

    The drives you buy, delivered with your new computers, may have a slightly better chance of surviving, just because of the manufacturers' packaging and shipping policies. But practically, you can expect a large majority of disks to survive no longer than a couple of years.

    If you have extra disks and spend time copying things around, you can just get by with external USB/firewire/TB disks. But, why worry about it? Get ZFS!
    greggwon@...
  • ZFS is great, but resource intensive...

    I'm a big fan of ZFS and would like to see it on everything eventually. The design is smart as heck. It's fast, redundant, self-repairing, and often able to roll back file changes. BUT - it's a major resource hog on normal desktops. It was originally made for the server world. ZFS won't make it into Mac OS X until the standard memory size for basic models is 16 GB. It would also benefit from about 8x the CPU power of the slowest Mac laptop being sold currently. Perhaps the next gen 3D chips from Intel will reach that point while still using less power. That's why I wasn't even slightly surprised to hear that ZFS didn't make the cut again. The hardware needs to reach a point where it can be supported correctly.

    There is also the very real possibility that Apple is currently rethinking its future OS direction since iOS has become their largest source of revenue. Perhaps a true desktop version of iOS is in the works. If they could create a new desktop iOS where everything that runs on a Retina iPad would also run on your desktop, they could potentially change the balance of power forever. It's what Microsoft is trying to do with Win 8 and Metro applications. Just imagine, buy an application once, it appears on all of your devices, desktop, tablet, and phone included. Only Apple could potentially make that scenario work quickly and correctly considering the current isolated state of apps for Windows Phone versus iOS apps on iPhones and iPads. It will be interesting to see what Apple is up to.
    BillDem
  • Neither dead nor shelved

    Please view or review the information published by Ten's Complement LLC and by GreenBytes, Inc. – in particular the many reassurances given by Steve O'Donnell.

    The notice of changes appeared in the ZEVO product area around the time of WWDC 2012. Now details are emerging, and none of the details are negative.

    Where – amongst the published information from the organisations in the know – do you get notions of shelving and death?
    grahamperrin
  • MacZFS is ignored again. Thanks, trade rag journalism! http://maczfs.org/

    I honestly cannot believe that there can be people who 1) spend any effort at all to follow the concept of ZFS on Mac OS, and 2) who don't know that it's FREE SOFTWARE which has been maintained and working perfectly fine ever since Apple left it. I don't understand how those two concepts fail to coincide. In this case, it's obvious flamebait, and is the reason why AdBlock exists.

    http://maczfs.org/

    Come and get some free ZFS on Mac OS.
    smuckola
    • From what i found on the net

      The mach_kernel cannot be read from a ZFS partition, so the Mac can't boot from one (correct me if I'm wrong).
      So only non-boot partitions can be ZFS with the free software you pointed out.
      Meaning at least one partition on a Mac has to be HFS+.
      vgrig