What is an LVM?
PC users know that their system disk shows up as a C: drive or a Macintosh HD - or whatever you choose to call it. These are physical drives, or physical volumes.
Logical Volume Managers virtualize these physical drives. The LVM takes the physical blocks on one or more attached disks and presents a virtual disk to the OS. The virtual disk walks like a disk and quacks like a disk, but the underlying storage is hidden from the OS.
So what's hiding underneath? It can be a physical disk, a disk partition or a RAID stripe of multiple disks.
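To make the idea concrete, here's a toy sketch of the core trick: the LVM translates logical block addresses on the virtual disk into (physical disk, offset) pairs. All the class and device names are illustrative - this is the concept, not any real LVM's API.

```python
class PhysicalVolume:
    """Stand-in for a real disk: a fixed-size array of blocks."""
    def __init__(self, name, num_blocks):
        self.name = name
        self.blocks = [None] * num_blocks

class LogicalVolume:
    """Concatenates physical volumes into one bigger virtual disk."""
    def __init__(self, pvs):
        self.pvs = pvs

    def _locate(self, lba):
        # Walk the physical volumes until the logical block address
        # falls inside one of them - simple concatenation, no striping.
        for pv in self.pvs:
            if lba < len(pv.blocks):
                return pv, lba
            lba -= len(pv.blocks)
        raise IndexError("logical block address out of range")

    def write(self, lba, data):
        pv, offset = self._locate(lba)
        pv.blocks[offset] = data

    def read(self, lba):
        pv, offset = self._locate(lba)
        return pv.blocks[offset]

# Two 100-block disks appear to the OS as one 200-block virtual disk.
lv = LogicalVolume([PhysicalVolume("sda", 100), PhysicalVolume("sdb", 100)])
lv.write(150, b"hello")   # actually lands on the second disk, block 50
print(lv.read(150))       # b'hello'
```

The OS only ever sees one 200-block disk; whether block 150 lives on the first drive, the second, or a RAID stripe is the LVM's business.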
LVMs can be handy tools for storage administrators, but like any paradigm that relies on managing discrete units, LVMs don't scale. Ask any sysadmin struggling to update an Excel spreadsheet of all the LVs on a SAN.
Enter the new millennium
Around 2001 architects saw the scale problem and asked: why pretend we have physical or virtual disks at all? Why not treat the underlying storage as a big pool of blocks?
That's what architects of Google's BigTable, Amazon's Dynamo, Sun's ZFS, Oracle's BTRFS and Basho's Bitcask, among others, all decided to do. Which is why LVMs are so last millennium.
These newer storage tools treat the underlying disks as a pool of blocks. Set a policy for replication - there's no RAID in the common sense - and you can add drives as needed.
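The pool-of-blocks idea can be sketched in a few lines. This is a hypothetical illustration of the policy-driven approach, not the actual placement logic of ZFS or Dynamo: each block is written to N disks per the replication policy, and growing the pool is just adding a drive.

```python
class BlockPool:
    """Blocks spread across many disks, with N-way replication set by
    policy instead of a fixed RAID layout. Illustrative only."""
    def __init__(self, replicas=2):
        self.replicas = replicas
        self.disks = []   # each disk modeled as a dict: block_id -> data

    def add_disk(self):
        # Growing the pool is just adding another drive.
        self.disks.append({})

    def put(self, block_id, data):
        if len(self.disks) < self.replicas:
            raise RuntimeError("not enough disks for the replication policy")
        # Deterministic placement: pick `replicas` distinct disks.
        start = block_id % len(self.disks)
        for i in range(self.replicas):
            self.disks[(start + i) % len(self.disks)][block_id] = data

    def get(self, block_id):
        # Any surviving replica will do.
        for disk in self.disks:
            if block_id in disk:
                return disk[block_id]
        raise KeyError(block_id)

pool = BlockPool(replicas=2)
for _ in range(3):
    pool.add_disk()
pool.put(42, b"clip")
pool.disks[0].clear()     # simulate losing a drive
print(pool.get(42))       # b'clip' - the surviving replica answers
```

Notice there's no spreadsheet of logical volumes to maintain: the policy, not the administrator, decides where blocks live.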
That's how the 21st century does it. Too bad the Mac file system guys didn't get the memo.
The Storage Bits take
For the average user with one or two disks, managing the physical volumes isn't a big deal. But the era of Big Data isn't confined to data centers: I have 16 external disks attached to my new iMac - plus several more that plug into a drive dock.
But the new LVM in Mac OS isn't the biggest problem: the lack of data integrity in HFS+, the current Mac file system, is more worrisome. Video editors worry about not being able to transfer clips to the latest version of Final Cut Pro. How about losing your clips to silent data corruption?
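Silent corruption is exactly what block checksums catch. ZFS stores a checksum with every block and verifies it on every read; HFS+ keeps no such checksums, so a flipped bit comes back to the application as if nothing happened. A minimal sketch of the checksummed approach (names and structure are hypothetical, not ZFS internals):

```python
import zlib

class ChecksummedStore:
    """Stores a CRC alongside every block and verifies it on read,
    the way a checksumming file system does. Illustrative sketch."""
    def __init__(self):
        self.blocks = {}   # block_id -> (checksum, mutable data)

    def write(self, block_id, data):
        self.blocks[block_id] = (zlib.crc32(data), bytearray(data))

    def read(self, block_id):
        checksum, data = self.blocks[block_id]
        if zlib.crc32(bytes(data)) != checksum:
            raise IOError("silent corruption detected in block %r" % block_id)
        return bytes(data)

store = ChecksummedStore()
store.write("clip-001", b"final cut footage")
store.blocks["clip-001"][1][0] ^= 0x01   # simulate one flipped bit on disk
try:
    store.read("clip-001")
except IOError as e:
    print(e)   # the corruption is caught instead of handed to the app
```

Without the checksum line, that flipped bit would be returned as valid video data - which is precisely the HFS+ situation.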
If you're wondering what the post-Steve Apple looks like, the continuing hacking of HFS+ is a sorry case in point. Steve doesn't care about file systems - they're plumbing - and thus the normal corporate sins of sloth, inertia and risk-avoidance kick in.
Apple was moving towards the modern ZFS three years ago - see Apple announces ZFS on Snow Leopard - but didn't move fast enough to nail down a license before Sun went up for sale.
After the dust cleared Apple had no license for ZFS and discontinued the project. So the hacking of HFS+ has continued long past its expected demise.
Data integrity is one area where Microsoft has the chance to whack Mac OS. While NTFS isn't much better than HFS+ - see How Microsoft puts your data at risk - the NTFS team has been hard at work to fix the biggest problems. I expect to see announcements from them by the end of the year.
Comments welcome, of course. Data integrity should be Job 1 for storage folks, just as computational correctness should be for CPU vendors. But in the PC world, "good enough" has been good enough.