The good old days - are back?


In the good old days you got system-wide performance and reliability for data-intensive applications by using as much directly connected disk as you could afford - which, at $2,100 for a 104MB disk in about 1990, wasn't that much.

Nevertheless the need for larger spaces existed and what we got was software horrors like Sun's DiskSuite and the rather doubtful joys of million dollar EMC arrays connected to Unix via parallel SCSI controllers and accessed through volume managers like those from Veritas and HP - but actually run from Windows laptops glued under the top cover.

That was bad - but then it got worse: "high performance" NAS arrays choked behind Wintel switches whose occasional hours of continuous functionality suggested that prayer really does beat Rube Goldberg design.

But now, courtesy of Sun's ZFS, we're suddenly back to the good old days - except that the drives are now 500GB, much faster, vastly more reliable, and one quarter the price.

ZFS eliminates the accommodations people have had to make to RAID hardware, network storage, volume management, and all the other kludges needed to make large disk farms available with some pretence to reliability and multi-point access.

ZFS is designed to use the classic 1980s disk array - just a bunch of dumb disks in a power box - while bypassing the complexities of volume management and RAID. Look at the introduction to ZFS provided on the OpenSolaris site, and you'll see it's exactly what a Unix tool ought to be: elegant, simple, powerful.

No more volume management, no more raw vs direct vs cooked, no more expensive RAID hardware, no more surprises caused by some PC nit deciding to borrow, replace, or reprogram the only Brocade switch still stuttering in your application's data path. It's system managed data storage as it should be.

Got an older Sun box to play with? Download Solaris 10, hang a couple of disk boxes off it on separate controllers, do a boot -r, and:

# zpool create zvol mirror c2t*d0 c3t*d0 [ ;-) ]
# zfs create zvol/ora
# zfs set mountpoint=/db/oracle zvol/ora
# zfs set sharenfs=rw zvol/ora

And that's all there is to it: your disks are ready for use with automatically balanced parallel access; guaranteed data integrity; no journaling; no need for fsck on re-mount; no partitioning; and no wasted space.
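If you want to sanity-check the result, the usual status commands do it - a sketch, assuming the pool and dataset names used above:

```shell
# Pool health and layout - should show the mirrored vdevs
zpool status zvol

# Datasets, space used/available, and mountpoints
zfs list

# Confirm the two properties we set
zfs get mountpoint,sharenfs zvol/ora
```

No format step, no newfs, no vfstab entry: the dataset is mounted and shared the moment the properties are set.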

Sun's "thumper" (X4500) servers implement this in a hardware/software package - 16GB of RAM, two dual-core Opteron processors, Solaris 10, six controllers, and up to 48 500GB drives in a 4U high box. Check out what you're paying monthly for support on your traditional NAS or local array and do the arithmetic: a 12TB thumper costs $33K, while 240TB in ten fully loaded boxes - including 40 Opteron cores and 160GB of RAM - runs about $480K.
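The arithmetic is worth spelling out - a back-of-the-envelope sketch using the list figures quoted above:

```shell
#!/bin/sh
# Ten fully loaded thumpers: 48 drives x 500GB each
boxes=10
tb_per_box=$((48 * 500 / 1000))    # 24TB per box
total_tb=$((boxes * tb_per_box))   # 240TB total
cost=480000                        # ~$480K for the ten boxes
echo "${total_tb}TB at about \$$((cost / total_tb)) per TB"
# prints: 240TB at about $2000 per TB
```

At roughly $2K per raw terabyte - with the compute thrown in - the comparison with a supported EMC or NAS contract mostly makes itself.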

As a result you can use thumpers to replace NAS style arrays - because NFS on Solaris has none of the complications you're used to seeing with traditional RAID storage devices - while also letting you run your most critical applications directly on the disk servers.
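From any other Solaris box, the shared dataset is just an ordinary NFS mount - a sketch, where `thumper` is a hypothetical hostname standing in for your X4500:

```shell
# Solaris client: mount the ZFS-backed share exported via sharenfs
mkdir -p /db/oracle
mount -F nfs thumper:/db/oracle /db/oracle
```

No filer-specific client software, no vendor agent - the server side is one ZFS property and the client side is plain NFS.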

Oh, and if you think that's cool, wait until next year when thumper gets iSCSI and the second generation "Niagara" CMT chips get hardware packet management - enabling 10Gbit/sec disk access across four to eight channels at a time.

Bottom line? It's the good old days made new again - and a model, I think, for Sun itself.