Back in 1977 Dennis Fairclough had a good idea: build something that would allow multiple small computers to share a single disk drive. In 1979, after his investors forced him to hire Ray Noorda to run the business, this became the first Novell product to reach industry-standard status.
Today's replacement for that solution to the costs of sharing and managing storage is called a SAN - and its time is past, because we're all going back to Novell's original implementation of the shared storage server idea.
The specifics of the SAN solution evolved in response to three main sources of pressure:
- the Microsoft rack-mount approach to SMP necessitated a lot of server-to-server networking - and no single x86 machine had the power to provide data services to all;
- the typical data center found itself responsible for backing up, storing, and recovering data from hundreds to thousands of desktop PCs - each storing multiple gigabytes of mostly duplicated material; and,
- legal, audit, and cost pressures aligned with IT management's desire for more centralized control, making data centralization the natural solution.
As a result the idea evolved that you could set up a separate server network dedicated to acting as a virtual disk drive, and then use software running on the machines in that network to manage your data more or less in parallel - a way of getting past the small-machine limitation.
The key to making this work, of course, lies in interconnecting the storage rackmounts to facilitate both communication among the servers "parallelizing" data storage and communication with the next layer up in the local network hierarchy. Out of that we got separate Fibre Channel (and now FC over 10Gb Ethernet) networks for storage management and the whole business of "fabric" switches, ranging from Brocade's 256-port "Intrepid" at $4.5 million down to things like IBM's TotalStorage SAN32M-2 Express Model at about $11,000 for 16 ports.
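To make the "parallelizing" idea concrete, here is a minimal sketch of round-robin block striping - a generic stand-in for what the SAN software layer does when it spreads a data stream across several small storage servers. The function names, block size, and in-memory "nodes" are illustrative assumptions, not any vendor's actual implementation.

```python
# Illustrative only: each "node" is a dict standing in for one small
# storage server; real SANs stripe over Fibre Channel-attached arrays.
BLOCK_SIZE = 4  # bytes per block; real systems use 4 KB or larger

def stripe_write(data: bytes, nodes: list) -> int:
    """Split data into fixed-size blocks and distribute them
    round-robin across the nodes, so I/O can hit all nodes in
    parallel. Returns the number of blocks written."""
    blocks = [data[i:i + BLOCK_SIZE] for i in range(0, len(data), BLOCK_SIZE)]
    for idx, block in enumerate(blocks):
        nodes[idx % len(nodes)][idx] = block  # node maps block index -> bytes
    return len(blocks)

def stripe_read(nodes: list, n_blocks: int) -> bytes:
    """Reassemble the original stream by fetching each block from
    the node that owns it."""
    return b"".join(nodes[i % len(nodes)][i] for i in range(n_blocks))

nodes = [{} for _ in range(3)]  # three small "storage servers"
n_blocks = stripe_write(b"hello, storage world", nodes)
restored = stripe_read(nodes, n_blocks)
```

The point of the sketch is the trade-off the column describes: the striping itself is trivial, but coordinating it across dozens of separate boxes is what drives the cost and complexity of fabric switches and SAN management software.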
Looked at objectively, SAN technologies are very advanced and reasonably effective - but also completely unnecessary, because they all fundamentally respond to a small-machine limit that really only exists in the Wintel world.
In the mid-nineties, of course, nobody wanted to use big machine solutions: the idea of using a two million dollar, four-processor DEC Alpha running OSF/1 to connect PC networks to terabytes of centralized data storage just wasn't a big seller to NT true believers. Instead, the industry chose to expand the client-server paradigm, evolving Fibre Channel networks to do the same thing using dozens to hundreds of NT machines.
Today, however, that Alpha can be more than replaced by machines like Sun's 74XX storage servers, which fit in a 4U rackspace and don't incur the costs, risks, and commitment to expertise needed to approach storage parallelism with a bunch of small machines.
In effect the new machines replicate the simplicity of Novell's original approach - itself based on using a 16-bit MC68000 processor to serve data to multiple 8-bit machines built on Z80s and 8080s - and therefore allow the same usage options, including physical distribution in the enterprise, automated backup and recovery, and simple storage connectivity.
Historically, of course, better technologies are often rejected by the majority of IT decision makers - so what makes this different? The arrival of 10Gb Ethernet as the next-generation SAN standard is already forcing consolidation in the SAN vendor community. Because that consolidation will force some customers to change, the survivors are likely to be those who pre-empt customer choices between traditional SAN costs and complexities on the one hand and the simplicity of Sun's low-cost approach on the other - by trying to compete with Sun, and thus dragging the whole industry forward to where it was going in 1979.