Fusion-io ioDrive (80GB)


Solid state has presented us with a problem.

It's not just capacity, mind you. For the first time in recent memory there's a very real threat that even a single drive can saturate the bus it's attached to. Consequently, companies such as Fusion-io are turning to PCI-E instead of SATA for SSD storage. It's not just throughput that benefits, either: doing away with the physical hard drives, cables and interfaces required to RAID your way to solid-state speeds frees up a great deal of space, not to mention vastly reducing heat output.

The company is certainly the most high profile of the add-in-board SSD manufacturers, attracting the likes of Steve Wozniak and plenty of venture capital investment. Unlike OCZ and Promise, Fusion-io aims its offerings squarely at the enterprise space, and this has paid dividends, with its technology ending up in HP's IO Accelerator and being given "Server Proven" status by IBM.

It should be in the consumer space soon though, with a version catering to gamers branded the Fusion-io ioXtreme PCIe. It's endorsed by the gaming celebrity gamers love to hate, Fatal1ty, but stands a very real chance of being the first Fatal1ty product that isn't pointlessly branded and doesn't suck.

This technology doesn't come cheap, however. The entry price for the above 80GB gamer card at launch is a somewhat painful US$900. There's also the flagship ioDrive Duo, which sits on a PCI-E x8 interface, RAID 0s two ioDrives together, offers capacities up to 640GB and will set you back around AU$12,500, with a 1.28TB version coming later. We're talking very, very serious stuff, to the point where there are only a few places in Australia you can get them, IoMax and PixelIT being the most prominent.

Design and features

The Fusion-io ioDrive connects through PCI-E x4. (Credit: CBS Interactive)


The card on the bench today is the original Fusion-io ioDrive 80GB model (although 160GB and 320GB variants are available), which connects over a PCI-E x4 link, giving it a potential 1GBps of bandwidth in each direction (four lanes at 250MBps apiece) between the card and the rest of the machine. It retails for around AU$4200, which breaks down to about AU$52.50 per GB. Compare that to about AU$26.50 per GB for a 32GB Intel X25-E, or around AU$0.13 per GB for a 1.5TB Seagate hard drive, and it becomes apparent the ioDrive is for those with immensely special data needs.

It's a half-height card (supplied to us with a full-height bracket), with green, orange and red signal lights on the back. These perform diagnostic duties: a solid green light means the device is attached to the system and the driver is loaded, a flashing green light says it's reading, a flashing orange light indicates that the device is writing, and a solid red means the driver is not loaded. For those administering a number of ioDrives remotely, you can also turn on a "beacon", which lights all three at once to identify the ioDrive in question. There's also a set of pins that can be wired to your chassis' hard drive activity light.

Status lights tell you when the device is successfully connected to the OS, is reading or writing, or has an error. (Credit: CBS Interactive)

Installation is as easy as pushing in the card and installing the drivers (and optional SNMP support, if you want), at least on Windows. Despite a number of attempts, CentOS just didn't want to play with us (likely down to a kernel/driver mismatch), so it was off to Ubuntu for Linux testing. Installing the supplied .debs and running modprobe as suggested in the instructions wasn't enough: we also had to move the drivers into the current kernel directory under /lib64/modules/ and add the fio-driver module name to the /etc/modules file, as sketched below. While this may be an obvious step to Linux admins, its omission from the manual may cause some users grief. It's worth noting that you cannot boot an OS off the card; this is a storage device only.
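For reference, the sequence that finally got the driver loaded on our Ubuntu box looked roughly like the following. Treat it as a sketch: the package file names, module file and kernel path will vary with your driver release.

    # Install the supplied driver and utility packages (file names are illustrative)
    sudo dpkg -i fio-driver_*.deb fio-util_*.deb

    # The step the manual omits: copy the module into the running kernel's
    # directory and rebuild the module dependency list
    sudo cp fio-driver.ko /lib64/modules/$(uname -r)/
    sudo depmod -a

    # Load the module now, and have it load automatically at boot
    sudo modprobe fio-driver
    echo fio-driver | sudo tee -a /etc/modules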

The ioManager software looks pretty much identical on both Linux and Windows, although you can opt for command-line tools if you so desire (a separate package on Linux; on Windows they're copied to Program Files\Fusion-io\Utils during driver install). There's even a separate remote manager application if your cards are locked away in a server room and you'd rather manage them without physical access. The options are reasonably sparse: you can update the firmware, low-level format the drive, and attach or detach the device from the OS. Firmware version, serial number, PCI details and lifetime physical read and write statistics are displayed.

The ioManager software is cross platform and easy to use. (Credit: CBS Interactive)
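For the command-line inclined, the same operations are exposed through the fio-* utilities. A minimal sketch, assuming the card appears as /dev/fct0 (the device path on your system may differ):

    # Report firmware version, serial number, PCI details and
    # lifetime read/write statistics
    fio-status -a

    # Detach the card from the OS, low-level format it, then re-attach it
    fio-detach /dev/fct0
    fio-format /dev/fct0
    fio-attach /dev/fct0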

On Windows, the latest firmware is installed into the Program Files\Fusion-io\firmware folder when you install the latest drivers. While this little tidbit is listed in the manual, we'd prefer a separate download to make it more obvious that a firmware update is available. For Ubuntu, it seems as if installing the supplied .deb does all the firmware work for you.

Low-level formatting gives three options: Maximum Capacity, Improved Write Performance and Maximum Write Performance. At Maximum Capacity you're given 74.93GB of formatted space, at Improved Write Performance 37.47GB (roughly half), and at Maximum Write Performance 22.48GB (roughly 30 per cent). So how does less capacity translate to extra performance?

The ioDrive performs a lot of redundant writes and checks at the bit, die and chip level to confirm each write succeeded and the data is safe. For the most part the cost of this is masked by high-quality parts and intelligently programmed controllers, but during heavy writing the controller sometimes can't keep up with the write-flush commands needed to commit data to the NAND in a timely fashion. The capacity sacrificed in the improved and maximum write modes is therefore used to spread the redundancy load across the NAND chips, minimising the chance of a slow down while keeping that factor of safety.

So while a single user is unlikely to really punish the ioDrive write-wise in any mode, and should be happy with the Maximum Capacity setting, we should start to see benefits in harsh situations like a multi-user environment. IoMax tells us that the card also ships with 25 per cent more NAND chips than it needs, adding extra redundancy and helping it cope with wear levelling, and that it uses higher-quality, grade-three ECC NAND chips, whereas consumer drives use significantly cheaper grade one. It's even got its own dedicated parity chip.

The ioDrive supplies grade three NAND chips, with 25 per cent redundancy for extra wear levelling capability. (Credit: CBS Interactive)

Performance

Our testbench for this analysis comprised an Intel Q9550 @ 2.83GHz, an MSI P7N Diamond, 8GB of Corsair DDR2-8500 RAM and a GeForce 7600GS. To make sure maximum speed was available to the ioDrive, it was inserted in the first PCI-E x16 slot, with the graphics card in the second. Driver revision 1.25.73 was used, in combination with firmware 21460.

To illustrate just how much of a jump from traditional media the ioDrive is, we compared it against a Seagate Barracuda 7200.11 1.5TB mechanical drive and an Intel X25-E 32GB solid-state drive in Windows Server 2008 64-bit.

First up, the single-user test CrystalDiskMark, which allows us to test sequential reads and writes, as well as 512K and 4K random reads and writes. We set it to five runs with a 1GB transfer size on all the competing drives.
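There's no CrystalDiskMark on Linux, but if you want a rough sequential read figure to hold against the charts below, plain dd will do in a pinch. A quick sketch, again assuming the card is at /dev/fct0:

    # Read 1GB straight off the device, bypassing the page cache,
    # and let dd report the throughput when it finishes
    sudo dd if=/dev/fct0 of=/dev/null bs=1M count=1024 iflag=direct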

The mechanical drive gets slaughtered, most of all in the 4K random stakes, where its results are too small to register on this chart: the Barracuda scored 0.628MBps in reads, and 1.3MBps in writes.
(Credit: CBS Interactive)

On average, the ioDrive is three times faster on sequential reads and writes than the Intel SSD, and massively increases the random reads and writes. At this stage you'll notice that the different modes of the ioDrive make no appreciable difference and are really within the margin of error; they do, however, make a giant impact in other areas, which we'll investigate later. First, let's hit it up with an Iometer test for multi-user access stats, using the "file server" set of parameters as outlined by Intel. Four workers were set going with varying queue depths for a period of 10 minutes.
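For those wanting to reproduce something like this workload themselves, the open-source fio tool can approximate the Intel file-server pattern (80 per cent reads, fully random, block sizes weighted between 512 bytes and 64KB). The job below is our rough equivalent, not the exact Iometer configuration we ran; the device path and the fixed queue depth are illustrative:

    # Approximate Iometer's "file server" pattern with fio: four workers,
    # 80 per cent reads, fully random, Intel's block-size weightings,
    # run for 10 minutes. We varied queue depth in testing; 16 is an example.
    fio --name=fileserver --filename=/dev/fct0 \
        --ioengine=libaio --direct=1 \
        --rw=randrw --rwmixread=80 \
        --bssplit=512/10:1k/5:2k/5:4k/60:8k/2:16k/4:32k/4:64k/10 \
        --numjobs=4 --iodepth=16 \
        --runtime=600 --time_based --group_reporting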

IOPs show the mechanical drive being decimated — it peaked at 105.37IOPs.
(Credit: CBS Interactive)

How the above translates into MBps — the mechanical just managing to break 1MBps, compared to the Fusion's 291.51MBps. (Credit: CBS Interactive)

Response times are blisteringly fast for the SSDs, with the ioDrive only showing its advantage over the Intel X25-E when things get really heavy. (Credit: CBS Interactive)

Writes are always slower. Here the mechanical posts a maximum of 26.6IOPs. (Credit: CBS Interactive)

The same, but in MBps. The mechanical can't even manage 0.3MBps. (Credit: CBS Interactive)

Response times are typically longer for writes as well. (Credit: CBS Interactive)

In a multi-user situation the ioDrive absolutely dominates, and it's here that the different speed modes and Fusion-io's aggressive write strategy become apparent. If you have a heavily embattled server, this could go quite a long way to easing your concerns.

Conclusion

The Fusion-io ioDrive is in a performance field of its own. Home users are much better off RAIDing a few SSDs together; however, for those running servers that need extra throughput now, the Fusion-io represents an expensive but justifiable saviour.
