
NAS Wars 2017: Behind-the-scenes and final results

Welcome to ZDNet's DIY-IT project lab, where we've been stress testing RAID devices for your entertainment and edification. We put seven NAS boxes to the test and, in this final installment, we take you behind the scenes and summarize our results.
Written by David Gewirtz, Senior Contributing Editor

Network attached storage boxes have gone from mere storage devices to what are essentially desktop departmental or small business servers. Most offer a wide range of add-on applications that take them into the general purpose server realm.

Even with all that add-on fluff, their most important role is to protect your stored data. Over the course of this project, I subjected seven of the most popular NAS boxes to the ultimate test: how well do they protect your data?

I also want to put this insane project into context: it has been huge. Doing real testing at the storage level is incredibly time consuming, and the whole effort took about three months to complete.

Over the course of those months, I stress tested seven devices: the Synology DiskStation DS1817+, the QNAP TVS-473, the Buffalo TeraStation 5410DN, the Terramaster NAS F2-420 (yes, there's a TeraStation and a Terramaster -- it hurts my head, too), the Western Digital MyCloud PR4100, the beastly ioSafe 1515+, and the Drobo 5N2.

Understanding RAID

The core tests were all about the RAID functionality. Let's talk about RAID for a moment, because all your data's life depends on it.

The RAID concept was introduced in a UC Berkeley (go Bears!) research paper back in the 1980s. The authors described the concept as Redundant Array of Inexpensive Disks.

Over time, however, the term has come to mean Redundant Array of Independent Disks. Since RAIDs also work with SSDs, even that's not quite accurate anymore.

In any case, think of RAID as a stack of drives. Once you have a bunch of drives, there's a lot you can do with them. For example, if you want to speed up writes, you can alternate data between drives, so one drive can keep working while the other accepts the next chunk. That's called striping.

While all these boxes can do striping, that's not what I tested here. The big value of RAID boxes for most businesses is their protection of data in the event of drive failure. That's called mirroring. The idea is that data written to one drive is also written to another.

That way, even if a drive completely fails, you can still access your data. You should be able to remove the failed drive, install the new drive, and keep working. The RAID will rebuild the redundancy on the new drive and your data remains secure.
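The mirroring idea described above can be sketched in a few lines of code. This is a conceptual illustration only, with a made-up `Mirror` class; real RAID mirroring happens at the block level inside the controller or operating system, not in application code like this.

```python
# Conceptual sketch of RAID 1 mirroring (illustrative only; real RAID
# works at the block level in the controller or kernel, not like this).

class Mirror:
    """Writes go to every drive; reads fall back to any surviving copy."""

    def __init__(self, num_drives=2):
        self.drives = [dict() for _ in range(num_drives)]  # block -> data

    def write(self, block, data):
        for drive in self.drives:          # duplicate every write
            drive[block] = data

    def fail_drive(self, index):
        self.drives[index] = None          # simulate a dead drive

    def read(self, block):
        for drive in self.drives:          # read from any healthy drive
            if drive is not None and block in drive:
                return drive[block]
        raise IOError("data lost: no surviving copy")

m = Mirror()
m.write(0, b"payroll.xlsx")
m.fail_drive(1)                            # one drive dies...
assert m.read(0) == b"payroll.xlsx"        # ...data is still readable
```

The key property, and the one my tests probe, is that last line: a single drive failure should never make the data unreachable.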

Our RAID testing process

That brings us to our tests. I ran all seven boxes through three key RAID tests. I used the same four drives for all my tests. Let me be clear here. I didn't just use identically configured drives. I used the same physical drives and moved them from machine to machine. That way, I could reduce the variability in the experiment.

At the beginning of each test, I installed two 2 terabyte drives in each machine and made sure the RAID mirroring was active. I used relatively small drives for initial testing because the RAID rebuild process takes quite a while.

When it came time to test a given box, I booted up the NAS and copied over about 700 gigabytes of media production files. These files were a wide assortment of types, ranging from large videos to audio files, PDFs, Word documents, and more. They're the real documents used in the production of a few of my projects.


Since these files were on a Mac, they gave me a perfect opportunity to test whether the NAS I was testing was able to support Mac files cleanly and without fuss. Of the seven machines I tested, two of them, the QNAP and the Buffalo, failed that test. Each lost points in my overall rating because of this failure.

Even with smaller-capacity drives, and less than a terabyte of sample files, it took almost three weeks just to run the failure and rebuild tests across the entire set of seven candidate servers.

With each candidate machine, after copying the files, I waited overnight for the array to fully synchronize. That way, I could be sure the mirroring would be fully functional.

Drive failure recovery test

Once a collection of files was copied to the initial test RAID array, it was time for the first RAID test. The purpose of the first test, the drive failure recovery test, was to see how well each machine handled a drive failure. That, after all, is the fundamental purpose of a RAID -- protecting your data when a drive fails.

Since it wasn't possible to reliably simulate a drive failure while the drives were running, I used a more structured test. I have, in my collection, a bad drive. It won't boot, can't be recognized by Windows, and is, not to put too fine a point on it, basically toast. It is the living embodiment of a drive gone bad.

Yes, I collect things like this. I also have a brain in a jar, but that's another story.

This same bad drive was used in each machine to simulate a drive failure. For each machine, I shut down the NAS being tested, removed the second drive, and replaced it with the dead drive. I then rebooted the NAS and waited to see what happened.

My goal was to determine what kind of visible or audible notification each device would provide upon failure. I wanted to determine, first, whether the NAS could detect the failed drive at all. Then, I wanted to determine how each NAS would report it.

I also looked at other methods of notification for a drive failure, whether through email or SMS, or both. This is actually a bit more complex than you might think, because if you're running the NAS in a home office, you might run afoul of your ISP blocking some ports.


To do email notification the old-fashioned way, you have to specify an SMTP server and an account. Not all home broadband providers allow this. For example, my cable company will not allow me to connect to an SMTP server outside their domain.
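For readers who haven't set this up before, the "specify an SMTP server and an account" step looks roughly like the sketch below. The host, port, and addresses are placeholders I made up for illustration, not any vendor's actual defaults, and the send is left commented out since it requires real credentials.

```python
# Illustrative sketch of old-fashioned SMTP alerting -- the kind of setup
# these NAS boxes ask for. Host, port, and addresses are placeholders.
import smtplib
from email.message import EmailMessage

SMTP_HOST = "smtp.example.com"   # placeholder: your provider's SMTP server
SMTP_PORT = 587                  # common mail submission port (STARTTLS)

def build_alert(device, detail):
    msg = EmailMessage()
    msg["From"] = "nas-alerts@example.com"
    msg["To"] = "admin@example.com"
    msg["Subject"] = f"[{device}] drive failure detected"
    msg.set_content(detail)
    return msg

def send_alert(msg, user, password):
    # This outbound connection is the step an ISP can block.
    with smtplib.SMTP(SMTP_HOST, SMTP_PORT) as smtp:
        smtp.starttls()
        smtp.login(user, password)
        smtp.send_message(msg)

alert = build_alert("NAS-01", "Drive 2 reports failure; array is degraded.")
# send_alert(alert, "user", "password")  # uncomment with real credentials
```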

To help users with this limitation, some of the NAS vendors provided a link to a central service or used an API to connect into Gmail to send notifications.

Recover to a mismatched drive test

Now, let's go back to the RAID tests. In the early days of RAIDs, for mirroring to work at its best, you needed to have matched drives. If a drive failed, you had to replace it with an identical drive, or the system would either fail or take performance hits.

The downside of this was the expense and storage issues associated with keeping extra identical spare drives, and the stress of knowing that once you ran out of those spares and the model needed is no longer on the market, you'd probably have to replace your RAID.

With modern file systems, you should be able to use mismatched drives. Keep in mind, though, that it's always a good idea to keep one or two spare drives around in case you need them.

So for our second test, I removed the known bad drive and, instead of replacing it with a 2 terabyte drive that matched the remaining working drive, I put in a 3 terabyte drive from a different vendor.

Would the storage array accept the mismatched drive and rebuild the array? Most of the NAS boxes passed this test, with the exception of the Buffalo (which required some rather extensive, unnecessary, and painful intervention). As you saw in my review of the Buffalo, painful was a theme with that box.

RAID grow-over-time test

For the final RAID test, I wanted to see if the RAID would grow with you over time. This is about storage capacity. In a two-drive mirror, the maximum storage capacity has to be that of the lesser-sized drive. In reality, there's also some capacity used up by the RAID housekeeping.

At the end of Test 2, each RAID box had one 2 terabyte drive and one 3 terabyte drive. The capacity was dictated by the smaller, 2 terabyte drive.
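That smallest-drive rule is simple enough to state as a one-line calculation. The 5 percent housekeeping overhead below is an illustrative assumption, not a number I measured; actual overhead varies by vendor and file system.

```python
# Rough usable-capacity rule for a two-drive mirror: the array can only be
# as large as its smallest member, minus some RAID/filesystem housekeeping.
# The 5% overhead figure is an illustrative assumption, not a measurement.

def mirror_capacity_tb(drive_sizes_tb, overhead=0.05):
    return min(drive_sizes_tb) * (1 - overhead)

print(mirror_capacity_tb([2, 3]))   # the 2 TB drive limits the array
print(mirror_capacity_tb([3, 4]))   # after a swap, the 3 TB drive limits it
```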

But what if you want your RAID to grow with you over time? When I bought my first Drobo seven years or so ago, all the drives were mere 1 terabyte drives. We were thrilled to have that much storage. Today, you can get a 10 terabyte drive for under four hundred bucks.

Ideally, a RAID NAS box would allow you to swap out drives, growing your array capacity over time. That's what Test 3 aimed to measure. To accomplish this, I removed the existing 2 terabyte drive from the 2/3 array and replaced it with a 4 terabyte drive.

Now, the mirror capacity should be dictated by the 3 terabyte drive, not the 2. That's Test 3: would it be possible to increase the storage capacity to account for the fact that the machine grew from a pair of 2 terabyte drives to one 3 terabyte and one 4 terabyte drive?

The Buffalo and the Terramaster failed this test.

All told, those three tests gave us a really solid, hands-on understanding of how each NAS array performs under stress.

File transfer performance

Next was a test of file transfer performance. To eliminate any possible network connection issue or difference, I ran a wired network connection between a test computer and each box, one by one.

Every box shared the same network cable and ran across the same switch. I moved the cable from box to box when I moved on from one test machine to another.

I used the Blackmagic Speed Test to benchmark each box. In write tests, there was about a 10 percent variability from the fastest to the slowest. Read tests were even closer, with about a 6 percent variability from fastest to slowest.
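For clarity, the variability figures above are relative spreads: the gap between the fastest and slowest box, divided by the fastest. The throughput numbers in this sketch are made up for illustration; they are not my actual benchmark results.

```python
# Relative spread between fastest and slowest results. The MB/s figures
# here are illustrative placeholders, not the actual benchmark numbers.

def variability(fastest, slowest):
    return (fastest - slowest) / fastest

print(f"{variability(110, 99):.0%}")   # a ~10% write spread
print(f"{variability(112, 105):.0%}")  # a ~6% read spread
```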

Usability and pricing

Beyond that, I looked at fit and finish, the number of available apps, whether or not the NAS supported a network recycle bin, and the user interface.

I entered scores and weightings for all of these factors into one large spreadsheet, then assigned points and stars.

Because I wanted to find a consistent way to compare prices, even though I had boxes ranging from one bay to eight, I settled on a price-per-bay measurement. The least expensive box was $100 per drive bay, while the most expensive was $380 for each bay.

One thing I found particularly interesting was that two of the boxes I'm recommending most highly turned out to be the least expensive on a price-per-bay basis.

While unexpected, I thought that was pretty cool. NAS devices aren't cheap, but you don't have to buy the most expensive devices to get the best quality. In fact, with the exception of the hardened ioSafe, the opposite is true.

Final results

So let's wrap things up.

When it comes to RAID performance, the clear winners are the Synology, ioSafe, QNAP, and Drobo. If you want lots and lots of features, you'll be happy with the Synology, the armored ioSafe, or the QNAP. If you want a set-it-and-forget-it storage appliance, you'll like the Drobo.

It's difficult to call out one that's best, because everyone has different needs. That said, there's no question a Synology box is the machine I'd recommend for most small businesses, creative folks, and departments. Not only is it one of the least expensive, but its design is also clearly best in show.

The machine I tested here has eight bays, but you can get Synology boxes with as few as two bays and as many as 24 (or possibly more, because expansion bays are available). Earlier this year, I tested a four-bay Synology 916+ and it was quite excellent.

If you want a device that's purely an appliance, with very few additional features, the Drobo is an excellent RAID, but it is extremely basic. Its non-web UI hasn't been updated or improved much since I bought my very first Drobo back in 2009. You also can't use the dashboard on Linux, which is an unnecessary limitation.

If you want to hook your NAS directly up to your TV for a media center, the QNAP is the way to go. Just be aware that it failed the Mac file compatibility test, so my good recommendation is limited to Windows users.

The Terramaster and Western Digital did reasonably well, with one failure each. Check out the individual reviews for more info on my test results.

Finally, given the wealth of other offerings on the market, I just can't recommend the Buffalo. They're very nice folks, but their product, from the interface to the file system, seems dated, convoluted, and poorly designed, and it doesn't do what it's supposed to do. It might have been barely compelling in 2007, but not in 2017.

That's it for this project. A big shout-out of thanks goes to all the vendors, who provided the gear. All were a pleasure to work with.

You can follow my day-to-day project updates on social media. Be sure to follow me on Twitter at @DavidGewirtz, on Facebook at Facebook.com/DavidGewirtz, on Instagram at Instagram.com/DavidGewirtz, and on YouTube at YouTube.com/DavidGewirtzTV.
