Building our massive storage Media Tank

Summary: We continue our massive Media Tank story by answering the question almost everyone asked: what's inside? Read on, and we'll tell you all about it.

The case

The next problem was the case. I wanted a moderately compact case, and one that wasn’t ugly. This was going to be sitting in the media room and it had to at least look vaguely stereo-equipment-ish. As it turns out, it looks more like a Dalek than a stereo component, but that works, too.

Key to this requirement was at least two sets of three contiguous 5.25-inch drive bays. I was going to slide the drive cages into those openings, so I needed to be able to fit them in the gaps.

I really like the case we found, the Thermaltake V5 Black Edition, which is also, unfortunately, no longer available. These are really sweet cases, and they were only $60 each. They also have a very robust handle on top, which is surprisingly helpful (especially when we were moving).

[Photo: A view of the case (Image courtesy Newegg)]

As with most DIY-IT projects, if you decide to go off and build one of your own, you'll never find the exact components I used, but you'll want to find equivalent units that do the same job.

Power

As you might imagine, ten drives draw a considerable amount of power. However, since the Tank was not hosting other power-sucking PC components (like honking video cards or overclocked processors), I could get away with a big, but not huge, power supply.

I eventually settled on a $90 Corsair 650 watt ATX supply, which has behaved itself reasonably well since we first fired it up.
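
If you're sizing a supply for a similar build, the back-of-the-envelope math is straightforward. Here's a rough sketch; the per-drive and base-system wattages are my ballpark assumptions, not measurements taken from the Tank:

    # Rough PSU sizing for a ten-drive storage box (Python).
    # All wattage figures are ballpark assumptions, not measurements.
    DRIVE_COUNT = 10
    SPINUP_WATTS_PER_DRIVE = 30   # 3.5" 7200RPM drives can briefly pull ~25-30W at spin-up
    IDLE_WATTS_PER_DRIVE = 8      # steady-state draw is more like 5-10W
    BASE_SYSTEM_WATTS = 120       # board, CPU, RAM, fans; no honking video card

    worst_case = BASE_SYSTEM_WATTS + DRIVE_COUNT * SPINUP_WATTS_PER_DRIVE
    steady_state = BASE_SYSTEM_WATTS + DRIVE_COUNT * IDLE_WATTS_PER_DRIVE

    print(f"Worst case (all drives spinning up at once): ~{worst_case}W")  # ~420W
    print(f"Steady state: ~{steady_state}W")                               # ~200W

Even with every drive spinning up at once, a 650-watt unit leaves comfortable headroom.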

Riding the technology price/performance curve

Now, as you may have noticed, the cases actually came with a pile of internal drive bays. I didn’t, technically, have to install my sliding trays, because I could have just mounted all the drives (or most of them, anyway) inside the case. But I wanted the ability to add, remove, and swap drives from the Tank without having to open everything up.

The ability to easily swap drives without opening the box was actually one of the key parts of our design strategy. That’s why I used the sliding trays.

I approached it this way because I didn’t want to have to buy all the drives at once. As the drives filled up, we added more. Also, I know that drives fail, and I wanted replacement to be easy without tearing apart the whole case. In fact, one drive did fail a few months ago, and replacement was a snap.

In terms of strategic drive purchasing, drives usually have a sweet spot (unless there's a major flood in Thailand). There always seems to be one drive capacity that's comparatively cheap per terabyte compared to the others.

Drives also regularly come down in price. When we first started equipping the Tank, 1TB 7200RPM drives were in the $169 range. Last week, I bought four 2TB 7200RPM drives for $85 each.

So, the idea was that as we digitized our media content, we’d add drives. Over the time it took us to use up more capacity, the higher capacity drives came down in price, making the whole thing more cost effective. I started off by re-using a bunch of leftover 1TB drives, and then as we needed more space, I moved the content over to 2TB drives, which is where we are now.

Had I bought the larger capacity drives all at once, I would have spent roughly three times as much, and incurred more wear and tear on always-spinning platters.
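
To put rough numbers on that (using only the two price points mentioned above; the 20TB target is purely illustrative, not the Tank's actual capacity):

    # Cost-per-terabyte math using the article's two price points (Python).
    early_price, early_tb = 169, 1   # 1TB 7200RPM drive when we started the Tank
    later_price, later_tb = 85, 2    # 2TB 7200RPM drive bought last week

    early_per_tb = early_price / early_tb   # $169.00 per TB
    later_per_tb = later_price / later_tb   # $42.50 per TB

    target_tb = 20                          # hypothetical capacity target
    print(f"Buying {target_tb}TB up front: ${target_tb * early_per_tb:,.0f}")  # $3,380
    print(f"Buying {target_tb}TB later:    ${target_tb * later_per_tb:,.0f}")  # $850
    # Our real staged purchases landed between these extremes --
    # hence "roughly three times as much."

Buying capacity only as we actually needed it let us ride that curve down.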

Assembling the Tank

Assembling the Tank was a lot of fun, in large part because projects are really fun to do with my wife. We work well as a team, both when pulling the components together and during the assembly phase, where she’s a lot more patient with fiddly parts than I am.

There are a lot of cables jammed into that case, and opening it is non-trivial (which is why I haven’t taken any interior pictures for you). But the big challenge was mounting the drive cages inside the case’s 5.25-inch drive bays. As it turned out, there were some metal mounting brackets that got in the way.

My wife went to town with some hand tools, swiftly and efficiently removing those brackets, and actually performing precision alterations (as well as some profanity-fueled brute-force bashing) on the interior of the case so everything would fit right. I love my geeky girl!

Stay tuned

So there you go. The guts of the Media Tank, Mark I. There’s more to come with this story. Stay tuned.

Talkback

  • Keep 'em coming

    Most interesting stuff I've read on ZDNet in a while. I have an Aspire Easystor H340 that does the trick for now, but I'll eventually outgrow it (or it will die at some point), so I'm taking notes here.
    MichelR666
  • Awesome!

    I want to build one kinda like yours, David. Is it really that hard to just take the case cover off and snap a few pics? Nothing too extreme, depending on where the Tank sits.
    Please?
    JustWow2000
  • I will keep my advice short...

    For storage, never, ever use anything but ZFS!
    If you value your data, that is. :)
    danbi
    • That is poor advice

      So we are fortunate that it was short.
      John Zern
  • XP, David?

    I'm generally not one of the people around here who participate in OS wars or anything, but if you're making a huge dedicated storage array whose life's purpose is to share folders, why not use one of the many excellent appliance operating systems?

    My home storage server runs NAS4Free, and runs very well. FreeNAS, OpenFiler, NexentaStor, and OpenMediaVault are all explicitly designed for the very purpose for which you're assembling the hardware. Additionally, some of the distributions above support ZFS, which is among the best file systems available for highly resilient storage.

    I know you said that Linux finds its way to annoy you in one form or another, and I completely concur with your decision not to install Ubuntu/Fedora/CentOS and try to coax it into not making a mess. However, purpose-built Linux appliances are generally more stable and user friendly, and in nearly every case they use a browser-based control panel instead of requiring a command prompt. I've never had to touch the command line on my box.

    Check 'em out before you opt for XP. Windows may be familiar, but NAS4Free/FreeNAS/OpenFiler/NexentaStor/OpenMediaVault are decisively optimized for storage. Truth be told, molding XP into a storage server is roughly as much work as taking a lap around the UIs of these projects.
    Joey
    voyager5299
    • Nope

      I have never found Linux to be more stable than NTFS-based machines. I've had EXT3-based Linux machines (which is what, years ago, we started with as a file server) crash to the point of unrecoverability, while the NT server we ran stayed up for something like five years without a restart.

      ZFS may be well done, but it's complex and persnickety. NTFS just works. I've never, ever lost a system to NTFS, but had a variety of Linux machines fail in completely unrecoverable and nasty ways.

      The Tank is a good example. Even though it's grown, it's been rock-solid robust for the last three years. That's because of NTFS. XP, for this purpose, is merely a shell around the NTFS and SMB environment.
      David Gewirtz
      • ZFS

        Let's just say that ZFS is not a Linux thing. Linux might have some good spots, but file systems are not one of them.

        ZFS was designed by Sun for Solaris and is best supported on Solaris, its derivatives (OpenIndiana and the like), and FreeBSD. There is a Linux port of ZFS, but it is still not very well integrated. There is also an OS X port of ZFS which, likewise, is not very well integrated. All of these platforms will do fine with ZFS for storage (even Linux and OS X). To my knowledge, there is no usable ZFS port for anything Windows.

        ZFS is by far the best possible platform for storage. It integrates the device management, caching, volume management, block device, and POSIX filesystem layers. ZFS ensures reliability by using end-to-end checksums for every block it writes, plus plenty of metadata redundancy to recover from many different data corruption scenarios. In fact, as long as your "computer" hardware, such as the CPU and memory, is good, ZFS will take care of your data consistency.

        One of the original design goals of ZFS was to cope with the ever-increasing size of disk drives against the more or less constant error correction drives provide. For example, consumer disks are typically rated at an unrecoverable error rate of one bit per 10^14 bits read. This means that for every ~12.5TB of data you read, you should expect at least one bit to come back wrong. That is, with any other file system, reading your entire "tank" will eventually hand you at least one file that is read wrong. Now consider that you process that file and write it back... with the wrong data. Hopefully you get the idea.
        Going with the much more expensive enterprise drives only improves that to one bit per 10^15, which on an array this size means an expected error after roughly ten full reads...
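
        The arithmetic, if you want to check it (the 12.5TB "tank" size below is just the capacity implied by the consumer rate, not David's actual array):

            # Unrecoverable-read-error arithmetic (Python).
            consumer_ure_bits = 10**14    # one bad bit per 10^14 bits read (consumer)
            enterprise_ure_bits = 10**15  # one bad bit per 10^15 bits read (enterprise)

            tb_read_per_error = consumer_ure_bits / 8 / 1e12
            print(f"Consumer: one bad bit every ~{tb_read_per_error:.1f}TB read")  # ~12.5TB

            tank_tb = 12.5  # hypothetical array size equal to the consumer figure
            full_reads = (enterprise_ure_bits / 8 / 1e12) / tank_tb
            print(f"Enterprise: an expected error after ~{full_reads:.0f} full reads")  # ~10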

        It doesn't matter if it's NTFS, FAT32, EXT3, UFS, or HFS+. Unless your huge file store does integrated end-to-end checking for *everything*, your data is silently corrupting. You should have kept your paper receipts :)

        You think it's rock solid only because you haven't yet discovered the data corruption.
        Been there, done that. Never be naive with storage again.
        danbi
      • In Agreement

        I'm in agreement with you, David. For a while, I drank the sweet purple Linux juice, only to find that it would let me down. Granted, most of my experience is with Red Hat and all of its variants, such as CentOS and Fedora.

        Starting with Windows XP/Server 2003, my issues have been very small. With the advent of Vista/Server 2008 and all later versions (OS versions 6.0, 6.1, and recently 6.2), losing data on an NTFS Windows share has been a non-issue--even on a failed Dell PowerVault MD1000/3000.

        If I ever take on a personal project--and even a professional one in which I have controlling interest--I will use a Windows Server backend for the storage and let OS X and Linux clients and servers play nicely with it.
        DarienHawk67
      • I agree

        When building my home storage server, I really wanted to go with ZFS. Since my NIC was not supported by OpenIndiana, I got cornered into Linux and went with Ubuntu Server.

        It works great for me, but I fully agree with you, David, that for an all-Windows network it would be easier to get XP fully optimized than to go through all of the fun that I've had with Samba.
        swoarrior
    • Those appliance OS's.....

      Such as OpenFiler, are usually Red Hat-based systems. I have used OpenFiler many times, and it has never failed on any server I installed. With a good RAID controller, David's Tank could be built for the same cost and would be much more reliable, too.

      But XP??? Come on David......
      linux for me
  • Were These Full Retail Copies Of XP?

    Because you're not allowed to install OEM XP licenses on a machine for your own use.
    ldo17
    • Fully compliant

      Over the years, we bought a tremendous number of XP licenses, which were on machines since taken out of service. It is entirely within licensing parameters to move a purchased license from one hardware machine to another.

      I very rarely bought machines with OEM XP licenses on them. During the XP era, with the exception of a few laptops, we built most of our machines and bought separate XP licenses. We've done the same with Win7, including my super-honker laptop, which I bought without an OS and then bought and installed my own.
      David Gewirtz
  • Use Server 2008 R2/2012 Eval

    As I am pretty sure you already know, you can legitimately use the Windows Server 2012 (or 2008 R2) 180-day eval edition. With a single, fully MSFT-supported command line, you can have your fully authorized and activated "server" running for a very long time. Note that I am not talking about any hacks, cracks, etc. The server can be fully patched and updated with all of the latest WGA technologies.

    The Eval server should still be very active by the time you decide to build the Mark [x].
    DarienHawk67
  • Time for an upgrade ?

    Nice article, but you might want to start thinking about an upgrade. XP is old, and its SMB stack is old and slow. If you were to upgrade to Server 2008 or 2012, you would get SMB2 and some serious networking improvements. I would also recommend some level of RAID protection, just in case one of those drives dies. Also, a backup strategy for this beast might be in order, as a nifty flood or fire could eradicate your known universe.
    Another thought... an external 8-bay tower with internal hardware RAID, connected over USB 3.0 and stuffed with 2 or 3TB drives... Then you could use any stock PC as the NAS gateway.
    Iozone