Build a 10 Gbit home network for $1100

Summary: Built the ultimate gaming supercomputer? You've overclocked, water-cooled, matched DIMMs, added 10K RPM drives and the latest 1 GB video card.

TOPICS: Networking

Built the ultimate gaming supercomputer? You've overclocked, water-cooled, matched DIMMs, added 10K RPM drives and the latest 1 GB video card. But so have all your friends. What now? How about a 10 Gig home network for the ultimate gaming supercomputer?

In a pricing breakthrough you can now buy an 8-port 10 Gig switch, two PCI-Express 10 Gig adapters and cables for under $1100. It is the fastest network available for the dollar. Update: by comparison, the cheapest 10 GigE NIC at Newegg is almost $900.

One word, my friend: Infiniband

No, this isn't 10 Gig Ethernet. An average 10 GigE switch port costs over $2500 today, and the overhead of TCP/IP will bog down even hefty systems unless you buy a costly TOE (TCP/IP Offload Engine) adapter. No, this is Infiniband: a high-speed, low-latency, low-overhead network widely used in supercomputers, high-end storage and clustered computing.

Originally spec'd in 1999 by Intel, Microsoft and Sun (NGIO) and by Compaq, IBM and HP (Future I/O) to replace PCI, Infiniband has evolved into a general-purpose high-performance interconnect. As volumes have grown, prices have dropped, but this latest price-cutting iteration took me by surprise.

Drivers are available for Linux, Windows XP and OS X - though serious gamers aren't likely to be using the latter. The kit is available from Colfax Direct, a new e-store subsidiary of 20 year-old Colfax International.

Some pricing from their web site:

  • PCI-Express 10 Gbit adapter: $125
  • 8-port unmanaged switch: $750
  • Cables: range from $35 to over $900 for a plenum-rated 100 m length

The Storage Bits take

Networks and storage can often substitute for each other. With a 10 Gig low-latency network you can configure diskless workstations that really scream. While today's Infiniband networks are practical only for serious gear heads, early adopters will help point the way to a not-too-distant future when we all have 10 Gig home networks.

Comments welcome, of course. Disclosure: I have no relationship, financial or otherwise, with Colfax. I worked with Colfax's chip provider, Mellanox, at a previous company and found them a pleasure to deal with.




  • Necessary?

    I'm not sure even the "hardcore gamer" needs a 10Gbit home network. Heck, I stream HD content and I don't even have a need for a gigabit network... yet.

    Unless you're hammering the bejesus out of your network, it's not necessary. Neat though it is.
    • Latency, Latency, Latency!

      That's the only reason why you want to go overkill on the bandwidth, so you don't have to deal with poor latency. It's like gaming on a 5mbps/384kbps internet connection versus gaming on a 50mbps/5mbps internet connection.
      • er...

        Bandwidth is not the main determining factor when it comes to latency. It can be a factor yes, but there are a lot of other factors.
        • Ever done a massive LAN party?

          Even Gig-E with accelerated network cards (you know, those $200 and $300 network cards with the DSP on board to accelerate TCP/IP) can be slow once you have enough people playing Quake 4, Crysis, or other heavy-duty multi-player games on the same network.
      • Latency and bandwidth are different issues.

        Infiniband is low-latency - less than 100 ns through a switch - so it is not only fast, it
        is responsive.

        The Ferrari of networks.

        R Harris
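The latency/bandwidth split in this thread can be put in numbers. A minimal sketch, assuming illustrative figures (roughly 30 µs per gigabit Ethernet hop, 0.1 µs per Infiniband hop - assumptions for the sake of the arithmetic, not measurements):

```python
# Transfer time = wire latency + payload size / bandwidth.
# For tiny game-state packets, latency dominates; for bulk file
# copies, bandwidth dominates. All figures here are illustrative.

def transfer_time_us(size_bytes, latency_us, bandwidth_gbps):
    """Microseconds to move size_bytes over one network hop."""
    serialization_us = size_bytes * 8 / (bandwidth_gbps * 1000)  # Gbit/s -> bits/us
    return latency_us + serialization_us

# A 100-byte game update: latency is nearly the whole cost.
gige_pkt = transfer_time_us(100, latency_us=30, bandwidth_gbps=1)    # 30.8 us
ib_pkt   = transfer_time_us(100, latency_us=0.1, bandwidth_gbps=10)  # 0.18 us

# A 100 MB file copy: bandwidth is nearly the whole cost.
gige_file = transfer_time_us(100_000_000, 30, 1)     # ~800,030 us
ib_file   = transfer_time_us(100_000_000, 0.1, 10)   # ~80,000 us
```

For small packets the two networks differ by the latency ratio; for big files, by the bandwidth ratio - which is why the two commenters above are talking past each other.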
  • Only reason I see for a 10G LAN is...

    ...clustering machines. Be it for a render farm or whatever. GigE is more than enough for even the most network-intensive games with 50+ players (provided it's going to a decent switch). But the price is damn cheap for what you listed.
    • 10G

      I seem to remember the same argument when the 100 Mbit to 1 Gbit switch came about...
    • 10G LAN vs gigabit vs 100BASE-T - it's all the same!

      After running many exhaustive tests, I have found it rare that IDE (PATA and SATA) hard drives will transfer more than 8 megabytes per second over 100BASE-T.

      The same drives, over gigabit, will transfer... 8 megs per second!

      So under 10G, they will also transfer 8 megs per second.

      Yes, I have had bursts transfer faster than 8 megs per second, and so have you, yada yada. But run the numbers - in real-world applications, even touching the hard disk means waiting long periods of time. Compared to RAM, disks are slow. Even RAM-to-RAM transfers in the same computer are slow. Make two 128 MB RAM drives. Write a 128 MB file to the first, then copy the file to the second. You will be shocked at how long the transfer takes.

      The advantage of the faster speed is that the network doesn't plug up - the same wire can have more computers blathering away. It won't speed any one connection up; it will just clear out of the wire so another connection can be switched in. The data transfer rate is limited by the disk. If not the disk, then the computer's ability to process the incoming/outgoing packets.

      I doubt we will see a home user needing 10G speeds (or even being able to achieve 10G speeds on a 10G setup) in our lifetimes.
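The commenter's bottleneck argument reduces to a one-liner: end-to-end copy speed is the minimum over the pipeline's stages. A sketch, using the 8 MB/s sustained disk rate quoted above and theoretical link maxima:

```python
# Sustained copy rate is capped by the slowest stage in the
# pipeline - here the disk, no matter how fast the network gets.

def effective_mb_per_s(disk_mb_s, network_mb_s):
    """End-to-end file-copy rate, bottlenecked by the slower stage."""
    return min(disk_mb_s, network_mb_s)

disk = 8  # MB/s sustained, the figure quoted above

for link in (12.5, 125, 1250):  # 100BASE-T, gigabit, 10 gigabit, in MB/s
    print(link, "->", effective_mb_per_s(disk, link))  # always 8
```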
  • What 10Gb Home Network? Pfffft. Yeh, I've got that.

    Not. ;)
    D T Schmitz
  • Nice Blog

    No ranting.
    No statements involving NTFS or corruption.

    Genuinely nice. I think I'll start paying attention to your blog a little more often.
    • If we don't talk about corruption it won't happen?

      You're gonna love one of next week's posts. All about how Microsoft makes it hard to
      keep working when a disk fails . . . .

      Stay tuned!

      R Harris
  • Just what I needed

    File transfer should be a snap.
  • 80% of HPC market uses 1 gbps Ethernet

    80% of HPC market uses 1 gbps Ethernet for a good reason: Cost.

    You would need more than 2 hard drives in RAID-0 to generate more bandwidth than a 1 gigabit network.

    You could build a 10GBASE-CX4 network by using a crossover cable of some sort and skip the ethernet switch but it's only good for 2 computers.

    You could wait for 10GBASE-T to bring the price down with CAT-6 networks.
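The RAID-0 claim above checks out with rough era-typical numbers (the 60 MB/s per-drive sustained rate is an assumption for illustration):

```python
import math

# 1 Gbit/s of Ethernet carries at most ~125 MB/s of payload.
link_mb_s = 1000 / 8

# Assumed sustained rate for a typical 2007-era SATA drive.
drive_mb_s = 60

# Striped (RAID-0) drives needed to saturate the link:
print(math.ceil(link_mb_s / drive_mb_s))  # 3, i.e. more than 2
```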
    • George, the 10 gigE NICs are $900

      That's why everyone optimizes for 1 gigE.

      Sure, they'll come down in price eventually. But in the meantime this is the best
      price/performance network.

      Also, the Top500 supercomputer list is dominated by Infiniband-based systems.

      R Harris
      • Top500 doesn't really mean much, much more interesting metrics out there

        Top500 doesn't really mean much to me or any other business or consumer. The Top500 list is to see who has the most money to buy the biggest clusters of yesterday. Performance, price, and power consumption per node are far more interesting numbers for determining how to build a cheap cluster today.

        10G Ethernet will become common within 2 years so for now, infiniband has the advantage on price. But 80% of the HPC market doesn't really need it and they stick with simple gigabit Ethernet.

        You certainly don't need it for your normal servers, you need 10 gigabit for your switch-to-switch backbones and the file and backup servers. Nothing else really needs 10 gigabit today much less the home.
        • This is about WANT - not NEED

          George, there's a reason I put this in a gamer context: gamers are 21st century hot
          rodders. Much of what they do has no perceptible effect on game performance.
          They're just pimping their ride.

          Very few people NEED 10 gig, but it is coming anyway. 20 years ago I looked at
          how manufacturers were using 10 mbit Ethernet on the plant floor and found they
          were only using about 6% of the bandwidth. Today they have 100 mbit - probably
          1000 mbit - and they're probably still using 6% of the bandwidth.

          But your biggest misperception is that the list doesn't matter to
          business. The single largest specified application area on the list is FINANCE! Next
          is geophysics. Those 2 make up 23% of the total.

          Let's see. Money and oil. . . . hm-m-m?

          Oh, and Infiniband is more power efficient than Ethernet.

          R Harris
        • lol..

          this from the guy who is always touting Intel over AMD because their bleeding edge processors happen to be faster on a given day. When it's not a company that Georgie has a religious investment in (Microsoft/Intel), bleeding edge speed doesn't really mean much. It's a shame that religious fervor doesn't equate to usable information, otherwise Ou might qualify as a prophet. As it is, he's just another religious zealot on the Wintel crusade.
        • Consider this

          [i]Infiniband can make an incredible difference.

          Think of 20 machines trying to communicate at the same time through a switch. It takes approx. 30-120 microseconds for each communication. So a back-and-forth chat between two machines on a gigabit switch takes between 60 and 240 microseconds. IB can be as low as 2 microseconds, or 4 microseconds between 2. Think of all the microseconds that are wasted while a node sits and waits to hear from another. Those microseconds turn into seconds, which turn into minutes, etc etc etc.

          In a cluster a parallel job can have systems sitting idle 40% of the time. This all depends on the need for communication, but it is quite often that high. So a high-speed/low-latency interconnect can give you back 35-39% of the lost cycles, plus it can add 10-20x the bandwidth. Not a bad purchase for a cluster. So many people buy more nodes to speed up their cluster, and many are just adding wasted cycles.[/i]

          That is not my quote but from a friend known as "Gruntman"
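Gruntman's microseconds-add-up point is easy to verify with the figures from the quote (one-way latencies of roughly 30-120 µs per gigabit Ethernet exchange and 2 µs for Infiniband):

```python
# Round-trip cost of request/reply "chats" between two nodes,
# using the one-way latencies quoted above.

def total_chat_s(one_way_us, exchanges):
    """Seconds two nodes spend on back-and-forth exchanges."""
    return 2 * one_way_us * exchanges / 1e6

n = 1_000_000  # a million request/reply pairs over a job's run

print(total_chat_s(30, n))   # gigabit, best case:  60.0 s
print(total_chat_s(120, n))  # gigabit, worst case: 240.0 s
print(total_chat_s(2, n))    # Infiniband:          4.0 s
```

A million exchanges is modest for a tightly coupled parallel job, and the minutes of idle wait on Ethernet are exactly the "wasted cycles" the quote describes.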
        • Come back to Earth George

          [i]Performance, price, power consumption per node is a far more interesting number to determine how to build a cheap cluster for today.[/i]

          Blue Gene/L was designed with energy efficiency in mind. While the predecessor on the list was a mammoth super cluster, Blue Gene/L was larger but consumed much less power and yielded even greater performance. And let us not mention scalability.

          10G LAN isn't even remotely usable today with a modern PC network unless you happen to be trunking together two very large backbones. Even then, the cost would probably be better offset by fiber. It would be fun to upgrade the earlier-mentioned $1800 cluster.
  • yikes

    Heh, wow, this would really have to be somebody with some serious money to spare.