Another battle royale: SAN vs. local storage for VDI

Summary: The "experts" are at it again and I feel that it's my duty to set 'em straight.

TOPICS: Storage

If you've followed my posts, you know that I love the so-called "experts" and their often media-informed opinions about technology. It's good entertainment for me when one of them says that technology 'A' is the right answer and technology 'B' is too slow/expensive/difficult to manage/legacy or whatever the latest buzzterm is for them to use. Often my buddy, Jason Perlow, and I get together on a podcast, by phone or via IM and give these guys a good going over. Sometimes we make it public and sometimes we don't. Yesterday, I received a notice in my email about a discussion of SAN vs. local storage for VDI featuring Brian Madden and Gabe Knuth. Well, of course, I had to read it.

A moment of divergence: If you read Brian Madden's bio, you'll see that he considers himself to be a world-renowned desktop virtualization expert. OK, let's go with that. Though he lists no practical data center or technical experience, he has written a lot about desktop virtualization. I suppose one can become an expert as an observer. I know a lot of football (American football) experts who've never spent one moment on the field as a player--so, it happens.

The transcript is very hard for me to read*, so I searched for the original post and found it. I also found Ron Oglesby's rebuttal, where he says Brian is wrong. Although I don't know Ron or his complete background, he does seem to have some practical technical experience behind him. His posted title is "Cheif Solutions Architect at Unidesk." I won't fault him too much for not being able to spell "Chief" correctly.

My thought is that, again, there's no single right answer for this technology question. There are pros and cons for both. Consequently, I think both Brian and Ron are wrong--but they're also both sort of right. It's their "all or nothing" campaign that makes them more wrong than right.

How can they both be wrong when Brian says, "Local" and Ron says, "SAN"?

The reason is that it depends on the VDI user--you know, the person whom VDI is supposed to serve. For people who use Word, Excel, Outlook and Internet Explorer (or their Linux counterparts, if anyone's using Linux VDI), they can just boot from SAN and use that image. Heck, they could even use a shared Windows image--and contrary to what Brian says, it actually can be done. There are companies doing it. Parallels is one of those companies.

However, there are cases, rare though they are, where you'd want a user to have a locally-stored image. Those users require a "bigger" desktop in all instances--more RAM, more disk space, more CPU, more network bandwidth and faster disk access.

It's not really an all or nothing question or answer. If you're moving to a VDI solution, you'd better take the type of user workload into account or you're going to have some very unhappy users pounding on your door. In a perfect world, you'd need only one disk solution. The world is far from perfect, so you'll need local storage (I recommend SSD) and SAN. You'll possibly also want some other type of network attached storage (NAS) for storing documents and other data files that don't require SAN speeds.
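To make that mixed-tier idea concrete, here's a back-of-the-envelope sizing sketch. All the numbers (user counts, per-user image and data sizes, the function name itself) are illustrative assumptions of mine, not figures from the article:

```python
# Hypothetical capacity sketch for a mixed VDI storage design:
# SAN for typical users, local SSD for "bigger desktop" users, NAS for documents.
# Every number below is an illustrative assumption, not a recommendation.

def size_vdi_storage(san_users, local_users, image_gb=40,
                     local_image_gb=80, user_data_gb=20):
    """Return (san_gb, local_ssd_gb, nas_gb) for a mixed deployment."""
    san_gb = san_users * image_gb                 # per-user images on the SAN
    local_ssd_gb = local_users * local_image_gb   # locally stored SSD images
    nas_gb = (san_users + local_users) * user_data_gb  # documents on NAS
    return san_gb, local_ssd_gb, nas_gb

san, ssd, nas = size_vdi_storage(san_users=950, local_users=50)
print(san, ssd, nas)  # 38000 4000 20000
```

The point of the exercise: even when only 5% of users need local SSD desktops, you still end up buying all three tiers.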

You see, for most users, speed isn't all that critical. Once the image boots, everything required is in memory. Your biggest bottleneck isn't going to be disk I/O but network bandwidth. However, you can control some bandwidth problems by using VLANs or other network segmentation to separate "boot storm" traffic from regular data flow.
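One common way to do the segmentation described above on a Linux hypervisor host is 802.1Q tagged sub-interfaces via iproute2. This is a hedged sketch: the interface names, VLAN IDs and addresses are hypothetical, and your hypervisor or switch vendor will have its own equivalent:

```shell
# Hypothetical example: separate boot/streaming traffic from regular user
# data with 802.1Q VLAN sub-interfaces (iproute2). Names/IDs are illustrative.

# VLAN 100 carries image-boot/streaming ("boot storm") traffic
ip link add link eth0 name eth0.100 type vlan id 100
ip addr add 10.0.100.5/24 dev eth0.100
ip link set dev eth0.100 up

# VLAN 200 carries regular user data traffic
ip link add link eth0 name eth0.200 type vlan id 200
ip addr add 10.0.200.5/24 dev eth0.200
ip link set dev eth0.200 up
```

The same separation can be done with switch-side VLAN assignment or any other segmentation your network team prefers; the point is keeping boot traffic off the data path.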

It is for the reasons I've given that VDI is very expensive. You need local storage for some users and SSDs aren't cheap. You need SAN for almost everyone else, which isn't cheap. And, you'll need NAS for longer-term or static storage. You'll also need a good backup and restore system for your data as well as your disk images unless you use a single desktop image (or a few master images).

The best scenario for a one-to-one deployment (one user to one desktop image) is to have a master image that everyone gets a copy of for their very own. No local data and no local changes are possible on that desktop image. This way, you only have to preserve one desktop image--or a small set of images based on user function--plus the few local images for those who require locally stored desktops.
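One way to implement that "everyone gets a copy of the master" approach is with copy-on-write clones, so only the master image needs preserving. This is a sketch under assumptions: the paths are hypothetical, and the use of qemu-img/qcow2 is just one example of the technique--VMware and Citrix have their own linked-clone equivalents:

```shell
# Hypothetical sketch: give each user a copy-on-write clone of one master
# image. Only the master needs backing up; clones are cheap and disposable.
# Paths and the choice of qemu-img/qcow2 are illustrative assumptions.
qemu-img create -f qcow2 -b /images/master-win7.qcow2 -F qcow2 \
    /images/clones/user-jdoe.qcow2
```

Because the clone only records deltas from the master, a "no local changes" policy also keeps each clone tiny.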

It isn't as simple as these guys tried to make it. It never is.

A bit of real world experience helps a lot.

What do you think about local vs. SAN storage for booting and storing disk images? Do you think it's all or nothing or do you think it's a mixture? Talk back and let me know.

*It's very hard to follow the language in it. I had no trouble with the actual words.

Kenneth 'Ken' Hess is a full-time Windows and Linux system administrator with 20 years of experience with Mac, Linux, UNIX, and Windows systems in large multi-data center environments.



  • For I/O

    Actually, speed matters in some cases. For instance, I'm trying to optimize database reads/writes for a large app (server side), and we're trying to do whatever it takes to optimize I/O - that sometimes means local instead of SAN/NAS.
    • I know but...

      I am just talking about VDI here.
  • Wyse

    Wyse Streaming Manager solution is a good take on VDI. Doesn't require SSDs for 'local' storage. Uses the SAN. Good performance. Flexible with apps.

    We like it.
  • Boot storms

    VDI makes a great case for SAN technology. Use SSD with disk tiering (or NetApp's PAM cards) to prevent boot storms while still getting the capacity you need from SAS drives. In some cases you could allocate just 5-10% of the total space from SSD and the rest from SAS.

    If you recommend DAS only for a VDI deployment, either it's a very small implementation for a very limited set of users, HA is not needed, or the consultant pitching it has absolutely ZERO experience with a SAN and isn't comfortable with it.

    A recent VDI deployment I participated in had dozens of disks in the pool (more than 40 that I'm aware of), all SAS. Boot storms galore for the 1,500 users accessing the system. All it took was 300GB of SSD, and the problem was solved. Well, that and some fancy tiering software in the SAN.
    • Boot storms

      Are definitely a concern with VDI. We have been testing Server 2012 with SMB storage using RDMA and Windows file servers, and we are seeing I/O comparable to SAN technologies (FC or iSCSI). SSDs are a great option if you have the money to spend, but we are seeing more than adequate performance in our setup at a fraction of the cost of doing this with SSDs. We are also planning to test Storage Spaces (also new to Server 2012) to see if that improves things. Storage, regardless of the scenario, is becoming commoditized.
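The numbers in the boot-storm thread above make for a quick sanity check: 300GB of SSD tier shared across 1,500 users works out to roughly 200MB of hot boot data per user, which is plausible when users share a small set of master images. The arithmetic, for the curious (only the two figures from the comment are inputs; everything else follows):

```python
# Sanity-check arithmetic on the deployment described above:
# 1,500 users, 300 GB of SSD absorbing the boot-storm working set.
ssd_gb = 300
users = 1500

hot_mb_per_user = ssd_gb * 1024 / users  # GB -> MB, spread across users
print(round(hot_mb_per_user, 1))  # 204.8
```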
  • Another battle royale: SAN vs. local storage for VDI

    Oh dear. Arguing SAN vs local storage for VDI assumes a false premise that VDI makes sense in the first place. What is presented and argued by Brian and Ron is frankly irrelevant. It's akin to arguing who gets the last berth on the Titanic.

    The marketed advantages of VDI (a VMware term) are that VDI lowers cost, improves security (everything is behind the firewall), offers centralized management, and better, more consistent user experience.

    One word answer: bulls--t! Neither the economics nor the physics of VDI make sense.

    But, let's assume it does make financial sense, and that it actually offers advantages over a well-deployed and managed Windows 7 "legacy" deployment. (Microsoft publishes a white paper detailing why it doesn't, and they get more cash when you deploy VDI, so maybe they know something the VDI vendors don't tell you!)

    But, how complex do we really wish to make delivering a desktop? This may sound simple on paper but really, come on. How many layers, management consoles, vendor tools, hardware components, etc. does it take to build a VDI deployment? Tower of Babel anyone?

    Phrases like "the management of which is scriptable" and "I'd store those on a NAS and stream them down to the VM with Citrix Provisioning Services, Doubletake Flex, or Wyse Streaming Manager" are hints at the complexity of VDI. Let's add more layers, more code to update, patch and keep in sync. And ask the server/storage and network teams to manage desktops. Yikes!

    Oh, one more little problem that Citrix & VMware will NEVER acknowledge.

    Redundancy and failover, and off-site disaster recovery. If a single PC dies, you lose the productivity of one worker. If VDI goes down, you are unemployed. Simple as that. So, how much money do you have for DR? Can you even do it?

    And another little problem. For low end task workers working in a call center using browser based apps, Terminal Services is cheaper and simpler than VDI.

    But, what about "power users" that need graphics, VoIP, video etc and most importantly, are mobile road warriors? VDI ignores the reality that today's user is mobile. Per Forrester Research, 40% of North American workers routinely work away from their desk, and that means mobility is key. And that means that the "last 30 yards of the network" is critical, and out of corporate control. Go into a Starbucks or an airport lounge and see what I mean. Try delivering a hosted Windows desktop to a mobile or off-line user over WiFi.

    As late as four weeks ago, both Citrix and VMware claimed that new network protocols, local delivery systems, upgraded networks etc solved these issues (however, they never acknowledged the additional cost in their Cost of Ownership models). Then they both spent millions of $$ to purchase vendors offering IDV solutions (Virtual Computer and Wanova) to "extend VDI coverage to mobile users and provide disaster recovery capabilities."

    Huh? Wait a minute--just last month you said you could do this, but now, with these acquisitions, you've finally solved the problem?

    And let's also not forget the claim that VDI's graphics performance problem has "crossed the final bridge" with the new server-mounted Nvidia 192-core graphics processor card. Of course, that only supports 100 active sessions, and it requires its own management console and an annual per-user software subscription fee. More cost, more complexity.

    No, the question is not SAN vs NAS storage, but why the hell are we doing VDI in the first place?
    • Nice!

      Good to see a cogent post on ZDnet. There may be hope for this site yet.
  • tsk tsk

    "It isn???t as simple as these guys tried to make it. <b>It never is.</b>"

    Using an absolute to refute an absolute--highly ill-advised.
  • storage factors to consider with VDI

    Great post. Some additional items worth noting:
    - VDI tends to be very write-intensive. Storage (either local or SAN) that uses flash to accelerate reads and mitigate boot storms is only a partial solution. Write-optimized file systems, such as log-structured file systems, can help address the write peaks.
    - Yet another quirk of VDI workloads is rapidly changing I/O demands. With auto-tiering, there is a time lag when moving data between tiers. Caching architectures that can populate flash with hot data quickly are much better suited for VDI.
    - Finally, high-performance SAN doesn't have to be expensive--there are innovative approaches to mixing flash and disk that deliver the performance VDI demands at a cost-effective price point.