VMFS-3, How Do I Despise Thee

Summary: The VMFS file system is one of the core technologies used in VMWare environments. Unfortunately, it's also a completely proprietary black box that makes interoperability nearly impossible.


The VMWare Cluster Locking File System, version 3 (VMFS-3) is one of the core technologies used in VMWare ESX/vSphere 4 virtual infrastructure environments. Unfortunately, it's also a completely proprietary black box that makes interoperability nearly impossible.

One of the perils of being a practicing systems integration expert versus someone who strictly writes about or reports on technology is that when it comes to taking care of paying customers versus attending trade shows, my customers come first. So while I would love to attend every industry trade show that would allow me to network with other industry peers and touch base with the companies that I write about, it's not always possible.

VMWorld 2009 in San Francisco was a trade show I would have loved to attend this week, but the practical realities of having to actually use the technology in question -- VMWare vSphere -- prevented me from covering the event. In this case, I had a customer that had to perform a business continuity recovery exercise over a 48-hour period, which required my assistance as the VMWare lead: working round-the-clock shifts in a cold datacenter, sleep-deprived, bugged-out on fluorescent lighting and hopped up on dispenser-machine coffee, Dunkin' Donuts Munchkins and MSG-laced snack foods.

Among other systems which needed to be restored at the target site, the exercise involved restoring a number of Virtual Machines that the customer had supplied to us on commodity 1-Terabyte external SATA hard drives, which they had copied out of their production VMWare ESX 3.5 environment at their originating data center.

As a best practice, I normally would recommend that production VMWare data be SAN-replicated over a Wide Area Network, but in this case the customer, for whatever reason (probably as a cost-saving measure), chose to physically send us actual FAT32-formatted hard disks containing copies of the VM directories with the .VMX and .VMDK files.

No big deal, right? Just physically attach the USB/eSATA hard disks to the VMWare ESX server's USB 2.0 or eSATA ports and copy the files in. Won't be the fastest of restores, but it will work just fine.

Uh, hold on there cowboy. Sorry, you can't do that.

Had we been dealing with just about any other x86 operating system and/or hypervisor, the procedure I described above would work just fine -- on Windows Server 2008 Hyper-V and on a multitude of Linux distributions, UNIX, Xen flavors and KVM included. Hell, even on Macs. But on VMWare ESX, that just isn't the case.

You see, VMWare's ESX bare-metal hypervisor is a black box. The only way you can move data in and out of its VMFS-3 file system is with VMWare's own proprietary tools: in this case, the VMWare vCenter Client, which is used to remotely administer an ESX box from a Windows workstation or server, or the Linux command-line console, which is only available on the full-blown VI3/ESX 3.5 or vSphere 4.0 product, not the embedded ESXi version that is becoming increasingly popular in environments using server blades.

For those not familiar, the ESX hypervisor itself can talk to local VMFS-3 storage, or to iSCSI or Fibre Channel SAN-attached VMFS-3 storage. It can also talk to remote NFS storage through its vmkernel interface over the network. However, it CANNOT talk to the USB ports on the hardware the ESX server itself is running on, nor can it locally mount any file system other than its own VMFS, apart from files on the CD-ROM or DVD drive.

So if someone provides you with a disk that contains copies of your VMWare virtual machines, how do you transfer them? Well, in our case, since we didn't have replicated VMFS-3 LUNs that we could simply connect to as a regular VMWare ESX datastore over the SAN, we had to connect the eSATA drives to a PCI eSATA adapter in a Red Hat Linux server, export the storage as NFS, and use the vmkernel NFS interface on ESX to copy the files over the gigabit LAN with the Windows-based VMWare Infrastructure Client.
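The relay we cobbled together looked roughly like this -- a sketch only, since the IP addresses, paths and the datastore label below are illustrative assumptions, not the actual values from the exercise:

```shell
# On the Red Hat Linux server with the eSATA adapter attached:
EXPORT_DIR=/mnt/esata              # where the external drive is mounted
ESX_SUBNET="192.168.1.0/24"        # assumed ESX management network
EXPORTS_LINE="$EXPORT_DIR $ESX_SUBNET(ro,no_root_squash)"
echo "$EXPORTS_LINE"               # line that would go into /etc/exports
# ...then on that box: exportfs -ra

# On the ESX service console, attach the export as an NFS datastore:
NFS_HOST=192.168.1.50              # assumed IP of the Linux NFS server
echo "esxcfg-nas -a -o $NFS_HOST -s $EXPORT_DIR esata-restore"
```

Once the datastore shows up in the client, the VM directories can be browsed and copied onto VMFS-3 like any other datastore-to-datastore move.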

Typically, the vmkernel NFS interface is used in conjunction with fast NAS appliances with large RAID stripes, such as NetApp devices and the like, which cost tens of thousands of dollars -- not with consumer-grade hard disks that you buy at the local Staples or Best Buy, hooked up to some random Linux server. So to say that this jury-rigged solution was not optimal for data transfer would be a gross understatement.

Suffice it to say that it took us well over 10 hours to copy 1 Terabyte of data across the network using this method. We had several false starts and aborted transfers, including a few ESX crashes along the way due to network and contention problems, so it really took us about 12-14 hours to do the job and get all our VMs fired up before we could even begin the incremental tape restores. I was not a happy camper, and neither was the customer.
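For perspective, the back-of-envelope math on that transfer (assuming a decimal Terabyte) shows just how far below gigabit line rate we were running:

```shell
# Effective throughput of a 1 TB copy that takes 10 hours:
BYTES=$((1000 * 1000 * 1000 * 1000))   # 1 TB, decimal
SECONDS_ELAPSED=$((10 * 3600))         # 10 hours
MB_PER_SEC=$((BYTES / SECONDS_ELAPSED / 1000000))
echo "effective rate: ~${MB_PER_SEC} MB/s"   # prints: effective rate: ~27 MB/s
```

Compare that to the roughly 125 MB/s theoretical ceiling of gigabit Ethernet: the makeshift NFS path delivered barely a quarter of what the wire could carry.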

An alternative to restoring data in this method would have been to use the VMware Consolidated Backup (VCB) product, which is in effect a VMFS-3 file system driver for Windows.

VCB is used as a "gateway": a single Windows server with SAN connectivity to VMWare's clustered storage can perform speedier out-of-band backup and restore of the VMDK files, instead of slower agent-based network backup and restore from within the virtual machines themselves, in conjunction with popular network-based backup software such as IBM Tivoli Storage Manager, NetWorker, CA ARCserve or NetBackup. However, VCB costs (a lot of) money, and the customer in question didn't have a license for it.
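For the curious, a VCB full-VM export from the Windows proxy looks roughly like the command below. This is a sketch only: the vCenter hostname, credentials, VM name and output path are placeholders, and the exact flags can vary between VCB releases.

```shell
# Illustrative vcbMounter invocation (all names below are placeholders):
CMD='vcbMounter -h vcenter.example.com -u backupadmin -p secret -a name:prod-vm01 -r D:\vcb\prod-vm01 -t fullvm'
echo "$CMD"
```

The exported VMDK and config files then land on the proxy's local disk, where any ordinary Windows backup product can pick them up.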

Let's face it -- enterprises which choose VMWare as their Virtual Infrastructure environment of choice are trusting their data to a proprietary filesystem and a hypervisor which is a black box closed system.

Indeed, Microsoft's Hyper-V and its NTFS file system are also proprietary, but enough reverse engineering has been done over the last decade to expose NTFS so that it is read-writable by Linux and UNIX, and all modern Microsoft OSes -- as well as Linux and UNIX OSes through Samba -- can write to networked NTFS volumes without any problem whatsoever. Recently, through Microsoft's Open Specification Promise, Samba is also becoming more and more "kosher" as a fully certified CIFS/SMB networking solution, with the full cooperation of Microsoft.

There are even Linux ext3 file system drivers for Windows, should anyone really care to copy data in that direction without the use of Samba networking. With Windows and Linux, there are multiple methods for accessing and moving data stored on disk. Now, would I like to see NTFS's specifications fully opened, or a Microsoft-certified Linux NTFS driver in Open Source? Sure, but compared to VMWare, Microsoft is utter interoperability paradise.
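The Linux side of that interoperability is a one-liner today thanks to the ntfs-3g FUSE driver. The device name and mount point below are assumptions for the sketch:

```shell
# Mount an NTFS volume read-write on Linux via ntfs-3g
# (device and mount point are illustrative; requires the ntfs-3g package):
DEVICE=/dev/sdb1
MOUNT_POINT=/mnt/windisk
echo "mount -t ntfs-3g $DEVICE $MOUNT_POINT"
```

Nothing remotely comparable exists for VMFS-3: there is no third-party driver you can point at a VMFS LUN and mount.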

Providing a network interface via VMWare ESX through NFS and VCB just isn't good enough. VMWare needs to expose as much of the hypervisor and the underlying file system as possible so that better tools for data transfer, data forensics and recovery can be created.

It doesn't help that VMWare is the industry leader in virtualization and that it abuses its market position by nickel-and-diming its customers for simple file system interoperability tools like VCB. Backup/restore software and other operating systems should be able to talk directly to VMFS, period.

It's time that VMWare's customers demand better interoperability from the company and its products -- or seek more open hypervisor solutions like Hyper-V, Xen and KVM.

Is VMWare abusing its position by locking you out of your own data? Are they making things unnecessarily difficult, proprietary and costly? Talk Back and Let Me Know.


Jason Perlow, Sr. Technology Editor at ZDNet, is a technologist with over two decades of experience integrating large heterogeneous multi-vendor computing environments in Fortune 500 companies. Jason is currently a Partner Technology Strategist with Microsoft Corp. His expressed views do not necessarily represent those of his employer.


  • Sounds bad... How about Red Hat's new stuff?

    A nasty experience... Have a glass of fine cognac to recover. No ice (the horror!), no mixing (barbaric), at the perfect temperature of 18 degrees Celsius. Cheers. :-)

    How about Red Hat's new virtualization stuff in RHEL 5.4? Is that a better alternative?

    Salut, Pjotr.
    • KVM would have been significantly better in this situation

      I haven't tried KVM in the new RHEL 5.4 version, but by its nature KVM is a very small hypervisor layer integrated into the Linux kernel, so it uses all the drivers, tools and file systems available to a normal distribution. In the situation Jason described, it has been my experience that the copy would have taken only an hour or two.
      • Authoritative

        "I haven't tried KVM in the new RHEL 5.4 version, but. . . " Okay
        • In all fairness

          I know JSchweitzer, and I know he's worked with KVM and Xen extensively, even if he hasn't touched the latest Red Hat release.

          KVM uses the QEMU qcow2 file format, which is stored on a standard Linux file system. In Red Hat's case, that would be EXT3 or EXT4, which it would have no problem reading. There's no reason why you couldn't copy the files over to a standard NTFS or FAT32 drive either.
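          To illustrate: a qcow2 image is just an ordinary file, so (assuming qemu-img is installed) it can be created on ext3 and copied to anything the kernel can mount. Paths below are examples:

```shell
# qcow2 disks live as plain files on a plain filesystem (paths illustrative):
IMG=/var/lib/vms/disk0.qcow2
echo "qemu-img create -f qcow2 $IMG 20G"   # create a 20 GB image on ext3/ext4
echo "cp $IMG /media/usbdrive/"            # copy it to a FAT32/NTFS USB drive
```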
  • RE: VMFS-3, How Do I Despise Thee

    I see your point, but I must also point out that VMware recommends running the vSphere server as a VM on your host. Given this, it's easy enough to mount a USB drive to the Windows VM that vSphere (or the vSphere client) is running on, and copy the files to the VMFS-3 storage via the client/server.

    Try to keep in mind that vSphere and VMFS-3's target audience is not the do-it-yourselfers who rely on the "freebie" versions of ESXi. The file system is very reliable as a clustered/shared storage solution and works very well for its intended purpose -- as a data center solution.

    My 2 cents.

  • RE: VMFS-3, How Do I Despise Thee

    Your customer went cheap, and it bit them and you. That's the simple truth of the situation you describe. As the "lead" VMware consultant, you should have been involved even before they started trying to do the DR test, and should have advised them of the issues before ever getting to the point of trying to make it work. As a VMware consultant myself, I've had customers try to go cheap on DR solutions for VMware, and it eventually gets to a point where you have to tell the customer that they are making a mistake, and be prepared to walk away from them. Unless, that is, you enjoy spending long hours in a datacenter, drinking bad coffee, and eating Twinkies. :)
    • 'Cheap' may not be the right word.

      Some might think that ESX Server is very expensive, even without add-ons. The market will be the judge of this, over the coming two years or so. VMWare had better be careful - they've got a lot to lose!
    • As the lead...

      You cannot always be brought in to provide architectural oversight when someone else designs the solution or contracts an engagement. In this case, I was asked to come in to assist for availability reasons.
      • Ok, but still don't think this is a VMware issue...

        Unless you had absolutely zero heads-up on the setup before coming in, you must have expected issues like the ones you got. Even if you had no prior info on the setup, you can't blame VMware for a poor design decision by the customer or some other consultant. Personally, I have no issues with the way VMware has kept VMFS locked down; in my opinion it has prevented a lot of 3rd parties from screwing around with the file system and causing issues.
        • Enter the world of Veeam FastSCP

          Had I been in this situation, I would have pulled out the "big guns" and used Veeam FastSCP. I have moved large VMs on and off non-VMFS storage, and now that Veeam supports ESXi, there is no reason not to use their free product, especially since it has inline compression to make that data fly.

          I've moved more than a TB of VMDKs across a gigabit LAN in minutes because there were a lot of empty structures (the disks weren't full). It is in places like that where FastSCP is superb. As for the potential issue with VMFS sector alignment being thrown off by direct writes to the VMFS volume: since this was a DR exercise, the potential minor performance hit would not be critical and could be resolved during a normal maintenance window had it been a real DR situation.
  • You are aware of the variety of 3rd party tools right?

    Many of them are available for free.

    Have you ever tried Veeam FastSCP? We regularly use this product to copy files from ESX servers to external storage or a cheap NAS (or in the other direction, to the ESX servers) using a workstation running Veeam as an intermediary. Our particular setup is limited by the speed of a gigabit Ethernet connection, but it works just fine and has been tested repeatedly.

    I think you way, way overcomplicated your situation.
    • Actually

      I was not aware of the product, but I can't imagine SCP being actually faster than native NFS datastore transfer directly through the vmkernel interface. You're talking about adding encryption overhead there. It sounds like a cool tool for secure transfers of smaller VMs, though.
      • Works quite well

        Veeam FastSCP runs on a Windows machine. It runs as a service, and copy jobs can be scheduled. We use esXpress backup to back up our virtual machines to disk on a separate array in our SAN. The scheduled copy job then copies the compressed backup files to a NAS box via gigabit Ethernet. We have two of the NAS boxes, with one stored off-site, and the two are swapped once per week. Our total VMDKs add up to around 600 GB or so but compress down to about 400. Yes, this takes a while, but it's acceptable.
      • veeam scp not encrypted actually...

        In previous versions, at least, the data traffic isn't encrypted... I think that may be an option now.

        Regardless, in your situation (i.e. closed network, speed is paramount), it sounds like skipping encryption for the data transfer would be the way to go.
  • Jason why would not booting up Knoppix and using dd_rescue have worked?

    Not to be a Monday Morning quarterback on Sunday, I'm just saying....

    D T Schmitz
    • Not a backup of a raw filesystem

      I'm talking about -moving- VMDK files to a VMFS-3 drive. Knoppix would only work for COPYING a LOCAL VMFS-3, and I would still have concerns about compatibility between RAID metadata. No Linux distribution can read or write to VMFS-3. There is currently a VMFS read project on Google Code, but no write driver yet.
      • That's crazy talk...

        Crazy talk... facts: does Hyper-V natively support EXT2, EXT3, EXT4, ReiserFS, etc. filesystems? No! Does Xen support NTFS as a backend for VMs? No! This is why your comment is so crazy... it makes no sense!
  • VMFS-3 exists for a reason

    Vmware spent the time to build a filesystem optimized for the somewhat unique nature of virtual machine files - multiple host access to very large file shares with reliable locking.

    I've been out of the game for a year -- has anyone caught up with VMware? Can any other solution allow you to migrate the VM files while the system is still running? Do any provide the ability to grow the filesystem non-destructively?

    These are the kinds of benefits a custom fs designed specifically for VM storage has. I don't care if it's closed as long as it's better (more features, reliability, security) and is well supported.
    • Sure there are

      Solaris ZFS with Xen can live migrate running virtual machines.

      Linux GFS2 with KVM or Xen can live migrate running virtual machines when using clustered locking.

      ZFS and GFS2 both support growing a filesystem non-destructively without taking the filesystem offline.
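      A sketch of what those operations look like on the open stacks -- domain, host, pool and mount-point names below are illustrative examples, not from any particular deployment:

```shell
# Live migration and online growth with the open alternatives
# (all names below are placeholders):
DOMAIN=vm01
DEST=node2
echo "xm migrate --live $DOMAIN $DEST"                       # Xen live migration
echo "virsh migrate --live $DOMAIN qemu+ssh://$DEST/system"  # KVM via libvirt
echo "gfs2_grow /mnt/cluster"                                # grow a mounted GFS2 fs
echo "zpool add tank mirror c1t2d0 c1t3d0"                   # grow a ZFS pool online
```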

      VMFS-3 did not have to be proprietary. They did not have to make it so hard to interact with the bare metal OS. They didn't have to develop their own kernel and stop using Linux, either. They chose to do these things. And those choices seem to have spurred a lot of competitors who did not make those same choices.