In the old days of Novell NetWare servers, we used to say, "If you want better performance, add RAM." Well, it seems that in these virtual days the same holds true. The problem that RAM solved on NetWare servers is the same one it solves in a virtual environment: disk I/O. NetWare used that RAM to cache information and requests from disk, and it cached disk writes as well. Virtual environments do some of this on their own, but not efficiently.
Condusiv's V-locity product reduces I/O and delivers a reported 50 to 300 percent performance improvement. It does so with a caching engine that leverages RAM as its performance-boosting resource. In short, Condusiv aims to help you reduce the Windows I/O tax by caching in on RAM.
The Windows I/O tax problem stems from an inefficient use of free space. This inefficiency makes Windows virtual machines (VMs) work harder and perform at a lower level than their similarly equipped Linux VM counterparts. To combat this problem, administrators allocate more RAM and more vCPUs to each VM, lowering virtual machine density levels on hosts.
And if you're ready to offer SSDs or flash arrays as the answer, I'll have to stop you. SSDs don't solve the problem, and you'll notice it most on random writes. The answer is an efficient caching algorithm for both reads and writes.
Condusiv's IntelliMemory product is a server-side DRAM read-caching engine: a dynamic cache that throttles itself according to need.
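Condusiv hasn't published IntelliMemory's internals, but the basic concept of a read cache that serves hot blocks from DRAM and gives memory back when the system needs it can be sketched in a few lines. Everything below (the class, its methods, the block-granular model) is my own illustration, not Condusiv's code:

```python
from collections import OrderedDict

class ThrottledReadCache:
    """Illustrative LRU read cache whose capacity can shrink when the OS needs RAM."""

    def __init__(self, capacity_blocks):
        self.capacity = capacity_blocks
        self.blocks = OrderedDict()  # block_id -> data, in LRU order
        self.hits = 0
        self.misses = 0

    def read(self, block_id, backing_read):
        """Serve a block from DRAM if cached; otherwise fall back to storage."""
        if block_id in self.blocks:
            self.blocks.move_to_end(block_id)  # mark as most recently used
            self.hits += 1
            return self.blocks[block_id]
        self.misses += 1
        data = backing_read(block_id)  # the "slow" read from underlying storage
        self.blocks[block_id] = data
        while len(self.blocks) > self.capacity:
            self.blocks.popitem(last=False)  # evict least recently used
        return data

    def throttle(self, new_capacity):
        """Shrink (or grow) the cache when memory pressure changes."""
        self.capacity = new_capacity
        while len(self.blocks) > self.capacity:
            self.blocks.popitem(last=False)
```

The `throttle` method is the key idea the marketing language points at: rather than pinning a fixed slab of RAM, the cache surrenders capacity on demand so it only ever consumes idle memory.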
ESG Lab worked with Condusiv Technologies to audit and analyze data that was automatically collected from more than 100 different sites that evaluated V-locity deployed across 3,450 virtual machines in production environments. The data presented below is from the ESG Lab analysis.
ESG Lab Analysis of Performance Results:
Reduced Read I/O to Storage - ESG Lab calculated that 55% of systems saw a 50% reduction in the number of read I/Os serviced by the underlying storage. Moreover, 27% of systems saw a 90% or greater reduction in read I/Os. These impressive I/O reductions can be attributed to the caching advantages provided by V-locity.
Reduced Write I/O to Storage - By optimizing writes so they land in a more contiguous fashion, V-locity consistently increases the size of each I/O. In other words, instead of writing a 16KB file as four 4KB blocks, V-locity enables the system to issue a single 16KB write, requiring one I/O operation. As a result of this I/O density increase, ESG Lab witnessed a 33% reduction in write I/Os across 27% of the systems, and 14% of systems experienced a 50% or greater reduction in write I/O from VM to storage.
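Condusiv's actual write path isn't documented publicly, but the coalescing behavior described above (four 4KB writes becoming one 16KB I/O) can be sketched as: stage small writes, sort them, and merge contiguous runs before issuing them to storage. The class and names below are illustrative assumptions, not Condusiv's implementation:

```python
class CoalescingWriteBuffer:
    """Illustrative buffer that merges contiguous small writes into larger I/Os."""

    def __init__(self, flush_fn):
        self.flush_fn = flush_fn  # callable(offset, data): the actual storage write
        self.pending = []         # staged (offset, data) fragments

    def write(self, offset, data):
        """Stage a write; nothing reaches storage until flush()."""
        self.pending.append((offset, data))

    def flush(self):
        """Sort staged fragments, merge contiguous runs, issue one I/O per run."""
        self.pending.sort(key=lambda frag: frag[0])
        run_offset, run_data = None, b""
        for offset, data in self.pending:
            if run_offset is not None and offset == run_offset + len(run_data):
                run_data += data  # contiguous with the current run: coalesce
            else:
                if run_offset is not None:
                    self.flush_fn(run_offset, run_data)  # emit the finished run
                run_offset, run_data = offset, data
        if run_offset is not None:
            self.flush_fn(run_offset, run_data)
        self.pending.clear()
```

With this model, four staged 4KB writes at offsets 0, 4096, 8192, and 12288 flush as a single 16KB operation, which is exactly the I/O density increase ESG Lab measured.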
Decreased I/O Response Time - With an average available DRAM size of 3GB across all 3,450 systems, ESG Lab calculated the total time required to process all requests for each system and concluded that on average, systems achieved a 40% reduction in response time.
Increased Throughput - ESG Lab witnessed throughput performance improvement of 50% or more for 43% of systems. Further, 29% of systems experienced a 100% increase in throughput and as much as a 300% increased level of throughput for 8% of audited systems.
Increased IOPS from DRAM - Though the overall goal of the solution is to lower IOPS and improve throughput to the underlying storage, in some cases, because the working set was consolidated and serviced primarily out of DRAM, the number of measured IOPS dramatically increased. This means that the application was able to service requests faster. In fact, 25% of systems saw IOPS increase by 50%, and a small group of 25 systems achieved a 1,000% IOPS improvement.
As you can see from the ESG Lab analysis, many of the audited systems experienced a performance boost of 50 percent or better.
Workloads that benefit from I/O caching:
- SQL Server
- Oracle RDBMS
- Exchange Server
- VDI implementations
- CRM (Salesforce)
- Web services
- Business Intelligence (BI) applications
- File services
V-locity is transparent, "set-and-forget" software that operates with near-zero overhead, using only idle, available resources. It is implemented as a lightweight file system driver and performs all optimizations at the OS level, which makes V-locity both hypervisor- and storage-agnostic.
Although Condusiv bundles a performance analyzer with its product, I suggest running your own before-and-after benchmarks to measure the benefit in your individual environment.
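One simple, hypothetical way to run such a comparison is to time a fixed batch of random small reads against the same file once with the cache disabled and once with it enabled, and compare the elapsed times. The helper below is my own sketch, not part of Condusiv's tooling, and a production benchmark would also need to account for OS page caching:

```python
import os
import random
import tempfile
import time

def random_read_benchmark(path, block_size=4096, reads=1000, seed=42):
    """Time `reads` random block-sized reads from `path`; return elapsed seconds."""
    rng = random.Random(seed)  # fixed seed so before/after runs touch the same blocks
    last_block = max(0, os.path.getsize(path) // block_size - 1)
    start = time.perf_counter()
    with open(path, "rb") as f:
        for _ in range(reads):
            f.seek(rng.randint(0, last_block) * block_size)
            f.read(block_size)
    return time.perf_counter() - start

# Example: create a 1MB scratch file and measure it.
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(os.urandom(4096 * 256))
elapsed = random_read_benchmark(tmp.name, reads=500)
print(f"500 random 4KB reads took {elapsed:.4f}s")
os.unlink(tmp.name)
```

Run the same script before and after enabling the caching solution; the fixed random seed ensures both runs issue an identical access pattern, so the delta reflects the cache rather than the workload.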
The V-locity product improves the efficiency of hypervisor platforms including:
- VMware ESX/ESXi
The problem with caching for performance is that you have to allocate a substantial amount of memory for it. Condusiv suggests a recommended minimum of 4GB of RAM per VM. You'll have to weigh performance vs. cost vs. RAM allocation in your environment. Not all virtual environments are created equal, and neither are the workloads that run in them.
V-locity is compatible with Windows operating system VMs:
- Windows 7
- Windows 8.x
- Windows Server 2008 R2
- Windows Server 2012
- Windows Server 2012 R2
In my opinion, 4 GB of RAM per VM seems like quite a lot, and I would like to see some live benchmarks comparing systems that use the cache solution against systems that simply get an extra 4 GB of ordinary RAM. For example, which performs better: a VM with 16 GB of RAM, or one with 12 GB of RAM and 4 GB of cache?
If I have to allocate 4 GB of RAM per VM for caching, it seems that the systems that would benefit most from this caching solution are those at the upper end of the virtual hardware spectrum. To illustrate, imagine a VM with 32 GB of RAM vs. a VM with 8 GB of RAM. The 32 GB VM would be far less affected by carving the cache out of its pool than the 8 GB VM would. In other words, compare 28 GB of RAM plus 4 GB of cache against 4 GB of RAM plus 4 GB of cache. The 28 GB system has a good chance at a much higher benchmark because it isn't starved for resources, whereas the 4 GB system would suffer.
I haven't spoken to Condusiv, but it could be that it suggests an additional 4 GB of RAM per VM rather than carving 4 GB out of the VM's existing allocation. In that case, a VM configured with 16 GB of RAM would end up with 16 GB of RAM plus 4 GB of cache. If your hosts are already maxed out on RAM, you can't add more, so you'd either have to redistribute VMs at a lower density to free enough RAM or reallocate RAM among the VMs. A third option would be to purchase newer hardware that can handle more RAM.
In any case, the solution is not as non-disruptive as the literature would have you believe, and lowering VM density is the opposite of what a solution like this is supposed to accomplish. I have nothing against Condusiv or its caching solution; I hope it works. Condusiv makes two of my all-time favorite products: Diskeeper and Undelete. So don't assume that I have a problem with the company or its products. The rule here is to check it out for yourself and see what it can do for you. If allocating 4 GB of RAM per VM really delivers a 50 percent boost in performance, then the cost of the V-locity software and the additional RAM requirement is worth the investment.
I'm not a huge fan of RAM caching technologies for performance boosting, but I'm probably in the minority on this issue. I'd really like reader feedback about your personal experiences, successes and failures alike, with caching technology for virtual machines.
However, I won't discount the technology until I can see for myself that it's a bad thing. As I wrote earlier, Condusiv's other products are so good that I can't imagine that this one would be anything but great too. Talk back and let me know what you think of V-locity and other caching products and if you think they cure the Windows I/O tax problem.