I've used VMware's software for more than ten years and what we now call vSphere for five or more. Personally, I've never thought overprovisioning was a great idea, although I do see some benefit in it.
From a purely intuitive standpoint, it's a risky idea: you're promising resources you don't physically have to too many systems. From a more technical viewpoint, though, it has the advantage of fully utilizing the resources that do exist.
For many businesses, the primary point of moving to a virtualized infrastructure is to reduce the number of underutilized systems. Most servers run at 10 to 15 percent CPU utilization and 30 percent or less memory utilization, and disk space, surprisingly, is the most wasted resource of all. More surprising still, those same resources (CPU, memory and disk) are the most wasted in virtualized environments as well.
Part of the reason we still have so much waste in our virtual environments is that we overbuild virtual machines (VMs). We allocate too many vCPUs, too much RAM and far too much disk space. We still tend to think in terms of physical systems, with double-digit RAM allocations and multiple CPUs. But VMs are different: they look like full operating systems, yet to the host they're really just applications. That's something to keep in mind when creating your own virtual infrastructure.
I take a fairly conservative approach to support and tend to err on the side of caution so overprovisioning, for me, has always been a bit of an arguable point. That said, there is a time and place for everything, even overprovisioning. Vkernel (Dell) has outlined some best practices for overprovisioning that are conservative and appropriate for most environments. In fact, I think they're spot on.
Perhaps their guidelines should be called rightprovisioning instead of overprovisioning.
One CPU per VM - Start every VM with a single vCPU. This is a point that's hard to convince anyone of unless they're really 'in the know' about such things.
Add vCPUs as Required - Good advice. Too often, I see administrators and system builders request four vCPUs only to be disappointed by the performance. vCPUs should be added only as the applications running on the VM require them.
CPU Recommendations - Keep vCPU-to-pCPU ratios between 1:1 and 3:1. That sounds very conservative and seems to greatly limit the number of VMs per host, unless you understand what's being counted. When you allocate a vCPU, you're actually allocating a core, not a physical CPU socket. They do not map one-to-one, so for a 16-core host, you can safely allocate up to 48 vCPUs. That falls right at the recommended 3:1 ratio.
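The ratio math above can be sketched in a few lines. This is an illustrative helper, not anything VMware ships; the function name and ratios are mine:

```python
# Hypothetical helper: how many vCPUs can a host carry at a given
# vCPU-to-pCPU (core) overcommit ratio? Numbers are illustrative.

def max_vcpus(physical_cores: int, ratio: float) -> int:
    """Return the vCPU budget for a host at the given overcommit ratio."""
    return int(physical_cores * ratio)

# A 16-core host at the conservative 1:1 and the upper 3:1 guideline:
print(max_vcpus(16, 1.0))  # 16 vCPUs -- no overcommit
print(max_vcpus(16, 3.0))  # 48 vCPUs -- the 3:1 example above
```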
These numbers give you some idea of how many single- and multi-processor systems you can deploy onto a given host. Watch CPU utilization for both your hosts and your VMs to see how things are working. Additionally, watch the CPU Ready metric to see how long your VMs have to wait for host CPU resources.
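vSphere reports CPU Ready in its real-time charts as milliseconds of ready time accumulated over a 20-second sampling interval, which is easier to reason about as a percentage. A minimal sketch of that conversion, assuming the standard 20,000 ms interval (the ~5% rule of thumb in the comment is a common guideline, not a VMware-mandated limit):

```python
# Convert a raw CPU Ready summation value (ms per sampling interval)
# into a percentage. Real-time charts sample every 20 seconds.

def cpu_ready_percent(ready_ms: float, interval_ms: float = 20_000) -> float:
    """Percentage of the interval a vCPU spent waiting for a pCPU."""
    return ready_ms / interval_ms * 100

sample = 1_500  # ms of ready time in one 20-second real-time sample
print(f"CPU Ready: {cpu_ready_percent(sample):.1f}%")  # 7.5% -- worth a look
```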
Memory resources and their overprovisioning are another arguable point. Many administrators won't oversubscribe memory at all. If you do oversubscribe memory, pay close attention to host memory utilization. If it remains in the "red" zone of 90 percent or higher sustained usage, consider adding more RAM, if possible. Of course, adding another host to the cluster also provides relief.
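The "red zone" check above is simple enough to automate. The 90 percent threshold comes from this article; the function, the "yellow" tier and its 75 percent cutoff are my own assumptions for illustration:

```python
# Classify sustained host memory utilization against the article's
# 90% "red" threshold. The 75% "yellow" tier is a hypothetical add-on.

def host_memory_zone(used_gb: float, total_gb: float) -> str:
    utilization = used_gb / total_gb
    if utilization >= 0.90:
        return "red"     # sustained: add RAM or another host to the cluster
    if utilization >= 0.75:
        return "yellow"  # watch closely
    return "green"

print(host_memory_zone(118, 128))  # red  (about 92% used)
```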
Overprovisioning of storage derives from the ability to thin provision virtual disks. Thin provisioning means that when you add a virtual disk to a VM, you designate it as allocate-on-demand. If you set up a 100 GB disk for a VM, it might use only 10 GB initially for applications and other files, but as data grows, the disk grows dynamically up to its 100 GB maximum.
That sounds like an excellent way of saving space, and it is. The downside is that if you've set up several VMs with thin-provisioned disks and they all grow over time, and they will, then at some point you're going to run out of space.
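A back-of-the-envelope projection makes that risk concrete: given the datastore's free space and how fast its thin disks are actually growing, you can estimate the days of headroom left. The growth figures here are hypothetical:

```python
# Rough headroom estimate for a datastore backing thin-provisioned disks.
# Growth rates would come from your own monitoring; these are made up.

def days_until_full(free_gb: float, growth_gb_per_day: float) -> float:
    """Days until the datastore fills at the current aggregate growth rate."""
    if growth_gb_per_day <= 0:
        return float("inf")
    return free_gb / growth_gb_per_day

# Example: 600 GB free, five thin disks each growing about 2 GB per day.
total_growth = 5 * 2.0
print(f"{days_until_full(600, total_growth):.0f} days of headroom")  # 60 days
```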
Thin provisioning saves space and therefore makes better use of your disk resources; expensive SAN capacity is often wasted on thick-provisioned disks. It's a tradeoff: you have to measure the risk in your environment and maintain vigilance through disk space monitoring.
I think that overprovisioning of CPU and storage resources isn't such a bad thing, but you have to keep an eye on what you're doing. You can't just start deploying VMs in a random or haphazard fashion. It takes some calculation and careful planning to make it work. You also have to plan for rapid expansion and for mitigation in case of a severe resource constraint. In other words, when things start to go bad, you'd better have more space available somewhere or another host standing by.
What do you think? Is overprovisioning a good idea, bad idea or a feature that should be used conservatively? Talk back and let me know.