In yesterday's post, What is "Virtualization 2.0?", I tried to shine some light on the notion called "Virtualization 2.0." If we accept, for the moment, that this is something new, the adoption of this approach will face the traditional barriers that virtualization of any type has faced for decades. Let's review those barriers.
Barriers to infrastructure software adoption
- Licensing rules for applications, development tools, data management tools and operating systems often make a completely virtual environment more costly than the organization expects.
- Staff training.
- Organizational structures: Most medium and large organizations have a very complex internal structure. Technology often becomes a political hot potato because different groups are responsible for everything from different pieces of the infrastructure down to the cappuccino makers.
- IT staff and vendors have different goals.
- Suppliers want to sell their newest products, which means persuading customers that their old products can't possibly handle new requirements and that moving to the supplier's new technology will be better than moving to someone else's. In short, they often have a "throw everything away and start fresh with our newest technology" mindset.
- IT staff want to maintain the status quo because they're held responsible for keeping things running 24x7. As a result, they are typically very conservative and follow the "golden rules of IT" religiously.
- I'm sure I've left out a host of other issues in my attempt to be brief. What issues do you think I've left out?
- The result of these issues is that adoption of any new technology is slow.
Follow the money
If we take the advice of "Deep Throat" and "follow the money," we'll soon see that any new technology must offer either a strong probability of dramatically increasing the organization's revenues or of dramatically reducing its overall costs. The benefits that proponents of "virtualization 2.0" cite, including reduced system costs, greater organizational agility, and higher levels of application reliability, are what will move this notion from the media into the datacenter. The key question is: when?