Cloud computing brings its own set of problems for cloud providers. Hmph, I hear you snort, who cares? After all, the cloud is the cloud is the cloud, right?
As a former colleague of mine was wont to say: it isn't quite as simple as that. The reason you might care is that your data is mixed in with other people's.
I was in conversation with a cloud provider today, who provided insights into the process of ensuring that the business keeps ahead of demand -- which in his neck of the woods continues to grow despite the best efforts of the world's economists and bankers to trash the economy.
Clouds work on economies of scale: the tighter you pack the equipment storing and processing data, the more efficient you are. Just as importantly, the performance patterns of different types of cloud service help to determine whose data goes next to whose.
For example, if you use databases a lot, the traffic patterns you generate in your cloud provider's virtual infrastructure are quite different from those of a media company which streams a lot of data, or from a company using virtual desktops, which generates a lot of IOPS.
So as well as balancing out the peaks and troughs in traffic across the datacentre -- so that, for example, your monthly accounting run doesn't collide with a data flood resulting from a successful promotion -- a cloud provider also needs to try to keep opposite types of load on the same infrastructure, preferably the same host.
That way, the provider doesn't overload the network or a server when too many users stream a video or access their virtual desktop from the same system. Instead, providers will try to locate those complementary loads on the same network segment, or even the same physical host.
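To make the idea concrete, here is a toy sketch of that placement logic: each workload is described by its network and storage (IOPS) appetite, and a greedy scheduler puts each VM on the host whose worst-case utilisation grows the least, so network-heavy streamers end up sharing hosts with IOPS-heavy desktop loads. All the names, numbers, and the heuristic itself are illustrative assumptions, not any provider's actual scheduler.

```python
def place(vms, hosts):
    """Greedily assign each VM to the host whose worst-loaded resource
    (network or storage IOPS) ends up least stretched -- a crude way of
    mixing complementary load profiles on the same physical host."""
    for vm in vms:
        best = min(hosts, key=lambda h: max(
            (h["net"] + vm["net"]) / h["net_cap"],
            (h["iops"] + vm["iops"]) / h["iops_cap"],
        ))
        best["net"] += vm["net"]      # accumulate the host's network load
        best["iops"] += vm["iops"]    # accumulate the host's storage load
        best["vms"].append(vm["name"])
    return hosts

# Two identical hosts with made-up capacities.
hosts = [{"name": f"host{i}", "net": 0, "iops": 0,
          "net_cap": 10_000, "iops_cap": 50_000, "vms": []} for i in range(2)]

# Illustrative workloads: streaming is network-heavy, VDI is IOPS-heavy.
vms = [
    {"name": "stream1", "net": 6_000, "iops": 2_000},
    {"name": "vdi1",    "net": 500,   "iops": 30_000},
    {"name": "stream2", "net": 6_000, "iops": 2_000},
    {"name": "vdi2",    "net": 500,   "iops": 30_000},
]

placed = place(vms, hosts)
for h in placed:
    print(h["name"], h["vms"])
```

Run it and each host ends up with one streamer and one desktop load rather than two of a kind, which is exactly the balancing act described above, just without the real-world wrinkle that a provider rarely knows a tenant's profile in advance.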
And you care because, if the provider is not very good at doing that, your users will wait for their data, and that ain't good. So it's a question worth asking of prospective cloud providers.
Incidentally, this process should get easier for providers. I also heard an anecdote about another cloud provider who will soon be running some 700 virtual servers on just 16 physical hosts -- that's over 40 VMs per server. In a multi-tenanted environment, where you don't always know what kind of load the VMs will present to the system, that's far more than most people envisaged around a decade ago, when technologists first started talking about virtualisation and loading more than 10 VMs onto a physical server was a bit daring.
How much further can it go? And will the bottleneck be the network or the storage?