Why virtualisation is struggling to keep up

Summary: The relentless increase of processors per chip will rapidly reach a point well beyond the levels for which key software has been engineered, says Carl Claunch

TOPICS: Tech Industry

The soaring power of chips is throwing up all sorts of challenges for virtualisation and server scaling, says Carl Claunch.

On the face of it, the significant increase in the power of processors is a good thing. Roughly every two years, a new generation of chips doubles processor counts through a combination of more cores and more threads per core.

So a 32-socket, high-end server with eight-core chips in the sockets delivers 256 processors in 2009. In two years, with the appearance of chips capable of holding 16 processors per socket, the machine jumps to 512 processors in total.

Four years from now, with 32 processors per socket, that machine would host 1,024 processors. Even small machines, such as a four-socket server used to consolidate workloads under virtualisation, could be fielding 128 processors in just four years.
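The arithmetic behind these projections can be sketched in a few lines. This is a minimal illustration, assuming the article's premise that processors per socket double roughly every two years; the function name and configurations are only for demonstration.

```python
def projected_processors(sockets, procs_per_socket_now, years):
    """Total processors after `years`, assuming the per-socket
    count doubles every two years."""
    doublings = years // 2
    return sockets * procs_per_socket_now * (2 ** doublings)

# High-end 32-socket box with eight processors per socket in 2009:
print(projected_processors(32, 8, 0))   # 256 today
print(projected_processors(32, 8, 2))   # 512 in two years
print(projected_processors(32, 8, 4))   # 1,024 in four years

# Small four-socket consolidation server, eight per socket today:
print(projected_processors(4, 8, 4))    # 128 in four years
```

The same exponential applies whatever the starting point, which is why both the high-end and the low-end figures above outrun today's software limits at the same pace.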

The trouble is, most virtualisation software today can barely cope with 32 processors, much less the 1,024 of the high-end box.

Scalability limits
Operating systems may be able to support the low-end box, but most would not be able to run one image on the 1,024-processor machine. Database software, middleware and applications all have their own limits on scalability. Organisations may find they simply cannot use all the processors that will be thrust on them in only a few years.

Hard limits on the total number of processors supported exist in all operating systems and virtualisation products and, in many cases, these limits are close to the maximum server configurations available today. Increasing the scalability of software, which permits it to benefit from more processors, is a slow and difficult task. Software will struggle to keep up with the expansion of server processor counts.

This problem is not specific to x86 processors. Organisations will experience this issue across the processor technology spectrum. The relentless doubling of processors per microprocessor chip will drive the total processor counts of upcoming server generations to peak well above the levels for which key software has been engineered.


Operating systems, middleware, virtualisation tools and applications will all be affected, leaving organisations facing difficult decisions, hurried migrations to new versions, and performance challenges.

It is worth looking at the operating systems and virtualisation products that exemplify the present state of the software market:

Red Hat Enterprise Linux Advanced Platform has a hard limit of 64 processors on x86 systems, rising to 512 processors with the largesmp package installed.

Novell's Suse Linux Enterprise Server 10 has a hard limit of 64 processors on x86 systems generally, up to 128 with the bigsmp package installed, and up to 4,096 processors only on specific Silicon Graphics servers.

However, most Linux servers are relatively small machines with small total processor counts today. Organisations running relatively high processor counts under Linux should do some testing to check they have not exceeded the soft limits for their intended workloads.

Windows Server 2008 has a hard limit of 64 processors, which will increase in 2010 to 256 processors in Windows Server 2008 R2. SQL Server 2008 has a hard limit of 64 processors, illustrating that database software carries scalability ceilings of its own, independent of the operating system beneath it.



Discussion
  • What's the big worry?

    It's all very well to panic that the number of processors available will outstrip the O/S and virtualisation software available.

    However, there are far bigger problems to solve when you have hundreds or thousands of processors available. It's over-simplistic to assume that you'd just have more virtual servers soaking up the load.

    That architecture would run out of I/O bandwidth even more rapidly than it would run out of supported virtual CPUs.

    There are a lot more hardware developments to come before the thousand-processor system arrives on the desktop or the average server rack, and the software has to adapt to that hardware too - you can't just wave a magic wand at the CPU maker's fab.
  • The real point to the Great Virtualization Plan

    Virtualization is inevitable! The increase in chatter on the subject is directly proportional to the probability that it WILL happen.

    When the Big V reaches ubiquity and the hard drive becomes obsolete, even redundant (for Carbon Neutralisation purposes), then all our information will be stored outside of our devices and we will be at the mercy of the corporations who are the gatekeepers to our communications. And therein lies a much bigger, much darker game plan.

    In fact, the more I consider the potential, the less I think I should write about it.

    Your call.

  • Good point...

    Even now companies are going mad offering all forms of storage space on their systems for all kinds of stuff, and people are uploading all manner of their personal data to them without batting an eyelid.

    The worrying thing in all this is what happens in the future when they put their prices up to such an extent that people refuse to pay. What then? Do they just switch it all off?

    By that time, storing data this way may well be the only way to store anything about oneself. It is something to worry about even now.
  • Ah....isn't it obvious?

    Today, when you don't pay your phone bill or your TV bill or practically any kind of bill, the provider first threatens you to pay and subsequently ends your
  • :s

    Equilibrium or Gattaca, anyone?