Why virtualisation is struggling to keep up

Summary: The relentless increase in processors per chip will rapidly push server processor counts well beyond the levels for which key software has been engineered, says Carl Claunch.

The soaring power of chips is throwing up all sorts of challenges for virtualisation and server scaling, says Carl Claunch.

On the face of it, the steady increase in processor power is a good thing. Roughly every two years, a new generation of chips doubles processor counts through a combination of more cores and more threads per core.

So in 2009, a high-end 32-socket server fitted with eight-core chips delivers 256 processors. In two years, with the appearance of chips holding 16 processors per socket, the same machine jumps to 512 processors in total.

Four years from now, with 32 processors per socket, that machine would host 1,024 processors. Even small machines, such as a four-socket server used to consolidate workloads under virtualisation, could be fielding 128 processors in just four years.
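To make the trend concrete, here is a minimal sketch in Python of the arithmetic behind those figures. It simply assumes, as above, that processors per socket double every two years from the eight-core chips of 2009; it is an illustration of the projection, not a vendor roadmap.

    # Hypothetical projection: processors per socket double every two years,
    # starting from the eight-core chips of 2009 cited above.
    def projected_processors(sockets, per_socket_2009=8, years_ahead=0):
        generations = years_ahead // 2          # one doubling per two-year generation
        return sockets * per_socket_2009 * (2 ** generations)

    for years in (0, 2, 4):
        print(years, "years out:",
              projected_processors(32, years_ahead=years), "processors in a 32-socket box,",
              projected_processors(4, years_ahead=years), "in a 4-socket box")

Running it reproduces the figures above: 256, 512 and 1,024 processors for the high-end box, and 128 processors for the four-socket consolidation server within four years.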

The trouble is, most virtualisation software today can barely cope with 32 processors, much less the 1,024 of the high-end box.

Scalability limits
Operating systems may be able to support the low-end box, but most would not be able to run one image on the 1,024-processor machine. Database software, middleware and applications all have their own limits on scalability. Organisations may find they simply cannot use all the processors that will be thrust on them in only a few years.

Hard limits on the total number of processors supported exist in all operating systems and virtualisation products and, in many cases, these limits are close to the maximum server configurations available today. Increasing the scalability of software so that it can benefit from more processors is a slow and difficult task. Software will struggle to keep up with the expansion of server processor counts.

This problem is not specific to x86 processors. Organisations will experience this issue across the processor technology spectrum. The relentless doubling of processors per microprocessor chip will drive the total processor counts of upcoming server generations to peak well above the levels for which key software has been engineered.

Operating systems, middleware, virtualisation tools and applications will all be affected, leaving organisations facing difficult decisions, hurried migrations to new versions, and performance challenges.

It is worth looking at the operating systems and virtualisation products that exemplify the present state of the software market:

Linux
Red Hat Enterprise Linux Advanced Platform has a hard limit of 64 processors for x86 systems, up to 512 processors with the largesmp package installed. Novell's Suse Linux Enterprise Server 10 has a hard limit of 64 processors for x86 systems generally, up to 128 with the bigsmp package installed, and up to 4,096 processors only on specific Silicon Graphics servers.

However, most Linux servers today are relatively small machines with modest total processor counts. Organisations running higher processor counts under Linux should test to confirm their intended workloads have not exceeded the soft limits.
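As a starting point for such a check, a minimal sketch in Python follows; the 64-processor threshold is purely an assumption standing in for whatever soft limit applies to a given distribution, kernel build and workload.

    import os

    # Hypothetical threshold; substitute the soft limit that applies to your
    # distribution, kernel and workload.
    SOFT_LIMIT = 64

    logical_processors = os.cpu_count()  # logical processors visible to the OS
    if logical_processors and logical_processors > SOFT_LIMIT:
        print(f"Warning: {logical_processors} processors exceeds the assumed "
              f"soft limit of {SOFT_LIMIT}; benchmark before relying on them all.")
    else:
        print(f"{logical_processors} processors is within the assumed soft limit.")

Counting the processors is the easy part; the real work is benchmarking the workload at that scale to see where throughput stops improving.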

Windows
Windows Server 2008 has a hard limit of 64 processors, which will increase in 2010 to 256 processors in Windows Server 2008 R2. SQL Server 2008 has a hard limit of 64 processors, illustrating that...
