Why virtualisation is struggling to keep up

The relentless increase in processors per chip will rapidly push server processor counts well beyond the levels for which key software has been engineered, says Carl Claunch
Written by Carl Claunch, Contributor

The soaring power of chips is throwing up all sorts of challenges for virtualisation and server scaling, says Carl Claunch.

On the face of it, the significant increase in the power of processors is a good thing. Roughly every two years, a new generation of chips doubles processor counts through a combination of more cores and more threads per core.

So a 32-socket, high-end server with eight-core chips in the sockets delivers 256 processors in 2009. In two years, with the appearance of chips capable of holding 16 processors per socket, the machine jumps to 512 processors in total.

Four years from now, with 32 processors per socket, that machine would host 1,024 processors. Even small machines, such as a four-socket server used to consolidate workloads under virtualisation, could be fielding 128 processors in just four years.
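To make the arithmetic concrete, here is a minimal Python sketch of the projection above, assuming processors per socket double every two years. The figures are illustrative scenarios from this article, not vendor roadmaps.

    # Projection of total processor counts, assuming processors per
    # socket double every two years. Figures are illustrative, per
    # the scenario described above.
    def total_processors(sockets, per_socket):
        """Total logical processors in a server."""
        return sockets * per_socket

    for years_out, per_socket in [(0, 8), (2, 16), (4, 32)]:
        high_end = total_processors(32, per_socket)  # 32-socket, high-end box
        low_end = total_processors(4, per_socket)    # four-socket consolidation box
        print(f"+{years_out} years at {per_socket}/socket: "
              f"high-end {high_end}, four-socket {low_end}")

Run against the scenario in the text, this prints 256, 512 and 1,024 processors for the high-end box, and 128 for the four-socket machine four years out.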

The trouble is, most virtualisation software today can barely cope with 32 processors, much less the 1,024 of the high-end box.

Scalability limits
Operating systems may be able to support the low-end box, but most would not be able to run one image on the 1,024-processor machine. Database software, middleware and applications all have their own limits on scalability. Organisations may find they simply cannot use all the processors that will be thrust on them in only a few years.

Hard limits on the total number of processors supported exist in all operating systems and virtualisation products and, in many cases, these limits are close to the maximum server configurations available today. Increasing the scalability of software, which permits it to benefit from more processors, is a slow and difficult task. Software will struggle to keep up with the expansion of server processor counts.

This problem is not specific to x86 processors. Organisations will experience this issue across the processor technology spectrum. The relentless doubling of processors per microprocessor chip will drive the total processor counts of upcoming server generations to peak well above the levels for which key software has been engineered.

Operating systems, middleware, virtualisation tools and applications will all be affected, leaving organisations facing difficult decisions, hurried migrations to new versions, and performance challenges.

It is worth looking at the operating systems and virtualisation products that exemplify the present state of the software market:

Linux
Red Hat Enterprise Linux Advanced Platform has a hard limit of 64 processors for x86 systems, up to 512 processors with the largesmp package installed. Novell's Suse Linux Enterprise Server 10 has a hard limit of 64 processors for x86 systems generally, up to 128 with the bigsmp package installed, and up to 4,096 processors only on specific Silicon Graphics servers.

However, most Linux servers are relatively small machines with small total processor counts today. Organisations running relatively high processor counts under Linux should do some testing to check they have not exceeded the soft limits for their intended workloads.
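As a starting point for that testing, a short Python sketch like the one below can compare a host's detected processor count against documented limits. The limit values here are assumptions taken from this article and should be verified against current vendor documentation.

    # Compare this host's logical processor count against the hard
    # limits quoted above. Limit values are assumptions from the
    # article text; verify them with your vendor before relying on them.
    import os

    HARD_LIMITS = {
        "RHEL Advanced Platform (x86)": 64,
        "RHEL with largesmp": 512,
        "SLES 10 (x86)": 64,
        "SLES 10 with bigsmp": 128,
    }

    cpus = os.cpu_count() or 0
    print(f"Logical processors detected: {cpus}")
    for product, limit in HARD_LIMITS.items():
        status = "within limit" if cpus <= limit else "EXCEEDS limit"
        print(f"  {product}: {limit} processors -> {status}")

A check like this only catches hard limits; soft limits, where performance degrades before the hard ceiling, still need workload-specific testing.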

Windows
Windows Server 2008 has a hard limit of 64 processors, which will increase in 2010 to 256 processors in Windows Server 2008 R2. SQL Server 2008 has a hard limit of 64 processors, illustrating that limits will matter in all layers of the software stack, not just the operating system.

The future Kilimanjaro version of SQL Server, scheduled for release at about the same time as Windows Server 2008 R2, is expected to support 256 processors.

z/OS
IBM's z/OS v1.10 has a hard limit of 64 processors. IBM has continued to refine z/OS to improve scalability for all types of workloads, raising the limit with each new generation only once it was satisfied that the scaling experience of all customers would be good.

This is a conservative approach, whereas many other vendors set hard limits that may not be achieved in real-world workloads.

Unix
Unix systems have been operating with larger processor counts for many years and have hard limits that reflect that experience. However, in some cases the limits may be set by the largest server built rather than by the software's intrinsic ceiling. Solaris 10 has a hard limit of 512 processors. HP-UX 11i has a hard limit of 256 processors, but HP states it has a design limit of 2,048, allowing for future expansion. AIX 6 is limited to 128 processors, which matches the largest machine available from IBM today.

VMware
VMware ESX supports a maximum of 32 processors in the physical machine, so it can handle at most a four-socket system with the new generation of eight-core chips. Each virtual machine is also limited in the number of virtual processors it can be given.

The hard limit for virtual machines is four virtual processors. A fair number of organisations are already running four-socket machines with quad-core chips, or 16 processors in total.

Hyper-V
The latest Hyper-V in Windows Server 2008 has a hard limit of 24 processors for the physical machine and up to four per virtual machine. It can support, at most, a two-socket server using the newest eight-core chips. In 2010, the Windows Server 2008 R2 version of Hyper-V will increase the hard limit to 32 processors.

One solution is to plan to run a wider range of operating-system releases in production, because users installing bigger servers may be forced onto the latest software, even where that departs from the migration strategy used for their other servers.

You should also carefully evaluate the hard and soft limits to scalability of important software you plan to deploy, to ensure it will perform as expected on the intended hardware platform. Finally, you need to look now to hard partitions as a way of overcoming the limitations of many virtualisation hypervisors.

Carl Claunch is a vice president and distinguished analyst at Gartner Research. He conducts primary research into grid computing, its markets and technologies, as well as cluster computing. One of Claunch's key areas of interest is technology trends for servers.
