Moore’s Law being what it is, new servers will generally offer more power and capacity than existing systems. However, you still need to ensure that they can cope with the consolidated demands of all the hardware that's being replaced. It's also wise to add in extra 'headroom' to allow for future growth and other changes.
That could mean going for extra processors and memory or, perhaps, choosing a multi-processor platform with empty sockets so that extra processing power can be added later. You'll also need to pay close attention to reliability, availability and serviceability (RAS) features such as redundant power supplies, error-correcting memory, RAID-protected storage, hot-swap disks and so on. These are far more critical on a consolidated server, which may be hosting multiple virtual partitions, than on a dedicated machine.
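As a rough illustration of that sizing exercise, the Python sketch below sums the measured peak demands of the servers being retired and adds a headroom factor. The function name, figures and 30 percent headroom are all hypothetical; real sizing should start from measured workload data.

```python
# Rough capacity-planning sketch for a consolidated server.
# All figures are illustrative -- base real sizing on measured workloads.

def size_consolidated_server(peak_cpu_ghz, peak_ram_gb, headroom=0.30):
    """Sum the peak demands of the servers being replaced, plus headroom."""
    cpu = sum(peak_cpu_ghz) * (1 + headroom)
    ram = sum(peak_ram_gb) * (1 + headroom)
    return cpu, ram

# Five legacy servers, each with a measured peak CPU demand (GHz-equivalents)
# and peak memory footprint (GB)
cpu, ram = size_consolidated_server([1.2, 0.8, 2.0, 1.5, 0.9], [2, 2, 4, 4, 2])
print(f"provision at least {cpu:.1f} GHz-equivalents and {ram:.0f} GB RAM")
# -> provision at least 8.3 GHz-equivalents and 18 GB RAM
```

The same arithmetic extends naturally to disk capacity, I/O and network bandwidth.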
Rack-mount servers
Often seen as the workhorse of the datacentre, rack-mount servers are categorised by the amount of rack space they require and the number of processors they offer. The smallest 1U (1 rack unit) products have one or two processors with limited memory and internal storage. As such they are mostly deployed as front-end Web servers, and require additional load-balancing hardware or software for scalability.
IBM's eServer xSeries 460: a rack-mount server range that starts as an affordable four-way model.
Physically larger servers (2-6U) provide more room for internal storage, as well as extra processors and RAM. The number of processors isn’t critical for general file and print consolidation, but it matters when running virtualisation software. SMP is also necessary when hosting more demanding database and mail server applications, where you’ll need to look at 4-way and 8-way platforms.
Rack-mount servers are available from a wide variety of vendors, but it's worth sticking with the big names like Dell, Fujitsu-Siemens, HP, IBM and Sun. Not only will you get a wider choice of more scalable products, but you'll also be offered high-availability, management and support options that some of the smaller manufacturers don't provide.
Blade servers

Blade servers come on single circuit boards (blades) that plug into a custom rack-mount chassis, with a common backplane linking the servers together and to the network as a whole. This allows for greater CPU density and redundancy at the server level, although individual blades aren’t always as scalable. Most vendors offer support for up to four processors per blade, some with on-board storage and facilities to interface to non-server blades in the same chassis.
Blades such as IBM's AMD Opteron-based LS20 eServer plug into a common backplane, providing high CPU density and server redundancy.
Blades are good for infrastructure simplification because of their low power and cooling requirements compared to traditional rack-mount products; they can also be deployed quickly. However, you may still need larger rack-mount servers to host back-end databases and mail servers.
Again, stick with the big-name vendors and look for support of industry standards plus management software to help with the deployment and maintenance of your blade servers.
64-bit processors

When considering servers to support infrastructure simplification, it's worth looking at hardware equipped with the latest 64-bit processors. The biggest advantage is the lifting of the 4GB limit on addressable memory, which can be a major plus when consolidating database servers because whole databases can then be resident in memory.
Depending on the implementation there can also be benefits for server virtualisation, the extra memory space enabling more, and potentially larger, virtual servers/partitions to be hosted than on 32-bit systems.
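The 4GB ceiling follows directly from the width of a 32-bit address. A quick Python calculation makes the point (the 12GB database is an assumed example):

```python
# Address-space arithmetic behind the 4GB limit on 32-bit systems.
addr_32 = 2 ** 32   # bytes addressable with 32-bit pointers
addr_64 = 2 ** 64   # bytes addressable with 64-bit pointers

print(addr_32 // 2**30, "GB")   # -> 4 GB: the 32-bit ceiling

# A hypothetical 12GB database cannot be fully memory-resident on
# 32-bit hardware, but fits comfortably within a 64-bit address space.
db_size_gb = 12
print(db_size_gb * 2**30 <= addr_32)   # -> False
print(db_size_gb * 2**30 <= addr_64)   # -> True
```

In practice 32-bit workarounds such as Intel's PAE existed, but per-process address space remained the limiting factor; 64-bit addressing removes it outright.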
EPIC vs x86
Intel was first to market with a 64-bit processor for industry-standard servers, releasing the original Itanium chip in 2001. Unfortunately, Itanium eschews x86 in favour of a newer architecture called EPIC (Explicitly Parallel Instruction Computing), which means that existing x86 applications must run under emulation (introduced with Itanium 2) rather than natively. Moreover, the performance of such applications is compromised compared to native execution on an x86 CPU.
Intel's 64-bit Itanium 2 uses the EPIC architecture, and can only run 32-bit x86 applications in (relatively slow) emulation mode.
AMD took a different approach with its 64-bit Opteron processor, which offers all the benefits of 64-bit processing but with native support for 32-bit x86 instructions. This makes it easier for developers to port their applications to the AMD64 platform. It also allows customers to consolidate existing 32-bit operating systems and applications onto new Opteron servers, then upgrade to 64-bit as and when required.
AMD's 64-bit Opteron has native support for 32-bit x86 instructions -- a move subsequently matched by Intel with the Xeon EM64T.
Intel was eventually forced to do much the same, introducing Xeon processors with EM64T (Extended Memory 64 Technology), which can run 32-bit x86 applications natively as well as operating in 64-bit mode. EM64T's 64-bit x86 extensions are compatible with those from AMD.
AMD's Opteron and Intel's Xeon EM64T processors have proved a lot more popular than Itanium thanks to their x86 compatibility. All mainstream server vendors now offer 64-bit Intel Xeon-based products, while HP is one of the first to also add AMD64 rack and blade servers to its portfolio.
Other 64-bit processors
As well as AMD and Intel, Sun Microsystems has long offered customers 64-bit processing capabilities. Its 64-bit UltraSPARC IV processor, however, is proprietary and will only run Sun’s own UNIX-based Solaris operating system.
IBM also has a 64-bit processor, the POWER5, which is available on a range of IBM eServer rack and blade products. Like Sun’s chip, the POWER5 doesn’t provide x86 compatibility; it’s designed to run IBM’s own AIX plus specially ported 64-bit implementations of Linux.
POWER5 is a dual-core product, with two independent processing cores implemented on a single chip, as is the latest implementation of Sun’s UltraSPARC IV.
Intel and AMD have also introduced dual-core processors, for both desktop and server deployment. Multi-core implementations are expected to follow from all these vendors, with Intel predicting that 70 percent of all its server chips will be dual- or multi-core based by the end of 2006.
Network storage

There’s a new trend in network storage, with major implications for companies looking to simplify and consolidate IT systems. Out goes direct-attached storage, where each server has its own local disks or storage array; in comes the Storage Area Network (SAN), with centrally managed storage devices accessed over a separate, dedicated network.
Originally based on expensive and complex Fibre Channel technology, the SAN is now an affordable solution for SMEs, thanks to the introduction of iSCSI. This uses TCP/IP as a transport and ordinary Ethernet hardware instead of a Fibre Channel network, making it a lot easier to set up. It’s also a lot cheaper, with iSCSI storage arrays and adapters widely available from vendors such as IBM, Adaptec, Dell, EMC, HP and others.
The SAN advantage
With disks divorced from direct attachment to individual servers it becomes a lot easier to provision and manage storage on a SAN. The storage itself can also be located remotely in a secure datacentre and shared by servers running a variety of operating systems. Moreover, using specialised software it’s possible to virtualise the storage rather than connect servers to individual physical disk drives.
Storage Area Networks (SANs) put storage on a physically separate network to the LAN servers. SAN storage can be virtualised, making for easier, more flexible management. The example shown above (courtesy of Intel) is based on Fibre Channel, but iSCSI-based SANs provide a cheaper alternative.
There are several benefits to this, such as the ability to share and utilise the available capacity more effectively. It's also possible to configure and deploy new virtual disks in seconds with no need for servers to be powered down. Likewise, virtual disks can be dynamically re-sized to meet changing demand, instant snapshots taken and backups run at any time with little or no impact on server availability or performance.
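To make those ideas concrete, here is a deliberately simplified Python model of a virtualised storage pool. The class and method names are invented for illustration; a real SAN virtualisation layer does far more (thin provisioning, replication, multipathing and so on).

```python
# Toy model of a virtualised storage pool -- illustrative only.
class StoragePool:
    def __init__(self, capacity_gb):
        self.capacity = capacity_gb
        self.disks = {}        # virtual disk name -> size in GB
        self.snapshots = {}    # virtual disk name -> point-in-time sizes

    def free(self):
        return self.capacity - sum(self.disks.values())

    def create_disk(self, name, size_gb):
        # New virtual disks are carved out in seconds -- no server downtime
        if size_gb > self.free():
            raise ValueError("pool exhausted")
        self.disks[name] = size_gb

    def resize_disk(self, name, new_size_gb):
        # Virtual disks can be re-sized dynamically to meet changing demand
        if new_size_gb - self.disks[name] > self.free():
            raise ValueError("pool exhausted")
        self.disks[name] = new_size_gb

    def snapshot(self, name):
        # Instant point-in-time copy for backup/recovery purposes
        self.snapshots.setdefault(name, []).append(self.disks[name])

pool = StoragePool(capacity_gb=1000)
pool.create_disk("mailstore", 200)
pool.resize_disk("mailstore", 350)   # grown on the fly, server stays up
pool.snapshot("mailstore")
print(pool.free())   # -> 650
```

The key point is that servers see only virtual disks; the pool behind them can be grown, rearranged or backed up without touching the servers themselves.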
Although it's a major enabler in any infrastructure simplification project, SAN hardware by itself is of little value without software to provide the necessary virtualisation and management facilities.
There are lots of SAN management products available, some from specialist vendors such as AppIQ, DataCore and EMC. The AppIQ software is also used by HP, Hitachi and Sun. Cisco is another big player along with Computer Associates and IBM, which has a range of storage management tools that can be deployed standalone or integrated into its wider Tivoli management framework.
The NAS alternative
Don’t confuse SANs with NAS (Network Attached Storage). Storage in a NAS appliance is accessed over the LAN, not a dedicated storage network, using file-sharing protocols such as NFS or CIFS rather than block-access protocols like iSCSI.
NAS appliances are useful for infrastructure simplification, but in a different way. For example, they can be used to replace general-purpose servers to provide local branch office storage, and for local survivability in the event of a WAN failure.
NAS appliances are available from most big-name IT vendors such as Dell, HP and IBM, as well as more specialist companies like Adaptec and Iomega. Some of the SAN management tools can also be extended to manage NAS devices.
Server virtualisation

Also referred to as 'partitioning', virtualisation is all about separating the physical server hardware from the guest operating system and application software, enabling a single physical server to host multiple virtual servers.
Virtual servers are given either dedicated processor access or shared access to processor cycles, but with an independent memory space -- separate from that of any other virtual server -- plus virtual networking and storage resources on the host system.
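A toy Python model of that memory partitioning is sketched below. Names and figures are hypothetical, and a real hypervisor also virtualises CPU scheduling, networking and storage; the sketch only shows each virtual server getting its own independent slice of host memory.

```python
# Minimal sketch of a host partitioning its RAM among virtual servers.
class Host:
    def __init__(self, ram_gb):
        self.ram_gb = ram_gb
        self.vms = {}   # VM name -> dedicated memory in GB

    def create_vm(self, name, ram_gb):
        allocated = sum(self.vms.values())
        if allocated + ram_gb > self.ram_gb:
            raise MemoryError("insufficient host memory")
        # Each VM gets its own independent memory space on the host
        self.vms[name] = ram_gb

host = Host(ram_gb=16)
host.create_vm("web01", 4)
host.create_vm("mail01", 8)
try:
    host.create_vm("db01", 8)   # would over-commit the host's 16GB
except MemoryError:
    print("rejected: only", host.ram_gb - sum(host.vms.values()), "GB free")
```

Some real products do allow memory over-commitment via paging or ballooning, but the basic accounting is as above: the host's physical resources bound what the virtual servers can collectively be given.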
Some virtualisation products sit directly on top of the hardware, while others require a host operating system. This is typically either Windows or Linux, although Solaris and other proprietary platforms also have virtualisation features. Multiple virtual machines can then be configured, onto which a variety of guest operating systems can be installed and run.
Exactly what software can be installed and run this way depends on the vendor and implementation involved, but support for Windows and Linux comes top of the list. Moreover, virtual servers can run any application supported by the guest operating system, as long as it doesn’t require additional custom hardware.
There are lots of benefits to be gained from virtualisation, especially when it comes to infrastructure simplification. For example, no matter how closely you match hardware to your requirements, server processors will be idle a lot of the time. Deploy multiple virtual servers and you can exploit that idle time more effectively, as well as better utilise other expensive server resources.
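Some back-of-envelope arithmetic shows why; the utilisation figures here are assumed for illustration.

```python
# Hypothetical figures: each dedicated server averages 10% CPU utilisation,
# and we want the consolidated host to stay below 70% to leave peak headroom.
avg_util_pct = 10
target_pct = 70

vms_per_host = target_pct // avg_util_pct
print(vms_per_host, "virtual servers per host")                # -> 7
print(f"host averages ~{vms_per_host * avg_util_pct}% busy")   # -> ~70%
```

Seven such workloads on one host would lift average utilisation from around 10 percent to around 70 percent, while still leaving headroom for peaks.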
Virtualisation also facilitates fast deployment of new servers to meet increased customer demand or changing business needs. It's also easier for developers to test and evaluate new applications in a self-contained and controlled environment.
Another plus, with the leading products, is the ability to manipulate virtual machine settings in real time -- for example, to dynamically allocate extra processing, memory and storage resources as required. You can also, typically, take point-in-time snapshots of virtual machines for backup and disaster recovery purposes.
VMware (now a division of EMC) has long been the clear leader in the virtualisation market, offering both self-contained solutions and Windows- and Linux-hosted packages that can support a variety of guest operating systems. VMware recently signed an agreement to bundle evaluation copies of its virtualisation software with IBM BladeCenter systems.
VMware is the market leader in virtualisation software.
Microsoft, though, is keen to get in on the act, having recently released Virtual Server 2005, based on software from its acquisition of virtualisation specialist Connectix. The Microsoft product, however, only supports Windows guests and is currently limited to one processor per virtual machine.
Open source virtualisation tools are also available, including Xen (from developer XenSource), which is being incorporated into the SuSE Linux operating system by Novell. There are numerous other virtualisation products and management tools from companies such as Leostream, Platespin, Softricity and Aurema.
Finally, chip vendors Intel and AMD have both announced virtualisation technology as part of upcoming processor designs -- Intel VT and AMD Pacifica. This won’t do away with the need for custom virtualisation software, but should make virtual servers more reliable and secure.
Management software

Network and Systems Management (NSM) software is another key enabler for successful infrastructure simplification. It allows network managers to remotely monitor and manage not just the network itself (the supporting Ethernet switches, routers and so on), but also other resources such as servers and storage, operating systems, applications -- even individual desktops, if required.
Such tools are essential for maintaining availability and guaranteeing throughput, and they also allow expensive support staff to be used more effectively. Moreover, the latest management tools are able to draw together other enabling technologies, such as high-density blade servers, server virtualisation tools and storage area networks, to build cohesive, fully managed solutions.
The NSM market is large and very diverse, so some care is required when deciding which products to buy. Many so-called 'point' products, for example, provide limited functionality and can’t always be integrated with other management tools.
SAN management tools like SANmelody from DataCore, for example, do very little beyond storage virtualisation and management. Similarly, Microsoft’s Systems Management Server (SMS) is primarily designed to automate software distribution, while Microsoft Operations Manager (MOM) monitors servers and their applications.
By looking for support for industry standards such as SNMP (Simple Network Management Protocol), some degree of integration and interoperability is possible. However, you should check platform and vendor support carefully: SMS and MOM, for example, can be integrated with each other, but are geared up to handle Windows, with very little support for other operating systems or non-Microsoft applications.
Integrated management suites can be a better bet, with more tools provided that are also able, in most cases, to share information and work together. LANDesk Management Suite, for example, draws together inventory, software distribution, OS imaging, patch management and remote control tools.
Other vendors, like Altiris (an HP OEM) and Vector Networks have suites offering a similar range of management tools. But, again, you should check exactly what you’re getting, whether other tools can be added, and exactly what platforms and vendor-specific products they will work with.
The bigger picture
Higher up the food chain, yet more comprehensive NSM solutions are available, often referred to as 'management frameworks'. Products like BMC Patrol, CA Unicenter, HP OpenView and IBM Tivoli all offer multi-platform, multi-vendor support, plus the ability to add extra management functionality using tools from the vendors themselves and from third parties.
IBM's Tivoli range of management software provides multi-platform, multi-vendor support.
These larger management frameworks leverage industry standards and can be of great help when it comes to managing the various consolidation and infrastructure simplification products that are emerging. However, the tools themselves tend to be complex to deploy and expensive. They also tend to be targeted at larger enterprises, so SME customers would be advised to look for implementations that are specifically designed for more modest needs and budgets.