In this day and age, if an enterprise is not relying heavily on IT equipment for the majority of its business processes, then it is certainly living in the dark ages; well, at least the early 1980s (and I am assuming most people would like to remember the '80s as the dark ages). Over the last six months or so, we have looked at a wide variety of products covering what could be considered the periphery of the IT data centre: software and hardware for storage, security, and networking infrastructure. While they all have a place in an organisation, nothing beats the topic of this review, the heartbeat of the corporate data centre, the beast to end all beasts: high-end servers.
Sure, we have done Web servers and blade servers this year, and while those devices have a vital role to play in the company, nothing is as mission-critical as the unit that processes the transactions, whether from an inventory database or, more importantly, the accounting department's employee payroll. These servers are key to the continuity of most large-scale businesses. This is succinctly summarised in this month's feature on disaster recovery solutions, which compares the reliance financial institutions place on data with that of the retail sector. That reliance is evident in the recent spate of data centre replication projects taking place around the globe, driven largely this time by worldly unrest. I apologise for raising the subject of worldly unrest again, but it must rate somewhere on Gartner's Hype Cycle; I am certainly hoping it will shortly disappear into the trough of disillusionment.
High-end servers are also coming into their own now more than ever, with the advent of decent storage area network (SAN) solutions incorporating multigigabit data transfer via Fibre Channel. Servers can offload much of the storage overhead previously associated with drive arrays and get down to their primary role: processing data.
Only the first half
This review will be published in two parts. This first section is dedicated to the high-end machines that don't fit into the traditional midrange server market dominated by Microsoft with its server family of operating systems and Intel with its Xeon processors.
While Intel is still represented in this category with a quad-processor Itanium 2 system, and both the AMD Opteron and Intel Itanium 2 machines run Microsoft server operating systems, the server we received from Sun runs Solaris on UltraSPARC processors, and the Apple runs OS X (Panther) on G5 processors.
The second part of this review will be dedicated to the Intel Xeon-based machines running Microsoft server software and applications.
Because we received systems with such a wide range of processors, it made sense to try to develop a methodology and run a basic test that could go some way towards showing the relative performance of each unit. This task is not as easy as it seems, but with considerable support from the vendors we were eventually able to get there and produce the results here.
AMD Opteron 846
The layout of the mainboard is very interesting, with large CPU heatsink shrouds covering two processors each and a fan attached to each end. Directly behind each processor are four memory module sockets. The server we had on test was fitted with a total of 4GB of Hynix RAM (two 512MB modules per CPU). The AGP slot and several of the other required legacy peripherals are mounted on a daughterboard that attaches to the front side of the motherboard.
Two of the five PCI-X slots support hot swapping of the cards, and there is plenty of length available in the expansion bay for those extra-long cards.
The rear of the chassis houses a redundant hot-swap power supply (two modules) mounted beneath the mainboard, and there appears to be space for a third power supply module as well, although this was not fitted to the test server. To the right of the chassis are three RJ45 network ports: two are 10/100/1000 and one is just 10/100.
The system we received came installed with four AMD Opteron 846 chips running at 2GHz, providing more than enough power to run Solitaire. In fact, when running the WebBench test, the CPU load monitor barely registered any activity at all until the very high end of the testing, which is to be expected when you have this much grunt behind you.
AMD supplied this system with the standard Microsoft Windows 2000 Server. The Opteron processors are apparently able to run both 32-bit and 64-bit applications, so when the 64-bit version of Microsoft's server operating system is released, the AMD should be able to accommodate it with no problems.
In the corporate world, these Opteron processors, particularly the four- and eight-way units, have been closely watched. As with all things new and different, time will tell how they fare; however, AMD Opteron-based machines are certainly worthy of consideration for your next server rollout, and may be particularly useful if you are planning a move to 64-bit in the future but need your platforms to stay with your 32-bit apps in the meantime.
|Product||AMD Opteron (test system)|
|Price||From $13,000-$45,000; less than $45,000 as tested|
|Phone||02 8877 7222|
|Interoperability||Excellent interoperability with the ability to support both 32-bit and 64-bit apps.|
|Futureproofing||Performance is excellent, upgrade path is relatively straightforward, redundancy is included.|
|Value||A fair few bucks for the bang!|
|Service||A range of service is provided by systems integrators and value-added resellers; individual service contracts would differ from system to system.|
Apple Power Mac G5
This dual G5-powered machine proved it had what it takes to mix it with the bigger (read: more expensive) multiprocessor systems in this review. Apple supplied the system with its new operating system, Mac OS X Panther. It seems to us that Apple has struck a real winner here: a well-designed unit that does not sacrifice performance for sexiness.
Some of the more impressive attributes are the four "climate" zones incorporated into the chassis. When the side panel is removed, the engineer is faced with a large removable clear perspex panel covering two of the zones: the CPU zone and the card expansion zone. Above these is the drive zone for the optical and hard disk drives, while below the clear panel is the power supply zone. Each zone has its own fan(s), electronically controlled according to the temperature generated in that particular zone. All this makes for a very quiet system; in fact, during our testing the two large CPU zone fans were barely ticking over.
The CPUs, as already mentioned, are G5s manufactured by IBM, and the operating system is the latest from Apple, named Panther. We found navigation and control of the system very easy, with all controls placed in logical and easy-to-find areas. Unfortunately, due to the current popularity of this model there are very few to go around, so we did not have as long with it as we would have liked.
One change that's worth mentioning here is the graphical user interface (GUI) for the built-in Apache Web server. Instead of being relegated to the usual command line interface (CLI) editing of the Apache configuration file to add or remove some of the more popular modules, Apple now allows you to select which Apache modules to activate or deactivate, and then restart the services, all from the GUI. For the techno purists out there, another one bites the dust; but for those of you who just want to get things done simply and easily with a minimum of fuss (which, at the end of the day, is the majority of users), this is the icing on the cake.
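For comparison, the traditional CLI route means toggling LoadModule directives in Apache's httpd.conf by hand and then restarting the service. The fragment below is a sketch only; the module name and library path are illustrative assumptions and will differ between installations:

```apache
# httpd.conf: uncomment (or comment out) the relevant LoadModule line
# to enable or disable a module -- path shown here is illustrative
LoadModule rewrite_module libexec/httpd/mod_rewrite.so
```

followed by a restart such as `apachectl graceful` to pick up the change.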
There has also been an interesting development in the US called the Terascale Cluster, where engineers at Virginia Tech clustered 1100 of these G5 systems together to form a supercomputer. As you can see from the images on the site, they managed to install nine servers in each rack, which is effectively 18 CPUs per rack.
The Apple G5 in this incarnation is not technically a server, lacking rack-mount capabilities, redundant power, RAID arrays, and so on. But it certainly shows that the G5 will mix up the field when it is incorporated into a server.
|Product||Apple Power Mac G5|
|Price||From $3599-$5599; as tested $5599|
|Interoperability||As with all Apple computers, this unit is still very Mac-centric, but it is getting better.|
|Futureproofing||Performance is good, especially considering this is designed as a desktop PC. There is some room for upgrades, but no redundancy.|
|Value||Excellent price considering performance and features.|
|Service||Standard 90-day warranty is poor. We would expect at least 12 months at this price.|
Ipex Centra 9200RM Quad Processor Itanium2
On a normal day this system would be the one to get excited over: with four Itanium 2 processors it certainly stands out from the average crowd. Unfortunately for Ipex and Intel, this is not a normal day, and the crowd in this review is far from average. Still, the Itanium 2 machine stands out with an amazing 4U rack-mount chassis design that is probably one of the best-engineered chassis ever to come across the test lab threshold.
The front panel of the unit is easily removed to reveal the CPU module, which is roughly the height of a 3U chassis. It opens like a suitcase to expose two of the processor heatsinks and two of the power pod heatsinks. To the right of these are two banks of four memory slots.
Initially we were surprised, upon opening this, to discover what appeared to be four CPUs but with two different types of heatsinks. Reading the accompanying documentation made it apparent that one pair were the CPUs and the other pair were the power pods for those CPUs. This led us to wonder where the other CPUs could be hiding; then we realised that the module is double-sided, and sure enough, when we flipped it over and opened it up, the other two processors and their related power pods and memory appeared.
The rear of the unit houses two hot-swappable power supply modules. Next to these is the removable expansion module, which allows engineers to access the bays without removing the chassis from the rack. There are a total of nine card expansion bays in this chassis; the ports also sit in the removable module alongside them.
The Itanium 2 CPUs were great performers. The system was supplied running Microsoft Windows Server 2003 Enterprise Edition (64-bit). The only hassle we had was incompatibility with some of our 32-bit applications, but for the dedicated tasks for which a company would purchase such a specialised piece of equipment, this should pose no problem, provided the buyer does its homework.
Overall the system performance and chassis design were outstanding. The cost is a little high in our opinion, but you certainly get a machine that rocks.
|Product||Ipex Centra 9200RM Quad Processor Itanium2|
|Price||From $23,000-$98,000; as tested $56,000|
|Phone||03 9242 5000|
|Interoperability||You are locked into specific 64-bit applications; the lack of PS/2 ports is also a minor grumble.|
|Futureproofing||Performance is excellent, upgrade path is relatively straightforward, redundancy is included.|
|Value||An awful lot of bucks for the bang!|
|Service||A range of service contracts is available; they would differ from system to system.|
Sun SunFire V240
As is to be expected from a highly proprietary manufacturer, the SunFire V240 is a physically very impressive machine. Its 2U rack-mount chassis has a muted purple flip-down front panel and a charcoal lid. Beneath the flip-down panel are four removable hard disk drive (HDD) bays, along with a small key-operated management switch, a card reader (for the system configuration card), and a power button. The front panel does not lock, nor do any of the drive bays, so this equipment would need to be stored in a secured rack. The rear of the unit has redundant hot-swappable power supplies. For all the Sun novices reading this, note the absence of keyboard, mouse, and monitor ports: this is a server, and technically all good servers should be remotely accessible for all management and administration tasks via telnet or SSH sessions over IP. And if you do need direct access to the machine, nothing stops the system administrator from connecting via the console (serial) port on the rear and accessing the unit directly.
Internally (yes, we couldn't wait to get our heads under the hood either) the server has plenty of space, once the large air baffle has been removed, that is. The CPUs are slightly offset from each other, each with its own bank of four memory slots next door. The construction is excellent: all cables are neatly routed and guided to their respective ports, and there is a neatly engineered sliding rail support system for overlong expansion cards.
The Sun V240 has only two processors and is not quad-capable. The processors are 1GHz UltraSPARC IIIi chips, and these are heavyweight CPUs: while physically not much larger than your average CPU, they weigh a whopping 57g each (without the external heatsink) and have 959 pins apiece. The system we were supplied had eight 512MB memory modules, for a total of 4GB of RAM.
The operating system is Sun Solaris 9. Configuration is relatively easy: the initial connection is made via a terminal console such as HyperTerminal and a serial cable. Once the IP addresses have been configured, then, as with any terminal-based system, you can telnet or SSH to the system and complete the configuration from there.
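As a rough sketch of that initial setup, the commands below bring a network interface up on Solaris 9 from the serial console; the interface name (bge0), hostname, and addresses are assumptions that will vary with the hardware and site:

```shell
# Plumb and configure the first gigabit interface (interface name assumed)
ifconfig bge0 plumb
ifconfig bge0 192.168.1.50 netmask 255.255.255.0 up

# Persist the configuration across reboots (hostname/address illustrative)
echo "v240-test" > /etc/hostname.bge0
echo "192.168.1.50 v240-test" >> /etc/inet/hosts
```

From that point the box is reachable over telnet or SSH, and the rest of the configuration can be completed remotely.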
Overall this is a very neat package. No real complaints from us. The only thing for potential technicians looking to deploy Sun equipment is ensure you are versed in the Solaris OS and the technical details of how to set up and configure these machines. However any technically competent engineer with some level of IT networking experience should not have too much trouble deploying these machines.
|Product||Sun SunFire V240|
|Price||From $6280-$19,574; as tested $15,454|
|Phone||03 9869 6200|
|Interoperability||Relatively hard to comment on interoperability; the Solaris OS is robust.|
|Futureproofing||Performance is very good. Upgrade path is relatively straightforward, redundancy is included.|
|Value||Fair price for a decent server with very good performance.|
|Service||Three years' warranty is great, but the gold and platinum extra coverage in particular gets very expensive.|
Interoperability: How well will the server work with your existing applications and other systems?
Futureproofing: Performance and upgrade capabilities, and redundant components to improve uptime.
Value: Just the age-old price, performance, and features.
Service: Decent warranties and prompt service availability at a reasonable cost.
How we tested
First, some more insight into the variety of machines supplied and the difficulties associated with doing a comparative performance test. We received units from Apple (a dual G5 running OS X Panther), AMD (a quad Opteron running Windows 2000 Server), Ipex (a quad Itanium 2 running Windows Server 2003 Enterprise Edition 64-bit), and Sun (a dual UltraSPARC IIIi running Solaris 9).
It should also be noted that NEC was more than happy to submit a server for us to test, but the company had sold out of its quad-capable servers at the time and they were on backorder. While NEC offered to put one together from a spare parts kit, we made a mutual decision not to go to that extent, so full marks go to NEC for its assistance in the face of deadlines.
Now for the conflicts with the testing. For many years, CPU manufacturers such as Cyrix, AMD, and IBM have decried the clock speed (MHz and GHz) ratings of CPUs as mere numbers bearing no relation to the true outright performance of the system, or indeed of the CPU itself. We remember the outcry when AMD started shipping CPUs labelled 1100+ and 1300+ as an indication of their performance, even though the actual clock speed was considerably lower.
Until relatively recently, Intel had been pushing clock speed as the be-all and end-all. I say recently because Intel is now going the other way with its mobile CPUs: the latest 1.3GHz version we tested beat the previous 1.5GHz model through the use of a larger internal cache. While it is true that a system's performance is only as fast as the weakest link in the chain (bus speed, video speed, I/O rate, memory, or CPU, for instance), with servers there are many other factors to take into consideration. So for this test, given the disparate array of processors and operating systems, we decided to set up a test rig to hammer the servers using one application that they should all be more than capable of running and that the client systems could hit in a similarly uniform fashion. That, folks, is a Web server application.
Yes, we have just finished explaining that these servers are far beyond the relatively menial task of serving out Web pages; however, for the purposes of this test, the very cross-platform nature of a Web server meant we could easily upload the test hit pages to each server regardless of CPU, OS, hardware configuration, and other variables.
Naturally, our test rig did not entail simply downloading a page via a 56Kbps modem. We borrowed a 12-port gigabit Ethernet switch from Dell, which allowed us to connect 10 client systems, each running a 100Mbps card. With the controller system and the test server on the switch at the same time, we could run WebBench and achieve almost 100 percent utilisation of the test server's gigabit Ethernet network link.
Each of the 10 client systems could simulate up to 20 virtual clients hitting the server simultaneously with thousands of transactions, so we effectively had a network maximum of 200 clients connected to the gigabit Ethernet NIC on the server, capable of hitting it with hundreds of thousands of Web transactions per second.
We then took the results data collected by WebBench's controller system and translated it against the clock rate of the CPUs. Similar to a kilogram-per-kilowatt (kg/kW) rating for cars, we obtained a requests-per-second-per-megahertz (RPS/MHz) figure. The kg/kW rating for a car is a virtual equaliser. One car may have a five-litre V8 engine producing 300kW while another has a three-litre six-cylinder producing 130kW; most rev heads would say give me the 300kW car any day, as it is obviously far more powerful than the 130kW car. However, if the V8 weighs nearly 2300kg and the six-cylinder vehicle only around 800kg, then the 130kW vehicle has a weight-to-power ratio of 6.15kg per kW while the V8 translates to a hefty 7.6kg per kW. Compared this way, the on-paper performance is more realistic because it takes weight into consideration. Sure, there are several hundred variables and arguments that could be raised with that scenario, and we are a technology magazine, not a motoring magazine; we are purely using it as an example of where we are going with this server comparison, given the major hardware and software differences between platforms that these vendors propose can all perform the same or similar high-end server tasks.
(RPS divided by total rated MHz across all processors = RPS/MHz ratio)
|System||Requests per second||Clock speed||Number of processors||RPS/MHz|
As you can see from the results, the AMD swept the field with over 8200 requests per second, with the Ipex following it up; however, when normalised per MHz, the Ipex comes out on top with 1.58 RPS/MHz as opposed to the AMD's 1.02 RPS/MHz. It will be very interesting to see the results of the dual and quad Xeons in part two of this review next month.
These RPS/MHz results should be used only as a guideline to the systems' overall performance, not to the CPUs themselves, as there is more than just the CPU driving system performance. Components such as I/O, memory speed and capacity, disk speed and availability, network performance, the OS, and drivers all make a difference, not to mention individual system tweaking and the like.
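The normalisation itself is trivial arithmetic, sketched below in Python. The Opteron figures are taken from the article (over 8200 requests per second, four CPUs at 2GHz); treat them as approximate:

```python
def rps_per_mhz(requests_per_second: float, clock_mhz: float, n_cpus: int) -> float:
    """Normalise Web server throughput by the total rated clock
    across all processors in the system."""
    return requests_per_second / (clock_mhz * n_cpus)

# Quad 2GHz Opteron, ~8200 RPS from the article's WebBench run
ratio = rps_per_mhz(8200, 2000, 4)
print(f"{ratio:.2f} RPS/MHz")  # roughly the 1.02 quoted in the results
```

Like the power-to-weight figure for cars, the ratio only equalises one variable (rated clock); it says nothing about I/O, memory, or OS differences.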
Company: CVP Fabrics. This business wants to install some mid-level servers for database and e-mail use.
Approximate budget: Open.
Requires: Six servers capable of supporting up to four processors, with two processors installed. 1GB of memory, at least 150GB of disk storage. At least one gigabit Ethernet (copper) port.
Concerns: The company believes at this stage dual-processor servers will support its needs, but with the expected growth over the next few years, wants the option to install additional processors and memory. Redundancy and other features for improving uptime will be very highly regarded.
Best solution: Unfortunately, as the Apple and the Sun are only dual-processor systems, they are not in contention for this scenario, so it comes down to the AMD and the Ipex. At this stage the AMD has the upper hand, as it supports both 32-bit and 64-bit OSes and applications, and it also has a significant price advantage over the Ipex system.
Editor's Choice: None.
It would be unfair to try to judge one of these systems above another; there are benefits in each that would attract different companies for different applications. The ultimate power network would comprise Ipex servers running the back-end databases, with the AMDs following up supporting the legacy apps and later migrating across to the 64-bit apps, G5s on the desktops, and the Suns handling the security and Web infrastructure. Throw in some decent network switching, like the Dell gigabit switches, and there is a dream network that hopefully isn't going to go out of date in a hurry.
The Test Lab is currently working with the manufacturers to develop further tests and scripts that will easily put these types of disparate servers through their paces using database transactions and other variable loads, so future tests will use more of the real-world applications one would expect to run on these machines while still maintaining uniformity across platform hardware and software. Naturally, developing these tests takes more time and money than we had in just a few weeks on our reviewing budget, but by the time the next server review rolls around for the magazine we hope to have added them to our arsenal of test routines.
Published in Australian Technology & Business magazine. Testing by RMIT IT Test Labs.