Server component upgrade demand will remain niche

Summary: "Hyperscale" companies such as Facebook, Google, and Microsoft may ask for disaggregated servers to gain economies of scale, but the wider enterprise market will still go for integrated servers when a refresh is due.

Demand is growing from companies that want servers allowing them to replace individual components without upgrading the whole machine, but these buyers remain a minority in an enterprise market that still looks for integrated products offering optimized performance and manageability.

According to Rajnish Arora, associate vice president of domain research for enterprise computing at IDC Asia-Pacific, modular servers that can be upgraded component by component (CPUs, memory, hard disks, or network cards) have existed in various forms since the mainframe era.

The use of such servers is usually limited to Web 2.0 service providers such as Facebook, Amazon, Microsoft, and Yahoo, which deploy the hardware to build their "hyperscale" datacenters, Arora added.

Errol Rasit, research director at Gartner, defined "hyperscale" companies as those with more than 100,000 servers in their data centers. With such scale in terms of the number of servers deployed, and with most or all of them running the same functions, it makes more economic sense to simply change out the CPU or memory or hard disk according to the company's needs instead of constantly refreshing the hardware completely, he said.

Swapping out components may make sense for "hyperscale" companies, but most enterprises will still prefer upgrading to integrated servers.

The call for disaggregated servers was made again by Facebook recently, according to a report by EE Times. Frank Frankovsky, chairman of the Open Compute Project Foundation and vice president of hardware design and supply chain at Facebook, said these servers need to accommodate upgraded CPUs, which arrive every year, without having to swap out memory, networking and I/O chips, which may only need replacing every five years.

He noted that while some companies are planning 64-bit ARM server system-on-chips (SoCs) designed to save power for large data centers, these devices still come with existing chips and roadmaps that are focused on highly-integrated parts that he does not want in the SoCs.

"I can’t say anyone has come saying we will build it exactly the way you are asking for it. When they hear this disaggregation message, they can start changing their direction," Frankovsky said.

Wider enterprise market not enticed
However, Arora said that for most enterprises, the cost of upgrading individual server sub-systems invariably works out much higher than the cost of buying a complete new system with far higher performance and specifications.

"Even though customers for years have had the option of upgrading x86 server systems, IDC's research shows that very few systems get upgraded after 18 to 24 months of the initial purchase," he noted.

For example, 4/8-socket servers are typically bought half populated with the intent of adding more compute capacity as workloads scale. But users generally end up buying new 4/8-socket servers, which offer much superior all-round performance, rather than buying additional CPUs, memory and hard disks to expand existing systems, the analyst said.

He added that with piecemeal upgrades, the additional compute capacity may not necessarily complement the other sub-system components to deliver well-balanced, optimized system performance.
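The analyst's point can be illustrated with a back-of-the-envelope calculation. All prices and performance figures below are hypothetical assumptions for illustration only, not IDC data or vendor pricing:

```python
# Hypothetical comparison: piecemeal component upgrades vs. a full server
# refresh. Every figure here is an illustrative assumption.

def upgrade_cost(components):
    """Sum the assumed prices of individually upgraded sub-systems."""
    return sum(components.values())

# Assumed prices for populating the empty sockets/slots of an existing
# half-populated 4-socket server.
piecemeal = {
    "extra_cpus": 4000,
    "extra_memory": 1500,
    "extra_disks": 800,
}

new_server_price = 7000   # assumed price of a newer complete system
cost_piecemeal = upgrade_cost(piecemeal)

# Assume the newer system delivers ~40% more performance than the
# upgraded old one, since every sub-system advances together.
perf_piecemeal, perf_new = 1.0, 1.4
value_piecemeal = perf_piecemeal / cost_piecemeal   # performance per dollar
value_new = perf_new / new_server_price

print(f"Piecemeal: ${cost_piecemeal}, perf/$ = {value_piecemeal:.2e}")
print(f"Refresh:   ${new_server_price}, perf/$ = {value_new:.2e}")
```

Under these assumed numbers the full refresh wins on performance per dollar even at a higher sticker price, which is consistent with IDC's observation that few x86 systems get upgraded 18 to 24 months after purchase.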

Rasit agreed, saying the immediate user niche for disaggregated servers will remain large Web 2.0 companies such as Google and Facebook, high-performance computing institutions and research centers, online content providers, and online games operators.

"This niche remains very much on the periphery of the whole enterprise IT industry," he stated.

Vendors differ in approach
Dell has already taken steps to disaggregate its server products: customers can upgrade networking and storage independently of a server upgrade.

Phil Davis, vice president of enterprise solutions group commercial business at Dell Asia-Pacific & Japan region, told ZDNet that for companies with applications that are performance-indexed toward a specific sub-system such as CPU or storage, component upgrades can be a "very attractive option". This is because it provides a path to upgrade the specific sub-system as close to the tech transition as possible, he said.

He said the companies benefiting most from such upgrade models are those with hyperscale data centers, as they can get a better return on investment (ROI) than with a wholesale refresh.

"The goal of any solution Dell chooses to productize, ultimately, is to solve a compelling customer problem. [As such,] Dell's architecture teams are and have been investigating disaggregation options beyond those we are shipping today," Davis said.

IBM, however, believes companies that build, or commission original design manufacturers (ODMs) to produce, low-cost, stripped-down component-level systems will need to support the component makers' upgrade roadmaps. Not every component can be swapped out and replaced as the roadmap transitions, said Cheah Saw Pheng, country manager for systems technology group at IBM Singapore.

"While there is growth in this segment, it is contained to only a few key global Web 2.0 and social companies. Their demand has accelerated over the last few years, but is mainly concentrated in the countries or regions where their data centers are located," added Cheah.

These are some of the reasons Big Blue does not focus on custom-built systems of this component-level kind. It focuses instead on providing integrated systems that help customers manage their environments at reduced management cost, given that component cost is just one factor in the overall total cost of ownership (TCO), the executive said.

Hewlett-Packard (HP) stressed that when server CPUs are upgraded, other components such as memory, networking and I/O chips must be able to keep up with and complement the faster, higher-performing processors.

"Otherwise, these components can become bottlenecks and affect the overall computing performance of the servers," said Sreenivas Narayanan, general manager and director of industry standard servers at HP Enterprise Group in Southeast Asia, adding that the vast majority of its customers are comfortable with the company's approach.

On helping companies pack more compute power into a smaller footprint, Narayanan said HP's Project Moonshot, a multi-year, multi-phased approach to helping customers with emerging Web, cloud and massive-scale environments, bore its first fruits last June. The first system, codenamed "Gemini", has enclosures that can support thousands of servers per rack, sharing management, networking, storage, power and cooling fan components, he added.

Topics: Servers, Data Centers, Hardware, IT Priorities

About

A Singapore-based freelance IT writer, Kevin made the move from custom publishing focusing on travel and lifestyle to the ever-changing, jargon-filled world of IT and biz tech reporting, and considered this somewhat a leap of faith. Since then, he has covered a myriad of beats including security, mobile communications, and cloud computing...
