Fujitsu: Cheap servers can't fit all cloud roles

Summary: Fujitsu chief technology officer Joseph Reger explains why we still need high-end hardware despite schemes such as Facebook's Open Compute Project pushing commodity servers in cloud datacentres.


As more companies choose to deploy software-as-a-service, their need for expensive hardware is declining.

Some firms, such as global bank UBS, have looked at moving to low-cost commodity servers to support their new applications — a potentially worrying shift for major enterprise server vendors such as Dell, HP, IBM and Fujitsu.

At the same time major cloud companies, such as Google and Facebook, are understood to be sidestepping these vendors entirely by designing their own hardware and buying it directly from Asian manufacturers such as Quanta.

This trend looks set to gather pace. For example, Facebook's Open Compute Project is aiming to become a clearing house for open-source, commodity hardware designs.

To find out more about the implications of these changes, ZDNet UK talked to Joseph Reger, Fujitsu Technology Solutions' chief technology officer. Reger formulates the enterprise technology specialist's IT strategy and helps ensure its products are designed to fit the latest trends in computing.

He thinks the Open Compute Project's commodity servers may find a place in a few enterprise applications, but that the integrated appliance approach remains the best choice for enterprises with known workloads for specific applications.

Q: What do you make of Facebook's push into low-cost general-purpose servers via the Open Compute Project? How could it affect your business?
A: You can [use an Open Compute Project server], if you have one or two applications and they are very specific in nature. But you've got to have the numbers, so that Quanta, or whoever, accepts the purchase order [for the basic servers].

You can do that and you probably gain in terms of specific performance. Zynga started out in public cloud computing and then moved back into its own zCloud with specific servers. That's OK because it turns out that their gaming engine is a modular system where particular parts have particular needs, and you can select the server for that. Generic cloud servers will never be like that, because you cannot anticipate the needs of every application.

However, there are generic needs that are very clear, and this is where a particular kind of cloud-orientated server can come in: extremely thin but not blade [servers that have] a particular density or memory or interconnect, so they can do certain things well, be physically easy to deploy, or mix generations. We do believe there is a generic but cloud-orientated server for massive, scalable deployments, and we do that as well as the normal servers.

What we don't do is the purpose-built, specialised server for a particular cloud need. However, related to that, there are the appliance-orientated servers, which are not for massive scale but for a particular purpose, such as the HANA [in-memory] appliance or a database appliance. Those we do have, and we will be building [more] appliances for particular specialised purposes.

The rise of cloud means software tends to be available over the internet. So surely appliance-style servers will go out of fashion and most companies will use commodity servers to support and distribute software?
I disagree. The mistake you make is you think — take SAP — that SAP is one thing. It isn't. It's many things together. There are some very specific aspects in it, such as in-memory large databases, but on that platform they will be offering many different things. It's almost like saying Force.com [Salesforce.com's cloud computing platform-as-a-service system] has specific needs. Not true. Salesforce.com has specific needs. Force.com is a platform depending on what you use it for.

The large software vendors have very difficult-to-define specific needs. What they will be doing is looking for commonalities that can be optimised to a very high degree. I believe that will happen but the other thing [buying low-cost commodity servers] will not.

In addition, the software world will not be dominated by the very large companies. Lots of small software shops that cannot have their own cloud service because they don't have the [resources] to build it will be looking to partners to be able to put what they have in the cloud and run it in a platform. That is a very diverse environment.

Surely some of these companies will be tempted to use a platform-as-a-service such as Heroku that sits on Amazon Web Services, which is believed to use commodity servers?
That will only be part of the market. I do believe that the heterogeneity will stay and only the very generic workloads will go there. What the big server vendors will have to concentrate on is having enough of a generic platform, but that doesn't mean every server is the same. The clients are different and the scalable and deployment areas are different again.

The recent launch of the Xeon E5 has highlighted the central role a processor plays in a server. How else can servers be differentiated? What about the interconnects?
In general this is a very important aspect. In massively scaled-out environments the number of components is very large, so you cannot really have an interconnect technology that requires an expensive host bus adapter or card. The components of that network need to be almost commodity, so I don't think there will be very much proprietary [technology] there.

At the same time it's very obvious that if you can build interconnects that are in a totally different performance region, interesting things can happen. So we know exactly what it takes and how to do it and even how to write proprietary protocols for it. That's available as a technology and we are commercialising that in the [PrimeHPC FX10] supercomputer but not outside that.

It could well be that at one point we realise that a specialised appliance for a particular database environment needs a very high-performance interconnect and then we might put it in there at that point in time. That could be a very useful thing for us to do, but that will be a very specialised piece of the market and we can't say how large that will be.

Have you been looking at software-defined networking and do you have any plans in this area?
Let me just say this much: one very attractive area that is now becoming pressing is scale-out massive storage and that has exactly the same question at its heart about the interconnect and scalability, and the nodes can be normal servers.

Do you have plans to offer servers based on ARM processors when 64-bit variants become available?
Yes, that is an option. A plan is a very concrete thing in our case, so it's not a plan but it's an option.

Will you offer systems based on Intel's Knights Corner many-integrated core processor, when it comes out?
I can't confirm any plans to do that, but it could well happen that many-core systems become very attractive, particularly if they are heterogeneous cores, because that works as a kind of inside offloading. There are particular tasks that can use a bit of hardware acceleration of [that] sort.


Topics: Cloud, Servers

Jack Clark
