
AMD's Barcelona pitch: Contradictions that reflect today's gearmaker conundrum?

Written by David Berlind

Meet Randy Allen (pictured left), the corporate vice president of AMD's Server and Workstation division. Using the podcast player above, you can hear my interview with him today regarding AMD's official launch of Barcelona: the first true quad-core x86 chip to hit the market (Intel has a four-core product on the market, but it's really a 2x2 -- two cores on each of two distinctly separate pieces of silicon vs. Barcelona's single piece). Barcelona has been the chip's codename. Today, the excitement regarding the secrets and mystery that codenames often harbor gives way to the simple reference "Quad-Core AMD Opteron."

Based on my interview with him, Allen has a job that I do not envy. To the extent that Barcelona (it's more fun to call it that) turns out to be really good at what AMD says it's good for, it might actually be bad for AMD and server makers. Like a politician, Allen must find a way to discuss the advantages of AMD's innovation while putting a positive spin on the potential disadvantages that may result from its deployment. I guess it's a good problem to have, but a bad one to have to manage.

To the extent that Barcelona represents the industry's first quad-core x86 chip, it's hard to disagree with AMD's own press release that refers to it as the "world's most advanced x86 processor." Beauty, of course, is in the eye of the beholder and Intel fan-boys are welcome to disagree with AMD's "most advanced" assessment. But the fact remains that it is the world's first truly four-core processor to support the x86 instruction set, and it relies on a series of AMD innovations, such as its Direct Connect Architecture (a direct, on-chip pipeline between processor and memory), that have kept AMD competitive over the years.

With Barcelona, though, comes a four-point message from AMD that articulates its advantages, but also a contradiction that captures the gearmaker's conundrum as we head into this century's second decade. According to Allen, Barcelona represents the ideal solution for those looking for unmatched multi-threaded server performance, an ideal virtualization target, optimal power efficiency (the amount of electricity it takes to service a given workload), and investment protection.

At 30,000 feet, the four-point message says "do more with less" and therein lies the problem. For what seems like forever, the introduction of new processors has always been accompanied by the message of unmatched performance. Historically, even though faster servers alone meant that users could theoretically do more work with fewer discrete machines, there seemed to be little abatement in the market's appetite for new servers. Responding to one of my questions on the topic, Allen said:

The thing I like to point out is that if you look at the industry, what you've seen for decades literally is that every year, there's much more performance delivered to the market, but there seems to be an insatiable appetite for computing capacity out there. And what happens of course is these kinds of innovations are required just in order to enable the industry to keep pace with that growing demand for computation.

Fair enough. Based on statistical market data, it's hard to disagree. But keep in mind that that demand was largely built on the performance message. Today's message is very different. It's not just about performance.

First, let's put AMD's four messages in terms of end-user goals. Starting with performance, the goals are generally to get the same work done faster, or more work done in the same amount of time that it originally took to get less work done. Get my computations done sooner. Then there's virtualization. "Virtualization" is not a goal. Talk to any virtualization software vendor (whose own goals are not necessarily aligned with those of the hardware manufacturers) and they'll tell you that the goals associated with virtualization -- two goals that are inextricably linked -- are about utilization and consolidation.

In the old days, when energy and space were not the scarce resources that they are today, if you had a problem that required a server, you just threw a new server at it. Or, if the problem required clusters of servers, you threw a whole bunch of new servers at it. If you were in the position of buying servers, you rarely if ever risked two or more mission-critical applications on the same server. Given the bursty nature of certain applications, it was hard to predict whether putting a second application on a server where the first was doing fine might, at times, starve that first application of resources. It was far simpler and cheaper to just buy a new server. The resulting mess is the one we're confronting today: millions of servers, most of them running well below their capacity (a condition known as under-utilization) and, as a result, idling away, wasting enormous amounts of energy (energy to run them, energy to cool them) and space.

When I think about the so-called "insatiable appetite" that Allen spoke of, that appetite was largely driven by this sort of brute force approach to matching servers with applications. Today, with all this talk about virtualization, things are very different. One of the primary goals of virtualization is to reduce idle time, thereby getting the most out of every box. Without getting too far into the details, virtualization represents a far more elegant way of loading one box with multiple applications than the way we might have done it a few years back. Yes, virtualization has been around for a while. But what's different today is that it's much easier to spot and characterize servers that are overloaded, and provided virtualization technologies are in use, move their workloads to other servers that can handle the load. Another thing that's different is that the hardware -- especially newer hardware like Barcelona -- is built with virtualization in mind. In AMD's case, the chip contains special instructions (instructions that are different from Intel's version of the same thing) to make virtualization technologies like VMware and Xen work better.
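For what it's worth, that hardware support is easy to verify. On a Linux box, AMD-V shows up in /proc/cpuinfo as the "svm" flag, and Intel's equivalent, VT-x, as "vmx." Here's a minimal Python sketch of such a check; it assumes a Linux host and only reports what the CPU advertises, not whether the extensions have been switched on in the system's firmware.

```python
# Minimal check for the hardware virtualization extensions mentioned above.
# On Linux, AMD-V is advertised as the "svm" CPU flag and Intel VT-x as "vmx".
# This only reports what the CPU advertises; firmware could still disable them.
def virtualization_support(cpuinfo_path="/proc/cpuinfo"):
    flags = set()
    with open(cpuinfo_path) as f:
        for line in f:
            if line.startswith("flags"):
                flags.update(line.split(":", 1)[1].split())
    return {"AMD-V (svm)": "svm" in flags, "Intel VT-x (vmx)": "vmx" in flags}

if __name__ == "__main__":
    print(virtualization_support())
```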

With these advances in the area of virtualization, theoretically, the net result is that instead of having 100 servers running at 50 percent utilization (needlessly wasting all sorts of power and space), you could have 50 servers running at 100 percent. And, instead of buying a new server every time a new server-based application comes online, you would load it into a virtual machine, model that virtual machine's resource requirements and then find a home for it in one of your underutilized servers. Buying your 51st server wouldn't theoretically happen until you were making the most out of the 50 you have in place.
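To make that arithmetic concrete, here's a quick back-of-the-envelope sketch in Python. The utilization figures are hypothetical and purely for illustration.

```python
import math

# Consolidation arithmetic from the example above: total load is unchanged,
# but packing it onto fully utilized boxes shrinks the fleet.
# The utilization figures below are hypothetical.
def servers_needed(utilizations, target_utilization=1.0):
    """How many servers carry the same aggregate load at the target utilization."""
    total_load = sum(utilizations)  # each entry is a fraction of one server's capacity
    return math.ceil(total_load / target_utilization)

before = [0.5] * 100                 # 100 servers running at 50 percent
print(servers_needed(before))        # -> 50 fully utilized servers
print(servers_needed(before, 0.8))   # -> 63 if you leave 20 percent headroom
```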

There's a name for this practice. It's called consolidation and, maybe I'm mistaken, but the last time I checked, consolidation meant "fewer" not "more." Thanks to the latest virtualization technologies, you should be able to decommission some number of servers. If you can't, then something is seriously amiss. But wait. It gets better (or worse, depending on whether you're a gearmaker or not). Let's say, by virtue of the latest virtualization technologies, you had some existing number of servers in your datacenter that you were efficiently leveraging to their fullest potential. Now let's say you can buy a Barcelona-based server that, because of its faster speed (and because of how easy it is to shuffle workloads once you're using virtualization technologies), can do the work of two of your existing servers. Oops, you did it again. You figured out a way to do more with less.
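Extending the same hypothetical sketch: if each new box does the work of two of its predecessors, the already-consolidated fleet shrinks again. The 2x figure is the illustrative one used above, not a benchmark result.

```python
import math

# Follow-on to the consolidation sketch: each replacement server handles the
# work of `work_per_new_box` existing servers (illustrative figure, not a benchmark).
def fleet_after_upgrade(server_count, work_per_new_box=2.0):
    return math.ceil(server_count / work_per_new_box)

print(fleet_after_upgrade(50))   # the 50 consolidated servers become 25
```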

Virtualization technologies and Moore's Law have been pairing up for years to make doing more with less possible. What was lacking, however (in addition to immature virtualization technologies), was impetus. Enter the cost of electricity and space: two cost centers that are no longer trivial blips on the bottom line. It's not that businesses haven't been looking for ways to shave their power costs for years. They have. But for many IT folks, power never entered the decision-making equation. Now that power is hitting the bottom line in a more visible way, those same IT folks have a fiduciary responsibility to their companies to reduce the energy footprint of their servers. The fewer servers they run, the less energy is required to run them directly (as well as indirectly through HVAC costs, which are impacted not just by the sheer number of systems that need cooling, but by the volume of space and air that needs to be cooled).

AMD's Barcelona launch is very much about that power efficiency as well. Remarked Allen:

You can buy a quad core that takes no more power and cooling than a single core processor required in 2003. Performance levels escalating at a very rapid rate. But the power budget is being held constant. And that's why you get this significant benefit in terms of performance per watt.

If that's not about consolidation (a.k.a. doing more with less), I don't know what is.
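For illustration only, here's the performance-per-watt arithmetic behind Allen's point as a tiny Python sketch. The wattage and performance figures are made up, not AMD's numbers; the point is simply that holding the power budget constant while performance grows improves performance per watt by the same factor.

```python
# Toy illustration of performance per watt at a constant power budget.
# All figures are hypothetical, not AMD benchmarks.
def perf_per_watt(performance, watts):
    return performance / watts

baseline = perf_per_watt(performance=1.0, watts=95)   # normalized 2003 single-core
quad     = perf_per_watt(performance=4.0, watts=95)   # quad-core in the same power envelope
print(f"performance-per-watt improvement: {quad / baseline:.1f}x")   # -> 4.0x
```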

Finally, to add insult to injury (if you're a gearmaker), there's the investment protection piece. I asked Allen what was meant by this:

When we talk about investment protection, there are certain things we can control... we cannot control exactly the rate at which the rest of the industry will embrace the standards that we develop and put out there. But if you look at, for example, the fact that the same platforms that worked for our single core Opteron when it was introduced in 2003, our dual core Opteron dropped right into those platforms. And likewise, the dual core platforms that we introduced last year, today's product, this quad-core product goes into exactly those same platforms. Same cooling, same power delivery, same chipsets, same motherboards. And that's a value proposition that we can deliver that's completely under our control that benefits both end users as well as OEMs.

To me, investment protection is about getting off the upgrade treadmill. It's about how I make the most use of what I've got today, tomorrow. If today, one or more of my recently consolidated 50 servers has a dual-core Opteron that's socket-compatible with Barcelona (as Allen implies), I may not even have to buy that 51st server to accommodate that next application. Instead, I just yank out the dual-core Opteron and throw a Barcelona in its place. Instead of creating an under-utilization scenario (to make room for the new applications) through the addition of more servers, I create it through the exchange of server chips. Allen says this benefits end users, and it's hard to disagree. Imagine if it were that easy to swap out your car's powerplant. One thing is for sure -- the car manufacturers would have a new problem on their hands. Which is why I'm not so sure about the benefits to OEMs (the gearmakers).

No matter how you slice it, as more chips like AMD's Barcelona arrive on the market -- chips that offer users the opportunity to do more with less (in a variety of ways) -- a great many benefits will accrue to the users who learn to take full advantage of them. But I also think we've reached a new tipping point where that so-called insatiable appetite will get addressed through means other than buying more gear. This is a problem not just for gearmakers like IBM, HP, Dell, Sun, and others, but also for the companies that supply them, like AMD and Intel.
