Power Systems: A better virtualization and big data platform than x86

Summary: Give me your tired, your poor, your huddled x86s yearning to be replaced with something better — RISC-based technology for the heavy lifting you need but didn't think you could afford.

When it comes to squeezing the last bit of performance out of hardware, many see virtualization as the answer, and for the most part it is. Server hardware is notoriously underutilized, and virtualization addresses that with far more efficient hardware utilization ratios. But what happens when you reach your utilization thresholds and performance begins to suffer? You add another x86 system, or several, to balance your workloads and bring performance back to an acceptable level. What if there were a better way to add capacity, a better way to manage hardware partitioning and a better way to guarantee uptime?

There is a better way and it has nothing to do with x86 virtualization. But it's still virtualization, from the people who invented it.

It's called PowerVM virtualization on Power Systems (PDF). And it's brought to you by IBM, the creator of virtualization technology.

Now, you're probably assuming that such technology comes at a price. Well, you're right, it does, but it's much lower than you think. It's also not really about the cost of the hardware and software as much as it is about the cost per workload of the hardware and software. Let that sink in for a minute and continue reading when you've wrapped your head around the concept that what you're really buying for your virtual infrastructure is capacity — capacity to handle workloads. In practical terms, compare virtual machine densities for x86 architecture systems and Power Systems. I think you'll find that your cost per VM is significantly lower with Power Systems, along with a significant boost in performance.
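To make the cost-per-workload idea concrete, here is a minimal back-of-the-envelope sketch in Python. The acquisition costs and VM densities in it are hypothetical placeholders, not figures from IBM or from any study cited here; substitute your own quotes and measured consolidation ratios.

    # Back-of-the-envelope cost-per-VM comparison.
    # All numbers are hypothetical placeholders; plug in your own
    # acquisition costs and measured VM densities per host.

    def cost_per_vm(acquisition_cost, vms_per_host):
        """Return the hardware/software cost carried by each VM on a host."""
        return acquisition_cost / vms_per_host

    # Hypothetical example: a pricier host that consolidates more VMs
    # can still come out cheaper per workload.
    x86_cost = cost_per_vm(acquisition_cost=25_000, vms_per_host=20)
    power_cost = cost_per_vm(acquisition_cost=60_000, vms_per_host=60)

    print(f"x86 host:   ${x86_cost:,.0f} per VM")    # $1,250 per VM
    print(f"Power host: ${power_cost:,.0f} per VM")  # $1,000 per VM

The point is simply that the meaningful comparison is per workload, not per box.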

Typically, we think of virtual machines (VMs) and VM densities per unit of hardware, with that unit of hardware being a virtual machine host system, and that's fine. It's hard not to think of virtualization in those terms. However, you should also consider the following factors when attempting to compare apples (Power Systems) to oranges (x86 technology):

  • Risk

  • Agility or time to market

  • Total cost of ownership

  • Service stability and reliability

  • Staffing needs for management

  • System efficiency

  • Satisfaction

  • Scalability

In fact, a report based on a study performed by Solitaire Interglobal Ltd. spans 61,320 customers and compares various virtualization technologies on the eight business metrics listed above.

My interest in IBM's new generation of Power Systems for big data and virtualization for SMBs came about as a result of a conversation I had with Colin Parris, GM, Power Systems at IBM. Parris' knowledge, excitement over the product line, and my many questions made us both miss a technical presentation that followed our interview. I blame myself.

The interview yielded the following key points about Power Systems:

  • Minimum partition size as small as 1/10 of a processor with granularity of 1/100 of a processor

  • Automatic CPU adjustments based on load

  • Dynamic reconfiguration without rebooting

  • Dedicated and virtual devices in guest operating systems

  • Virtualize network and storage for guest operating systems

  • Active Memory Sharing

  • Live Partition Mobility between systems, which gives users exceptional capability when consolidating homogeneous and heterogeneous workloads

  • Separation of physical processors from logical processors

  • Flexible hardware resource allocation based on the needs of high-priority virtual machines

  • Partitions can range from 10 percent of a CPU core up to 256 cores on the Power 795 (see the sketch after this list).
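As a rough illustration of the partition-sizing numbers in the list above, here is a small Python sketch that models the stated constraints: a 0.10-processor minimum, 0.01-processor granularity, and a 256-core ceiling on the Power 795. It is a toy model for illustration only, not IBM's HMC or PowerVM tooling.

    # Toy model of the micro-partition entitlement rules described above:
    # minimum 1/10 of a processor, adjustable in 1/100 steps,
    # up to 256 cores on a Power 795. Not actual IBM tooling.

    MIN_ENTITLEMENT = 0.10   # smallest partition: 1/10 of a processor
    GRANULARITY = 0.01       # entitlement adjusts in 1/100 steps
    MAX_ENTITLEMENT = 256.0  # Power 795 ceiling cited above

    def validate_entitlement(units):
        """Snap a requested entitlement to a valid value or raise."""
        if units < MIN_ENTITLEMENT or units > MAX_ENTITLEMENT:
            raise ValueError(f"entitlement {units} outside "
                             f"[{MIN_ENTITLEMENT}, {MAX_ENTITLEMENT}]")
        # Snap to the 1/100-processor granularity.
        return round(round(units / GRANULARITY) * GRANULARITY, 2)

    print(validate_entitlement(0.25))   # 0.25  -> a quarter of a core
    print(validate_entitlement(0.333))  # 0.33  -> snapped to granularity
    print(validate_entitlement(128))    # 128.0 -> large partition, still valid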

And since big data and analytics are on the minds of just about everyone in business these days, Power Systems are at the leading edge of those technologies as well. Check out these IBM Powercast videos to hear from actual clients about their experiences with IBM Power Systems. These video interviews discuss how IBM's Power Systems have propelled their small businesses into the "big time" by leveraging technology, specifically analytics, that was only available to large businesses before the release of IBM's new Power Systems for SMBs lineup.

There is sometimes a fear of what's called "vendor lock-in" with solutions such as IBM's Power Systems and PowerVM. Well, fear not, freedom fighters: IBM's Power Systems also run VMware (PDF), Red Hat Enterprise Linux (PDF), SUSE Linux Enterprise Server (PDF), and, of course, IBM's own AIX.

IBM's Power Systems for SMBs make sense for the heavy lifting (big data analytics and virtualization) that you require but were previously unable to afford, or that you thought was somehow bound to x86 architecture. And while you're transitioning, IBM's PureFlex systems allow you to mix x86 architecture systems with Power Systems and manage them from the same application.

If you're a current Power Systems customer, I'd like to hear from you about how the transition from x86 systems to Power Systems has boosted your computing power, for coverage in future posts.

Topics: IBM, Big Data, Virtualization

About

Kenneth 'Ken' Hess is a full-time Windows and Linux system administrator with 20 years of experience with Mac, Linux, UNIX, and Windows systems in large multi-data center environments.

Talkback

15 comments
  • "RISC-based technology for the heavy lifting"

    Hmmmm. Been hearing that for a very long time, and yet it has never come to pass.
    cornpie
    • might be

      Is it possible that technology is simply outside of your budget?
      danbi
      • Not at all...

        It's just that RISC has never been able to realize all of its theoretical advantages in the real world. If it had, Intel would have been out of business a long time ago.

        It's not just servers either. I still remember my Mac-loving friends talking up their G4/G5 etc. systems and how they were "super computers". Remember that? Then all of a sudden Apple is using Intel CPUs and they don't even seem to notice.
        cornpie
        • where is the logic?

          I remember the days of the G4 being classified as a supercomputer because it exceeded the 1 TFLOPS limit for US export control. That led to delays for non-US users and forced Apple to down-clock the CPUs. These computers were indeed faster than what Intel had.

          The reason Apple moved to Intel is IBM. IBM got pissed off because Apple was selling mid-range PowerPC servers running AIX for $5,000, while an absolutely identical IBM machine would sell for at least $25,000. IBM made sure Apple would not get better CPUs anymore, and so they were forced to switch.

          On the other hand, if it were not for AMD inventing the AMD64 architecture, which they licensed to Intel for the promise of "no more lawsuits, ever", Intel would still be stuck with their awful segmented architecture. Intel is good at just manufacturing chips, not designing processors.

          Curiously, Intel got into this business because of IBM, who at the time declared war on Apple (and Motorola), then formed an alliance with both companies only to fail them both. Funny how things work out sometimes.
          danbi
          • Never understood why people assume that x86 needs to go anywhere

            And if it did, why it would be the end for Intel. They are the biggest, most advanced chip maker on the planet. If x86 does get replaced, I think they would move with it rather than shut the doors and say "oh well, we tried"; that would be utterly illogical. It is in their interest to promote x86, obviously, just like IBM and their Power architecture.

            In home computing, competing architectures have a history of being annoying. And now more than ever it's becoming irrelevant in the performance stakes. Almost nobody uses anywhere near the potential of their silicon these days, so performance advantages would be few and far between compared to the inconvenience of it all.

            Current home trends are towards energy saving, so we may well soon see full ARM home computers. As RISC ARM is totally incompatible with CISC x86, it causes programmers headaches to support the two at the same time.

            In servers, x86, Power and SPARC have been at war for two decades. There is no winner. It is a case-by-case decision based on the needs of the business.

            What is changing is that a low-power war is starting, as ARM comes to the server market along with Atom.
            MarknWill
          • who said Intel or x86 has to go?

            Intel, many, many years ago, had some very interesting designs for CPUs, such as the iAPX 432 -- a great idea that just didn't succeed, for many reasons (most likely because Intel focused on PC CPUs).

            Intel does indeed have the best chip production fabs in the world, but again, this does not mean they have the best CPU architecture(s). Even if they shut down their CPU business, there are plenty of other chips they can make profitably.

            This article in particular discusses just that: high-performance computing systems suitable for heavy virtualisation. Intel is not very good in this area, and this has nothing to do with RISC vs CISC.

            By the way, the world long ago stopped programming computers in assembler. Most well written code is very portable across CPU architectures. It does not matter what the instruction set is, or the architecture -- as long as performance is adequate for the task.
            danbi
  • Even better - GPU based technology.

    Meh, forget x86s and ARMs.

    Gimme some GPUs, and I'll show you massive parallelism that'll leave them screaming for their mommies.

    "Partitions can range from 10% of a CPU core up to 256 cores on the Power 795."

    Pulling some Tesla spec sheets, there are 2688 CUDA cores on the Tesla K20X.

    http://www.nvidia.com/content/tesla/pdf/Tesla-KSeries-Overview-LR.pdf

    That's a single GPU - a single piece of silicon. I can only imagine what a server filled with these things could do.

    You want "big data" to be big, right ;)?
    CobraA1
    • GPU technologies

      Unfortunately, GPU technologies are only appropriate for very limited tasks, mostly matrix transformations. The primary trouble is the limited instruction set and the insufficient bandwidth to the "GPU farm". It is no surprise that the primary difference between supercomputers and "normal" computers is the I/O capacity.

      For example, we have played with scaling encryption using GPUs. But it turns out that even the fastest GPUs are still slower than (say) commodity Opterons, because you simply cannot feed that much data to the GPU, even if in theory the GPU would crunch it faster.
      danbi
      • humm

        "Unfortunately, GPU technologies are only appropriate for very limited tasks, mostly matrix transformations. The primary trouble is the limited instruction set and the insufficient bandwidth to the 'GPU farm'."

        Well, being good at matrix transformations is expected, considering the original purpose of GPUs was 3D graphics, which uses matrix transforms heavily.

        The limited instruction set could be worked around a bit - I'm pretty sure that ever since they went GPGPU, they've been Turing complete.

        As far as I/O goes - nVidia's claiming 250 GBytes/sec for memory bandwidth. No stats on the connection to the rest of the machine, but I'm pretty sure it's as big (or as small) as the bus on the board. Dunno how they do it on a server, but on most new PCs, that's likely to be PCIE 3, which is about 15.75 GB/s.

        And now that I think about it - it's the same for the CPU on a consumer machine - PCIE is the fastest I/O device on most consumer PCs. Maybe servers are different, but as far as most regular PCs are concerned, the GPU and the CPU have the same I/O bottleneck.
        CobraA1
        • bandwidth to GPU

          Unfortunately, for most GPUs that PCIe bandwidth is for transfers between the PC and GPU memory. For example, you need to encrypt a piece of data:

          pure "PC" case:
          you transfer data from memory to CPU, do encryption, write back to memory;
          the code is pretty much in the CPU cache

          CPU & GPU case:
          you transfer instructions from memory through the CPU to the GPU telling it how/what to compute; (ideally) the GPU transfers data from memory to the GPU, does the computation, and writes the data back to memory.
          In the non-ideal case, all data passes through the CPU too.

          It is this instructing of the GPU what to do that turned out to be slower -- because the amounts of data that had to be processed were not large enough, and that data was not in any way resident in GPU memory. You could "fix" that, but it adds two more copy operations anyway.

          I am confident that a GPU system can be built that resolves some of these issues, but so far the cost of GPUs is high and the performance for non-matrix-transformation tasks is not so stellar.
          danbi
        • a reference

          By the way, if you are interested, a colleague has put one of his presentations online:
          http://svsf40.icann.org/meetings/siliconvalley2011/presentation-gpu-accelerated-rsa-13mar11-en.pdf

          talking about exactly that usage of GPU. There are some nice hints about GPU limitations in there.
          danbi
    • Not So Fast

      And Cobra, you need some really big hoses to put that BIG data somewhere, don't you? Good luck trying it on Intel's QPI.
      ZMike2013
  • Just say no

    This is the gift that keeps on giving and IBM wants you to have it.
    greywolf7
  • VMware on POWER?

    >IBM's Power Systems also run VMware

    Ken, please, take that back while nobody's looking. You've got it wrong; only x86 Flex blocks can run VMware.

    Well, actually you _can_ run VMware Server (not ESX) on Power, but you don't want to. Trust me (TM).
    ZMike2013
  • Re: When it comes to squeezing out the last bit of performance...

    ...the last thing you want to do is slap on an extra software layer which adds an instant 15% hardware overhead.
    ldo17