The new Mac Pro: desktop mainframe

Summary: The tiny, power-packed tube is engineered like a 1960s mainframe computer: a central processor with a lot of I/O. That makes the economic case for it very different from other PCs.

For decades, IBM mainframes were relatively wimpy processors with lots of I/O channels that could support dozens of disks, printers, terminals and comm links. The new Mac Pro is hardly wimpy, but its heavy focus on connectivity (six Thunderbolt 2 ports, four USB 3.0 ports, HDMI and dual Gigabit Ethernet) makes it closer to the mainframe architecture than to a traditional workstation.

The new Mac Pro is all about configurability. Not within the tube, but as a system.

Professional users have a different set of economic priorities than consumers. These include:

  • Reliability and availability. Uptime is crucial, and the ECC DRAM is a win.
  • Performance. Professionals make more money when they get more work done. It doesn't take much extra performance to justify a faster system.
  • Specialized options. From 4K displays to Fibre Channel networks to specialized I/O cards, professionals often invest as much in their peripherals as they do in the system itself, or more.

What do some of these systems look like?

  • A serious coder might have an 8-core system with five 30-inch monitors and two fast RAID arrays.
  • A 4K video editor might have three monitors, a fast external array, Fibre Channel, a 4K-optimized video card, a specialized control deck for editing and color grading, and high-end audio and video I/O interfaces.
  • A musician might have two or three displays, a fast array, multiple MIDI-based instruments and high-end audio interfaces.

What distinguishes each of these configurations is that the attached peripherals may cost more than a Mac Pro. This changes the economics of a system in important ways.

If you have $12,000 worth of peripherals, then $2,500 for the computer is less than 20 percent of the total investment. Not a big deal.

If you are a professional billing $100k annually, a system that reduces wait time by 5 percent can pay for itself in less than six months. A bigger deal.
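
To make those round numbers concrete, here's a quick back-of-the-envelope sketch in Python (mine, not the article's). The dollar figures are the ones above; treating the 5 percent reduction in wait time as fully billable time recovered is an assumption.

    # Back-of-the-envelope economics for a peripheral-heavy workstation.
    # Dollar figures are the article's round numbers; treating the 5%
    # wait-time reduction as fully billable time is an assumption.
    peripherals = 12_000       # displays, arrays, interfaces, etc.
    computer = 2_500           # the Mac Pro itself
    annual_billing = 100_000   # the professional's yearly billings
    time_saved = 0.05          # fraction of wait time recovered

    share = computer / (computer + peripherals)
    print(f"Computer share of total investment: {share:.0%}")    # ~17%

    value_per_year = annual_billing * time_saved                 # $5,000
    payback_months = computer / value_per_year * 12
    print(f"Payback period: {payback_months:.0f} months")        # 6 months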

Modularity
For many pro users the ability to add as many PCIe slots as they need will be key. Several Thunderbolt card cages are available today, making it possible to build configurations that are impossible with the current tower design.

And that means that once you've bought your PCIe cards and enclosures, you can upgrade the processor with much less hassle. Upgrade what you need, when you need it.

The Storage Bits take
The new Mac Pro is designed for these kinds of users. The abundant I/O capabilities make this the most configurable and expandable Mac Pro ever made.

It will find a small but appreciative and influential audience. For people who require specialized I/O and make money from their system, the new Mac Pro will be very attractive.

Which is not to say the new design is perfect. Having all of that I/O go through flimsy Thunderbolt connectors is not ideal. Nor will the "light on rotate" feature be useful when a half-dozen cables are plugged into the back of the Mac Pro.

But these are minor issues that enterprising accessory makers will solve. The few people who need what a Mac Pro offers will have a machine they can use for years.

Update: Seeing the objections to the term "mainframe," let me reiterate and expand. The IBM 360/370 families were channel-oriented systems. For example, a mid-range IBM 360 model 50 supported up to 768 "I/O units" (disk, drum or tape drives) and up to 1,984 slow-speed devices (terminals, card readers and printers). While I doubt anyone went to those maximums, the point is that these were I/O-centric, not CPU-centric, machines. Likewise with the new Mac Pro: it is a CPU with a lot of fast Thunderbolt I/O for external expandability.

Few appreciate just how fast Thunderbolt is. It makes it possible to edit 4K video on a 2012 MacBook Air. Having six of these channels on a Mac, each able to daisy-chain up to six devices for a total of 36, makes the Pro more like a mainframe than a current high-performance workstation. While I doubt that will satisfy everyone (anyone?), that's the basis for the "desktop mainframe" comparison. End update.
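
To put rough numbers behind the Thunderbolt speed claim above, here is a short sketch (mine, not the article's) comparing an illustrative uncompressed 4K stream against the nominal Thunderbolt link rates. The frame size, bit depth and frame rate are assumptions chosen for illustration.

    # Rough data-rate check on the "edit 4K over Thunderbolt" claim.
    # Frame size, bit depth and frame rate are illustrative assumptions;
    # link rates are the nominal per-port figures (10 Gbit/s for the
    # 2012-era ports, 20 Gbit/s for Thunderbolt 2 on the new Mac Pro).
    width, height = 3840, 2160   # UHD "4K" frame
    bits_per_pixel = 24          # uncompressed 8-bit RGB
    fps = 24                     # cinema frame rate

    video_gbps = width * height * bits_per_pixel * fps / 1e9
    print(f"Uncompressed 4K stream: {video_gbps:.1f} Gbit/s")    # ~4.8 Gbit/s

    for name, link_gbps in [("Thunderbolt", 10), ("Thunderbolt 2", 20)]:
        print(f"{name}: {link_gbps} Gbit/s, "
              f"{link_gbps / video_gbps:.1f}x the stream rate")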

Comments welcome. Part of me wishes I were still editing videos so I could justify a Mac Pro.

Talkback

  • Why the strained comparison to a mainframe?

    The new Mac Pro is in reality nothing like a mainframe. The economics of mainframe computing don't apply. There's nothing from my knowledge of mainframes that I can apply to better understand the new Mac Pro by way of analogy.

    I don't understand this urge to constantly box up new tech as being "just like" obsolete tech when there is no meaningful similarity. "We had cloud computing 35 years ago. It was called a VAX system." "We had an iPhone 20 years ago. It was called a Palm Pilot."

    If you're in a bar, half in the bag, and you scrawl on a napkin a black-box architecture diagram of a square with a bunch of lines radiating out of it and scratch the letters "IO" above it, yes, this might describe a mainframe or a Mac Pro. When you find that napkin in your pocket the next morning, you will rightfully throw it away, because it is worthless.
    RationalGuy
    • Plenty more differences...

      True multi-user, multi-partition OS, dedicated staff to support it, enterprise-level support. At my old job we spent 4M for a mainframe & storage (and that was a cheap one).

      Does this I/O have dedicated processing power, or does it rely on the CPU?

      Still, a very neat PC-class device, and the cost is probably justified to a lot of people.
      gtvr
      • Mainframe I/O typically has dedicated processors.

        Those dedicated processors offload the I/O from the main CPU(s). The Mac Pro is nothing like a mainframe. Merely having six high-speed external I/O connectors does not a mainframe make.
        ye
        • Yes but

          They aren't just connectors; they have dedicated Intel chipsets behind them which perform functions similar to the outboard I/O processors of old. They are probably akin to the mainframe "channels," which were hardware running firmware to interface between the disk control units and the actual processor. I suspect the same is still true today: there is a CPU, a chipset and an I/O controller in an external RAID array, for example. That is a three-tier approach, just as in the good old mainframe days.
          Robjsewell
      • processors?

        How quaint.

        Virtually everything now has a "processor" in it. My coffee pot has a processor in it. You can't connect things to these high speed serial buses without a "processor."
        poptones
        • Apparently you don't understand mainframe I/O

          I suggest you research it before commenting further.
          ye
          • How about you research it yourself?

            First, the poster said nothing about mainframes, and was commenting on the previous comment about whether the I/O management was CPU-based. (Answer: since TB carries PCIe 2.0 and is thus not controlled by the CPU, no, it is not CPU-based.) So you have no way of knowing from that question whether or not the poster "understands mainframe I/O".
            Second, apparently you don't understand TB I/O.
            .DeusExMachina.
          • Because I understand it.

            Unlike the person I responded to.
            ye
          • Apparently you don't understand G protein mediated cascade signaling.

            Posted because I understand it, unlike you, the person I responded to.
            .DeusExMachina.
      • Mainframes are weak

        I think the author's comparison is wrong: he says IBM mainframes have very weak CPUs but good I/O, and this Mac Pro doesn't fit that profile.

        First of all, this Mac Pro has a much faster CPU than the strongest mainframe CPU. Mainframe CPUs are, in fact, really slow. For instance, you can emulate a mainframe on a laptop using the open source software TurboHercules. One old 8-socket Nehalem x86 server gives 3,200 MIPS under emulation. Emulation is 5-10x slower than running native software, so the x86 server would give 16,000-32,000 MIPS if you ported the mainframe software to x86. That MIPS number is like a big mainframe. But today's E7 Intel CPUs are at least 50% faster than Nehalem, so 8-socket x86 servers should give 24,000-48,000 MIPS running native code. This number is almost on par with the largest IBM mainframe today, which sports 24 of the zEC12 CPUs. The software emulator was written by a mainframe expert:
        http://en.wikipedia.org/wiki/Hercules_%28emulator%29#Performance

        Another mainframe expert, who ported Linux to IBM mainframes, found that 1 MIPS equals about 4 MHz of x86 power. This means that the largest IBM mainframe today, at 52,000 MIPS, has processing power worth 4 x 52,000 MHz = 208 GHz. But one Intel E7 CPU has 10 cores running at 2.5 GHz, which gives 25 GHz. Thus, again, you only need 10 or so E7 CPUs to match the largest IBM mainframe in terms of processing power:
        http://www.mail-archive.com/linux-390@vm.marist.edu/msg18587.html

        Also, a Giga Research study showed similar numbers. In that study, a mainframe z9 CPU is slower than a single-core 900 MHz Xeon CPU. Its successor, the z10 CPU, is only 50% faster. The z196 is only 50% faster than the z10, and the newest zEC12 is only 50% faster than the z196. Multiplying these numbers shows that an IBM zEC12 CPU is only 1.5 x 1.5 x 1.5 = 3.4 times faster than an old z9 CPU. Thus, the newest and shiniest IBM zEC12 CPU is only 3-4 times faster than an old 900 MHz single-core Xeon. This shows, again, that you only need a few modern x86 E7 CPUs to outperform the largest mainframe.
        http://www.microsoft.com/en-us/news/features/2003/sep03/09-15LinuxStudies.aspx

        There is a reason IBM never publishes CPU benchmarks, if you wondered. And there is a reason mainframes are never used for number crunching or HPC work: the CPUs are extremely slow. We have three different sources all arriving at the same conclusion in different ways. One of the sources has written an emulator, so he is an expert. Another ported Linux to mainframes, so he is an expert on mainframes too; BTW, he works on optimizing mainframes. We conclude that the author is right on this point: very weak CPUs.

        However, the author is wrong when he says that the Mac Pro has very good I/O. It does not, compared to a mainframe. A mainframe has extremely good I/O because it has many coprocessors. (Imagine hanging the same number of coprocessors on an x86 server; it would have similar I/O.) Mainframes have good uptime, but an OpenVMS cluster beats that uptime easily. OpenVMS clusters measure uptime in decades. Just ask people who run OpenVMS clusters; they are brutal: 144 nodes in a cluster, running different CPU architectures, different OS versions, etc.

        So the author says the Mac Pro has superior I/O. Well, it does not. It has good I/O, but a brutal CPU. So the comparison fails.

        PS. BTW, IBM calls the slow z196 the "world's fastest CPU." It runs at 5.2 GHz and has an enormous CPU cache. Enormous. And still it is dog slow, because the backwards compatibility it must maintain bloats it and drags it down. I wonder how IBM can claim it is the world's fastest CPU when any ordinary x86 CPU is faster?
        http://www.engadget.com/2010/09/06/ibm-claims-worlds-fastest-processor-with-5-2ghz-z196/

        On the other hand, IBM's Unix POWER7 is a really good, fast CPU. IBM should swap the lousy mainframe CPUs for their POWER7 CPUs instead.
        Orvar
        • Honestly, everything tends to average out.

          And I AM referring to processing speed on that front. It takes the same amount of time to wipe a 1TB drive (using the old DOS method) as it does a 30GB drive, and let me explain why: newer motherboards and processors only use newer drives, which are mostly at a higher capacity (e.g. a Pentium 4 will only read 20GB or better, or a Core i7 will only read 500GB or better--I don't know what the specifics are, but here's my point: as hard drives increase in capacity, older motherboards and processors do not read the newer drives, and you need newer boards and processors to do the job). Hence the reason why 100% of the United States cannot subscribe to Google Fiber.
          Richard Estes
        • Um, wow, just wow

          From totally misunderstanding the article (which actually agrees with your base premise, though you chose to argue against it) to your misunderstanding of how system architectures work, almost everything you wrote is completely false. Take for instance:
          "But one Intel E7 cpu has 10 cores, running at 2.5GHz, which gives 25 GHz"
          Uh, no, it doesn't work that way. You can't just take the base frequency and multiply by the number of cores. Hell, even if you could, you can't compare core frequency across processor architectures and get a meaningful comparison!
          .DeusExMachina.
          • False where?

            "...almost everything you wrote is completely false....Uh, no, it does't work that way. You can't just take base frequency and multiply by number of cores. Hell, even if you could, you can't even compare core frequency across processor architectures and have a meaningful comparison!..."

            If these mainframe experts I've linked to are wrong, please point out the errors they made. Do you have something constructive to say that refutes these mainframe experts, or are you just airing your opinion?

            Again, there is a reason IBM has never published benchmarks for mainframes. We all know that when IBM has something good, they boast about it a lot and tell everybody. For instance the POWER7 CPU, which is good: IBM released lots of benchmarks with hard numbers showing how fast the POWER7 was. And IBM was right, it was fast. So IBM had nothing to hide and could release all the benchmarks officially, inviting people to scrutinize and study them. Regarding the mainframes, IBM never publishes anything, and you are not allowed to benchmark them or scrutinize anything. All in the dark.

            So you must rely on third-party people. For instance, the mainframe expert who ported Linux to mainframes and could compile and run the same Linux software on x86 and on mainframes. He could assess the performance, and found that 1 MIPS equals 4 MHz of x86 computing power. Or do you claim this number is false? That the mainframe expert, much more knowledgeable than both of us combined, is wrong?

            There is a reason mainframes are never used for HPC and number crunching: the CPUs are not fit for that.

            I suggest you show us some links to benchmarks and refute my links the hard, scientific way instead of airing your opinions. It does not matter how loudly you yell; that will not convince anyone. Show us the benchmark links that show my links are wrong. BTW, you will never find any such links, because IBM forbids publication of such benchmarks. If you buy a mainframe, you are forbidden to publish benchmarks of it. Why does IBM forbid it? Does IBM have something to hide?

            Where are your links disproving my links?
            Orvar
          • Um, I never said anything about your links!

            I commented about your conclusions, which are NOT supported by your links. How about you address what I actually wrote?
            .DeusExMachina.
        • Apples and Oranges

          You have researched very well; however, there are many factors in comparing the throughput of a processor. You might think of it as horsepower versus torque in a car: they are both produced by the engine but do different things, and an Intel processor is a very different beast from a mainframe processor. They are physically constructed differently, for different purposes and different workloads. A single mainframe CPU can service huge numbers of users at a time, switching tasks very fast. An Intel processor can't do this; the users would see very slow response times and think the Intel processor weak. It's horses for courses; you can't compare apples with oranges using numbers alone. I was for some time a performance specialist and understand where you are coming from; it's a hugely complex area. Another analogy may be useful: a very powerful truck will not be as fast as a less powerful car, but if you want to transport six elephants you will use the truck; for six mice, the car.
          Robjsewell
        • I noticed the new IBM

          Mainframes have 5.5 GHz processors. How? I have no idea. All I know is that a mainframe is typically designed to do transaction processing rather than the video or audio processing editors need, etc.

          A Mac Pro doesn't typically have lots of concurrent users hammering it; it's usually one user at a time, although it does support multiple users from the standpoint of user accounts.
          RichDavis1
      • Not quite so many

        OSX is a fully POSIX-compliant, certified Unix. As such, it is just as "true multi-user" as any other Unix, the preferred OS of most mainframes.
        There is nothing inherent in the machine that dictates how much support staff any given enterprise will choose to provide, so that is irrelevant, as is the comment about enterprise-level support; these have nothing to do with what makes a mainframe a mainframe. A PDP-11 is still a PDP-11, even if IBM went out of business tomorrow.
        .DeusExMachina.
        • WHOA!

          UNIX systems are not all created equal.
          OSX is not the equal of Solaris, for example.
          The last time OSX got POSIX compliance was back with Snow Leopard.
          Current versions of OSX don't have the same POSIX compliance, so this line of argument is thin.
          warboat
          • And your point is?!?

            This is a comment about whether OSX is multi-user. Part of a fully POSIX-compliant, certified Unix environment is that it provide this basic multi-user functionality. NONE of this has changed since SL. So again, the fact that it is Unix proves my point. Period.
            And since the XNU core that grants POSIX compliance has not changed relative to the certification standards, it most certainly DOES have the same POSIX compliance.

            Why do you insist on posting on subjects you have NO knowledge about? Why?!?
            .DeusExMachina.