
Apple's OS X and virtualization: A missed opportunity

OS X Lion still has no native virtualization capability. Apple is missing out on some serious revenue possibilities which could be just as disruptive to the "Post-PC" world as the iPad.
Written by Jason Perlow, Senior Contributing Writer

I go to Mexico on vacation for a week, and then all hell breaks loose. What the heck is wrong with this joint, anyway?

So my long-time Frugal Tech Show partner-in-crime Ken Hess decides to address Mac OS X's lack of built-in virtualization capability, writes a very compelling argument why Apple needs this technology, and then my beloved Angry Mac Bastards decide to skewer him.

I mean, it's not like Ken shouldn't have his face kicked in on a daily basis, because he annoys me incessantly on Google Talk, but Peter and his two poo-flinging Mac monkeys really smacked the hell out of him. Mazel Tov! Today, Ken Hess, you are a MAN!

Oh, and they beat up Scott Raymond too. But Scott deserves it: like an undisciplined pet pit bull puppy, he needs to be pinned into submission and properly trained so he doesn't go and eat the next-door neighbor's chihuahua.

But seriously, Scott. If you've pissed off the Angry Mac Bastards, you're doing it right.

Okay. Back to Mac virtualization. I've been saying for years that this is a serious deficit in Apple's flagship operating system. Ken has simply brought the issue front and center, in light of Lion's pending release this fall. And he makes a whole bunch of good points about how VDI might be a good way for Apple to break into the enterprise. I'll elaborate on those in just a bit.

I noted in my original 2009 article -- written when Apple disclosed their plans for their 500,000-square-foot North Carolina datacenter, which we now know is going to be the central hub of iCloud -- that without virtualization in Mac OS X, they wouldn't be able to pull off sufficient server density if they planned to dogfood their own technology to run said datacenter.

SURPRISE! The datacenter appears to run on... wait for it... wait for it... the suspense is killing me... HP ProLiants. With HP external storage enclosures, NetApp NAS filers and Teradata data warehousing appliances, at least from what can be observed in the title photo of this article and in various stills extracted from Apple's WWDC keynote video.

[UPDATE: Additional frame-by-frame analysis of the WWDC video appears to indicate Apple has also purchased Oracle M-class and T-class Solaris 10 SPARC systems, presumably for running the Oracle 11g RAC RDBMS and other Oracle middleware]

And what OS runs on those HP ProLiants? Well, I'm gonna hazard a guess, but it probably ain't Mac OS X Server.

I'm assuming Apple did not waste valuable software engineering time writing drivers for HP's enterprise hardware to get Mac OS X or Darwin running on the metal while they were busy planning a massive $1 billion datacenter buildout. Nor do I think they are pairing existing paravirtualization layers with some special Apple-baked EFI so OS X can run on the ProLiants under a 3rd-party hypervisor like VMware, instead of on their old Xserves.

(Note: You can run OS X Server virtualized on VMware Workstation or VirtualBox today, but it only officially works on actual Mac hardware, since it requires an EFI layer. So the paravirtualization drivers exist, but that's useless to enterprises. VMware's upcoming vSphere 5 is supposed to support OS X Server, but only on end-of-life Xserve hardware.)
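
For the curious, here's roughly what that looks like in practice: a minimal sketch, in Python, of the handful of .vmx settings that matter when standing up an OS X Server guest under a VMware product that allows it. The guest identifier ("darwin10-64") and the surrounding boilerplate are my assumptions and worth checking against whatever VMware release you're actually running; the point is simply that the guest has to boot through EFI rather than a legacy BIOS.

    # Hypothetical sketch: the handful of .vmx settings that matter for an
    # OS X Server guest. "firmware" and "guestOS" are standard VMX keys;
    # the exact guest identifier and hardware version are assumptions to
    # verify against your VMware release.
    VMX_SETTINGS = {
        "displayName": "osx-server-guest",
        "guestOS":     "darwin10-64",   # Mac OS X Server 10.6, 64-bit
        "firmware":    "efi",           # OS X boots via EFI, not a legacy BIOS
        "memsize":     "4096",          # guest RAM in MB
        "numvcpus":    "2",
    }

    def write_vmx(path):
        # Emit a minimal key = "value" VMX file.
        with open(path, "w") as fh:
            fh.write('.encoding = "UTF-8"\n')
            fh.write('config.version = "8"\n')
            fh.write('virtualHW.version = "8"\n')
            for key, value in VMX_SETTINGS.items():
                fh.write('%s = "%s"\n' % (key, value))

    if __name__ == "__main__":
        write_vmx("osx-server-guest.vmx")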

Well, maybe they figured out a way to run Parallels Server on them, but one would think they would have bought the whole dang Parallels company if they intended to do that. They didn't, and it would have cost them next to nothing to do it. The company isn't even public yet.

My bet is that in order to secure a fully supported service contract with HP, they had to load a certified OS on them. Why am I betting this? Because I know something about large datacenter service contracts, and you can read my disclosure statement at the bottom of this post that proves it.

If Apple is stupid enough to run a billion dollars' worth of hardware strictly on the metal, that certified OS is either Red Hat Enterprise Linux, Oracle Solaris 10 x86, or Microsoft Windows Server 2008 R2.

If they were smart enough to virtualize a large portion of their environment to increase density, which I suspect Apple is doing, it's running VMware ESX, Microsoft Hyper-V, Citrix XenServer or RHEV (KVM), on top of which could lie a mix of anything.

The likely candidate for most of the stack, at least for the important middleware, is FreeBSD: it would probably be the path of least resistance for porting and installing WebObjects and a number of other key Apple server-based technologies that make up the iCloud systems architecture, and it is fully supported as a paravirtualized guest on VMware ESX.

Apple could also run FreeBSD and use "Jails" to create virtual environments on the metal on the ProLiants, but I think FreeBSD on HP x86 hardware is currently considered best-effort.
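
For anyone who hasn't played with Jails, here's a minimal sketch of what they buy you -- a hypothetical Python wrapper around the new-style jail(8) parameter syntax on a FreeBSD host, assuming a base system has already been installed under the jail's root directory. The names, paths and IP address below are placeholders of mine, not anything Apple is actually running.

    # Minimal sketch, assuming a FreeBSD host with the new-style jail(8)
    # parameter syntax and a base system already installed under ROOT_DIR.
    # All names, paths and addresses below are placeholders.
    import subprocess

    JAIL_NAME = "demo"
    ROOT_DIR  = "/jails/demo"     # jail's root filesystem, populated beforehand
    JAIL_IP   = "10.0.0.50"       # hypothetical address on the host's network

    def run(*argv):
        # Run a host command and raise if it fails.
        subprocess.run(argv, check=True)

    # Create a named jail and keep it alive with no processes in it ("persist").
    run("jail", "-c",
        "name=" + JAIL_NAME,
        "path=" + ROOT_DIR,
        "host.hostname=" + JAIL_NAME + ".local",
        "ip4.addr=" + JAIL_IP,
        "persist")

    # Run something inside the jail's isolated view of the system.
    run("jexec", JAIL_NAME, "uname", "-a")

    # Tear the jail down when finished.
    run("jail", "-r", JAIL_NAME)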

Nutshell summary? Apple's datacenter isn't Mac.

However, I do believe that Apple is eventually going to have to port a native hypervisor to Mac OS X or allow it to be paravirtualized on 3rd-party hardware, such as HP's.

Firstly, because Steve Jobs is probably frothing at the mouth at having to pay HP for very expensive custom support on their own datacenter if they're running some weird combination of software that isn't covered under regular contract rates.

Secondly, Apple knows they have a perfectly good server OS in OS X Lion, and the idea of having to run something like Linux or FreeBSD for their key back-end systems probably doesn't sit well with them long term.

There's also another issue here. Apple is wasting a huge revenue opportunity by not providing a way for enterprises to get Mac virtual desktops.

Now Peter and my poo-flinging surrender monkey simian friends at Bastards, Inc have stated that Apple doesn't make money from software, they make money from hardware. Okay, I get it, that's their traditional business model. But Apple's future business model is "Post-PC". Everything's gonna be iCloud and all devices have now been "demoted" to iCloud clients, right?

Bastards, if you're implying Steve Jobs is full of cow excrement with his Post-PC projection of the future, then I, as your new overlord Supreme Sith Mac Fanboi, hereby excommunicate you.

Bow down before Lord Perlow, pay your penance, surrender your iPhones, and retreat to the nearest adjacent Distorted Reality -- one on a membrane with Amiga Store retail locations, or where Jean-Louis Gassee is selling BePhones and BePads.

I'm from Queens. We come out as Bastards from the womb.

Here's how Apple makes supplementary dough in the Post-PC world in the enterprise. Ready for it? They sell virtual hardware licenses.

Apple should port KVM or another major hypervisor to OS X, or just buy freaking Parallels. Or re-implement BSD Jails natively on OS X in a way that doesn't suck, so that they have something comparable to Solaris Containers or Virtuozzo or OpenVZ. With a nice management GUI on it, please.

They should then work with the Big Four server vendors -- HP, Dell, Cisco and IBM -- which would act as their reseller/integrator partners, and sell OS X Server Enterprise Edition for $1000+ a copy. Each copy would ship with a special anti-piracy PCI card that locks the license to that specific box, and the same card could carry a server GPU cluster so desktop VDI sessions can be pre-rendered and offloaded from the server's CPUs.

Just like Windows Server 2008 R2 SP1 does with Hyper-V and RemoteFX.

That solves the Psystar and Enterprise Hackintoshery problem up front, as well as the virtual desktop performance issues.

Each server virtual instance license should then be priced competitively with Windows and Red Hat, and for virtual desktops, they should charge about the same as what Microsoft, Citrix and VMware charge for RDP-based, ICA-based or VDI client access licenses. Or less, if they feel like creaming the competition overnight.

Oh, and then there's the "iCloud Enterprise Server," a la Red Hat Satellite, which would provide patch management, enterprise app deployment and messaging integration for the virtual Mac desktops as well as for corporate iPhones and iPads.

How do you access those Virtual Mac desktops you paid Apple $500+ per CAL for? Good question. With iClients.

What's an iClient? Why, I suspect it's a rebranded Apple TV connected to a keyboard, a mouse and an HDTV monitor. Or, as I've been calling it for the last few years, The Screen.

A $99 A5-powered dumb terminal that Apple could sell by the millions -- either direct from China or licensed to the Big Four to produce -- with the required thin client protocols and a local GPU built in.
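
And because I can already hear the Bastards demanding numbers, here's a back-of-envelope sketch of the math, using the prices I threw out above and a purely hypothetical 10,000-seat customer with an assumed consolidation ratio of 50 virtual desktops per server. The seat count and the ratio are mine; the prices are the ones from this post.

    # Back-of-envelope revenue sketch using the prices floated in this post.
    # The seat count and consolidation ratio are purely hypothetical.
    SEATS = 10000                  # assumption: one enterprise rollout
    DESKTOPS_PER_SERVER = 50       # assumption: VDI consolidation ratio

    SERVER_LICENSE = 1000          # "OS X Server Enterprise Edition" per copy
    VDI_CAL = 500                  # client access license per virtual desktop
    ICLIENT = 99                   # the $99 dumb terminal on every desk

    servers = -(-SEATS // DESKTOPS_PER_SERVER)   # ceiling division
    revenue = (servers * SERVER_LICENSE
               + SEATS * VDI_CAL
               + SEATS * ICLIENT)

    print("%d server licenses, %d CALs, %d iClients" % (servers, SEATS, SEATS))
    print("Apple's take from one customer: $%s" % format(revenue, ","))
    # => roughly $6.2 million per 10,000 seats, before support contracts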

Did I just blow your mind, Bastardos? Good. Then I did my job properly. It doesn't take a friggin' genius to see that this is a valid business model.

Who wants Mac OS X Server and Desktops virtualized in the Enterprise on commodity server hardware? Talk Back and Let Me Know.

Disclaimer: The postings and opinions on this blog are my own and don't necessarily represent IBM's positions, strategies or opinions.
