Early this week Jonathan Schwartz, president and CEO of Sun, added a new entry to his blog. Excerpts:
A few years back, I remember sitting with a group of customers talking about wine, and virtualization (a natural pairing, if ever one existed). Wine, because we were at an event Sun was hosting in Napa Valley, the heart of California's wine country - virtualization, because the attendees were data center professionals who'd come to talk about the future.
The customers in attendance all ran very high-scale, high-value data centers, and would deservedly respond to the accusation that they "hugged" their servers with "and what of it?" They were the individuals who kept some of the world's most valuable systems running with exceptional reliability.
But they were all starting to see, and worry about, the same thing: running applications in "virtualized" grids of networked infrastructure ("cloud computing" wasn't yet in vogue, or I'm sure someone would've used the term).
Back in the datacenter, virtualization can enable extreme infrastructure consolidation - decoupling applications from hardware drives more efficient capacity planning and system purchases. But as exciting as that was to everyone, if things went wrong you could also tank the quarter, blow those savings, and end your career. So, why all the anxiety?
If I could sum it up, these customers worried that virtualization would dissolve the control they'd carefully built to manage extreme reliability. In essence, they could hug a virtualized mainframe or an E25K (hugging is the act of paying exquisite attention to an individual machine), but it's far harder to hug a cloud. Nor can you ask a cloud why it's slow, irritable, or flaky, questions more easily answered with a single, big machine.
As the wine soothed their anxieties, a few of them began to draw out their vision of an ideal cloud environment (our laptops were open to take notes). Summarized, here's what they wanted:
First, extreme diagnosability. Datacenter veterans know that things rarely run as planned, so assuming from the outset that you're looking for problems, bottlenecks, or optimization opportunities is a safer bet than assuming everything will go as expected. They all wanted complete confidence in answering the question "what if something goes wrong?" - their jobs were on the line.
Second, they wanted extreme scalability - they all believed the move toward horizontally scaled grids (lots of little systems, 'scaled out') would give way (as it always does) to smaller numbers of bigger systems ('scaled up'). We're seeing that already, with the move toward multi-core CPUs creating 16-, 32-, 64-, even 128-way systems in a single box, lashed together with very high performance networking.
But scalability applies to management overhead as well - having 16,000 virtualized computers is terrific (like 16,000 puppies), until you have to manage and maintain them. Often the biggest challenge (and expense) in a high-scale datacenter isn't the technology, it's the breadth of point products or people managing the technology. So seamless management had to be our highest priority, with extreme scale (internet scale) in mind.
Third, they wanted a general purpose, hardware- and OS-independent approach. That is, they wanted a solution that ran on any hardware vendor they chose - not just Sun's servers and storage, but Dell's, IBM's, and HP's, too. And they wanted a solution that would support Microsoft Windows and Linux, not just Solaris - one ideally embraced and endorsed by Microsoft, Intel, and AMD, not just Sun.
And finally, they wanted open source. After years of moving toward and relying upon open source software, they didn't want to reintroduce proprietary software into the most foundational layer of their future datacenters. Some wanted the ability to "look at the code," to ensure security, others wanted the freedom to make modifications for unique workloads or requirements.
That's the rough backdrop to what drove our virtualization announcements last week - a desire to solve problems for developers and datacenter operators in multi-vendor environments. If you look to the core of our xVM offerings, you'll see exactly how we responded to the requirements outlined above: we integrated DTrace for extreme diagnosability. We leveraged the scale inherent in our kernel innovations to virtualize the largest systems on earth. We built a clean, simple interface to manage clouds (called xVM OpsCenter) to address management and provisioning for the smallest to the largest datacenters. And everything's available via open source (and free download), endorsed by our industry peers (watch the launch videos to see Microsoft and Intel endorse xVM - no, that's not a typo, Microsoft endorsed xVM). We even leveraged ZFS to get a head start on storage virtualization.
In many ways xVM offers multi-vendor data centers the kind of ease of resource allocation that N1 promised Solaris/SPARC users - and it will no doubt prove useful in many larger-scale deployments.
I'm a believer in containers and N1-style virtualization as resource management tools, but not in virtualization as practiced by the PC/IBM industry and as supported in xVM. The difference is this: in resource virtualization you attach what amount to handles to packaged workloads so you can move them around easily; in the PC/IBM version you take one physical machine and pretend it's many different ones, each of which carries all or part of one application.
Thus the resource management version is largely independent of hardware, while the PC/IBM version is entirely a function of hardware despite its software label.
xVM has features from both conceptualizations, but is being marketed and developed to sell into the PC/IBM community's view of what virtualization is and how it's used.
For Sun to take Xen, add its own script set, and sell it as xVM is perfectly reasonable, and may actually produce both cash flow for Sun and some value for customers - especially those who want to deliver Windows software on Sun Rays. But it's logically a dead end: a case of giving customers what they want while knowing that what they want is conditioned not by their needs, or by the technology available to meet those needs, but by what they know and what their predecessors knew ten, twenty, and even forty years ago.
I think of xVM, in other words, as a product IBM could have done if it didn't have Tivoli to protect - something IBM customers can consider acceptable change because it just reconfigures legacy stuff and doesn't require them to rethink anything about themselves.
So if I had a chance to respond directly to Mr. Schwartz, I'd say something like this: "Look, I know you have to sell this crap, but it isn't what makes Sun great. If you learnt anything from StorageTek, it should have been that most of these customers don't have any loyalties beyond themselves - so this will get a few sales, but it doesn't build the company.
You know what makes Sun great? It's stuff like ZFS; DTrace; using flash in the L2ARC; an awful, but working, identity management solution; OpenSolaris; Rock's transactional memory and the compilers to use it; the whole CoolThreads bit. These are the things that count in the long run, not broader claims for some x86-style VM redux.
You want to build friends and markets for Sun? Try selling some of that advanced technology to the people who need it and desperately want it, but don't know about it - the people who don't make it to your Napa Valley focus groups.
There's a manufacturing resurgence going on across the United States - driven by Chinese quality problems, transportation delays, and changes in the dollar. I think it's the real, long-term thing - so have a professional polling company pick 500 manufacturing firms in the 25-to-250-employee range from across the United States, bring the CFOs and IT heads from those companies to San Francisco for a long weekend - and tell them to bring their spouses, on you.
Listen to them - ask them what they're doing with IT and why they're doing it. My guess? You'll find that IT is the single biggest source of frustration they have. The ones who got into IBM's iSeries have reliability and predictable costs, but weak services and no flexibility; those who started out with one or two Windows servers now don't know how many they have, and will tell you straight out that if they could lynch their IT people they would - because it's just one frustration after another, and every week somebody wants them to write another check.
You've got what they need - but no one's telling them. So that's your second day: show them what you've got. Show them their applications running on Sun Rays and on CoolThreads servers; talk to them about Solaris; about running their legacy apps on high-reliability gear; about moving some Windows servers to Solaris so they don't have to write off their hardware "investments". Talk about your IT staff training support, about indemnification and legal issues, about the pending disaster when IFRS meets SOX at the Fortune 1000 and those customers want to backtrack controls to the component makers - and then talk about how Sun technology, Sun networking, and Sun people can help.
And on day three? Listen to them some more. You'll be astonished at the opportunities you're giving up to Dell and the MCSE squad simply because ten and fifteen years ago you couldn't afford to sell into that market - but now you can. Sure, you won't see any ten-million-dollar sales come out of it, but you'll be on your way to rethinking your marketing and to making tens of thousands of fifty- and hundred-K sales with follow-through revenue streams.
And you know what the real bottom line is? Those people will buy from you as long as their businesses run - which, given market size and the movement of people between companies, pretty much means forever."