
Unixfication II

Written by Jason Perlow, Senior Contributing Writer
Can the Linux community get over its "not invented here" ideology which has often hindered its ability to adopt technological improvements from outside sources? I keep saying to myself, I hope so. But recent events have shown me that we have a long way to go until we become a culture of inclusion and not of exclusion and isolationism.

You Suck, Perlow

My OpenSolaris 2008.05 write-up struck a nerve with a bunch of folks in the community, because the very title, "What Ubuntu Wants to Be When it Grows Up," was taken as an affront to Linux's maturity and to the hard work and milestones accomplished by the Ubuntu project and the Free Software community. While I will claim a mea culpa in that I wanted a headline that attracted attention, that was not the intended message.


The message that was intended was that Ubuntu, for all its end-user enhancements, ease of use, hardware compatibility, and vast library of available software packages, is still years away from becoming an enterprise and mid-range scalable OS on commodity hardware. Like Sun or not, and despite its relatively minuscule share of the end-user desktop space, it is Sun's UNIX OS that built the foundation of enterprise Open Systems computing, and it is those performance and scalability characteristics, along with those of similar enterprise-scalable UNIXes such as AIX and HP-UX, that Linux must emulate in order to become the predominant mid-range and enterprise computing platform of the future.

Here is a typical zealot response to my article, although this is one of the more literate ones:

"Last time I looked, the list of most powerful computers (Pretty much the definition of what a computer is when it is 'grown up') that list was dominated by Linux based solutions." We don't need help from Solaris! The guy is definitely a Solaris backer"

Nope. I'm just someone who appreciates what OpenSolaris is trying to accomplish. In addition to this blog on ZDNet, I'm Senior Technology Editor for Linux Magazine, where I have been writing about Open Source and Linux for the better part of the last decade, and where I've probably yelled at Sun for doing boneheaded things more than anyone. I'm definitely not on their Christmas card list.

Monolithic versus Distributed and Grids

But let's forget my own personal political leanings for a moment. A supercomputing cluster for compute-intensive tasks, where loads are distributed among many networked systems, doth not monolithic scalability make. I am referring to large, highly parallelized big-iron systems: what would be called a "big mini" or a midrange multiprocessor system, such as an IBM pSeries 595 (which runs AIX or Linux, but scales Linux using LPARs and virtualization, not monolithically), a Sun E25K, or an HP Integrity.

Using a supercomputing cluster or grid such as a Beowulf, whether for a CGI render farm running something like RenderMan or for a bioinformatics application such as genome sequencing, requires writing to a specific distributed multiprocessor cluster API such as MPI/MPICH2, and it does not address general computing, cloud computing, or virtualization needs. An important consideration is that not all apps can take advantage of a distributed cluster or be easily parallelized.
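
To make that distinction concrete, here is a minimal sketch of what "writing to a cluster API" looks like with MPI, the API named above. The "slice" each process works on is a hypothetical stand-in for a render tile or a chunk of genome data, not anything from a real application:

    /* Minimal MPI sketch: each rank independently processes one slice
       of an embarrassingly parallel workload. Illustrative only.
       Build: mpicc -o slice slice.c    Run: mpirun -np 4 ./slice */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size;

        MPI_Init(&argc, &argv);               /* start the MPI runtime  */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* which process am I?    */
        MPI_Comm_size(MPI_COMM_WORLD, &size); /* how many processes?    */

        /* Rank N handles slice N; no shared memory, no single OS image.
           An app that cannot be decomposed this way gains nothing from
           the cluster -- which is exactly the point. */
        printf("rank %d of %d: processing slice %d\n", rank, size, rank);

        MPI_Finalize();                       /* shut down MPI */
        return 0;
    }

The parallelism lives entirely in the application: general-purpose workloads that haven't been decomposed this way see no benefit from all those networked nodes.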

The systems the reader in question is referring to are highly specialized, exotic configurations, not general-purpose systems, and not the sort of thing you would typically be running in the Fortune 500. Let's also not forget that supercomputing clusters are usually connected by Ethernet and/or Myrinet, and that I/O bandwidth and moving data in and out of memory will always be a limiting factor.

With virtual infrastructure, you can cluster a group of virtualization hosts, such as through VMware ESX or Xen, and manage it like one seamless box, but you are still limited by the size of each individual host as to how large the OS image can be. And again, we're not running the OS on bare metal, so that impacts performance.

But what about mainframes? Those have big amounts of memory and I/O, right? The IBM zSeries mainframe implements Linux on a hypervisor (z/VM) and is also partitioned, so essentially it runs lots of little virtualized systems at once. Again, this is not true monolithic scalability; this is using virtualization technology to perform resource allocation. Don't get me wrong here -- it's wicked cool, and it will be a great solution for a lot of customers, but it's not where Linux kernel development should end.

We will still need bare metal monolithic scalability for some time to come -- the hypervisor hasn't eliminated the traditional computing model yet, because many kinds of apps should not be virtualized -- such as anything requiring heavy I/O -- and I suspect it will be a while until it becomes the conventional way of doing things.

Now, it's true that there have been a few monolithic implementations of Linux on large multiprocessor systems. For example, Unisys's ES7000/one, which is either x86-64 or IA-64 based depending on configuration, can currently run 32 dual-core processors and up to 512GB of RAM under a single system image, and Unisys has open-sourced its contributions to the mainline Linux kernel as the "largesmp" kernel in the "es7000" tree branch. SGI also has its own exotic version of the Linux kernel for use in its Infiniband-based Altix systems. So these implementations have been tested, but they are not exactly what you would call mainstream systems. And unless I'm mistaken, you don't currently see the level of geometric performance increases on Linux above 16 cores that you do with UNIX. The maturity in the Linux kernel for this level of enterprise performance and stability on this type of hardware just isn't there yet.

Solaris, Linux, GPL(2)(3) and the upcoming Free Software Civil War

So where are we all going with this? Today, I read two interesting quotes from Rich Green, Executive Vice President of Software at Sun Microsystems, on the Czech Linux website abclinuxu.cz:

You expressed sympathies towards the GPL version 3. What reasons do you have for preferring it over version 2?

There are a few reasons that tend to liberalize access in some areas. We generally feel the more open, the better. That said, we're still a commercial business, and there are capabilities and qualities of licensing that compel us to actually favor GPL, both 2 and 3. So, we are a very GPL-centric company. And it's evidenced by what MySQL does; they're a GPL-centric company as well. We did announce the first GPLv3 project about four months ago, I think. This was xVM server. You can view that as an indicator of where we're headed, but we don't have any dates for what goes next.

Should the license situation ever allow it, how would you feel about possible code-sharing between Solaris and Linux?

I think, in the long term, that is where we're going to end up. It's inevitable, and it's a great thing. I was going to ask you: when do you think Linux will go GPLv3? That could be the conversion point for us. It might be that that's the right time for Linux and Solaris code to go to that form of licensing, allowing infinite code-sharing. We certainly consider that a possibility. But it's an inevitability and a great thing for the code to be shared.

Read that last sentence again, kids. It's an inevitability. See my previous article on "Unixfication" if you want more historical perspective.

Linux and UNIX will eventually merge into the same operating system. Whose kernel it is, what the kernel ends up looking like, and whose pieces it incorporates is irrelevant. The question is, how difficult are we going to make it for ourselves to get there? Taking a page from Lewis Black, I'd like to present to you my "Ripple of Solaris":

If OpenSolaris is released under GPL version 3, then we have the inevitable situation where there are two GPL-licensed OSes in the wild. This has never been an issue before, because Linux was the only game in town. From the perspective of the Free Software Foundation, GPLv3 is going to be the preferred license under which many, if not all, Free Software projects will fall, with the possible exception of the Linux kernel itself. That means that with OpenSolaris, we would have a complete GPLv3 OS stack. Unless Linus decides to change his mind and move Linux to GPLv3, our favorite kernel is likely to be left behind. You got that right, people: Free Software Civil War.

The FSF has always referred to Linux as GNU/Linux. This isn't just Richard Stallman being bitter; it is the official name by which the Debian distribution, which forms the basis for Ubuntu, is known. It might be a bit of a stretch, but what if the OpenSolaris kernel and many of its other components were to fall under the auspices of the Free Software Foundation? Surely, Sun would have to give up some control, but if you follow the natural course of things, GNU/Solaris is not out of the question. With the "Kosher Certification" of the FSF and Richard Stallman, migrating Debian to a Solaris kernel would simply be an academic exercise. Or to put it this way: "GNU -is- UNIX" would become their new motto.

We can avoid all the petty squabbling and unpleasantness of a GPL versioning divide between the two players if Linux moves to GPLv3. Sun can then cooperate and license its OS under GPLv3 as well, and we can get on with the more productive work of engineering the Free Software OS of the future.

And Unixfication will no longer be a dream.

What's your take on Unixfication? Talk Back and let me know.
