Unixfication II

Summary: Can the Linux community get over its "not invented here" ideology which has often hindered its ability to adopt technological improvements from outside sources? I keep saying to myself, I hope so.


Can the Linux community get over its "not invented here" ideology which has often hindered its ability to adopt technological improvements from outside sources? I keep saying to myself, I hope so. But recent events have shown me that we have a long way to go until we become a culture of inclusion and not of exclusion and isolationism.

You Suck, Perlow

My OpenSolaris 2008.05 write-up struck a nerve with a number of folks in the community, because the title itself -- "What Ubuntu Wants to Be When It Grows Up" -- was taken as an affront to Linux's maturity and to the hard work and milestones accomplished by the Ubuntu project and the Free Software community. While I'll claim a mea culpa in that I wanted a headline that attracted attention, an insult was not the intended message.


The intended message was that Ubuntu, for all its end-user enhancements, ease of use, hardware compatibility, and vast library of available software packages, is still years away from becoming an enterprise- and midrange-scalable OS on commodity hardware. Like Sun or not, and despite its relatively minuscule share of the end-user desktop space, it is Sun's UNIX OS that built the foundation of enterprise Open Systems computing, and it is those performance and scalability characteristics -- along with those of similarly enterprise-scalable UNIXes such as AIX and HP-UX -- that Linux must emulate in order to become the predominant midrange and enterprise computing platform of the future.

Here is a typical zealot response to my article -- although this is one of the more literate ones:

"Last time I looked, the list of most powerful computers (Pretty much the definition of what a computer is when it is 'grown up') that list was dominated by Linux based solutions." We don't need help from Solaris! The guy is definitely a Solaris backer"

Nope. I'm just someone who appreciates what OpenSolaris is trying to accomplish. In addition to this blog on ZDNet, I'm Sr. Technology Editor for Linux Magazine, where I have been writing about Open Source and Linux for the better part of the last decade, and where I've probably yelled at Sun for doing boneheaded things more than anyone. I'm definitely not on their Christmas card list.

Monolithic versus Distributed and Grids

But let's forget my own personal political leanings for a moment. A supercomputing cluster for compute-intensive tasks, where loads are distributed among many networked systems, doth not monolithic scalability make. I am referring to large, highly parallelized big-iron systems -- what would be called a "big mini" or a midrange multiprocessor system, such as an IBM pSeries 595 (which runs AIX or Linux, but scales Linux using LPARs and virtualization, not monolithically), a Sun E25K, or an HP Integrity.

Using a supercomputing cluster or grid -- a Beowulf, a CGI render farm for something like RenderMan, or a bioinformatics application such as genome sequencing -- requires writing to a specific distributed multiprocessor cluster API such as MPI/MPICH2, and does not address general computing, cloud computing, or virtualization needs. An important consideration is that not all apps can take advantage of a distributed cluster or be easily parallelized.

The systems the reader in question is referring to are highly specialized, exotic configurations, not general-purpose systems -- not the sort of thing you would typically be running in the Fortune 500. Let's also not forget that supercomputing clusters are usually connected by Ethernet and/or Myrinet, and that I/O bandwidth and moving data in and out of memory will always be a limiting factor.

With virtual infrastructure, you can cluster a group of virtualization hosts, such as through VMware ESX or Xen, and manage it like one seamless box, but you are still limited by the size of each individual host as to how large any one OS image can be. And again, we're not running the OS on bare metal, so that impacts performance.

But what about mainframes? Those have huge amounts of memory and I/O, right? The IBM zSeries mainframe runs Linux on a hypervisor (z/VM) and is also partitioned, so it essentially runs lots of little virtualized systems at once. Again, this is not true monolithic scalability; this is using virtualization technology to perform resource allocation. Don't get me wrong -- it's wicked cool, and it will be a great solution for a lot of customers, but it's not where Linux kernel development should end.

We will still need bare-metal monolithic scalability for some time to come -- the hypervisor hasn't eliminated the traditional computing model yet, because many kinds of apps should not be virtualized -- anything requiring heavy I/O, for example -- and I suspect it will be a while before virtualization becomes the conventional way of doing things.

Now, it's true that there have been a few monolithic implementations of Linux on large multiprocessor systems. For example, Unisys's ES7000/one, which is either x86-64 or IA-64 based depending on configuration, can currently run 32 dual-core processors and up to 512GB of RAM under a single system image, and Unisys has open-sourced its contributions to the mainline Linux kernel as the "largesmp" kernel in the "es7000" tree branch. SGI also has its own exotic version of the Linux kernel for use in its InfiniBand-based Altix systems. So these implementations have been tested, but they are not exactly what you would call mainstream systems. And unless I'm mistaken, you don't currently see the geometric performance increases on Linux above 16 cores that you do with UNIX. The maturity in the Linux kernel for this level of enterprise performance and stability on this class of hardware just isn't there yet.

Solaris, Linux, GPL(2)(3) and the upcoming Free Software Civil War

So where are we all going with this? Today, I read two interesting quotes from Rich Green, Vice President of Open Source at Sun Microsystems, on the Czech Linux web site, abclinuxu.cz:

You expressed sympathies towards the GPL version 3. What reasons do you have for preferring it over version 2?

There are a few reasons that tend to liberalize access in some areas. We generally feel the more open the better. That said, we're still a commercial business, and there are capabilities and qualities of licensing that compel us to actually favor both GPL 2 and 3. So, we are a very GPL-centric company. And it's evidenced by what MySQL does; they're a GPL-centric company as well. We did announce the first GPLv3 project about four months ago, I think. This was xVM Server. You can view that as an indicator of where we're headed, but we don't have any dates for what goes next.

Should the license situation ever allow it, how would you feel about possible code-sharing between Solaris and Linux?

I think, in the long term, that is where we're going to end up. It's inevitable and it's a great thing. I was going to ask you when you think Linux will go GPLv3. That could be the conversion point for us. It might be that that's the right time for Linux and Solaris code to go to that form of licensing, allowing infinite code-sharing. We certainly consider that a possibility. But it's an inevitability and a great thing for the code to be shared.

Read that last sentence again, kids. It's an inevitability. See my previous article on "Unixfication" if you want more historical perspective.

Linux and UNIX will eventually merge into the same operating system. Whose kernel it is, what the kernel ends up looking like, and whose pieces it incorporates is irrelevant. The question is, how difficult are we going to make it for ourselves to get there? Taking a page from Lewis Black, I'd like to present to you my "Ripple of Solaris":

If OpenSolaris is released under GPL version 3, then we have the inevitable situation where there are two GPL-licensed OSes in the wild. This has never been an issue before, because Linux was the only game in town. From the perspective of the Free Software Foundation, GPLv3 is going to be the preferred license under which many, if not all, free software projects -- with the possible exception of the Linux kernel itself -- will fall. That means with OpenSolaris, we would have a complete GPLv3 OS stack. Unless Linus decides to change his mind and move Linux to GPLv3, our favorite kernel is likely to be left behind. You got that right, people -- Free Software Civil War.

The FSF has always referred to Linux as GNU/Linux. This isn't just Richard Stallman being bitter -- it's the official name by which the Debian distribution, which forms the basis for Ubuntu, refers to the OS. It might be a bit of a stretch, but what if the OpenSolaris kernel and many of its other components were to fall under the auspices of the Free Software Foundation? Surely, Sun would have to give up some control, but if you follow the natural course of things, GNU/Solaris is not out of the question. With the "Kosher Certification" of the FSF and Richard Stallman, migrating Debian to a Solaris kernel would simply be an academic exercise. Or to put it another way -- "GNU -is- UNIX" would become their new motto.

We can avoid all the petty squabbling and unpleasantness of a GPL versioning divide between the two players if Linux moves to GPLv3. Sun can then cooperate and license its OS under GPLv3 as well, and we can get on with the more productive work of engineering the Free Software OS of the future.

And Unixfication will no longer be a dream.

What's your take on Unixfication? Talk Back and let me know.



Jason Perlow, Sr. Technology Editor at ZDNet, is a technologist with over two decades of experience integrating large heterogeneous multi-vendor computing environments in Fortune 500 companies. Jason is currently a Partner Technology Strategist with Microsoft Corp. His expressed views do not necessarily represent those of his employer.



  • Unixification II: The Awakening

    Sure, I think that some unixification is good for the sake of interoperability, but I like having different OSes to rely on -- call them niche or whatever -- and there is even a place for Windows... the only non-Unix OS left.

    I don't want just one open source OS, no more than I want just Windows.
    • That's the thing though...

      97% of the computing population doesn't want open source at all, and they don't understand what it means.

      The vast majority of users are "I just want it to work so I can get back to my television." We NEED Windows for users such as these. I say that because I'd rather deal with an inexperienced Windows user any day than an inexperienced Macintosh fanatic.
      • 97%

        That's stretching it by about 67%, don't you think?
      • I love the smell of arrogant elitism in the morning

        News flash for you. Computer experience ≠ intelligence
      • Get it right, though...

        "population doesn't want open source at all"
        I agree that they don't know or understand open source. But that doesn't imply that they don't want it. If they don't know it, they can't know if they want it! Get your basic logic together, please.

        I'd rather deal with an inexperienced Linux user than an inexperienced MS Windows fanatic. In fact, I'd rather deal with any non-fanatic computer user, whatever their platform. And fanatics are as large a part of the MS Windows user base as of the Mac OS X or Linux ones. From what I read in your text, you might be one...
  • You are forgiven Jason. Please don't let it happen again ;)

    Seriously. Not to worry. You bring to ZDNet what it needs: intelligence.

    Thanks for your excellent work.
    D T Schmitz
  • in(details, devil)

    [i]Unless Linus decides to change his mind and move Linux to GPLv3, our favorite kernel is likely be left behind. You got that right people - Free Software Civil War.[/i]

    Which, practically speaking, can only happen if Linus declares that the 3.0 kernel series will be a rewrite under GPL 3.0 with all code resubmitted. Any contribution whose copyright holder isn't available for resubmission would just have to be rewritten in a clean room.

    I leave the matter of judging the probability of this to others, but if nothing else it would certainly stall most other kernel work for a Long Time. Perhaps those details are very cold.
    Yagotta B. Kidding
    • So, what's your take on Solaris and GPL 3?

      What do you think the annoying details might be there?
      • Different details

        With Sun having paid Caldera (which, it turns out, didn't have the authority to grant it) for the right to release old Unix code under the GPL, and in light of the fact that Novell never had full rights themselves (see the [i]BSD[/i] ruling) since nobody knows where a lot of it came from, Sun appears not to have the same problem that Linus has.

        I'm sure they have a totally different set.
        Yagotta B. Kidding
        • But....(there always is one!)...

          GPLv2 and GPLv3 are compatible so there isn't a conflict between the two.


          • Yes, but not this one.

            No, they are not. If they were, GPLv3 wouldn't be needed, would it?

            Most GPLv2 programs are licensed as "GPLv2 or any later version," as suggested by the FSF and the GNU project. Linux is licensed under GPLv2 only and nothing else. Why Linus Torvalds did this I don't know, but he and the Linux project did. So it can't be "upgraded" as easily as much other software.

            So to change to GPLv3, all contributors have to accept the change, or the software/driver has to be rewritten. This is why ZFS (and I cry over this, as it is a real masterpiece) is not going into Linux as things stand.
  • RE: Unixfication II

    Would there be an advantage for Ubuntu to simply switch to the OpenSolaris Kernel?

    That's the beauty of OpenSource. If the better technology is the OpenSolaris Kernel, why not move to it?

    Couldn't all the applications simply be recompiled for use under OpenSolaris?

    Would it be better to have a Unix Kernel rather than a Linux Kernel?

    • Thoughts are ...

      The thoughts are that you'd lose almost all your hardware compatibility and get very little that the Ubuntu market wants in return.
      • No, you don't

        You lose some, but not all.
        And you gain lots of good tools for enterprise servers (file systems, virtualization, and debugging). Please have a look before dismissing OpenSolaris.
        As Ubuntu wants into server rooms, this is definitely something Ubuntu wants.
        If it gets into Debian, it will be possible for it to enter Ubuntu too.
    • Subuntu? :D

      Since we have (my nephew's favorite) Ubuntu, Edubuntu, Xubuntu, and (my favorite) Kubuntu, I don't see why Unix/BSD variants can't be added to the Ubuntu "philosophy" of polishing up and refining ease of use in their OS offerings.
      D. W. Bierbaum
    • You should ask if it is for Debian

      Debian already has different kernels (BSD* and HURD). So it could be done, if the license agrees with the project.
      There are already distributions based on OpenSolaris and Debian. But as the license on OpenSolaris was not totally free, it couldn't be included in Debian.

      Ubuntu is based on Debian.
  • You've got to show us this stuff

    Show me Linux doing whatever it is that you are talking about.
    • If I'm reading you correctly

      and I can't be sure that I am, what I'd say is this: we can't show you Linux doing the things that were mentioned, because Linux doesn't do these things natively (scale well on corporate mainframes).
      If I'm guessing incorrectly about the meaning of your question, then please ignore this post.
      Later, dude. ;)
  • RE: Unixfication II

    Minor technical point: MPI is an API -- MPICH2 is one (of many) software projects that provide middleware implementing that API.

    As such, in the context you mentioned, you would probably have been better off just saying "MPI," because the API is standardized across all the software implementations of it.
  • Are you serious with the "not invented here"?

    Linux is nothing but a copy of ... (everything), except maybe the implementation source files.

    What do you mean, open source and Linux have ideological issues with "not invented here"?