
Why Linux owes (part of) its success to Microsoft

When you use Linux, you should stop and thank Microsoft for making it possible - because Microsoft created the conditions Linux needed to succeed.
Written by Paul Murphy, Contributor

One of the things that characterizes humanity is our ability to adapt quickly to external change - it's the key reason, for example, that humans aren't confined to one climatic zone on the planet.

On the other hand, we're individually often really bad at adapting to external change - and I suspect that everyone who's worked in IT for more than a few months has been personally guilty of refusing to learn to use a new tool for a job simply because they felt comfortable doing it with the older, and far less efficient or effective, tool.

When IBM needed a basic OS that could run effectively on Intel's 8088 processor, the answer Tim Paterson came up with for QDOS was to minimize CP/M's overhead by stripping out the kernel/shell layering - effectively eliminating Kildall's commitment to portability and multi-processing to produce something tailored specifically to the limitations of one particular piece of hardware.

Similarly, when Linus Torvalds decided to build a "free Unix for the 386" he stripped away Tanenbaum's layered micro-kernel and hardwired Intel's approach to interrupt management directly into what has since become the largest monolithic kernel still in use.

Here's part of what Tanenbaum had to say about this in 1992:

...I think I know a bit about where operating [systems] are going in the next decade or so. Two aspects stand out:

  1. MICROKERNEL VS MONOLITHIC SYSTEM

    Most older operating systems are monolithic, that is, the whole operating system is a single a.out file that runs in 'kernel mode.' This binary contains the process management, memory management, file system and the rest. Examples of such systems are UNIX, MS-DOS, VMS, MVS, OS/360, MULTICS, and many more.

    The alternative is a microkernel-based system, in which most of the OS runs as separate processes, mostly outside the kernel. They communicate by message passing. The kernel's job is to handle the message passing, interrupt handling, low-level process management, and possibly the I/O. Examples of this design are the RC4000, Amoeba, Chorus, Mach, and the not-yet-released Windows/NT.

    While I could go into a long story here about the relative merits of the two designs, suffice it to say that among the people who actually design operating systems, the debate is essentially over. Microkernels have won. The only real argument for monolithic systems was performance, and there is now enough evidence showing that microkernel systems can be just as fast as monolithic systems (e.g., Rick Rashid has published papers comparing Mach 3.0 to monolithic systems) that it is now all over but the shoutin'.

    MINIX is a microkernel-based system. The file system and memory management are separate processes, running outside the kernel. The I/O drivers are also separate processes (in the kernel, but only because the brain-dead nature of the Intel CPUs makes that difficult to do otherwise). LINUX is a monolithic style system. This is a giant step back into the 1970s. That is like taking an existing, working C program and rewriting it in BASIC. To me, writing a monolithic system in 1991 is a truly poor idea.

  2. PORTABILITY

    Once upon a time there was the 4004 CPU. When it grew up it became an 8008. Then it underwent plastic surgery and became the 8080. It begat the 8086, which begat the 8088, which begat the 80286, which begat the 80386, which begat the 80486, and so on unto the N-th generation. In the meantime, RISC chips happened, and some of them are running at over 100 MIPS. Speeds of 200 MIPS and more are likely in the coming years. These things are not going to suddenly vanish. What is going to happen is that they will gradually take over from the 80x86 line. They will run old MS-DOS programs by interpreting the 80386 in software. (I even wrote my own IBM PC simulator in C, which you can get by FTP from ftp.cs.vu.nl = 192.31.231.42 in dir minix/simulator.) I think it is a gross error to design an OS for any specific architecture, since that is not going to be around all that long.

    MINIX was designed to be reasonably portable, and has been ported from the Intel line to the 680x0 (Atari, Amiga, Macintosh), SPARC, and NS32016. LINUX is tied fairly closely to the 80x86. Not the way to go.
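
To make Tanenbaum's first point concrete, here's a minimal user-space sketch in C - my own toy illustration, not anything taken from his post or from either kernel - in which the "monolithic" path reaches the file system through an ordinary function call, while the "microkernel" path routes the same request through a message to a (simulated) file-system server. Every name in it (fs_read_monolithic, struct message, send_receive) is invented for the example:

    /* Toy contrast between the two designs; all names are hypothetical. */
    #include <stdio.h>
    #include <string.h>

    /* Monolithic style: the "file system" lives in the same address space
     * and is reached by an ordinary function call. */
    static int fs_read_monolithic(const char *path, char *buf, size_t len) {
        snprintf(buf, len, "contents of %s", path); /* stand-in for real I/O */
        return 0;
    }

    /* Microkernel style: the kernel only moves messages; the file system
     * is conceptually a separate server process that answers requests. */
    enum { MSG_READ = 1 };

    struct message {
        int  type;           /* e.g. MSG_READ */
        char path[64];
        char reply[128];
    };

    /* Stand-in for the file-system server; in a real microkernel this
     * would run as its own process, receiving messages via the kernel. */
    static void fs_server_handle(struct message *m) {
        if (m->type == MSG_READ)
            snprintf(m->reply, sizeof m->reply, "contents of %s", m->path);
    }

    /* Stand-in for the kernel's send/receive primitive; a real kernel
     * would copy the message and context-switch to the server here. */
    static void send_receive(struct message *m) {
        fs_server_handle(m);
    }

    int main(void) {
        char buf[128];
        fs_read_monolithic("/etc/motd", buf, sizeof buf);
        printf("monolithic:  %s\n", buf);

        struct message m = { .type = MSG_READ };
        strncpy(m.path, "/etc/motd", sizeof m.path - 1);
        send_receive(&m);
        printf("microkernel: %s\n", m.reply);
        return 0;
    }

That indirection is exactly where the performance argument Tanenbaum mentions comes from: in a real microkernel each request pays for message copies and context switches where a monolithic kernel pays for a single function call.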

It's hard to believe that Tanenbaum was wrong when he called tying Linux so closely to x86 a mistake - and yet x86 survives as the volume market leader, and Linux has grown both with it and because of it.

Today, however, and despite having been ported to lots of other architectures, the kernel is still as closely tied to x86 as ever: try running Linux on a T52XX, for example, and you'd swear the SPARC system was hopelessly obsolete and somewhat slower than frozen molasses - because changing the Linux kernel to make effective use of a CoolThreads processor would essentially require a full rewrite.

So how did this come about? Simple: Microsoft's near monopoly created the conditions needed for Torvalds' strategy to succeed - because Microsoft's success depended on convincing customers that frequent adaptation to small changes reduces total risk relative to significant but infrequent change, and it produced, among many other things, a trailing cloud of low-cost, Linux-compatible hardware and the resentments needed to motivate its use.

Torvalds gets a lot of credit both for being among the first to exploit open source ideas and for his effective management of the Linux kernel development process - but I think he should also be credited for being among the first to see that a second-best kernel design optimized for a third-rate processor would offer the right combination for market success.
