The virtual machine revisited

George Ou hit the nail on the head when he wrote "The dumbest gripe with Windows Vista to date."

Does anyone remember the original Basic interpreter found in the Apple II and in PC-DOS? How about the Pascal-based p-System from UCSD -- introduced in the late 1970s?

The early days of integrated software suites were no better. I remember one particular software suite that had to load its 256KB interpreter (an old term for today's run-time engines) just to execute the command to clear the screen! On a 640KB machine, this represented a lot of overhead. The MS-DOS command interpreter and today's "cmd.exe" are none other than simple run-time interpreters for executing commands.

All of these tools were attractive because they let non-programmers put to use processing power that exceeded what the operating systems on those machines actually required.

Well, Java and .NET are just the latest software tools to take advantage of an important side effect of Moore's law (that processing power doubles every 18 months or so). Of course, a modern virtual machine, such as Java's JVM or that of .NET, is a great deal more powerful than the interpreters of yesteryear.

Software capabilities always lag behind the hardware they run on -- as evidenced by the fact that Windows XP Professional is quite capable of running on a 300 MHz Pentium II processor. No, it's not pretty, but it will do everything the user needs. But put that same OS on a 3,000 MHz (3 GHz) PC and watch it fly. The fact that today's operating systems (either Windows or some flavor of Unix/Linux -- including Mac OS X) can run on such a wide range of processor speeds is itself quite remarkable.

In 1969, Unix was born with the concept of OS portability in mind. The development of C made it possible to port any software written for Unix on one platform to run on Unix on any other platform simply by re-compiling the C source code using the compiler accompanying the OS. Once the original version of Unix was re-written in C, porting Unix itself from one platform to another was just as straightforward.

The virtual machine model dates back to the early 1970s -- on IBM mainframes. This model permitted dissimilar operating systems to run side-by-side on the same hardware -- by running under a virtual machine which effectively tricked the OS running on it into thinking it was running either by itself on the native hardware or even on other hardware the virtual machine OS was emulating. This made it possible for developers to leverage expensive but under-utilized hardware for cross-platform software development.

It's the marriage of these two concepts that brings us to today's virtual machine design. Returning to the thrust of George's article, it is important to understand the job at hand in order to determine which tools are appropriate.

For instance, the Java Virtual Machine is ideally suited for web-based applications. The JVM itself (called the JRE, for Java Runtime Environment, by Sun) is OS-dependent -- Windows, Solaris, Linux, or Macintosh -- but only the Windows flavor is unique. The others are all variations of a basic Unix implementation of the JVM -- making porting almost as straightforward as re-compiling the source code. As a result, versions of the JVM have now been ported to handheld computers running PalmOS, RimOS, SymbianOS and others. The only other component necessary to make web-based Java applications (called applets) work seamlessly across the web is a browser-dependent plug-in.
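That division of labor -- platform-neutral bytecode on top, an OS-specific JVM underneath -- is easy to see from inside a program. As a minimal sketch (the class name here is my own invention, not from any particular product), the same compiled `.class` file runs unchanged on any of the JVM flavors mentioned above, and the standard `System.getProperty` calls reveal which host is actually doing the work:

```java
// Minimal illustration of the JVM's "write once, run anywhere" model:
// this source compiles once to platform-neutral bytecode, which any
// conforming JVM (Windows, Solaris, Linux, Mac...) executes unchanged.
public class PortableHello {
    public static void main(String[] args) {
        // Standard system properties identify the host OS and VM,
        // even though the bytecode itself is identical everywhere.
        String os = System.getProperty("os.name");
        String vm = System.getProperty("java.vm.name");
        System.out.println("Same bytecode, running on " + os + " via " + vm);
    }
}
```

Run the identical class file on a Windows box and a Linux box and only the reported property values differ; the application logic never needs recompiling, which is precisely the portability the plug-in model extends to applets in the browser.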

Prior to the advent of the personal computer, the cost of hardware far exceeded the cost of the programmers. With the commoditization of personal computing, just the reverse is true. Add to this the fact that processors are much faster than they need to be to keep up with the software available today, and you have a very good argument for using virtual machine-based tools for a lot of software development tasks -- but certainly not all of them.

Sure, a small dedicated OS (such as that on a cell phone or handheld computer) can be designed around a virtual machine for a few dedicated tasks, but as soon as you introduce pre-emptive multi-tasking to the list of capabilities a general purpose operating system must have, the suitability of the virtual machine model must be called into question.

