To finish off my project I've been running Virtual PC 2007 at home and at work. At home my lone XP Pro box hosts a selectively mirrored clone of the registry of the Virtual PC system at work. At least in the non-hardware-related settings, the two virtual PCs are the same.
The hardware system at work is a Core 2 Duo running at 2.8 GHz with 3 GB of RAM, and the hardware system at home is a P4 running at 2.6 GHz with 1 GB of RAM. Both Virtual PCs are set to run with 256 MB of RAM and have what I call the "expando-matic" type of virtual "hard drives".
What I find intriguing about this pair of setups is that the one at work is only slightly more "responsive" than the system at home. By responsive I mean that in human terms it seems to run only a little faster. Is the limiting factor the host OS?
The question then becomes: is Virtual PC tuned to mimic multiple PCs in such a way that it only seems to humans like it's operating as multiple PCs? The reality is that humans running office word processors and other human-oriented programs don't really task a computer enough to keep it busy even 25% of the time. -added 8-6-2008 I realize that it truly is operating a second image inside a "bubble" running on the host. I guess my observation meant more like: since the application is mostly an I/O device for a human-machine interface, response time can be slower than what would be necessary for a machine-to-machine interface.
SETI@Home and other Internet-distributed computational programs took advantage of that, utilizing not only the hours after work stopped but also the slices of time between keystrokes during regular work hours.
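The idle-slice idea can be sketched in a few lines. This is only a toy, of course: the function and slice size are made up for illustration, and a real client like SETI@Home also watches overall system load before claiming cycles. But it shows the basic pattern of breaking a big computation into small work units and yielding the CPU between them so interactive work never notices.

```python
import time

def compute_in_slices(numbers, slice_size=1000):
    """Sum a large list in small work units, yielding the CPU between
    slices the way an idle-cycle harvester would. Hypothetical sketch:
    slice_size and the workload are illustrative only."""
    total = 0
    for i in range(0, len(numbers), slice_size):
        total += sum(numbers[i:i + slice_size])
        time.sleep(0)  # give the scheduler a chance to run interactive tasks
    return total

print(compute_in_slices(list(range(100_000))))  # prints 4999950000
```

The point is that the answer comes out the same; the work just hides in the gaps between keystrokes.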
If individual programs were written so as to REQUIRE complete isolation of tasks and I/O privileges, then there wouldn't need to be packages such as Virtual PC.
If the operating system enforced "virtual PC" workspaces in RAM, in swap space, and on the hard drives, then the entire computing world would be considerably more secure.
-8-6-2008- Since I'm still dreaming, perhaps "virtualizing" computer processes through inter-process communication links over IPv6 could allow the use of iptables and the like to enforce security. That sounds really stupid, but if you really want to speed up Windows XP Pro, turn off networking. Everything in Windows goes through the Remote Procedure Call "router-redirector"; turn off networking and the system speeds up, since it no longer needs to look for RPC traffic from remote processes. Turning off networking also shuts off things like IIS, Apache, Remote Terminal Services (shudder!), and obviously HTTP, FTP, and so forth.
Also obviously, you can't really have a computer without network functionality, so take it to the ultimate: make everything follow the same security model. Enforce Kerberos or SSL (at least) security between processes by forcing them to make connections through ONE security process. That would also allow the system to optimize that new "redirector" process, so individual processes wouldn't always take performance hits.
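Here's a toy sketch of that ONE-security-process idea: every inter-process message passes through a single broker that verifies the sender before delivery. I'm using a simple HMAC over a shared secret as a stand-in for a real Kerberos ticket or SSL session; the process names and secret are invented for illustration.

```python
import hmac
import hashlib

SECRET = b"shared-secret"  # stand-in for a Kerberos ticket or TLS session key

def sign(sender, payload):
    """A process signs its outgoing message with the shared secret."""
    mac = hmac.new(SECRET, sender.encode() + payload, hashlib.sha256).hexdigest()
    return {"from": sender, "payload": payload, "mac": mac}

def broker_deliver(message):
    """The single 'redirector' process: verify every message before
    delivering it to the destination process."""
    expected = hmac.new(SECRET, message["from"].encode() + message["payload"],
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, message["mac"]):
        raise PermissionError("unauthenticated sender")
    return message["payload"]

msg = sign("procA", b"hello procB")
print(broker_deliver(msg))  # delivered: b'hello procB'
```

A tampered message (say, a swapped payload) fails verification at the broker, which is the whole point: security is checked in one optimizable place instead of in every process.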
Since I'm thinking in big IFs (or "wishes"), what would happen to the computing world if it became possible to run any program written for Windows, DOS, OSX, or Linux in its own sandbox, have it operate nicely with other similarly sandboxed programs, and the operator didn't have to be an IT wizard to make it work? -8-6-2008 If a process call can come through an IP connection, that makes remote applications as possible as local applications. In other words, you could distribute computing functions across the network, either inside or outside the corporate or household firewall. The operator could set his own priorities as to what needed more time, versus taking whatever the OS told him he was going to get.
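That "remote applications as possible as local applications" idea already exists in miniature in stdlib RPC. A minimal sketch, assuming a loopback server and a made-up `add` function: the caller's code looks like an ordinary local call, but the call actually travels over an IP connection, so the same code would work against a machine on the other side of the firewall.

```python
import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

def add(a, b):
    """An illustrative 'computing function' we want to call remotely."""
    return a + b

# One process exposes the function over IP (port 0 = pick any free port).
server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
server.register_function(add, "add")
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# Another "process" calls it; the syntax is indistinguishable from local.
proxy = ServerProxy(f"http://127.0.0.1:{port}")
print(proxy.add(2, 3))  # prints 5, but the call went over the network
```

Swap the loopback address for a machine across the network and nothing else changes, which is exactly the kind of location transparency I'm wishing for.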
Now that would be a "Hyper-Visor" worthy of the name. -8-6-2008
Don't pinch me I'm still dreaming.