
Whatever happened to 64-bit computing?

Remember the transition from 16-bit to 32-bit operating systems? Larry Seltzer examines the comparative inertia surrounding the 64-bit transition.
Written by Larry Seltzer, Contributor

Remember the early '90s, when we were transitioning from 16-bit to 32-bit operating systems? Some people were unimpressed, but I think most of us could see that 32-bit systems were going to solve an awful lot of problems.

32-bit processors had actually been around for many years, even in PCs, and they quickly came to dominate the PC market, even though most of them ran 16-bit operating systems with 32-bit hacks bolted on, such as Windows for Workgroups 3.11's VFAT file system and QEMM. Re-architecting the operating system and applications was a major endeavor, but 32-bit operating systems quickly displaced their inferior ancestors. Gone were segmented programming, extended vs. expanded memory, and any practical limits on physical memory. Certain programming problems went away too, such as working around 16-bit integers. It was a big improvement.

Why isn't there even a trace of the same effect for 64-bit computing? 64-bit processors have been available on certain RISC architectures for many years, and many of them even have 64-bit operating systems, but neither Intel nor Microsoft seems to be in a major hurry to move the world to 64-bit software. In fact, unless Windows for 64-bit systems turns out to be completely compatible with Win32 code, I have a hard time seeing a large-scale migration to it. The benefits are not compelling enough, and there's way too much 32-bit code out there.

The benefits of 64-bit architecture relative to 32-bit aren't as obvious as those of 32-bit relative to 16-bit. The first one that gets mentioned now is the limit on addressable memory. A 32-bit pointer can address 4GB of memory; a 64-bit pointer can address 16 gazillion Foofoobytes (16 exabytes, if you want to be literal about it). Whatever you call the limit, it's going to be a long time before it shows up in real systems. (Personally, I think limitations in semiconductor manufacturing processes will slow things down before this happens.) If you want a picture of how large it is, first imagine the maximum 4,294,967,296 bytes in a 32-bit address space. Now imagine 4,294,967,296 of those 32-bit address spaces; that's one 64-bit address space.
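To make that arithmetic concrete, here's a minimal C sketch that does nothing more than print the figures from the paragraph above, using C99's fixed-width integer types:

```c
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* A 32-bit address space holds 2^32 bytes; a 64-bit space holds 2^32 of those. */
    uint64_t bytes_in_32bit_space   = UINT64_C(1) << 32;  /* 4,294,967,296 (4GB) */
    uint64_t spaces_in_64bit_space  = UINT64_C(1) << 32;  /* that many whole 32-bit spaces */
    /* 2^64 itself overflows a 64-bit integer by one, so show the largest address instead. */
    uint64_t largest_64bit_address  = UINT64_MAX;         /* 18,446,744,073,709,551,615 */

    printf("bytes in a 32-bit space:        %" PRIu64 "\n", bytes_in_32bit_space);
    printf("32-bit spaces in a 64-bit one:  %" PRIu64 "\n", spaces_in_64bit_space);
    printf("largest 64-bit address:         %" PRIu64 " (about 16 exabytes)\n",
           largest_64bit_address);
    return 0;
}
```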

Intel long ago found temporary hardware hacks to put off the 4GB limit. Its P6-generation processors, and the Xeon line in particular, introduced the PAE and PSE-36 modes to allow 36-bit physical addressing, and therefore support up to 64GB of RAM, and the highest-end server versions of Windows have supported the extended addressing since Windows NT 4.0 Enterprise Edition. Even today, 64GB is a lot of memory for all but the largest servers and clusters. It will become an issue eventually, and probably before too long, but on the average desktop I think even 32-bit addresses will be good enough for at least several more years, as long as we continue to use memory the way we do now.

The trick to making 64 bits desirable is to design new kinds of applications that make use of them. Consider that obscenely large address space I described earlier: what if we were to design memory-mapped file systems? Imagine that you didn't have to open and close files, but instead manipulated them directly through data structures, and that it was the operating system's job to page the data into and out of memory as needed. 64-bit addresses are large enough to map any file system we're likely to see, and working this way would make a lot of programming a lot easier.
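The programming model itself isn't exotic; memory-mapped files exist today through Win32 calls like CreateFileMapping and MapViewOfFile, they just run out of 32-bit address space quickly. Here's a minimal C sketch of what manipulating a file as plain memory looks like; the file name data.bin is just a placeholder, and error handling is kept to the bare minimum:

```c
#include <windows.h>

int main(void)
{
    /* Open an existing file (hypothetical name, for illustration only). */
    HANDLE file = CreateFileA("data.bin", GENERIC_READ | GENERIC_WRITE, 0, NULL,
                              OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
    if (file == INVALID_HANDLE_VALUE) return 1;

    /* Create a mapping object covering the whole file. */
    HANDLE mapping = CreateFileMappingA(file, NULL, PAGE_READWRITE, 0, 0, NULL);
    if (mapping == NULL) { CloseHandle(file); return 1; }

    /* Map the file into the address space; the OS pages bytes in and out as they're touched. */
    unsigned char *bytes = (unsigned char *)MapViewOfFile(mapping, FILE_MAP_ALL_ACCESS, 0, 0, 0);
    if (bytes == NULL) { CloseHandle(mapping); CloseHandle(file); return 1; }

    /* The file is now just memory: no reads, writes, or seeks. */
    bytes[0] ^= 0xFF;

    UnmapViewOfFile(bytes);
    CloseHandle(mapping);
    CloseHandle(file);
    return 0;
}
```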

Or imagine using integers for a lot of math problems that call for floating point today. The range of a 64-bit integer (or a 128-bit integer spanning two registers) is enormous, and using it should improve the performance of such applications a great deal.
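As a rough illustration of what I mean, here's a minimal C sketch that keeps fractional arithmetic entirely in 64-bit integers by scaling everything by a fixed factor; the scale and the values are arbitrary, and a real application would pick them to suit its precision needs:

```c
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Fixed-point sketch: represent values as 64-bit integers scaled by 10^6,
   so the arithmetic stays in the integer unit instead of floating point. */
#define SCALE INT64_C(1000000)

int main(void)
{
    int64_t price    = 19990000;                  /* 19.99 */
    int64_t quantity = 3 * SCALE;                 /* 3.00  */
    int64_t total    = price * quantity / SCALE;  /* 59.97, still exact */

    printf("%" PRId64 ".%06" PRId64 "\n", total / SCALE, total % SCALE);
    return 0;
}
```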

But Microsoft seems more interested in a simple, smooth introduction of Win64 than in using it to introduce radical new programming techniques, and it's probably right. As I said before, there's an awful lot of 32-bit code out there, and when people start buying 64-bit servers, for whatever reason, they are going to be concerned first with running their existing code on them, and only then with porting it to the new architecture. The initial versions of 64-bit Windows are designed to support 32-bit programs with essentially the existing API set, while adding support for 64-bit data, because wider data is the most immediately useful thing the architecture provides. New 64-bit data types are there too, but they are not the priority right now. There's an emulation layer called WOW64 for running Win32 programs, and applications, by default, get a stingy 8-terabyte address space to work with.
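For anyone wondering what support for 64-bit data means for their own code, here's a minimal, generic C sketch of the classic porting hazard: under the model Win64 uses, int stays 32 bits while pointers grow to 64, so code that stuffs pointers into plain ints breaks, while pointer-sized integer types keep working. The types shown are standard C99, not anything specific to Win64:

```c
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    void *p = &p;

    /* intptr_t is defined to be wide enough to hold a pointer; a plain int
       is not, once pointers grow to 64 bits while int stays at 32. */
    intptr_t wide = (intptr_t)p;

    printf("sizeof(int) = %zu, sizeof(void *) = %zu, sizeof(intptr_t) = %zu\n",
           sizeof(int), sizeof(void *), sizeof(intptr_t));
    printf("pointer survives the round trip: %s\n",
           ((void *)wide == p) ? "yes" : "no");
    return 0;
}
```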

For at least a few years, there's almost no reason for mainstream businesses to consider, let alone adopt, 64-bit systems. Eventually the migration will become easy enough, and the hardware cheap enough, that people will do it because there will be little reason not to. Extremely compute-intensive applications, like weather prediction, will always migrate to the fastest platform available. But in the interim, all but a few business applications have a greater need for improvements in existing 32-bit systems: better security, more streamlined administration, and other things that doubling the size of our registers and address lines does nothing to address.

When you look into your crystal ball at the future of the PC industry, it's usually a good idea to predict the most conservative amount of change. There may have been a time when 64-bit computing looked like it would solve big problems, but it has turned out to be just another evolutionary increment in computing.

Is your company using or planning to use 64-bit servers? What's your experience? Share your thoughts in our Talkback forum or send an e-mail to Larry.
