The importance of being 64-bit

Summary: IT vendors such as Microsoft and Intel have grand plans for 64-bit computing and the improved processing potential it promises, but convincing customers may not be so straightforward.


At the launch of Microsoft's 64-bit versions of Windows XP and Windows Server 2003 late last month, Bill Gates promised great things for the "64-bit decade". The transition from 8-bit computing to 16-bit required a reinvention of the operating system, and even moving to 32-bit ten years later was still a bit messy. But the shift to 64-bit hardware and software will be different, according to the Microsoft boss. "This is going to be the simplest one, and it's going to happen far more rapidly than any of the others," Gates claims.

Gates may have a clear vision of the significance of 64-bit but that doesn't necessarily mean that all his potential customers are similarly informed. Many IT professionals in enterprise-sized companies, and their smaller brethren, are probably at a loss over exactly what the benefits of 64-bit are — a knowledge gap that Microsoft has to tackle for the technology to really take off.

But along with the carrot of potential benefits Microsoft will also be wielding the usual stick of forced compliance. A lot of companies are going to end up buying 64-bit hardware such as servers anyway, for the simple reason that 32-bit hardware will be phased out. Microsoft's Jim Allchin, group vice-president for platforms, has said it will be difficult to buy a 32-bit server by the end of the year.

Unlike 32-bit computing, which introduced immediate and dramatic improvements, it seems likely that many organisations will end up with a 64-bit infrastructure and only afterwards begin to discover its benefits. "It's possible that everyone will have something they never use," says analyst James Governor of RedMonk. "Just look at Microsoft Office — people only use 5 percent of its functionality, but we all buy it."

That said, 64-bit computing can offer real benefits, and many analysts say it is now a mainstream reality. The wide availability of 64-bit capable chips that also support 32-bit applications, and now the launch of 64-bit Windows, mean that there are essentially no extra costs or complications associated with making the switch, at least on the hardware side. "The difference is that the hardware is cheap now. There is commodity pricing on servers, that is different," says Governor.



  • In your article "The importance of being 64-bit"

    So what is all the fuss about, exactly? Technically speaking, a 64-bit chip is one that has integer registers that are 64 bits wide, allowing them to process 64 bits at a time. CPUs store the address of locations in virtual memory in integer registers. This means that the total amount of data the CPU can keep in its working area is determined by how wide these integer registers are.

    Which isn't the case for the CPUs you are talking about. The x86-64 CPUs still default to 32-bit integers. They have 64 bits of address space, but the default integer operand size is still 32 bits -- that's why it only increased the die size of AMD's original chip by something like 7%. If it really had a 64-bit integer word size, the chip would have nearly doubled in size.
  • It is very sad that the AMD-Linux x86-64 initiative is completely hijacked by WinTel in your article. It is thus, IMO, biased and incomplete misinformation.

    BTW, to the previous comment, making the "default" integer size 64-bit won't actually increase the die size at all. The ALU and registers in AMD64 chips are already 64-bit capable. It's the OS / compiler / programming language that dictates the 32-bit integer size now (and that is not an unreasonable default).