Do we need to wipe the slate with x86?

Do we need to completely rethink the entire commodity and utility computing platform itself? Do we need a fresh start?
Written by Jason Perlow, Senior Contributing Writer

A few days ago, an industry colleague and I were having a discussion about Linux and whether it needs to be application compatible with Windows or simply "interoperable" with it from a protocol and data exchange standpoint.

His view is that Linux should strictly seek to be interoperable with Windows and its native applications, and that Linux should pursue its own native application development at all costs rather than settle into a "second system" mentality.

While I believe that interoperability and platform standardization are important for Linux, the conversation got me thinking on another tangent entirely -- do we need to completely rethink the entire commodity and utility computing platform itself? Do we need a fresh start?

Let's look at where we are today. Virtually all commodity computing and PC technology today is based on the x86 and IBM PC architecture, which Intel and IBM created in the late 1970s and early 1980s. It all started back in 1979 with the Intel 8088 and in 1981 with the IBM PC and its PC BIOS.

While today's entry-level x86-based systems are hundreds, nay, thousands of times more powerful than the original IBM PC, in theory you can still run operating systems on them, on bare metal, that last saw the light of day on a production system in the early 1990s or even earlier.

If you're so inclined, you can still boot MS-DOS 3.3 on the latest generation of Core Duo, Xeon, AMD64 and Opteron processors.

Why? Because even as the number of core instructions grew to support new features (such as protected mode), the instruction word length widened from 16 to 32 to 64 bits, the amount of memory the processor could address increased geometrically, the number of processor cores multiplied, and the bus technology improved to handle higher clock speeds, each new generation remained backward compatible with the one before it.

At the end of the day, the basic architecture is fundamentally no different from what we started with in 1981. So why exactly do we still need systems that can run CP/M and DOS on the metal?
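
For the curious, here's a minimal sketch in C (assuming GCC or Clang on x86 and the compiler-provided <cpuid.h> header; the feature bits shown are just examples) of what that layering looks like from the software side: the original instruction set is simply assumed, and everything added since has to be discovered at runtime as a feature bit stacked on top of it.

    /* Minimal sketch: enumerate a few of the extensions bolted onto the
     * same base ISA over the years. Assumes GCC/Clang's <cpuid.h>. */
    #include <stdio.h>
    #include <cpuid.h>

    int main(void)
    {
        unsigned int eax, ebx, ecx, edx;

        /* CPUID leaf 1: basic feature flags. */
        if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx))
            return 1;

        /* The 8086-era instructions need no flag at all; only the
         * later additions are advertised as feature bits. */
        printf("MMX:     %s\n", (edx & bit_MMX)    ? "yes" : "no");
        printf("SSE2:    %s\n", (edx & bit_SSE2)   ? "yes" : "no");
        printf("SSE4.2:  %s\n", (ecx & bit_SSE4_2) ? "yes" : "no");
        return 0;
    }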

Right now, we're approaching serious scalability issues with modern x86 processors, both in the physical limits of manufacturing the chips themselves and in the burden of maintaining legacy compatibility. We can't keep turning up the clock speed and shoving more and more transistors onto the silicon while retaining 100 percent legacy compatibility with the x86 platform.

Legacy x86 chews up way too much power and generates far too much heat. And while the breakneck pace of Moore's Law of "more, more, more, faster, faster, faster" worked fine for the 1990s and the end of the 20th century, it's definitely showing its age at the beginning of the 21st.

Mitigating the limitations of the architecture by throwing more cores at x86 isn't the long-term solution either.

While I don't agree with a lot of what the company has done in recent years, I happen to think that Sun's Niagara UltraSPARC T2 architecture, or something that looks very much like it, is probably going to be the wave of the future.

The sort of thing I envision is lots and lots of RISC cores (16+) on a single die, running at a lower clock speed, heavily hyper-threaded and massively parallelized, all at much lower levels of power consumption and heat output. The same could be said for IBM's pSeries and zSeries machines, although I have to give Sun props for open-sourcing its chip architecture.
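
To make that concrete, here's a rough sketch (plain C with POSIX threads; the worker function and the 64-thread ceiling are made up for illustration, the latter matching the T2's 8 cores times 8 threads) of the throughput-oriented software model a chip like that is built for: one worker per hardware thread, with lots of independent, latency-tolerant tasks running side by side instead of one core racing the clock.

    /* Rough sketch: one worker per hardware thread, throughput over
     * single-thread speed. Assumes POSIX threads and sysconf(). */
    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    #define MAX_THREADS 64   /* e.g. 8 cores x 8 threads on a T2 */

    static void *serve_requests(void *arg)
    {
        long id = (long)arg;
        /* Stand-in for real work: independent, mostly-blocking,
         * I/O-bound request handling that hides memory latency. */
        printf("worker %ld handling requests\n", id);
        return NULL;
    }

    int main(void)
    {
        long nthreads = sysconf(_SC_NPROCESSORS_ONLN); /* hardware threads the OS exposes */
        if (nthreads < 1 || nthreads > MAX_THREADS)
            nthreads = MAX_THREADS;

        pthread_t workers[MAX_THREADS];
        for (long i = 0; i < nthreads; i++)
            pthread_create(&workers[i], NULL, serve_requests, (void *)i);
        for (long i = 0; i < nthreads; i++)
            pthread_join(workers[i], NULL);
        return 0;
    }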

Intel's IA-64 architecture itself wouldn't be bad, if it weren't for the fact that it's an incredibly expensive platform in terms of what the supporting chipsets would cost to manufacture at scale, and not too many Taiwanese and Chinese companies are banging down the doors to build factories to produce them.

Arguably, significant demand for the processor would drive the cost down, but Intel hasn't said anything to the effect that newer versions will be significantly greener or output any less heat.

With open source operating systems, compatibility is no longer an issue. Once Linux applications are on par with their Windows equivalents, who really needs the x86 architecture anymore? We just port them to Linux on whatever target architecture we want, be it UltraSPARC, POWER, zSeries, or whatever superscalar, massively parallelized, hyperthreaded, power-miserly and cool (as in low thermal footprint) architecture comes next.
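
As a back-of-the-envelope illustration of that "just recompile it" argument, here's a trivial sketch in portable C (the predefined macros are GCC's usual architecture macros; the set of targets is just an example): build the same source with an x86, SPARC, POWER or s390x toolchain and it reports whatever it was compiled for, with no x86 assumptions anywhere.

    /* Trivial sketch: the same portable C source, rebuilt per target.
     * Macro names are GCC's usual predefined architecture macros. */
    #include <stdio.h>

    int main(void)
    {
    #if defined(__x86_64__)
        const char *arch = "x86-64";
    #elif defined(__sparc__)
        const char *arch = "SPARC";
    #elif defined(__powerpc64__) || defined(__powerpc__)
        const char *arch = "POWER/PowerPC";
    #elif defined(__s390x__)
        const char *arch = "zSeries (s390x)";
    #else
        const char *arch = "something else entirely";
    #endif
        printf("Built for %s, %zu-bit pointers\n", arch, sizeof(void *) * 8);
        return 0;
    }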

And when most computing is done in the cloud, the end user won't care what architecture their applications run on, especially as those applications become increasingly web-based.

And Windows? Despite Linux's natural advantage of having a development community ready to move it to whatever architecture comes next, don't count Microsoft out yet. The Windows NT architecture that XP and Vista run on was designed from the ground up to be portable.

Windows may only run on two major architectures now, x86 and Itanium, but NT was developed on the Intel i860 and NEC MIPS hardware before it ever ran on the 386, and it shipped on DEC Alpha and Motorola PowerPC as well. Should Microsoft see a need to port it to zSeries, Niagara or POWER, they'd really have to ramp up Dave Cutler and his labs again, but they're ready to go if the green field of the Almighty Cloud beckons them.

Windows Server 2016 128-bit edition running virtualized on z/VM in a green datacenter, accessed from my house via a thin client over a high-speed fiber optic connection. I can see it now.

Should we toss x86 architecture and wipe the slate with something greener and more scalable? Talk Back and let me know.

The postings and opinions on this blog are my own and don’t necessarily represent IBM’s positions, strategies or opinions.
