Do we need to wipe the slate with x86?

Summary: Do we need to completely rethink the entire commodity and utility computing platform itself? Do we need a fresh start?



A few days ago an industry colleague and I were discussing Linux and whether it needs to be application-compatible with Windows, or simply "interoperable" with it from a protocol and data exchange standpoint.

His view is that Linux should strictly seek to be interoperable with Windows and its native applications, and that Linux should pursue its own native application development rather than be cloistered into a "second system" mentality.

While I believe that interoperability and platform standardization is important for Linux, it got me thinking on another tangent entirely -- do we need to completely rethink the entire commodity and utility computing platform itself? Do we need a fresh start?

Let's look at where we are today. Virtually all commodity computing and PC technology today is based on the x86 and IBM PC architecture, which was invented by Intel and IBM in the late 1970s. It all started back in 1979 with the Intel 8088 and in 1981 with the IBM PC and PC BIOS.

While today's entry-level x86-based systems are hundreds, nay, thousands of times more powerful than the original IBM PC, in theory you can still run, on bare metal, OSes that last saw the light of day on a production system in the early 1990s or even earlier.

If you're so inclined, you can still boot MS-DOS 3.3 on the latest generation of Core Duo, Xeon, AMD64 and Opteron processors.

Why? Because even as the core instruction set grew to support new features (such as protected mode), the instruction word length widened from 16 to 32 to 64 bits, the amount of memory the processor could address increased geometrically, the number of processor cores multiplied, and the bus technology improved to handle higher clock speeds, backward compatibility was preserved at every step.

However, at the end of the day, the basic architecture is fundamentally no different than what we started with in 1981. So why exactly do we still need systems that can run CP/M and DOS on the metal?

Right now, we're approaching serious scalability issues with modern x86 processors, both in the limits of manufacturing the chips themselves and in maintaining legacy compatibility. We can't keep turning up the clock speed and shoving more and more transistors onto the silicon while retaining 100 percent legacy compatibility with the x86 platform.

Legacy x86 chews up way too much power and generates far too much heat. And while the breakneck pace of Moore's law -- "more, more, more, faster, faster, faster" -- worked fine for the 1990s and the end of the 20th century, it's definitely showing its age at the beginning of the 21st.

Mitigating the limitations of the architecture by throwing more cores at x86 isn't the long-term solution either.
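One way to see why piling on cores has diminishing returns: Amdahl's law says the serial portion of a workload caps the speedup no matter how many cores you add. A minimal sketch (the function name `amdahl_speedup` is mine, for illustration; it isn't from this article):

```python
# Amdahl's law: the theoretical speedup from n cores when only a
# fraction p of the work can be parallelized. The serial fraction
# (1 - p) dominates as n grows, which is why many-core chips don't
# rescue workloads that are mostly serial.

def amdahl_speedup(p: float, n: int) -> float:
    """Speedup with parallel fraction p (0..1) on n cores."""
    return 1.0 / ((1.0 - p) + p / n)

# A desktop app that is only 50% parallelizable barely benefits
# from 16 cores; a 95%-parallel server workload fares better:
for p in (0.50, 0.95):
    for n in (2, 16, 64):
        print(f"p={p:.2f}, {n:2d} cores -> {amdahl_speedup(p, n):.2f}x")
```

Under these numbers, even 64 cores can't quite double the speed of a half-serial application, which is the sense in which "throwing more cores at x86" only postpones the problem.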

While I don't agree with a lot of the things the company has done in recent years, I happen to think that the Sun Niagara UltraSPARC T2 architecture or something that looks very much like it is probably going to be the wave of the future.

The sort of thing I envision is lots and lots of RISC cores (16+) on a single die, running at a lower clock speed, heavily hyper-threaded and massively parallelized, all at much lower levels of power consumption and heat output. The same could be said for IBM's pSeries and zSeries machines, although I have to give Sun props for open-sourcing their chip architecture.

Intel's IA-64 architecture itself wouldn't be bad, if it weren't for the fact that it's an incredibly expensive platform in terms of what the supporting chipsets would cost to manufacture at scale, and not too many Taiwanese and Chinese companies are banging down the doors to build factories to produce them.

Arguably, significant demand for the processor would drive the cost down, but Intel hasn't said anything to the effect that newer versions will be significantly greener or output any less heat.

With Open Source operating systems, compatibility is no longer an issue. Once Linux applications are on par with their Windows equivalents, who really needs the x86 architecture anymore? We just port them to Linux on whatever target architecture we want, be it UltraSPARC, POWER, zSeries, or whatever superscalar, massively parallelized, hyper-threaded, power-miserly and cool (as in low thermal footprint) architecture comes next.

When most of the computing is going to be done in the cloud, the end user doesn't care what architecture their applications run on, especially as those applications become increasingly web-based.

And Windows? Despite Linux's natural advantage in dealing with new architectures -- a development community ready to move it to whatever comes next -- don't count Microsoft out yet. The Windows NT architecture that XP and Vista run on was designed from the ground up to be portable.

Windows may only run on two major architectures now, x86 and Itanium, but NT was originally brought up on the Intel i860 and NEC MIPS before it ever ran on the 386, and later ran on Motorola PowerPC and DEC Alpha as well. Should Microsoft see a need to port it to zSeries, Niagara or POWER, they'd really have to ramp up Dave Cutler and his labs again, but they're ready to go if the green field of the Almighty Cloud beckons them.

Windows Server 2016 128-bit edition running virtualized on z/VM in a green datacenter, accessed from a thin client at my house over a high-speed fiber optic connection. I can see it now.

Should we toss x86 architecture and wipe the slate with something greener and more scalable? Talk Back and let me know.

The postings and opinions on this blog are my own and don’t necessarily represent IBM’s positions, strategies or opinions.



Jason Perlow, Sr. Technology Editor at ZDNet, is a technologist with over two decades of experience integrating large heterogeneous multi-vendor computing environments in Fortune 500 companies. Jason is currently a Partner Technology Strategist with Microsoft Corp. His expressed views do not necessarily represent those of his employer.



  • Toshiba Cell Laptop anyone?...


    Slap Yellow Dog Linux on it and you are good to go!
    D T Schmitz
    • Toshiba?

      I wasn't aware Toshiba had licensed the Cell from Sony. But yeah, a SONY PS3 multi-cell Linux laptop using low-power Micron or Toshiba SSD's would be SICK. Linux apps, PS3 games :)
      • Iz Nice, да? Master the possibilities.

        D T Schmitz
      • Try IBM

        The Cell processor was a joint venture between IBM, Sony, and Toshiba. Sony was planning on pushing the Cell for the PS3 and Blu-Ray players while Toshiba wanted to put the things in TVs for some reason or another.

        Point being, the Cell could be a quick jumping point into a different tier of computing altogether, with some familiarity to already-existing technologies.
      • Toshiba is a co-developer, and yes, they did...

        Toshiba is a co-developer, and yes, they did license it.
        However, since Cell is based on the Power architecture,
        Toshiba developed their own "PPE" called the Spurs Engine.
        It drives 4 SPEs.

        The laptop can transcode a 1GB HD movie in about 10 minutes. :)
        • In Perlow parlance: That is so SICK ;)

          D T Schmitz
    • I will agree ....

      I would LOVE to see some "PC" style CELL based computers ....

      They would be BAD to the BONE !!!
      • The Cell

        excels at floating point calculations, but there's no issue with integer math.
        D T Schmitz
    • It doesn't use Cell

      Please, for the love of Pete -- read the article. The Toshiba laptop includes a Cell processor for offloading compute-intensive processing such as encoding and other number crunching. It is also only enabled when the laptop is on mains power -- so if you're buying it in the hope of 'teh speed' on the move, you're going to be sorely let down.

      Here is the information on it:,39029450,49295004,00.htm

      Again, it is only being used as a co-processor.
      • I stand corrected and thanks kaiwai!

        D T Schmitz
  • Do we know if backwards compatibility is holding back...

    ...progress? Or is it just purists who would like to see legacy support end?

    Currently x86 outperforms US T2 in just about every benchmark despite x86's backwards compatibility. I like Sun (I currently own eight of them) but performance-wise I don't see x86 suffering, at least not as a result of backward compatibility.

      • OK: Do we know if backwards compatibility is holding back...

      • Green-ness == TDP?

        Intel has a 25-watt quad-core in the works. While the Niagara T2 is nice, it really is better suited to certain types of applications performance-wise, so we lose something. As we get closer to Nehalem, where the clock speed can be controlled on each core and hyper-threading is enabled, Intel is moving closer to green-ness (low power) while improving multi-thread performance without sacrificing single-thread performance.

        A hybrid approach is also possible, where the main CPU is x86 and there are RISC modules for heavy lifting -- similar to Cell, except with x86 in place of the PPC core.

        Another approach is something that looks like x86 externally but is implemented with many non-x86 components.

        In all actuality, most desktop apps and some server-side apps can't take advantage of many-core RISC processing due to the serial nature of their domain. Single-thread performance will still be important for some time.
      • Intel x86 leads greenness as well

        Intel x86 leads in greenness as well, and here are the results.
    • Power to the Process!

      [i]Currently x86 outperforms US T2 in just about every benchmark despite x86's backwards compatibility. I like Sun (I currently own eight of them) but performance wise I don't see x86 at a least not as a result of backward compatibility.[/i]

      You can get some idea of the relative architectural efficiencies by comparing process nodes at equal performance. Keep in mind that Intel is more than anything else a manufacturer who always leads the world in mass-producing the most advanced processes, and they get a good bit of their performance from being able to shovel more and smaller transistors onto a die.

      Compare Intel's 90 nm products' performance with others at that node and you'll have a better idea of the relative merits of the architectures.
      Yagotta B. Kidding
  • RE: Do we need to wipe the slate with x86?

    The idea is fine, as long as the new architecture allows someone to design a NEW DOS which would probably only use a part of the chip's potential -- but would still run, with the understanding that old DOSes (such as 3.3) would no longer run.

    The reason is, for security reasons in some applications, you need DOS to avoid both Windows' and Linux's behavior of writing RAM to swap files. That behavior allows recovery of proprietary data and even passwords from the disk through corporate espionage.

    To my knowledge, DOS is the only OS that offers the safety of not having your RAM written out to disk. Perhaps if Windows and Linux placed an option in their Control Panels allowing swapping to be temporarily turned off, then back on after a secure app is run, they would be acceptable.

    john roberts
    • swap file, and non-x86 systems

      "To my knowledge, DOS is the only OS that allows the safety of not having your RAM written out to disk. Perhaps if Windows and Linux placed an option in their Control Panels allowing swapping to be temporarily turned off and back on after a secure app is run - they would be acceptable."

      Uhh.. they do. Windows XP, 2000, and 98 at least all have an option to disable the pagefile (XP immediately starts complaining it's almost out of memory, even with 300MB+ free, though...) Linux, you can install without swap file or swap partition, and it just won't use one. Or, you can run "swapoff -a" to disable swap, and then "swapon -a" to turn it back on.
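To make that concrete, here is a minimal sketch (my own illustration, assuming Linux, where `/proc/swaps` lists active swap areas after a header line) of checking whether swap is on before handling sensitive data:

```python
# Sketch: verify no swap is active before working with sensitive
# data in memory. On Linux, /proc/swaps has one header line; any
# additional lines are active swap devices or files.

def swap_is_active(proc_swaps: str = "/proc/swaps") -> bool:
    """Return True if any swap device or file is enabled."""
    with open(proc_swaps) as f:
        lines = [ln for ln in f.read().splitlines() if ln.strip()]
    return len(lines) > 1  # more than just the header line

if __name__ == "__main__":
    if swap_is_active():
        print("Swap is on -- run 'swapoff -a' (as root) first.")
    else:
        print("No active swap; RAM won't be paged to a swap area.")
```

Note that even with swap off, RAM can still reach disk via hibernation files or core dumps; a per-process alternative on Linux is the mlock()/mlockall() system calls, which pin a process's pages in physical memory.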

      It's highly irregular usage, but you can also use a Linux kernel, and run just one app if you want, to make sure you don't have other stuff being a potential security problem. (Technically you can even run 0 apps -- there's a Linux-based firewall floppy that sets up routes, etc., then quits -- the kernel itself forwards packets, otherwise there's actually 0 programs running.)

      I'm all for other CPUs dethroning x86 if they are deserving -- I can assure people that Linux is ready for them. I've got a PowerPC running Ubuntu at work; I've run desktop Linux on a DEC Alpha and HP PA-RISC in the past; the HP machine didn't look normal, but the desktop did. These behaved 100% like an x86 Linux desktop.. videos.. USB plug'n'play.. everything.

      The only things that are really non-portable (x86-specific) right now are Flash and NVidia's binary video driver. "gnash" is being worked on to replace Flash (it more-or-less plays YouTube videos; it's pretty close to being nice). And there are 2.5 solutions to the NVidia problem: 1) Buy an AMD (ex-ATI) video card. 2) nouveau is being worked on as an open-source (and so non-x86-specific) replacement for NVidia's driver. 2.5) Give up on 3D support and use "nv", the *existing* replacement for NVidia's driver.

      A 25-watt quad-core is pretty good, but if you can have a nice desktop that uses like 2 watts, that's better than 25; Linux isn't Vista, it's not all bloated, and it doesn't require a multicore CPU just to get going.
  • A better bet

    Just search around for "Chalkboard Resurfacing"... the word slate brings back too many bad memories for me since I had to haul those very-heavy and slow 16' units out of schools to replace them with a newer surface.
  • Linux architecture is ancient

    Linux architecture is ancient; it's still based on a monolithic kernel