Apple's Grand Central threat to Microsoft

Apple's quiet preview of OS X 10.6 - Snow Leopard - promises ". . . unrivaled support for multi-core processors . . . " through a "new set of technologies" named Grand Central. What makes Grand Central so powerful - and how can Microsoft respond?

Fresh from Google's Seattle Scalability Conference - which focused on just the questions Grand Central purports to answer - I can see that Apple has only so many choices. What are they?

Peeling the Apple

Apple is clearly a leader in implementing multi-core support, beginning with the first dual-processor Power Macs 5 years ago, while DayStar's multi-processor Macs date back to the mid-90s.

The situation is more urgent today: multi-core systems are the only easy way to drive performance up while controlling power use. The problems of multi-processor systems have been studied for over 30 years.

The chief issue is scalability: does an additional processor add performance that exceeds the cost of the processor?
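
The standard way to frame that question - Amdahl's law, which the article doesn't name but which underlies it - says that if only a fraction p of a program can run in parallel, the serial remainder caps the speedup no matter how many processors you add. A minimal sketch in C (the 90% parallel fraction is just an assumed example):

/* Amdahl's law: speedup(N) = 1 / ((1 - p) + p/N), where p is the
 * fraction of the work that can run in parallel on N processors. */
#include <stdio.h>

static double amdahl(double p, int cores)
{
    return 1.0 / ((1.0 - p) + p / cores);
}

int main(void)
{
    const double p = 0.90;                      /* assume 90% of the work parallelizes */
    const int counts[] = { 1, 2, 4, 8, 16, 64 };

    for (size_t i = 0; i < sizeof counts / sizeof counts[0]; i++)
        printf("%3d cores -> %5.2fx speedup\n", counts[i], amdahl(p, counts[i]));
    return 0;
}

With p = 0.90 the speedup never passes 10x however many cores are added - exactly the point at which an extra processor stops adding performance that exceeds its cost.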

Alternatives

Compiler technology is capable of decomposing applications into multiple jobs that can be spread across multiple processors. The real issues are coordinating those jobs and their access to storage - either RAM or disk.

The common approach is to manage the multiple jobs as threads and to guard access to storage with locks. The problem is that the overhead of managing those threads and locks grows with the number of processors - limiting scalability.
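
As a rough illustration of that model (a generic POSIX threads sketch, not anything from Apple or Microsoft; THREADS and ITERATIONS are arbitrary), every worker below must take one mutex to update shared state. The lock keeps the count correct, but it also serializes the threads, and time spent contending for it grows as threads are added:

/* Threads-and-locks model: N workers increment a shared counter,
 * serializing on a single mutex for every update. */
#include <pthread.h>
#include <stdio.h>

#define THREADS     8
#define ITERATIONS  1000000L

static long shared_total = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg)
{
    (void)arg;
    for (long i = 0; i < ITERATIONS; i++) {
        pthread_mutex_lock(&lock);      /* every update waits its turn here */
        shared_total++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void)
{
    pthread_t t[THREADS];

    for (int i = 0; i < THREADS; i++)
        pthread_create(&t[i], NULL, worker, NULL);
    for (int i = 0; i < THREADS; i++)
        pthread_join(t[i], NULL);

    printf("total = %ld\n", shared_total);      /* THREADS * ITERATIONS */
    return 0;
}

Build with cc -pthread. Raising THREADS does not make the answer arrive any faster - more of each core's time goes to acquiring and releasing the lock, which is the scalability ceiling described above.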

Another approach - used in Azul Systems' 800+ core Java compute servers - is Transactional Memory. Instead of locking file or memory access, TM treats memory updates much like databases treat commits: as atomic transactions that either complete or roll back.
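
Real transactional memory tracks whole read and write sets in hardware or a runtime; but the optimistic "do the work privately, then commit or retry" pattern it relies on can at least be sketched with a C11 compare-and-swap loop (an analogy only, not Azul's implementation):

/* Optimistic update: compute a new value privately, then try to commit it
 * atomically; if another thread changed the value first, retry - the
 * moral equivalent of a transaction rolling back and re-running. */
#include <stdatomic.h>
#include <stdio.h>

static _Atomic long balance = 100;

static void deposit(long amount)
{
    long observed = atomic_load(&balance);
    long desired;
    do {
        desired = observed + amount;            /* private "transaction" work */
    } while (!atomic_compare_exchange_weak(&balance, &observed, desired));
    /* On failure, observed is refreshed with the current value
     * and the loop retries the commit. */
}

int main(void)
{
    deposit(42);
    printf("balance = %ld\n", atomic_load(&balance));
    return 0;
}

No lock is held while the new value is computed, which is the property transactional approaches exploit to scale better than coarse locking.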

Grand Central could also include a more elegant scheduler - like FreeBSD's ULE, which promises better performance under load - or smarter languages and compilers, such as Cray's Chapel, whose awareness of multi-core architecture makes life much simpler for programmers - something Grand Central also promises.

Microsoft's problem

Whether Windows 7 is simply perfuming the Vista pig or a significant re-write, Apple's Grand Central challenge can't be ignored. If Apple achieves real speed-ups and Microsoft doesn't, Microsoft will look like a pitiful, helpless giant. And its IT defenders will look like idiots.

Windows 7's rapid development schedule suggests a marketecture refresh rather than a fundamental re-write. For a much smaller company to steal the lead in multi-core performance would be a long-remembered humiliation. Imagine the PC guy vs Mac ads.

The Storage Bits take

Apple is likely working on all these technologies. With Sun's powerful DTrace tool standard in OS X, Apple has the means to assess all the options and choose the winners next year.

Update: Another element in the mix is Intel's Digital Enterprise Group, whose paper Enabling Scalability and Performance in a Large Scale CMP Environment [available from the ACM] discusses a project that

. . . enables near linear improvements in performance and scalability for desktop workloads such as the popular XviD encoder and a set of RMS (recognition, mining, and synthesis) applications. Another key contribution of this work is its use of McRT to explore non-traditional system configurations such as a light-weight executive in which McRT runs on "bare metal" and replaces the traditional OS.

[Emphasis added]

BTW, McRT - Multi-core Run Time - is pronounced "MAC-ar-tee."

Intel is stirring this pot because they know that their advanced hardware is useless without software. They also know that Microsoft and Dell won't invest a nickel in advancing computing's state-of-the-art because they are very happy with the status quo. Supporting Apple makes good business sense for Intel - and for anyone interested in a competitive computer market. End update.

Microsoft has a deep technical bench, but its executive leadership can't seem to prioritize. Just as Henry Ford lost a 25-year lead in automobiles through inflexibility, Microsoft's cavalier attitude toward Windows customers is steering the company to disaster.

Comments welcome, of course. I want fast multi-core performance and I don't care who delivers it first.

Topics: Hardware, Apple, Microsoft, Processors


Talkback

  • Necessary motivation

    It could go something like this ...

    1. Software company (APPLE) realises it will get good press and follow-on sales by harnessing the computing power in existing multi-core CPUs. If only they had better integration with the hardware ...

    2. GPU hardware companies (ATI, NVIDIA) realise they will get good press and follow-on sales by releasing the computing power in existing multi-threaded GPUs. If only they had better integration with software ...

    3. Since all parties above are 2nd in their league they will realise that helping each other out could make them all first in their leagues.

    4. Someone will have the 'brilliant' and 'novel' idea that instead of having a marketecture (like that word) based on the existing split of CPU/GPU and OS/applications they could design a cheap, modern SUPERCOMPUTER instead of an expensive old-style TOY.

    5. Ten years later historians will say "how surprising that it took so long to happen".

    6. M$ may indeed wake up smarter which would help consumer pricing further via competition, or it may not.
    jacksonjohn
  • Don't think that it will be a serious threat for M.S

    I personally would really like to see whether Apple can solve, in such a short time, a problem that has been giving a lot of quite smart people a run for their money for years, if not decades.
    Even if Apple succeeds, I don't see how this would represent any significant threat to Microsoft.
    Such an achievement would appeal to only a niche market.
    Why would the average Joe/Jane, who just wants to use an office suite that fits them or browse the internet, need a much faster computer?
    timiteh
    • Interesting question..... when is good enough NOT?

      Take SUVs, for instance. Before them, people did fine, or well enough, with the options available to them. Then came the SUV and a change in attitude. Suddenly everyone needed one; the term NEED was used at the drop of a hat. The so-called Soccer Mom was born. However, when I observe people on the road in their SUVs, what I see is ONE person in the vehicle every single time. (I kid not.) So the SUV full of kids is, to my mind, an urban myth. Then safety came up; as it turns out, though, those SUVs are not all that safe and are prone to rollovers and such. Yet again, the whole illusion of safety was in their size. So perhaps speed is not the killer item needed today, but I think it will be a strong selling point.

      Pagan jim
      James Quinn
      • But is it "good enough" to switch?

        We're not talking SUVs here, where it was also about perception - "I got an SUV, I'm part of the 'in' crowd" and other such nonsense. We're talking about the people who actually have a real need, not those who buy something just to show it off.

        But at the same time, there are a lot of things out there that people haven't upgraded because what they have now is more than adequate for their needs.

        Will a marginal increase in speed make a difference? I really don't see that it will.
        AllKnowingAllSeeing
        • Well there is the question......

          What kind of performance increases might we see? Some applications would surely benefit - movie and photo editing, I would imagine, and graphic arts applications. What about the all-important-to-some games? I think games have only begun to touch on the potential virtual reality that a powerful computer might create for a player. Imagine an entire fantasy role-playing world where every character and monster has AI and a goal of its own, with random chaotic events happening in a real-world environment where anything is possible. It would take some pretty impressive power to generate that. Also, I don't know how incremental the improvement might be. It could be more than incremental if one considers that, based on this article, at least some people in the field don't think we are tapping into the real potential of multi-core.

          Pagan jim
          James Quinn
    • Because Jane and Jon Doe

      are not just using an office suite and browsing the internet. This
      is the paradigm error folks are making.

      Jane and Jon Doe are doing HD home video editing. And THAT'S why they'll want the horsepower.
      frgough
    • Threat to Ms?

      Average Joe/Jane also drives a huge SUV in suburbia - that's why. One-upmanship is the motor of progress these days.
      c.bartlefsen@...
  • You can already have it now

    Let IBM show you how:
    http://www-03.ibm.com/press/us/en/pressrelease/24405.wss

    And it runs on:
    "Roadrunner operates on open-source Linux software from Red Hat."

    So, why not get yourself a Fedora, or the even easier Ubuntu and OpenSUSE? For free.... :-)
    pjotr123
    • And for EXTREME performance, Gentoo

      will do the trick ....
      fr0thy2
      • Could you point out

        any link showing a recent performance comparison between Linux distros to back up your claim? Gentoo users claim a performance advantage all the time, but I do not (really) know whether that is true.
        markbn
      • Gentoo gains aren't that big. But it is the best distro customization-wise.

        I suppose compiling every application so it's optimized to the best of your CPU's capability is the kind of gain talked about in the article. It is easily done in Gentoo by using a recent version of GCC and -march=native, roughly as sketched below. Also, Gentoo can be customized to support one particular application but not another.
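
        For example (an illustrative, per-machine sketch, not a recommended configuration), those flags typically go in Gentoo's /etc/make.conf:

        # /etc/make.conf -- illustrative only; tune per machine
        CFLAGS="-O2 -march=native -pipe"   # let GCC target the local CPU
        CXXFLAGS="${CFLAGS}"
        MAKEOPTS="-j4"                     # parallel build jobs, roughly one per core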
        g2g591
  • I don't see a major technological break through

    The OS won't make that big a difference as far as multi-core goes. It is more of an application issue. I find that most OSes already do a good job of scheduling the resources assuming the demand is there.

    BTW, Windows goes further back on MP than Apple. Perhaps not multi-core since x86 was slow to get there. But Apple had to borrow another OS (BSD) in order to get there. OS9 was not good at this.

    I don't want to sound rude but I don't see any technological basis for this article. It doesn't make any sense.
    DevGuy_z
    • "IF" as the article states people in the industry

      have been working on this for 20 to 30 years or more and
      continue to this day too work on this then I have to think that
      at least some in the industry disagree with you.

      Pagan jim
      James Quinn
      • That's a compiler issue (application) not an OS one.

        The idea is to make it easier to write APPLICATIONS that take advantage of multi-core. That part is hard. But there isn't some OS technology that is going to work on existing hardware that is suddenly going to give a big performance boost. And an OS can't make a single threaded app multi-threaded. OSes are schedulers, an app requests a resource and the OS attempts to fill the request in an intelligent and efficient manner but once the process or thread has its resource it is up to the application to use it efficiently.

        He made it sound like Apple has some new OS-based technology that is going to make the OS a whole lot faster on existing hardware. While he didn't SAY "existing hardware," it is implied, because it is meaningless if the hardware doesn't exist.

        There are some ideas, like NUMA machines, that can help, but Microsoft has already added NUMA support to Vista and Server 2008.
        DevGuy_z
    • Hard to comment without the details

      [i]"The OS won't make that big a difference as far as multi-
      core goes."[/i]

      It's really hard to comment one way or the other without
      knowing all of the details. All we have is a high level
      description of what Grand Central is, etc. However, when
      you look at something like the BeOS, it's clear that the OS
      can make a big difference. If Apple reworks major APIs in
      a multithreaded way, little if any work would have to be
      done by the developers.

      [i]"BTW, Windows goes further back on MP than Apple.
      Perhaps not multi-core since x86 was slow to get there.
      But Apple had to borrow another OS (BSD) in order to get
      there. OS9 was not good at this."[/i]

      The classic Mac OS used cooperative multithreading as
      opposed to preemptive multithreading. You're right, it
      wasn't as good, but it does seem to predate Microsoft's
      efforts.

      Further, I'm not quite sure why people just assume OS X =
      BSD. Apple's kernel, XNU, is a mix of Mach, BSD and
      Apple's own engineering. When it comes to SMP support,
      this was one of the basic features designed into Mach from
      the beginning.

      Finally, does it really matter where a feature came from?
      Apple used to suffer from the "not invented here
      syndrome". I'm happy that's not the case anymore. Why
      re-invent the wheel just to say you invented one too? It
      makes sense to use open source in many cases, improve
      on it if necessary and focus your time where you can make
      a difference. Also, just because someone claims to have
      developed something themselves, doesn't mean it's entirely
      true. Look at Windows networking for example. People
      were still finding traces of BSD code up until very recently.
      They should have just admitted to where they "borrowed" it
      from in the first place.
      techconc
      • I over generalized on the BSD thing...

        Yep, I did over-generalize there, but my point was that Apple wasn't that savvy on MP or MC; they inherited the capability by adopting Mach kernel technology that already had it.

        BTW, I still disagree on pre-dating. Microsoft had pre-emptive MT back in NT 3.1, which significantly predates OS X, and on the desktop it had cooperative multi-tasking from Windows 3.0 through Win 3.11. In Win95 it was pre-emptive for all 32-bit apps and cooperative for 16-bit ones. If I remember right, Windows did a better job of MT than the Mac SE; it was later Mac OS that really improved on MT. Windows has handled MP since Windows 2000 and perhaps NT 4.
        DevGuy_z
        • NT 3.1 had SMP support (nt)

          nt
          blu_vg@...
          • Well I wasn't sure and too lazy to look it up.

            When did NT 3.1 come out?

            And Win 2.1 actually had cooperative multi-tasking; it just wasn't very popular.

            While Win9x didn't have SMP, it did support 32-bit pre-emptive MT.
            DevGuy_z
        • SMP and Multitasking details

          Just to get some facts right in the discussion, since it's way too late to add facts to this article...

          Windows NT in 1993 had Symmetric MultiProcessing (SMP) as part of its core architecture.

          Windows 1.0 in 1985 had both cooperative multitasking (for WinApps) and preemptive multitasking (for MS-DOS apps)
          Mike Galos
    • Don't think so

      I think you are mistaken about Windows going further back than Apple... ever hear of the Radius Rocket? They were cool in those old Mac IIs - they were NuBus cards....
      billmsu