NVIDIA responds to Intel's Larrabee GPU

Summary: Intel's forthcoming multi-core Larrabee graphics processing unit (GPU) promises a chip "with comparable performance to GPUs on the market at that time." The upcoming chip has raised the hackles of fierce GPU competitors ATI and NVIDIA.

Intel's forthcoming multi-core Larrabee graphics processing unit (GPU) promises a chip "with comparable performance to GPUs on the market at that time." The upcoming chip has raised the hackles of fierce GPU competitors ATI and NVIDIA.

NVIDIA PR sent me an email with "a couple of things to bear in mind" about Larrabee:

  1. With current multi-core X86 processors struggling to scale from 2 to 4 cores, how well will the *same* X86 architecture scale to 32 cores?
  2. If Ct is the answer to this, then why not leverage it now for the millions of customers who are struggling to get more performance from current CPUs?
  3. Again, if this is the same familiar X86 architecture we are used to, will applications written for today's CPUs run unmodified on Larrabee?
  4. Similarly, will apps written for Larrabee run unmodified on Intel multi-core CPUs?
  5. How will Larrabee not severely damage Intel's CPU business?
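
Point 2 refers to Ct, Intel's research project for data-parallel programming on x86. For readers who haven't wrestled with it: squeezing more performance out of today's multi-core CPUs generally means parallelizing loops by hand, for example with OpenMP. The snippet below is a minimal, hypothetical sketch of that kind of code in C; it isn't Ct, and the function name and parameters are made up for illustration.

    /* Hypothetical example: scale an array of pixel values across
       however many x86 cores are available. The work splits cleanly
       across today's 2 or 4 cores; NVIDIA's question is how well this
       same model holds up at 32 cores. Compile with -fopenmp. */
    void scale_pixels(float *pixels, long n, float gain)
    {
        #pragma omp parallel for
        for (long i = 0; i < n; i++)
            pixels[i] *= gain;
    }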

Apple currently uses an NVIDIA GPU in the MacBook Pro (GeForce 8600M GT) and offers several choices in GPU for the Mac Pro including the ATI Radeon HD 2600 XT and the NVIDIA GeForce 8800 GT. 

Talkback

6 comments
  • CISC and RISC.

    I have wondered how Intel proposed to scale their CISC x86 instruction set up to more than a few cores. They can only do this by reducing the complexity of each core and using fewer transistors. That means dropping all the clever stuff they came up with to make their CISC processors fast.

    On the other hand, a RISC-based approach can go back to small, fast cores and scale up far better. Many modern RISC chips have large instruction sets, needing many transistors too, so they are also stuck.

    But for a GPU, how much legacy support do we need? Why do we need x86 in the GPU?

    I suspect Intel are a bit worried that a return to small-core RISC designs will scale out massively better than they can do with x86, and that via technology like CUDA, we'll end up with a several-core CPU plus thousand-core co-processors, and ultimately plenty of code that isn't written for x86.
    TheTruthisOutThere1
  • Not the same but same instruction set.

    They mentioned that there were modifications to these chips to optimize them for graphics. That doesn't mean you can't run any x86 application; it means the performance is optimized for the types of things that need to be done for graphics.

    Since the chip is designed for x teraflops (I assume this because the chip is a product of Intel's terascale research), it should be plenty fast.

    Intel was able to hit 3 teraflops with an overclocked 80-core chip. I don't think the two are directly related, but they showed that it was possible.

    SSE 4.2 is capable of a DP float multiply in a single clock cycle.
    DevGuy_z
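
    A quick aside on the SSE point above: packed double-precision multiplication has been available to x86 code since SSE2 via the _mm_mul_pd intrinsic; per-cycle throughput depends on the particular core. A minimal sketch in C, assuming 16-byte-aligned arrays with an even element count (the function name is made up for illustration):

        #include <emmintrin.h>  /* SSE2 intrinsics */

        /* Multiply two arrays of doubles, two elements per SSE instruction. */
        void mul_arrays(const double *a, const double *b, double *out, long n)
        {
            for (long i = 0; i < n; i += 2) {
                __m128d va = _mm_load_pd(&a[i]);
                __m128d vb = _mm_load_pd(&b[i]);
                _mm_store_pd(&out[i], _mm_mul_pd(va, vb));
            }
        }
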
  • RE: NVIDIA responds to Intel's Larrabee GPU

    1. Current CPUs aren't designed to run relatively simple tasks on predictable data in a massively parallel way. But one made with lots of simplified cores would.
    2. You could try... see answer #1.
    3. Why should that be a requirement? Ease and familiarity of coding does not equate to wholesale unmodified porting of code, and vice versa. Who wrote this dumb question?
    4. Why would someone want to use a hammer as a shovel? Why run applications that are not designed to run well on a piece of hardware? See answer #3.
    5. Because they're designed to do different things. Please ask again when Nvidia starts running Windows XP on their GPUs.

    I can't believe this. This can't be coming from Nvidia PR. It's an insult to reader intelligence.
    Gigahurt
  • Who can trust NVidia right now

    I'm sitting with two Gateway laptops that have been having video problems, and with NVidia stonewalling on the problems with their current chips, I have no confidence that the company will take care of me as a valued customer now or in the future. It is a good thing that Intel is entering the video market. Perhaps it will put pressure on the incumbents to step up hardware and driver quality.
    Keeping Current
    • I've always been an ATI guy...

      ...the last time I bought an Nvidia card, it was a GeForce 4 MX 440. Since then, ATI has consistently been the better price/performance leader (whenever I've been in the market), and they tend to have top-notch drivers over Nvidia, too. ATI used to have trouble with driver quality... but I don't think anyone would argue that Nvidia's drivers are superior these days.
      A_Pickle
  • RE: NVIDIA responds to Intel's Larrabee GPU

    Whoever wrote that obviously doesn't know shit. That is the stupidest response to Larrabee that I have ever heard. It would have been MUCH better to address the practicality of Larrabee even being competitive in 2010. Multiprocessors are FANTASTIC for data processing, which is what graphics processing is. The market for Larrabee comes from the fact that it won't be so hard-coded, but on the other hand it probably won't be as powerful for graphics processing. It probably will be more functional across a wider variety of applications. But it won't at all compete with processors designed to compute instructions in parallel, which is very difficult. Anyway, I love nVidia, and I believe they will hold the top place as the BEST (not cheapest) graphics processing company. Though, as graphics processing and central processing move closer together, I think it's feasible that we might end up with another competitor in the market to force prices down and drive better and better technology. WOOHOO!!!
    shadfurman