Is NVIDIA dead? (UPDATED)

Reports are circulating that NVIDIA is to kill off the GTX 260, GTX 275, and GTX 285 and exit the high-end and mid-range graphics card market. Is this the end for NVIDIA?

SemiAccurate has the scoop:

Word from sources deep in the bowels of 2701 San Tomas Expressway tell us that the OEMs have been notified that the GTX285 is EOL'd, the GTX260 is EOL in November or December depending on a few extraneous issues, and the GTX275 will be EOL'd within 2 weeks. I would expect this to happen around the time ATI launches their Juniper based boards, so before October 22.

Which leaves the GTX 295, but given that AMD/ATI now has 5xxx series cards that annihilate it, its lifespan can't be that long either (Demerjian has a good analysis of why vendors won't take a chance on stocking NVIDIA cards). And what does NVIDIA have in the pipeline for the near future ... hmmm, not much. A series of technical issues, combined with what appears from the sidelines to be a high degree of mismanagement, has steered the company into the tar pits.

So where does this leave the market? Well, it makes AMD's investment in ATI now seem like a pretty good buy (especially given the dismal run that AMD has been having lately). If NVIDIA is exiting the high-end and mid-range graphics card market then this leaves a very lucrative field open to AMD.

So is this a good thing or a bad thing? Well, it's good for AMD and bad for NVIDIA, that's pretty obvious. But what about consumers? Well, I'd hate to see a world without NVIDIA, because in a market with only one big player, things tend to stagnate, and prices tend to be higher than they could be. If you're looking to buy a graphics card (or a new PC) then NVIDIA exiting most of the GPU market is not a good thing at all.

But ...

If you've been watching the GPU market closely over the past few years you must have noticed how games are no longer driving the GPU industry. Gone are the days of needing to spend hundreds of dollars on GPU hardware to run a game well. Nowadays you can pick up a sub-$100 GPU that will run games very well indeed. GPU vendors have been spinning their wheels trying to invent markets for high-end GPUs and multi-GPU rigs. Truth is that the market for these sorts of fringe applications is small. The importance of the GPU is dwindling, and this means that there might not be room for two big players any more.

Thoughts?

[UPDATED: This in from an NVIDIA spokesperson:

"We are not phasing out the products you list below [the GTX 260, GTX 275, and GTX 285] by the end of this year.  We are in the process of transitioning to Fermi-class GPUs and, in time, phasing out existing products."

I'm getting conflicting reports from supply chain and hardware insiders. Stay tuned ...]

Talkback

  • Don't forget Intel

    There have been rumors for some time that Intel was going to enter the discrete graphics card market, with cards designed to work with their processors and motherboard chipsets. Perhaps nVidia felt that the market was getting squeezed too much to make a profit in that arena.
    itpro_z
    • I don't think it's that.

      I don't think it's that. nVidia has never been shy
      of competition before, and Intel, try as it might,
      never really understood gaming graphics.
      CobraA1
      • Intel = teh suck

        You ain't kidding. I bought a laptop with X3100
        graphics on it, loaded Linux and boom! It
        failed to run at any decent speed. I've had
        that thing for a year and it's never run
        graphics as well as the cheap old Dell D610
        does (which I was trying to replace). Every
        week, I see another note on the forums saying
        that the next release of the kernel will
        include fixes for friggin' Intel embedded
        drivers.

        Poo! It's not happening! I shudda bought
        something with a discrete NVIDIA card in it. No
        kidding. It would have been worth the extra
        $100.
        Olderdan
        • Try it on a real OS

          ... I mean, seriously ... why should the kernel have to be updated in order to fix problems in Intel's low-end graphics?

          How does Win7 handle your machine?
          de-void-21165590650301806002836337787023
          • Dude

            Doesn't matter what OS you run on an Intel IGP; they are renowned for being absolute stinkers. So do stop being such a fanboy.

            Bozzer
        • There's the problem

          You bought a laptop with Intel integrated graphics; even ATI integrated graphics are better. But for bang for buck, go Nvidia integrated.
          I Hate Malware
          • Doesn't look good at the moment

            A more recent update:
            http://arstechnica.com/hardware/news/2009/10/day-of-nvidia-chipset-reckoning-arrives.ars
            I Hate Malware
      • That will all change when Larrabee arrives

        Historically, Intel didn't understand much other than business computing. Heck, there was a time when they didn't support any form or amount of overclocking!

        But now look at Intel. Their latest CPUs automatically overclock themselves so long as their temps don't climb too high.

        What Intel is planning on delivering in Larrabee will forever change the face of GPUs as we know them. When I can add a Larrabee card with 32 x64 cores to a PC that already has 8+ cores of its own, a whole new world of computing suddenly becomes available.
        de-void-21165590650301806002836337787023
        • Larrabee...

          I'm not exactly sure what you've read on Larrabee, but my hopes for it aren't nearly as high as yours. The initial demo showed off what an 8800GTS would have easily beaten several years ago. It also requires a software rendering engine for DX games, which adds another layer that weakens performance.

          It's also not based on Core 2 or any of the other newer architectures. Basically what Intel did was take a Core 2 die and cram as many Pentium 1 processors into that die space as possible, shrunk down to modern manufacturing scales, with a vector processing unit and double the L2 cache per core. Each individual "core" (the actual P1) can execute two instructions, with 32KB x 2 cache along with the VPU. In the space of a single Core 2 die they were able to fit 10 of these cores. So each individual processing unit is 10x Pentium 1s; however, these are clocked at 2.0GHz instead of the older speeds. On a Larrabee card you'll have anywhere from 8 to 32 of these processors. So you'll have anywhere from 80 to 320 P1s @ 2.0GHz optimized for ray tracing.

          Eventually this will have a huge impact, and it will almost certainly be used in large rendering farms immediately. So while it's great for ray tracing, which in the long run will probably be the way graphics are rendered, it's not that great for DirectX. However, devs can write their own 3D engine using standard x86 code, which is an upside. I just think they missed the real window of opportunity, because a 5870 has more horsepower than what I outlined. Using the Compute Shader in DX11, everything you can do with Larrabee can be accomplished on any DX11-capable card, with 2.5 teraflops of performance at your fingertips.
          LiquidLearner
          • What you fail to mention

            is that ray tracing is not something that is within the realm of existing GPUs. Fermi-class GPUs (GTX 300 series) are supposed to be capable of it, but Intel already has working demos. x86 instruction sets also allow for added flexibility in the programming pipeline that existing GPUs don't offer.

            What Intel is demonstrating right now on Larrabee is leapfrogging existing rasterization APIs. Performance during gaming still remains to be seen, but the demos are more impressive than anything that NVIDIA is currently working on.
            Joe_Raby
          • yes...

            with CUDA, GPU-enabled raytracing is
            possible, but with the advancements of
            polygon rendering most of the advantages of
            raytrace rendering, at least in
            entertainment, are gone. All a GPU is, is a
            massively parallel FPU. Larrabee is a
            not-so-massively-parallel CPU; they each
            have their advantages. Each core will have
            memory of its own (not like the cache)
            where the code it's to run can be stored,
            decreasing the lag between the memory and
            the CPU. CPUs are more multipurpose and
            GPUs are more specialized. CPUs obviously
            have floating point capability, but nowhere
            near the volume of a GPU.

            I think nVidia is looking into using its
            processors in other applications requiring
            massive floating point calculations such as
            dynamics and simulation.
            shadfurman
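
            As a rough illustration of the "massively parallel FPU" point: in CUDA you write one small floating-point routine and the card runs it across thousands of threads at once, one element per thread. The sketch below is minimal and invented purely for illustration (the kernel, names and sizes are not from any shipping code); it is just the classic SAXPY operation, y = a*x + y.

            // saxpy.cu - every thread performs one multiply-add
            #include <cstdio>
            #include <cuda_runtime.h>

            __global__ void saxpy(int n, float a, const float *x, float *y)
            {
                int i = blockIdx.x * blockDim.x + threadIdx.x;
                if (i < n)
                    y[i] = a * x[i] + y[i];   // one multiply-add per thread
            }

            int main()
            {
                const int n = 1 << 20;        // about a million elements
                float *x, *y;
                cudaMallocManaged(&x, n * sizeof(float));
                cudaMallocManaged(&y, n * sizeof(float));
                for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

                // Enough 256-thread blocks to cover every element.
                saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);
                cudaDeviceSynchronize();

                printf("y[0] = %f\n", y[0]);  // expect 4.0
                cudaFree(x);
                cudaFree(y);
                return 0;
            }

            There is no graphics API anywhere in that: just arithmetic spread across as many threads as the hardware can schedule, which is exactly the "massively parallel FPU" being described.
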
          • "Advancements of polygonal rendering"

            haven't even approached the level of realism that is possible with raytracing, sorry. Raytracing uses much simpler coding techniques than rasterization too, but processing overhead is what has kept it from going mainstream - at least, until now.

            I remember when 3dfx released the first Voodoo chipset. This was a chip that was designed for floating-point math from the start, and it showed what was possible when you could optimize your coding techniques around a simplistic (but uber-fast) piece of hardware. It is still rasterization even today, but now it's bloated with complexity such as shaders, JUST to get to the same level of realism that raytracing has provided for years prior to the concept of rasterization. It's time for a change, and the hardware has finally caught up. This is a good thing - it enables developers to create very realistic environments without all of the coding overhead to build a rasterization rendering engine.
            Joe_Raby
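
            To make the "simpler coding techniques" point concrete, here is a minimal ray tracing sketch in CUDA: one thread per pixel, each firing a primary ray at a single sphere and recording hit or miss. Everything in it (the camera, the sphere, the names) is invented for illustration; a real ray tracer layers materials, lights and acceleration structures on top of this same primitive, but the core stays this small.

            // trace.cu - one thread per pixel, one sphere, no graphics API
            #include <cstdio>
            #include <cuda_runtime.h>

            struct Vec { float x, y, z; };

            __device__ float dot(Vec a, Vec b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

            // Ray from 'orig' along unit direction 'dir' against a sphere of
            // radius r at the origin: solve t^2 + 2(o.d)t + (o.o - r^2) = 0.
            __device__ bool hit_sphere(Vec orig, Vec dir, float r)
            {
                float b = 2.0f * dot(orig, dir);
                float c = dot(orig, orig) - r * r;
                return b * b - 4.0f * c > 0.0f;
            }

            __global__ void trace(unsigned char *image, int w, int h)
            {
                int px = blockIdx.x * blockDim.x + threadIdx.x;
                int py = blockIdx.y * blockDim.y + threadIdx.y;
                if (px >= w || py >= h) return;

                // Pinhole camera at z = -3 looking toward +z.
                Vec orig = { 0.0f, 0.0f, -3.0f };
                Vec dir  = { (px - 0.5f * w) / w, (py - 0.5f * h) / h, 1.0f };
                float len = sqrtf(dot(dir, dir));
                dir.x /= len; dir.y /= len; dir.z /= len;

                image[py * w + px] = hit_sphere(orig, dir, 1.0f) ? 255 : 0;
            }

            int main()
            {
                const int W = 512, H = 512;
                unsigned char *img;
                cudaMallocManaged(&img, W * H);
                dim3 block(16, 16), grid(W / 16, H / 16);
                trace<<<grid, block>>>(img, W, H);
                cudaDeviceSynchronize();
                printf("centre pixel: %d\n", img[(H / 2) * W + W / 2]);  // 255 = hit
                cudaFree(img);
                return 0;
            }

            Compare that with what a rasterizer needs before it can draw a single triangle: vertex buffers, a shader pipeline, render state and an API to drive it all. The catch, as the comment says, is the sheer amount of floating-point work per pixel once lights and bounces are added.
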
          • raytracing - closer than you think

            "but with the advancements of polygon rendering
            most of the advantages of raytrace rendering,
            at least in entertainment, are gone."

            You haven't seen anything yet ;).

            While what has been done in today's games is
            impressive - I can still see many flaws in
            them. Especially when it comes to shadows,
            lighting details, and reflections.

            How many times have you seen the following:

            -Blocky, low-resolution shadows
            -Shadows that are in the wrong place
            -Shadows that cover some things but not others

            All that is gone in ray tracing.

            Problems with reflections happen too - while
            it's generally difficult to see, reflections
            are often distorted and not accurate.

            There are all kinds of things which may not
            "look right" in a game that will look far
            better with ray tracing.

            In fact, some of the "shaders" in high end
            games like Crysis are borrowing ray tracing
            techniques - I think some of them may actually
            shoot rays. I don't program them, but with per-
            pixel shaders it's certainly possible.

            "I think nVidia is looking into using it's
            processors in other applications requiring
            massive floating point calculations such as
            dynamics and simulation."

            Indeed it is, especially with the buying of
            PhysX and the implementing of physics into its
            drivers.
            CobraA1
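
            The shadow case above maps onto a few lines of ray code: from the point being shaded, fire a ray at the light and check whether anything blocks it before the light is reached. Here is a fragment in the same illustrative CUDA style as the sphere sketch earlier in the thread (it reuses the Vec, dot and sphere setup from there; none of it comes from an actual engine):

            // Nearest hit distance along the ray for the sphere of radius r
            // at the origin, or -1 if the ray misses (same quadratic as before).
            __device__ float sphere_t(Vec orig, Vec dir, float r)
            {
                float b = 2.0f * dot(orig, dir);
                float c = dot(orig, orig) - r * r;
                float disc = b * b - 4.0f * c;
                return disc < 0.0f ? -1.0f : (-b - sqrtf(disc)) * 0.5f;
            }

            // Hard shadow test: is the sphere between 'point' and the light?
            __device__ bool in_shadow(Vec point, Vec light, float r)
            {
                Vec dir = { light.x - point.x, light.y - point.y, light.z - point.z };
                float dist = sqrtf(dot(dir, dir));
                dir.x /= dist; dir.y /= dist; dir.z /= dist;

                // Nudge the start point off the surface so the shadow ray
                // does not immediately re-hit the surface it left from.
                Vec orig = { point.x + 1e-3f * dir.x,
                             point.y + 1e-3f * dir.y,
                             point.z + 1e-3f * dir.z };

                float t = sphere_t(orig, dir, r);
                return t > 0.0f && t < dist;
            }

            Because the test is made per pixel against the actual geometry, the blocky, misplaced or inconsistent shadows listed above simply cannot happen; what you pay for it is one extra ray per light per shaded pixel.
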
        • Lemme get this straight . . .

          Lemme get this straight . . .

          You honestly think a 32 core CPU is going to
          outperform a GPU with 100-200 (or more) shader
          units?

          No, Intel still doesn't understand graphics.
          CobraA1
          • Actually

            There are far more graphics cards in use with 32 or less shader units. If you look at what Apple is using as standard fare, the vast majority are 9400-based units which have only 16.

            And yes, raytracing isn't possible with any amount of smoothness on a 9400, unlike what Intel has shown.

            The big problem is that GPUs are based around rasterization, whereas Intel's approach supports a programmable rendering pipeline that is adaptable to a number of rendering techniques.
            Joe_Raby
          • re: actually

            "There are far more graphics cards in use with
            32 or less shader units. "

            Well, yes, because most people aren't buying
            gaming class machines. Most gaming machines are
            by some gaming-specific name like Alienware or
            self-built.

            "The big problem is that GPU's are based around
            rasterization, whereas Intel's approach
            supports a programmable rendering pipeline that
            is adaptable to a number of rendering
            techniques."

            While rasterization is indeed normally used,
            that's not to say a shader can't be
            reprogrammed to do something else. The shaders
            on nVidia's DirectX 10 cards *are* fully
            programmable. It's called CUDA on nVidia's
            cards.
            CobraA1
        • You are a PR drone, am I right?

          Because only a PR company would actually write such dross.

          "a whole new world of computing suddenly becomes available".

          Care to elaborate past this soundbite? What is this "new world" you speak of? In fact, no, don't bother, I'm sure I can find the script myself on the Intel website....

          Bozzer
        • And it will likely take...

          Intel that "32" yrs to get al the bugs out. Seriously, they intro'd TurbCaching a couple of yrs back (with full Vista support) and yet they still haven't even gotten the bugs out of that project. Be stuffed if I'd trust them to handle my GPU as well!
          kaninelupus
        • Re: Larrabee FUD

          It's amazing how many people buy into the Larrabee FUD. Larrabee is to the GPU world what Itanium was to CPUs. The architecture is absurdly inefficient, and the idea of using the horrendous x86 ISA for GPUs is just laughable. It will be a total flop in the market. The only thing Intel will gain from Larrabee is scaring competitors off (Nvidia looks to be the 1st victim). Same thing happened when Inhell pushed Itanium as the next Big Thing, causing HP, DEC and Sun to abandon their next-gen chips. Fool me once...
          chefp
    • Rumors?

      Larrabee hasn't been a rumor in over a year, and it was just demoed a few weeks ago for the first time. I'm not exactly sure why you would think these are rumors; they are concrete facts. Larrabee will be out early next year, but it's basically a software rendering engine in hardware. I'm not exactly sure what to expect from it.
      LiquidLearner