
Nvidia: Next big supercomputer player?

Written by Larry Dignan, Contributor

Nvidia unveiled its next-generation graphics processing unit (GPU) architecture, dubbed Fermi, and announced a key supercomputing win with the Oak Ridge National Laboratory.

The takeaway: Graphics and visualization are becoming key to scientific discoveries. And Nvidia could be a major player. Oak Ridge's supercomputer will be used for research in energy and climate change and is expected to be 10 times more powerful than today's fastest supercomputer (statement).

Fermi, announced at Nvidia's GPU Technology Conference for developers (statement), is a new architecture designed to deliver GPUs built for general-purpose computation, not just graphics. The game for Nvidia CEO Jen-Hsun Huang is to make his company more than just a graphics chip player.

Fermi will have tools for large data center deployments, 512 CUDA cores and a host of other goodies, including NVIDIA Parallel DataCache, which speeds up algorithms for physics solvers and ray tracing.
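For a sense of what those CUDA cores actually run, here is a minimal sketch of a physics-style kernel in CUDA C. It is an illustrative toy, not code from Nvidia's announcement: the kernel name, particle count and tile size are assumptions, and the explicit shared-memory staging stands in for the kind of on-chip data reuse the Parallel DataCache is pitched to accelerate.

// Toy N-body force kernel in CUDA C. Everything here (names, sizes) is
// illustrative; it is not Nvidia sample code. Each thread owns one particle
// and accumulates softened inverse-square forces from all others, staging
// tiles of positions in on-chip shared memory to cut trips to device DRAM.
#include <cstdio>
#include <cuda_runtime.h>

#define N    1024   // particle count (assumed to be a multiple of TILE)
#define TILE  256   // threads per block == shared-memory tile size

__global__ void forces(const float4 *pos, float4 *acc) {
    __shared__ float4 tile[TILE];                     // on-chip staging buffer
    float4 p = pos[blockIdx.x * blockDim.x + threadIdx.x];
    float3 a = {0.0f, 0.0f, 0.0f};
    for (int base = 0; base < N; base += TILE) {
        tile[threadIdx.x] = pos[base + threadIdx.x];  // cooperative load
        __syncthreads();
        for (int j = 0; j < TILE; ++j) {              // inverse-square sum
            float3 d = {tile[j].x - p.x, tile[j].y - p.y, tile[j].z - p.z};
            float r2 = d.x * d.x + d.y * d.y + d.z * d.z + 1e-4f;  // softened
            float inv = rsqrtf(r2 * r2 * r2);         // 1 / r^3
            a.x += d.x * inv;  a.y += d.y * inv;  a.z += d.z * inv;
        }
        __syncthreads();
    }
    acc[blockIdx.x * blockDim.x + threadIdx.x] = make_float4(a.x, a.y, a.z, 0.0f);
}

int main() {
    float4 *pos, *acc;
    cudaMallocManaged(&pos, N * sizeof(float4));
    cudaMallocManaged(&acc, N * sizeof(float4));
    for (int i = 0; i < N; ++i)                       // seed a flat grid of particles
        pos[i] = make_float4(i % 32, i / 32, 0.0f, 1.0f);
    forces<<<N / TILE, TILE>>>(pos, acc);
    cudaDeviceSynchronize();
    printf("acc[0] = (%f, %f, %f)\n", acc[0].x, acc[0].y, acc[0].z);
    cudaFree(pos);
    cudaFree(acc);
    return 0;
}

Built with nvcc (for example, nvcc nbody.cu -o nbody), the kernel assigns one particle per thread, which is exactly the sort of wide, regular parallelism a 512-core part is designed to soak up.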

CNET News' Brooke Crothers notes:

The Fermi chip integrates three billion transistors, about three times the number of transistors in Nvidia's most powerful graphics chip now on the market. In the future, the chip will also find its way into Nvidia's GeForce product line for PCs. ... The architecture would use both graphics processing units (GPUs) from Nvidia and central processing units (CPUs), according to Nvidia. Intel and Advanced Micro Devices, among others, make the CPUs.

The problem? Nvidia is a big battleground among analysts. It faces fierce competition from AMD. In addition, it's unclear whether these big supercomputing announcements boost Nvidia's commercial standing.

The battleground was evident on Thursday among Wall Street analysts.

JMP Securities analyst Alex Gauna says there was a lot of buzz around the Nvidia tech conference, which was filled to capacity. Gauna gushes:

We would characterize the energy of the event as booming, with attendance some 50% oversubscribed beyond event organizer expectations and participants spilling into multiple overflow rooms to witness the dramatic inaugural keynote that jumped off presentation screens in stunning 3-D. Testimonials emerging from the conference came from the echelons of scientific supercomputing, academic research, medical devices, and industrial design in addition to the standard cadre of multimedia and entertainment advocates. We also overheard the real exchanging of ideas and business prospects on the exhibition floor at the end of the day, versus the usual one-sided market pitches we have come to expect from technology forums. This groundswell of support and enthusiasm for a technology roadmap that did not even exist two years ago is stunning to us and we believe promises dramatic scientific, commercial, and virtualization breakthroughs in coming years. It also represents an opportunity for Nvidia to entrench itself as a major mainstream computational force in the electronics industry in the coming decade, in our view.

And then there's the other side of the Nvidia equation. J.P. Morgan analyst Shawn Webster downgraded Nvidia. He said that Nvidia is on track for its fourth quarter; however, "we are concerned on risk to Consensus estimates for calendar 2010/2011 and continued competitive pressure from AMD should drive under-performance." Webster writes:

AMD appears ahead of NVDA on product cycle, despite “Fermi” announcement yesterday. AMD recently launched its new family of graphics processors on 40 nanometer technology. Third party reviews have been positive on the family in terms of its performance, cost and power consumption. We analyzed the early reviews and estimate AMD's "5800" family of products improves performance roughly 50% versus its prior generation at lower power, but at a chip size which is only 18% larger (albeit on a new process technology). While Nvidia announced its new “Fermi” (aka “GT300”) GPU architecture at a conference yesterday, many details were missing and it is not available yet and we believe may not arrive until December or later, which adds risk the company will miss the holiday season and the launch of Windows 7.

Webster adds that the Fermi architecture could result in a "poor cost structure" relative to AMD.

In the end, the Gauna and Webster takes couldn't be more different. As usual, the truth probably lies somewhere in the middle. Bottom line: Nvidia can be a supercomputing player, but it may not matter to the company's commercial standing.
