
Q&A: Nvidia's chief scientist on the future of graphics

Dave Kirk, chief scientist at Nvidia, talks about the next challenge for graphics processors and why Moore's Law does not apply
Written by Matt Loney, Contributor
Dave Kirk, Nvidia's chief scientist, was in London recently as part of a European tour. ZDNet UK caught up with him to talk about the future of PC and console graphics, whether they will ever really match mainstream movie quality, and how the company will maintain the performance curve.
Q: There seems to have been a lot of confusion over the GeForce branding lately. Have you learnt any lessons from this?
A: It is interesting to compare what we saw in the market after launch versus what we thought about when we decided to do it. The picture we had was that GeForce4 was the 2002 model -- the name denoted the model year, not the architecture -- but a lot of customers perhaps did not realise this. It is something for us to think about next time -- the same naming dilemma will come up again when we have another product in autumn 2002 and spring 2003. Say we call our next processor the GeForce5 -- this does not necessarily mean it has a new feature set, just that it is a new product, so we may call it something else.
Q: Did Nvidia's philosophy change with the purchase of 3DFX?
A: Not too much -- we still want to be profitable and we still want to stay in business -- so they haven't influenced us in that respect. What we did, though, was to mix the development teams up completely. I didn't want 3DFX people versus Nvidia people -- I wanted to have us all learn from each other and make different products. Both companies had products in development at the time, and we could have just picked up 3DFX's products and developed those -- but instead I took the two teams and shuffled them around. I got the Nvidia people to argue for 3DFX products and the 3DFX people to argue for Nvidia, so they all had to learn the advantages of the competing products. We ended up changing the projects so much that they really weren't recognisable from before, and that was the goal -- we wanted the best from both sides. Plus, they are all Nvidia people now.
Q: What is the limiting factor for the quality of 3D games?
A: One of the disappointing things for us is that we bring out a new piece of hardware like the GeForce3 and time passes, but there still aren't many games coming along to take advantage of it. No developer can develop for a new technology before they have it, and the time between us finishing a new technology and shipping it is only a couple of months. Xbox is a unique console platform because it is not painful to program, so it did not take long for developers to get competent with it. In contrast, when the PS2 launched it was so difficult to program that all the games looked like PSone games: they had the same shading, and nobody looked at the screen and thought 'wow'. The way I see consoles is that the first-generation games are 'learning the hardware' games. With the Xbox the graphics hardware is GeForce3, so I would expect the first wave of games to be impressive but not amazing. The second wave, though -- that is when developers will really take advantage of the platform. There will always be a difference between games on a PC and games on a console, because console developers build for a fixed platform. On the PC, developers are always on the 'just learning' part of the curve; on a console they can keep pushing the same hardware, and this Christmas is when we will start to see games that take advantage of the effects available in the Xbox and really are amazing.
Q: So where do you go from here?
A: Once you get to running at resolutions of 1600x1200 pixels with a refresh rate of 75Hz, speed is not an issue, and you need to think less about more pixels and more about better pixels. We passed this point about a year ago, with the GeForce3. Now we are looking at improving features such as anti-aliasing (smoothing jagged edges). We have already added dedicated hardware for this, and it is something that is most noticeable when it is absent: people don't jump up and say 'hey, look at this' when the edges are smooth, but they really do notice jagged edges when anti-aliasing is switched off. It has to do with the quality of the experience you are creating in-game. It is about story and gameplay, and less about computer graphics. In the movies they call it suspension of disbelief. So anti-aliasing is about peeling away stuff you don't want to see. It turns out to be more important for laptops and flat panels than CRT monitors, because pixels on LCD panels are really clear and square. We now want to get to the point where you don't choose whether to use anti-aliasing or not -- it just happens.
Q: How far off is this?
A: It is never going to be true that there is no penalty for anti-aliasing -- now that we have added dedicated hardware, the penalty for switching on anti-aliasing is less than 50 percent. Soon we will stop optimising non-anti-aliased graphics, so that when you switch anti-aliasing off there will be no difference in speed. We are no longer thinking about how to make aliased rendering go faster -- instead we want to concentrate on things like making smoother edges, better shadows and better reflections.
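Kirk does not describe how the dedicated anti-aliasing hardware works. As a rough illustration of the idea he is alluding to, here is a minimal supersampling sketch; the function names and the toy 'scene' are assumptions made for the example, not the GeForce implementation:

```python
# Supersampling anti-aliasing, in miniature: shade several sub-pixel
# samples per pixel and average them, so edge pixels take intermediate
# values instead of snapping fully on or off (the 'jagged edge' look).

def coverage(x, y):
    """Toy scene: 1.0 inside a diagonal half-plane, 0.0 outside."""
    return 1.0 if y > 0.7 * x else 0.0

def shade_pixel(px, py, samples=4):
    """Average samples x samples sub-pixel samples for pixel (px, py)."""
    total = 0.0
    for i in range(samples):
        for j in range(samples):
            sx = px + (i + 0.5) / samples   # sample positions spread
            sy = py + (j + 0.5) / samples   # evenly inside the pixel
            total += coverage(sx, sy)
    return total / (samples * samples)

# With samples=1 each edge pixel is fully on or off (aliased);
# with samples=4 the edge pixels blend smoothly.
print([round(shade_pixel(x, 3, samples=1), 2) for x in range(8)])
print([round(shade_pixel(x, 3, samples=4), 2) for x in range(8)])
```

Brute-force supersampling at 4 samples per pixel does roughly four times the shading work, which is why hardware schemes that reuse work across samples are needed to bring the penalty down to the kind of figure Kirk quotes.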
Q: So we have 'better pixels'. Where does this get us?
A: I want to have PCs and game machines making images that look as good as what you see at the movies, and we can't do that just by making a faster GeForce3. Last year we took a scene from Final Fantasy, dumbed it down a bit and were able to render it in real time at 12 to 15 frames per second. That same scene runs at well over 30 frames per second on a GeForce4, so now, if we did it again, we should be able to render the original scene from the movie at 12 to 15 frames per second. We are less than one year away from rendering it with full detail and at the full speed of 30 frames per second on PC hardware. It is a chase, though. For rendering frames for the movies you can always afford to wait around for a couple of hours, but in games you need them instantly. So we will be able to render movies like Final Fantasy, Shrek and Toy Story in real time on a PC next year, but of course the movie studios will raise the bar. But once we can get movie quality in games and can start getting movie studios to use games hardware to create their movies, that will really bring movies and games much closer together. The biggest opportunity we have is that graphics is an infinitely parallelisable problem, much more so than the work a CPU does. With GPUs (graphics processing units) we are able to take advantage of more transistors because we can keep them busy better than in CPUs (central processing units), and this helps us double the speed much faster than every 18 months (the rate dictated by Moore's Law for CPUs).
Q: But what happens to yields as you increase the number of transistors?
A: Already the GeForce4 has more transistors on it than the Intel Pentium III and Pentium 4 combined. You have to remember that Moore's Law is not the rate at which semiconductors get faster and more dense, but the rate at which CPU manufacturers can make CPUs more productive. The number of transistors in a given area for a given cost rises faster than Moore's Law. The reason CPUs are unable to keep pace is that everything in the sequential architecture of the CPU has to go through one pipe. With graphics, where we can have a more scalable architecture, the curve is much steeper: we can double performance every six months. And since our computational growth rate for the same number of transistors is faster, we don't have to be ahead of the CPU manufacturers such as Intel in terms of process technology.
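To put the two growth rates Kirk contrasts side by side, a quick back-of-the-envelope comparison helps; the 36-month horizon below is an arbitrary illustration, not a figure from the interview:

```python
# Compound speed-up from doubling every 6 months (the GPU pace Kirk
# describes) versus every 18 months (the CPU pace he attributes to
# Moore's Law), over an illustrative 36-month horizon.
months = 36
gpu_speedup = 2 ** (months / 6)    # 2^6 = 64x
cpu_speedup = 2 ** (months / 18)   # 2^2 = 4x
print(f"After {months} months: GPU ~{gpu_speedup:.0f}x, CPU ~{cpu_speedup:.0f}x")
```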
