With the release of Cg, a language for writing graphics for games and movies, Nvidia is attempting to create a new standard. ZDNet UK talked to Nvidia's chief scientist, David Kirk, on his recent visit to London.
Q: Why is Nvidia, a hardware company, releasing Cg?
A: CPUs get faster two times every 18 months, and people are willing to spend two hours rendering a frame. So the amount you can compute for your movie graphics doubles every year to year and a half; over five years, that's a factor of ten. Graphics hardware gets faster at a much greater rate than that -- it doubles every six months, so in five years you have a factor of a thousand. What you can do in software with a CPU and in hardware with a graphics processor is going to collide. When we talk to developers about the next generation of hardware, it scares them to death. That motivated us to try to improve the environment. It's a pretty big change. It's going to be major.

Q: So making movies and making video games will be the same?
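The growth arithmetic Kirk cites checks out: with a doubling period of 18 months for CPUs and six months for GPUs, five years gives roughly a tenfold and a thousandfold speedup respectively. A quick sketch (the function name is just for illustration):

```python
# Compound growth from a fixed doubling period (illustrative sketch).
def speedup(doubling_months: float, horizon_months: float) -> float:
    """Speedup factor after `horizon_months` if performance doubles
    every `doubling_months` months."""
    return 2 ** (horizon_months / doubling_months)

cpu_5yr = speedup(18, 60)  # CPUs: ~10x over five years
gpu_5yr = speedup(6, 60)   # GPUs: 1024x (~a thousandfold) over five years
print(round(cpu_5yr), round(gpu_5yr))  # 10 1024
```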
A: This is a big step towards convergence -- not that films and games will be the same, but that the way people create them will be the same. Artists can use the same skills on both. Cg is almost guaranteed to be efficient in hardware, and any RenderMan program can be translated to Cg, by hand or by a tool that someone's developing. Once that happens, all the moviemaking can take place in Cg.

Q: How does this relate to what other companies, such as Microsoft, are doing?
A: We've been working with Microsoft for some time to collaborate on the development of this technology, and we are 100 percent compatible with DirectX's high-level shading language. They haven't decided what they're going to call it -- Microsoft has a way of branding things -- but it's the same language.

Q: Will it be the same product?
A: Not the same code base, but it's the same language specification -- just as C is C, Cg is Cg. Our goal is for this to be a standard. We have to be open and flexible and take away all the reasons not to go with Cg -- so not being compatible with Microsoft is out.

Q: But someone could do a GCC Cg?
A: It's open source, so someone could do a GCC Cg. We don't want to stop them. But we want to make it so they don't need to. GCC exists because it was created when compilers cost thousands of dollars, but we're giving Cg away. We don't give away the source code that's related to optimisations, but we do give away the source to the parser. Although it's a tool that automatically creates the assembly code, we think it'll do better than most programmers. Every time we look at assembly programs, we think we could do better -- the trick is encapsulating that knowledge in the compiler, and we can do that. We output either DirectX or OpenGL, so any place those two run, Cg will run. It will run on Windows.

Q: Will it be good for, say, Xbox developers?
A: If people want to be Xbox developers, they have to go to Microsoft and do that. Then if they want Cg, they can come to us. Microsoft might not release DirectX 9 for Xbox...

Q: And PS2?
A: We haven't done any work yet with the PS2 -- we're not a registered developer -- but there's no reason why it couldn't work with the PS2, and because it's open source, Sony could choose to do that, or we could choose to do it later.

Q: But you must have been talking to developers who are doing PS2...
A: Everybody loves it, everybody wants it on all the platforms, so I think it'll happen. I can't say when people will do things, but it seems good to me. The powerful thing is that it's open, multi-platform and multi-API, so it's pervasive and can be used everywhere. It can work on other hardware too -- if people want to do that, they can, because the compiler is open source. One of the exciting pieces of this is the single compiler architecture, where one compiler works on multiple platforms with multiple APIs and multiple hardware, including future GPUs that haven't been made yet.

Q: How can you optimise for GPUs you don't yet know about?
A: What happens is the compiler reads the specification of the hardware from DirectX, works out what the capabilities are and creates code that runs well on that hardware.

Q: But don't you normally need more knowledge than what's in the capability list in DirectX to optimise well?
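The capability-driven code generation described above can be sketched roughly as follows. This is a hypothetical illustration, not the actual Cg compiler; the capability names and shader profiles are made up for the example:

```python
# Hypothetical sketch: choose a code-generation target from the
# capability information a runtime (such as DirectX caps) reports.
CAPS_GEFORCE3 = {"max_instructions": 128, "pixel_shader_version": 1.3}
CAPS_NEXT_GEN = {"max_instructions": 1024, "pixel_shader_version": 2.0}

def compile_shader(source: str, caps: dict) -> str:
    # A real compiler would parse and optimise `source`; here we only
    # show the profile selection that the capability list makes possible.
    if caps["pixel_shader_version"] >= 2.0:
        return f"; ps_2_0 code for: {source}"
    return f"; ps_1_3 code for: {source}"

print(compile_shader("diffuse_lighting", CAPS_GEFORCE3))
# -> ; ps_1_3 code for: diffuse_lighting
```

The same source yields different target code on different hardware, which is the point Kirk makes about future GPUs: ship the source, and a newer compiler picks a newer profile.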
A: In some cases you do, but we know what we're building next, and we can release the Cg compiler now so that it creates different code for the next generation than for the current generation, even though nobody has the next generation yet. We can do that now.

Q: But other people couldn't?
A: They actually could, because what we will create is DirectX 9 code, and we assume that anyone else who wants to be anybody will be DirectX 9 compatible. Since the compilation happens at runtime, if you install a new game that has a new compiler in it, you acquire all the new capabilities of the new compiler. As a developer, you ship the Cg source code as part of your game, along with the compiler.

Q: So you have to give your source code away to everyone else?
A: You can encapsulate it in your program. And in terms of shaders, IP (intellectual property) is overrated. There's not anything anybody's doing that everybody else doesn't know about. When you create a new game, you want to create something that's new -- and it's only new once. Everybody is going to want to create their own vision, and nobody will be interested in copying. And if someone wants your code, they're going to get it.

Q: What do you get in the box?
A: The universal compiler architecture is the big thing. What we're releasing today is our first beta, supporting DirectX 8 vertex and pixel shaders and also OpenGL 1.3. It includes the compiler, language specification, standard library, browser tool and user manual. A typical programmer who knows how to program in C can program in Cg in around an hour.

Q: And what happens next?
A: This fall we will be releasing the first full version, for DirectX 9. We're also adding CgFX, which is an extension of Microsoft's FX framework: you can write multiple shader effects and have one chosen automatically for the target hardware. That's part of the mechanism by which you'd write something for, say, GeForce, something for GeForce 3 and something for future hardware -- something you know it's not possible to run in real time now, but that you might expect to see soon.

Q: If I'm an artist, what will I notice about this?
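The automatic selection among multiple shader effects that Kirk describes can be sketched as a simple fallback chain. This is an illustrative sketch with made-up technique names and version numbers, not the CgFX API:

```python
# Hypothetical CgFX-style fallback: an effect lists several techniques,
# and the runtime picks the first one the hardware can support.
EFFECT_TECHNIQUES = [
    {"name": "cinematic", "min_shader_version": 3.0},  # future hardware
    {"name": "geforce3",  "min_shader_version": 1.1},
    {"name": "geforce",   "min_shader_version": 0.0},  # fixed-function fallback
]

def choose_technique(hw_shader_version: float) -> str:
    # Techniques are listed best-first; return the first valid one.
    for technique in EFFECT_TECHNIQUES:
        if hw_shader_version >= technique["min_shader_version"]:
            return technique["name"]
    raise RuntimeError("no technique valid for this hardware")

print(choose_technique(1.4))  # a GeForce 3-class GPU -> "geforce3"
```

Because the list is ordered best-first, hardware that ships later than the game automatically gets the highest-quality technique it can validate.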
A: We also want artists to use Cg, and they don't have to know they're using it. By encapsulating it in other applications, all artists will be creating Cg code for the effects that they're creating. So we've been working with all the major content tool providers -- Alias|Wavefront, Discreet, Softimage -- to produce Cg-enabled versions of their tools. By this fall there'll be a native Softimage release, we expect, and it'll be available in other tools as plug-ins and included in the next release. The only thing they'll notice is that because Cg compiles to run in hardware, artists won't have to worry whether it's preview or final render, because it'll all be done in hardware. So it's always going to be fast. If you're working on Final Fantasy today, you work in wireframe, move a light around and go to lunch while it renders the new lighting. Now you can play back the scene in real time. You can work interactively, a thousand times more efficiently than the batch way of doing things now. You'll get better results quicker. We'll get real-time hardware rendering to the quality of cinematic rendering, both in terms of capability and ease of development.

Q: How are you making money at this?
A: If we can create a better experience from high-end GPUs, then everyone has to have one. We want to create demand for the more advanced technology, make GPUs easier to use and make entertainment better. We're about 70 percent of the GPU market, so if the market increases, we increase more than anyone else. The question is who supports it: Microsoft supports it, the content creation tool makers support it -- it's like water flowing, you can't stop it. And it'll be what people want it to be; that's the nice thing about open standards.