Interview: Nvidia's Chief Scientist David Kirk
Chews the fat about ray-tracing and the CUDA programming language
Nvidia was left looking a little lonely after ATI, its main rival in the graphics market, was gobbled up by AMD. But Nvidia's Chief Scientist David Kirk says the future remains bright for the world's leading producer of PC graphics chips.
For a wider discussion of Kirk's views on the future of Nvidia and the threat posed by ray tracing, fusion processors and Intel's foray into graphics, see our main story: Do new CPUs threaten Nvidia's future?
In the meantime, chew on these key highlights from our discussion with Kirk on hot topics including ray-tracing and the CUDA programming language.
The impact of ray-tracing
TechRadar: Other than simple performance issues, why has ray tracing not been widely adopted for real-time rendering on the PC?
David Kirk: It would be easy enough to "just do everything using RT (ray tracing)," but then you would have to do everything using RT! Doing everything using RT in practice means tracing an enormous number of rays: more rays for anti-aliasing, more rays for soft shadows, more rays for global illumination, more rays for glossy reflections. And so on.
There are certainly clever ways to avoid each of these, but every clever thing requires more software and more special cases. After all that, RT is not sounding so much like the "simple, elegant, handles-everything-easily" solution.
I mentioned anti-aliasing [removing or avoiding jagged edges] first because I've seen from some commentary that people seem to think I'm ignorant of all of these techniques. There are ways to do anti-aliasing without tracing a lot more rays, but they all require more work (clever software) and they all have flaws.
One example is adaptive anti-aliasing. In this technique, you trace fewer rays, and look for edges by comparing adjacent rays to see if they are different. If they are different, you have found an edge and you trace more rays to make it smooth.
This has several problems. First, you may miss small things, or small parts of things, if they fall between the rays. Second (I wrote a paper about this about 10 years ago!), this method is flawed because it introduces bias. Bias means that the picture could be arbitrarily wrong: the decision about where to trace more rays influences the resulting picture in undesirable ways.
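To make the technique Kirk describes concrete, here is a minimal sketch of adaptive anti-aliasing as a CUDA kernel. The placeholder scene, the four-corner test, the 4x4 refinement grid and the traceRay() stub are all illustrative assumptions rather than any particular renderer's code; the point is the control flow, and the comment marking where the bias comes from.

```cuda
#include <cuda_runtime.h>
#include <math.h>

struct Color { float r, g, b; };

// Placeholder scene: a bright disc on a dark background, so the sketch is
// self-contained. A real tracer would intersect the ray with scene geometry.
__device__ Color traceRay(float px, float py)
{
    float dx = px - 256.0f, dy = py - 256.0f;
    Color c;
    c.r = c.g = c.b = (dx * dx + dy * dy < 100.0f * 100.0f) ? 1.0f : 0.0f;
    return c;
}

__device__ float colorDiff(Color a, Color b)
{
    return fabsf(a.r - b.r) + fabsf(a.g - b.g) + fabsf(a.b - b.b);
}

__global__ void adaptiveAA(Color *image, int width, int height, float threshold)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;

    // Sparse pass: one ray at each corner of the pixel.
    Color c00 = traceRay(x + 0.0f, y + 0.0f);
    Color c10 = traceRay(x + 1.0f, y + 0.0f);
    Color c01 = traceRay(x + 0.0f, y + 1.0f);
    Color c11 = traceRay(x + 1.0f, y + 1.0f);

    float maxDiff = fmaxf(fmaxf(colorDiff(c00, c10), colorDiff(c00, c01)),
                          fmaxf(colorDiff(c00, c11), colorDiff(c10, c11)));

    Color out = {0.0f, 0.0f, 0.0f};
    if (maxDiff < threshold) {
        // Corners agree: assume there is no edge here and average the four rays.
        // Anything smaller than the corner spacing is missed entirely, which is
        // the bias Kirk describes.
        out.r = 0.25f * (c00.r + c10.r + c01.r + c11.r);
        out.g = 0.25f * (c00.g + c10.g + c01.g + c11.g);
        out.b = 0.25f * (c00.b + c10.b + c01.b + c11.b);
    } else {
        // Corners disagree: probably an edge, so trace a denser 4x4 grid.
        for (int j = 0; j < 4; ++j)
            for (int i = 0; i < 4; ++i) {
                Color c = traceRay(x + (i + 0.5f) / 4.0f, y + (j + 0.5f) / 4.0f);
                out.r += c.r; out.g += c.g; out.b += c.b;
            }
        out.r /= 16.0f; out.g /= 16.0f; out.b /= 16.0f;
    }
    image[y * width + x] = out;
}
```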
One other issue is that RT is famous for shiny, metallic-looking reflections. What if you don't want that? Maybe you want a glossy, soft reflection, like brushed metal, or something more like fabric? You require a more complex shader that either looks a lot like the shaders people write in a rasterisation pipeline, or ... (here it comes again) ... you need to trace a lot more rays.
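As a rough illustration of that last point, the sketch below spreads one glossy reflection over many jittered rays. The traceRay() stub, the cone jitter pattern and the roughness parameter are assumptions made up for this example, not a specific implementation; it simply shows how a single mirror bounce turns into dozens of rays once the reflection is no longer perfectly sharp.

```cuda
#include <cuda_runtime.h>

// Placeholder: returns the colour seen along a ray. Stubbed so the sketch is
// self-contained; a real tracer would intersect the ray with the scene.
__device__ float3 traceRay(float3 origin, float3 dir)
{
    return make_float3(0.5f, 0.5f, 0.5f);
}

// viewDir is the incident direction (camera towards surface). roughness = 0
// gives a perfect mirror; larger values spread the reflection over many rays.
__device__ float3 glossyReflection(float3 hitPoint, float3 viewDir, float3 normal,
                                   float roughness, int samples)
{
    // Mirror direction: reflect the incident direction about the normal.
    float k = 2.0f * (viewDir.x * normal.x + viewDir.y * normal.y + viewDir.z * normal.z);
    float3 m = make_float3(viewDir.x - k * normal.x,
                           viewDir.y - k * normal.y,
                           viewDir.z - k * normal.z);

    // Crude tangent frame (t, b) around the mirror direction.
    float3 up = fabsf(m.y) < 0.99f ? make_float3(0.0f, 1.0f, 0.0f)
                                   : make_float3(1.0f, 0.0f, 0.0f);
    float3 t = make_float3(up.y * m.z - up.z * m.y,
                           up.z * m.x - up.x * m.z,
                           up.x * m.y - up.y * m.x);
    float tl = sqrtf(t.x * t.x + t.y * t.y + t.z * t.z);
    t.x /= tl; t.y /= tl; t.z /= tl;
    float3 b = make_float3(m.y * t.z - m.z * t.y,
                           m.z * t.x - m.x * t.z,
                           m.x * t.y - m.y * t.x);

    // Average many rays jittered inside a cone around the mirror direction.
    float3 sum = make_float3(0.0f, 0.0f, 0.0f);
    for (int i = 0; i < samples; ++i) {
        float a = 6.2831853f * i / samples;            // angle around the cone
        float r = roughness * ((i % 4) + 1) / 4.0f;    // simple radial spread
        float3 d = make_float3(m.x + r * (cosf(a) * t.x + sinf(a) * b.x),
                               m.y + r * (cosf(a) * t.y + sinf(a) * b.y),
                               m.z + r * (cosf(a) * t.z + sinf(a) * b.z));
        float dl = sqrtf(d.x * d.x + d.y * d.y + d.z * d.z);
        d.x /= dl; d.y /= dl; d.z /= dl;
        float3 c = traceRay(hitPoint, d);
        sum.x += c.x; sum.y += c.y; sum.z += c.z;
    }
    return make_float3(sum.x / samples, sum.y / samples, sum.z / samples);
}
```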
Parallel graphics processing
TechRadar: You've suggested the idea of a hybrid approach to the introduction of ray-tracing rather than the wholesale replacement of raster hardware. How do you see this happening? Can ray-tracing take place simultaneously with other methods such as rasterisation in future game engines?
David Kirk: Yes, RT and rasterisation can (gasp!) coexist. I don't understand why people find this remarkable. A game engine could rasterise the environment (using a hierarchy, to make the complexity logarithmic rather than linear, as is touted with RT) and find out what object is in each pixel. This is much faster than RT.
Then, for each pixel, the shading could either be done using conventional (and hardware-accelerated) pixel shaders, or by tracing some rays to find reflections, shadows, or ambient occlusion / light inter-reflection, or any combination of the two techniques.
This is totally doable on current GPUs, since you can rasterise and shade with OpenGL or DirectX, and trace rays with a program written in CUDA (Nvidia's parallelised version of the 'C' programming language) running on the GPU. Not only is this doable, I believe that this is the preferred way to use RT.
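Here is a minimal sketch of what that hybrid split might look like on the CUDA side, assuming the rasteriser has already written out a G-buffer of visible surface points and normals. The G-buffer layout, the single shadow ray per pixel and the occludedByScene() stub are illustrative assumptions, not an actual engine or Nvidia API; a real implementation would walk an acceleration structure inside that stub.

```cuda
#include <cuda_runtime.h>

// One texel of the G-buffer that the rasterisation pass (OpenGL or DirectX)
// is assumed to have written out for every pixel.
struct GBufferTexel {
    float3 position;   // world-space position of the visible surface point
    float3 normal;     // world-space surface normal
    float3 albedo;     // base colour from the conventional pixel shader
};

// Placeholder scene query: a real engine would walk an acceleration structure
// (e.g. a BVH) here. Stubbed so the sketch is self-contained.
__device__ bool occludedByScene(float3 point, float3 toLight)
{
    return false;
}

// Ray-traced shadow pass over a rasterised G-buffer: rasterisation has already
// answered the expensive "what is visible in this pixel" question, so the
// kernel only traces one shadow ray per pixel.
__global__ void shadowRayPass(const GBufferTexel *gbuffer, float3 *shading,
                              int width, int height, float3 lightPos)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;

    GBufferTexel px = gbuffer[y * width + x];

    // Direction from the rasterised surface point towards the light.
    float3 toLight = make_float3(lightPos.x - px.position.x,
                                 lightPos.y - px.position.y,
                                 lightPos.z - px.position.z);
    float len = sqrtf(toLight.x * toLight.x + toLight.y * toLight.y + toLight.z * toLight.z);
    toLight.x /= len; toLight.y /= len; toLight.z /= len;

    float visible = occludedByScene(px.position, toLight) ? 0.0f : 1.0f;

    // Simple diffuse term from the G-buffer normal, zeroed where the shadow
    // ray says the light is blocked.
    float ndotl = fmaxf(0.0f, px.normal.x * toLight.x +
                              px.normal.y * toLight.y +
                              px.normal.z * toLight.z);

    shading[y * width + x] = make_float3(px.albedo.x * ndotl * visible,
                                         px.albedo.y * ndotl * visible,
                                         px.albedo.z * ndotl * visible);
}
```

The same per-pixel loop could just as easily spawn reflection or ambient-occlusion rays instead of (or alongside) the shadow ray, which is the "any combination of the two techniques" Kirk refers to.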