Q+A: Nvidia and AMD talk up DX11
Nvidia's Tony Tamasi and AMD's Richard Huddy
One of the key changes in Windows 7 – behind the fancy desktop dressing and pared-down control panels – is DirectX 11.
As the name suggests, it's the successor to DirectX 10, Microsoft's home-grown API for software developers to interact with graphics hardware.
It builds on the DX10 codebase, which is why it won't be available to people on Windows XP, but it will run on both Vista and 7. It's not a huge rewrite, but it brings a couple of really big steps forward when it comes to the GPU pipeline.
It'll be popular too. Virtual shelf-stackers at the world's biggest online hypermarket, Amazon, reckon that more copies of Windows 7 were pre-ordered in the first eight hours of links going live than Vista managed in its entire 17-week build-up period.
In addition, some of the key new features will work on DX10 hardware too, so developers won't lose much and will gain a lot by switching to DX11 codepaths relatively quickly.
If Windows 7 is the subject of a more muted marketing campaign than Vista – currently there's no 'Wow starts now!' on the cards, thankfully – then DX11 is a mere whisper on the breeze. Compared to the hype built up around DX10 using hugely anticipated games such as Crysis, Supreme Commander and Alan Wake, it's an unknown quantity.
Is it because DX10 was as disappointing as watching any Woody Allen film post-1995? Is someone afraid of overhyping a technology that won't deliver anything new in terms of games for a long time yet? We don't know, we're journalists not programmers (okay, a couple of us are programmers).
So PC Format magazine cornered two people who are privy to all its secrets – one from AMD and one from Nvidia – and asked them what they think of its most significant features and what kind of impact it will have.
We should note that our experts weren't interviewed together and aren't responding to one another. We cornered them separately; we've found that it's safer that way.
The experts
Name: Tony Tamasi
Job title:
What he does: Oversees Nvidia's interaction with games engine designers and incorporates their wishlists into future hardware plans
Name: Richard Huddy
Job title: Senior Manager Developer Relations, AMD
What he does: Heads up the team of AMD software engineers that visit games studios and help programmers use the latest hardware features
We're not in graphics anymore
There are five key new features that arrive with DX11, and the first has nothing to do with gaming graphics.
DirectX Compute is Microsoft's GPGPU language, joining Nvidia's CUDA and OpenCL in allowing graphics cards to be used for things like PhysX, AI and video transcoding – the latter of which is supported as an impressively quick drag-and-drop feature in Windows 7.
Importantly, any DX10 graphics card will be able to run some DX Compute code, although because of hardware differences it's likely that most in-game routines will target DX11 class cards only.
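To make the idea concrete, here's a minimal sketch of what a DirectX Compute shader looks like in HLSL. It simply doubles every element of a buffer on the GPU, but it uses the same programming model games would use for physics, AI or transcoding work. The names are illustrative, not from the article; it would be compiled with the cs_5_0 profile on DX11 hardware, with DX10-class cards limited to the more restricted cs_4_x profiles.

```hlsl
// Hypothetical minimal compute shader: double every element of a buffer.
RWStructuredBuffer<float> gData : register(u0);  // read/write buffer bound by the application

[numthreads(64, 1, 1)]                           // 64 threads per thread group
void CSMain(uint3 id : SV_DispatchThreadID)
{
    gData[id.x] = gData[id.x] * 2.0f;            // each thread handles one element
}
```

The application dispatches enough thread groups to cover the buffer with a call like `Dispatch(elementCount / 64, 1, 1)`.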
Tony says:
"GPGPU, in general, is the most important initiative for the whole graphics card industry, period.
"First of all, it's an entirely new market of potential software developers and customers that we can reach. The market for people who play games is obviously very large and very important to us, but the market for people who watch video, record things on YouTube and use Flash is much larger still, and in that market today there's no strong motivation for those people to buy high-end graphics processors.
"In GPGPU computing there's a whole realm of application possibilities and potentially large growth. Nvidia believes, at its core, that the more ways you have to make use of the graphics processor for any kind of parallel processing, the better.
"The exciting thing is that now we have Microsoft getting into the game and standardising the process, and completely legitimising GPGPU computing to the extent that Windows 7 has core functionality that will be accelerated.
"Using the GPU for general computing isn't just about Photoshop or video transcoding though. It's also about game functions outside of rendering. We're helping developers work on physics and artificial intelligence, for driving animation systems and so on. It can be used for a whole host of things."
Richard says:
"I've got 30GB of video on my hard drive and my portable video player just died, so I'm probably going to have to move it to another platform and reconvert my video. Transcoding that on the GPU is typically three times as fast as doing it on the CPU, and better than realtime.
"In Windows 7 it's so easy that my mother could transcode video just by dragging and dropping a file into a folder. It's a thing of beauty, it hides all of the tech from you.
"Compute shaders in games are very difficult to write for at the moment, though: even my engineers are struggling with it. They can run anything from half the speed of the original code if you mess it up to a factor of three or four times faster if it works well.
"A member of my staff now has his compute shader running three or four times faster than the pixel shader, and we like that. It means that things like post-processing effects will typically benefit from using compute shaders from now on."
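Why would a compute shader beat a pixel shader at post-processing? One common reason is groupshared memory: a thread group can stage a tile of the frame in fast on-chip storage so neighbouring pixels are read once rather than refetched per tap. Here's a hedged HLSL sketch of that pattern (the effect and all names are invented for illustration):

```hlsl
// Hypothetical post-processing pass as a compute shader.
Texture2D<float4>   gInput  : register(t0);
RWTexture2D<float4> gOutput : register(u0);

groupshared float4 tile[8][8];   // on-chip scratch shared by the whole thread group

[numthreads(8, 8, 1)]
void PostFX(uint3 id : SV_DispatchThreadID, uint3 tid : SV_GroupThreadID)
{
    tile[tid.y][tid.x] = gInput[id.xy];   // each thread stages one pixel
    GroupMemoryBarrierWithGroupSync();    // wait until the whole tile is loaded

    // Trivial 'effect': average the pixel with its right-hand neighbour,
    // read from groupshared memory instead of a second texture fetch.
    uint nx = min(tid.x + 1, 7u);
    gOutput[id.xy] = 0.5f * (tile[tid.y][tid.x] + tile[tid.y][nx]);
}
```

A pixel shader doing the same filter would fetch each neighbour from the texture every time, which is exactly the kind of redundant work Richard's engineers are optimising away.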
Extra instructions
As part of the DirectX overhaul, the High Level Shading Language (HLSL) that developers use to write shaders is being expanded to allow longer shaders and subroutines, and for more to be done in a single pass.
Tony says:
"This is a big deal because now developers can write what I call 'uber-shaders'. It sets the stage for increasingly sophisticated shadow and lighting models, and the ability to have sub-routines allows you to have increasingly complex code, but in a manageable way."
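The subroutine mechanism Tony refers to surfaces in Shader Model 5 as interfaces and classes, which let one compiled 'uber-shader' swap implementations at bind time instead of compiling a separate shader per combination. A hedged sketch, with illustrative names:

```hlsl
// Hypothetical SM5 uber-shader using dynamic shader linkage.
interface ILightModel
{
    float3 Shade(float3 normal, float3 lightDir);
};

class Lambert : ILightModel
{
    float3 Shade(float3 normal, float3 lightDir)
    {
        float d = saturate(dot(normal, lightDir));
        return float3(d, d, d);
    }
};

class HalfLambert : ILightModel
{
    float3 Shade(float3 normal, float3 lightDir)
    {
        float d = dot(normal, lightDir) * 0.5f + 0.5f;
        return float3(d * d, d * d, d * d);
    }
};

// The application picks Lambert or HalfLambert at bind time
// via the ID3D11ClassLinkage interface.
ILightModel gLight;

float4 PSMain(float3 normal : NORMAL, float3 lightDir : TEXCOORD0) : SV_Target
{
    return float4(gLight.Shade(normal, lightDir), 1.0f);
}
```

The complexity lives in one shader, but the permutations stay manageable – which is Tony's point.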
Richard says:
"The thing that strikes me as the most instantly useful part of it is the ability to pick up four texels at once in a single clock cycle.
"Typically, when you look down at the fine grain detail of a GPU, we're limited by one of two things: math operations and texture fetches.
"Either one of those will be the killer for any shader, the bottleneck. Texture fetches are a real problem: they're limited by bandwidth, and by the number of texture instructions you can fit into the shader and push through the pipeline.
"If you can fetch four texels at once, you may be able to do four times as much texture work, especially if you're working with information that's in the texture cache. This can be a big win in some cases."
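The feature Richard describes is HLSL's Gather instruction, which returns one channel from the four texels that bilinear filtering would normally touch – four fetches' worth of data in a single instruction. A classic use is shadow-map filtering; this fragment is an illustrative sketch, not production code:

```hlsl
// Hypothetical 2x2 percentage-closer shadow filter using Gather.
Texture2D<float> gShadowMap : register(t0);
SamplerState     gSamp      : register(s0);

float ShadowFactor(float2 uv, float sceneDepth)
{
    // x,y,z,w hold the four neighbouring shadow-map depths around uv,
    // fetched in one instruction instead of four Sample calls.
    float4 depths = gShadowMap.Gather(gSamp, uv);

    float4 lit = step(sceneDepth.xxxx, depths);        // 1 where the pixel is lit
    return dot(lit, float4(0.25f, 0.25f, 0.25f, 0.25f)); // average the four results
}
```

Because the four texels are adjacent, they almost always sit in the texture cache together – which is why Richard expects the biggest wins on cache-resident data.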