The ultimate guide to graphics cards

Sapphire ATI Radeon HD 4870 X2: two graphics chips, one card

There's a good chance the most powerful chip inside your PC, in raw computational terms, is on your graphics card. So, how did graphics get so powerful, what are graphics cards good for right now and how on earth do you choose from the baffling array of 3D chipsets on offer?

A little history

The origin of today's 3D monsters can be traced back to ancient 2D cards designed to improve the resolution and colour fidelity of the PC. The very earliest cards had specifications that seem impossibly modest by today's standards.

The first truly modern graphics processing units (GPUs) arrived in 2001. Nvidia's GeForce 3 and the Radeon 8500 from ATI were the world's first GPUs to support so-called programmable shaders designed to enable more realistic lighting within graphical 3D simulations. Since then, no other company has been able to keep up with the relentless pace of those two brands (though ATI was bought by AMD in 2006).

Early programmable chips could typically apply their shading effects to just four pixels per operating cycle. Since then, GPUs have become ever more powerful, programmable and, most of all, parallel. AMD's Radeon HD 4800, for instance, packs a ludicrous 800 shader units (also known as stream processors in a nod to their increasingly programmable nature).

Current cards also sport huge memory buffers as big as 1GB, enabling them to drive extremely high-resolution displays, and are backed up by massive bus bandwidth thanks to the PCI Express interface in its latest 2.0 format. Finally, the very latest development in graphics technology is support for multiple cards sharing the rendering load.
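Incidentally, if you're wondering what 'massive bus bandwidth' looks like in numbers, here's a quick back-of-the-envelope sum for a PCI Express 2.0 x16 slot. The 5 GT/s per-lane signalling rate and the 8b/10b encoding overhead are standard PCIe 2.0 figures, and the result is per direction; treat it as a rough sketch rather than a real-world measurement:

# Approximate PCI Express 2.0 x16 bandwidth in one direction.
lanes = 16                      # a full-length graphics slot provides 16 lanes
transfers_per_second = 5.0e9    # PCIe 2.0 signalling rate: 5 GT/s per lane
encoding_efficiency = 8 / 10    # 8b/10b line coding: 8 data bits per 10 bits sent

bits_per_second = lanes * transfers_per_second * encoding_efficiency
print(f"~{bits_per_second / 8 / 1e9:.0f} GB/s per direction")   # roughly 8 GB/s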

But today's GPUs don't just pack painfully powerful 3D pixel-pumping engines. They also provide hardware decode acceleration for modern HD video codecs like H.264 and VC-1, as used on Blu-ray movie discs.

That's how graphics cards got to where they are today. But what makes the latest chips tick?

3D rendering

This is the biggy: 3D rendering is the graphics card's raison d'être. With the launch of Windows Vista, Microsoft introduced the 10th iteration of its DirectX graphics API. The DX10 API is now well established and fully compliant cards are extremely affordable, so there's no need to compromise on DirectX support.

Whether it's pixel, vertex and geometry shaders, or support for high-quality anti-aliasing and high-dynamic-range lighting, they're all present in DX10 GPUs. What you do need to worry about, however, is raw rendering horsepower, and below is everything you need to know to judge a chip's performance.

Pixel throughput

Broadly speaking, it's the texture units and render output units (ROPs) that define the number of pixels a graphics chip can spit out every cycle. And remember, the 1,920 x 1,200 pixel grid on a typical 24-inch monitor works out at around 2.3 million pixels. For smooth gaming you'll need to refresh that at least 30 times a second. In other words, nearly 70 million pixels per second.
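Here's that arithmetic as a few lines of Python, for anyone who wants to check the sums. The resolution and frame rate are simply the figures quoted above, not a measurement of any particular card:

# Pixel throughput needed for smooth gaming on a typical 24-inch monitor.
width, height = 1920, 1200      # native resolution of a typical 24-inch panel
frame_rate = 30                 # minimum frames per second for smooth gaming

pixels_per_frame = width * height                  # 2,304,000 pixels
pixels_per_second = pixels_per_frame * frame_rate

print(f"{pixels_per_frame:,} pixels per frame")
print(f"{pixels_per_second:,} pixels per second")  # 69,120,000 - nearly 70 million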

Nvidia's top chip, the £300+ GeForce GTX 280, has a huge 32-strong array of ROPs and no fewer than 80 texture sampling and address units. AMD's best, the Radeon HD 4800 series, has 16 ROPs and 40 texture units, facts reflected in pricing that kicks off around the £180 mark.

Mid-range cards, such as Nvidia's GeForce 9600 series and the Radeon HD 4600 from AMD, typically have significantly reduced ROP and texture unit counts.

Shader processing

This is where the real computational grunt is housed and where those shimmering, shiny visual effects that dominate the latest games are processed. Intriguingly, AMD's relatively affordable Radeon HD 4800 packs 800 shaders to the Nvidia GTX 280's 240 units.

However, not all shaders are equal, and it's worth noting that Nvidia's GPUs usually run their shader units at much higher frequencies than competing AMD chips. Again, mid-range chips typically suffer from cut-down shader counts in an effort to reduce chip size and cost.
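To see how unit counts and clock speeds trade off, here's a minimal sketch of theoretical shader throughput (shader units x clock speed x operations per clock). The clock speeds and operations-per-clock values are approximate reference-card figures assumed purely for illustration, so treat the results as rough theoretical peaks rather than real-world performance:

# Theoretical peak shader throughput = shader units * clock (MHz) * ops per clock.
def shader_gflops(units, clock_mhz, ops_per_clock):
    return units * clock_mhz * ops_per_clock / 1000.0   # convert MFLOPS to GFLOPS

# Approximate reference-card figures, assumed for illustration only.
cards = {
    "Radeon HD 4870 (800 shaders, ~750MHz)": shader_gflops(800, 750, 2),
    "GeForce GTX 280 (240 shaders, ~1,296MHz)": shader_gflops(240, 1296, 3),
}

for name, gflops in cards.items():
    print(f"{name}: roughly {gflops:.0f} GFLOPS theoretical peak")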
