6 best Crossfire and SLI graphics cards on test

For students of the science of graphics processing unit architectures, there are no finer subjects to scrutinise than the ATI Radeon HD 4890 from AMD and Nvidia's GeForce GTX 285.

As individual graphics boards go, they are considered the purist's choice and the very finest single GPUs from the two masters of 3D graphics hardware. These two graphics cards also make for an intriguing comparison. The GTX 285 is nothing less than a brutal graphics bludgeon for your games.

With 1.4 billion transistors, it's the biggest single processor die currently available for the PC – and that includes both graphics processors and more traditional CPUs.

By way of comparison, Intel's latest quad-core Core i7 processor gets by with a measly 731 million transistors, roughly half that of the GTX 285. Indeed, by just about every measurement the GTX 285 is a bona fide heavyweight.

It packs no fewer than 32 render output units and a massive 80 texture units, along with 240 of Nvidia's unified, symmetrical stream processors (it's always worth remembering that AMD and Nvidia's shaders are not directly comparable, although as a rough guide you can divide AMD's numbers by five). The GTX 285, then, can pump out a huge number of pixels.

But in the context of multi-GPU performance, it's the GTX 285's memory subsystem that really makes the difference. With AMD throwing in the towel on uber-GPUs, this chip is uniquely equipped with a 512-bit memory bus. Combined with GDDR3 memory running at an effective rate of over 2.4GHz, the result is 159GB/s of raw bandwidth.
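As a quick back-of-the-envelope check, peak memory bandwidth is simply the bus width in bytes multiplied by the effective data rate. Here's a minimal Python sketch, assuming the GTX 285's commonly quoted 2,484MHz effective memory clock (described above simply as "over 2.4GHz"):

```python
def peak_bandwidth_gb_s(bus_width_bits, effective_clock_ghz):
    """Peak memory bandwidth in GB/s: bytes per transfer times transfers per second."""
    return (bus_width_bits / 8) * effective_clock_ghz

# GeForce GTX 285: 512-bit bus, GDDR3 at an effective 2.484GHz
print(peak_bandwidth_gb_s(512, 2.484))  # ~159 GB/s, matching the figure above
```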

When you're attempting to shuffle around the immense quantities of graphics data that come with running the latest games at stupendously high resolutions, such as 2,560 x 1,600, that much meaty bandwidth comes in extremely handy.

The Radeon HD 4890 is a very different beast. It has a significantly lower transistor count at just 956 million. It is, in short, a much less complex chip.

But in many ways, it's also a much cleverer one. AMD has really gone to town on the chip's shader array, cramming in 800 stream processors. Consequently, it actually delivers more theoretical computational throughput than the much bigger GTX 285. For the record, we're talking 1.36TFLOPs from the AMD camp compared to 1.06TFLOPs from Nvidia.
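For anyone wondering where those headline figures come from, here's a minimal sketch of the usual peak-throughput arithmetic. The shader clocks (1,476MHz for the GTX 285, 850MHz for the 4890) and the FLOPs-per-clock counts are the commonly quoted reference figures rather than anything stated above:

```python
def peak_tflops(shader_count, flops_per_clock, clock_ghz):
    """Theoretical single-precision throughput in TFLOPS."""
    return shader_count * flops_per_clock * clock_ghz / 1000

# GeForce GTX 285: 240 SPs, counted as 3 FLOPs per clock (MAD + MUL), 1.476GHz shader clock
print(peak_tflops(240, 3, 1.476))  # ~1.06 TFLOPS
# Radeon HD 4890: 800 SPs, 2 FLOPs per clock (MAD), 850MHz core clock
print(peak_tflops(800, 2, 0.850))  # 1.36 TFLOPS
```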

Limited remit

Of course, to achieve that floating point performance in a smaller, cheaper but arguably more elegant chip, something has to give somewhere else. The 4890 has exactly half the render output and texture units of the GTX 285, at just 16 and 40 respectively. Its 256-bit memory bus is likewise half the width.

AMD has offset that to some extent by using the latest and greatest GDDR5 memory interface running at an effective clock speed of 3.9GHz. But the total available bandwidth still falls significantly short at 125GB/s. In single-GPU configuration, the design choices AMD has made make an awful lot of sense. After all, the 4890 is not a full enthusiast-class GPU.
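Running the 4890's numbers through the same bus-width-times-data-rate sum makes the shortfall plain:

```python
# Radeon HD 4890: 256-bit bus, GDDR5 at an effective 3.9GHz
print((256 / 8) * 3.9)    # ~125 GB/s
# GeForce GTX 285, for comparison: 512-bit bus at an effective ~2.48GHz
print((512 / 8) * 2.484)  # ~159 GB/s
```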

The vast majority of people who buy it will never run it at resolutions higher than 1,920 x 1,200. So why waste engineering resources and push up the cost of the chip to optimise it for the likes of 2,560 x 1,600?

"We're happy to leave Nvidia to chase that tiny market" seems to be the current message from AMD. However, when it comes to running these cards in multi-GPU trim, those higher resolutions and image quality settings suddenly become a much more important issue. The more you crank up the pixel count or add lashings of anti-aliasing and anisotropic filtering, the more any bandwidth shortcomings, both inside and outside the GPU, will drag performance down.

On paper, therefore, what we have is a contest between a pair of cards cleverly designed to deliver maximum performance within a relatively limited remit (that'll be the Radeons) and a pair engineered with no expense spared (yup, that's the GeForces). But is this reflected in the performance results?

For the most part, we'd say yes. When the going gets really tough, it's the two GTX 285s that give the best results. However, the benchmark numbers are slightly distorted by the fact that the CPU and driver overhead of running two cards tends to cap average frame rates at lower resolutions. That's why the average frame rate figures at 1,680 x 1,050 and 1,920 x 1,200 look so similar. Hence, in many ways it's the minimum frame rates that count in this part of the market.

In other words, what matters is whether frame rates remain high enough for smooth rendering in the most demanding scenarios. Here, the GeForce boards have a distinct advantage. Even at 2,560 x 1,600 with the rendering engine set to full reheat, a liquid smooth 50 frames per second is as low as the GTX 285 setup will go in Far Cry 2.

The Radeons, by contrast, trough at 37 frames per second. It's a similar story in Crysis Warhead, with the GTX 285s coming the closest to actually running this most excruciatingly demanding game engine smoothly. Also working in Nvidia's favour is the sense that in this part of the market, value for money is much less of a factor.

In single-card configuration, a £310 GTX 285 looks like poor value next to a £195 Radeon HD 4890. But if you can afford £400 for a pair of 4890s, you can probably stretch to just over £600 for the GeForce boards. And if you have a 30-inch LCD monitor and want the best dual-card performance, that's exactly what we would recommend you do.
