HDR explained: better results from multiple exposures

Dynamic exposures
HDR image editing combines three different shots at different exposures

The sensor in a modern digital camera has less dynamic range than the human eye. That's why we're often disappointed with the photographs we take: to our eyes, the sky doesn't look as washed out, nor the shadows as dark, as they appear in our photos.

Naturally, there are now ways to circumvent this using the power of the PC: enter the high dynamic range image processing algorithms.

When we purchase a digital camera, we're often concerned with the resolution of the sensor (the number of megapixels), whether it produces images in JPG or RAW format, and whether we can use different lenses to get images from close up or far away. We're not generally concerned with the dynamic range of the sensor in the camera – in other words, the range of light levels that the sensor can capture.

It turns out that old-style film has less dynamic range than a CCD (charge-coupled device) – the light-registering sensor built into modern digital cameras. If you like, we've moved forward in terms of dynamic range and also, incidentally, in terms of noise: film is noisier at low light intensities than digital.

Figure 1 – EYE VS SENSOR: How the eye perceives light intensity (blue line) compared to the way the camera does (red line)

Both types of camera perceive a narrower range of light levels than the human eye. That's why when we take a photograph of a landscape, for example, we get an image that doesn't register the cloud formations in the sky (the sky becomes washed out) and the shadows become undifferentiated black.

If you're anything like me, you tend to get a little frustrated and disappointed that the camera isn't recording what you're seeing exactly, but I dare say we've all become rather used to the problem.

What is HDR?

There is, however, a way around this: HDR (high dynamic range) image processing – a set of algorithms that combine and process images to increase their dynamic range.

With HDR, it's possible to produce an image that has a much greater range of light levels to approximate what the human eye can see, or even to make fantasy images that look nothing like real life.

However, I'll also note that you will run into some issues. For example, the monitors we use to view images also have a smaller dynamic range than the human eye.

Back in the days of film cameras, it was possible to increase the dynamic range of a photo when you printed the image after developing the film. Photographers like Ansel Adams were experts at using this kind of image manipulation – known as dodging and burning – to produce the dramatic photos we've all seen and perhaps bought as posters.

Dodging decreases the exposure of the print, making the area lighter in tone, whereas burning increases it, making the area darker. Recall that in film photography, the film is a negative version of the photo. Dark areas on the negative show up as lighter areas on the print, because the light-sensitive silver salts in the paper are less exposed, and therefore appear lighter once the print is fixed. Light areas on the negative show up darker on the print, because more light hits the silver salts.

To apply dodging to the print, the photographer would cut a shape from some opaque material like card to block off part of the scene, and then expose the print paper with that card between the enlarger and the paper. Because less light hit that part of the print, it would appear lighter.

Burning was done in a similar way, but this time the photographer wanted to expose part of the scene longer than the rest. They would cut out a shape that would block off the rest of the scene, letting the part to be brightened receive more light.

There are other techniques and materials that can be used, but as you can imagine, dodging and burning this way was a labour-intensive process and was usually only done for art photos and the like.

Another issue is that dodging and burning are physical manipulations that happen in real time, and it's hard to replicate the same effects across a set of prints so that the resulting images are all the same. With digital photographs and programmatic image manipulation, it's a lot easier to create images with a higher dynamic range.

The process goes like this. First you take at least three photographs of a scene. Ideally, these photos are taken with your camera on a tripod so that they all register exactly the same scene. I've tried using a hand-held camera and it just doesn't work as well during the HDR processing – there's always some obvious scene shake in the result.

Similarly, the scene itself should be as static as possible: any moving parts (like leaves fluttering in the wind, waves crashing on the beach, or cars or people passing by, for example) won't be the same in each image, causing scene shake in the processed HDR file.

Although the photos are of the same scene and have the same focus and aperture settings, you take them using different exposure times. For best results, you should shoot one photo as normal, and the others two stops either side. Increasing exposure by a single stop doubles your exposure time, and decreasing by a stop halves it.
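The stop arithmetic above is simple enough to sketch in a few lines of Python. This is purely illustrative – the base exposure of 1/100 s and the ±2 stop offsets are example values, not a recommendation for any particular scene:

```python
# Each stop doubles (or halves) the exposure time, so an offset of
# s stops multiplies the base exposure time by 2**s.

def bracketed_times(base_seconds, stop_offsets):
    """Return exposure times for a list of stop offsets relative to base."""
    return [base_seconds * (2 ** s) for s in stop_offsets]

# A +/-2 stop bracket around an example base exposure of 1/100 s:
times = bracketed_times(1 / 100, [-2, 0, +2])
print(times)  # [0.0025, 0.01, 0.04] -> 1/400 s, 1/100 s and 1/25 s
```

Note that the underexposed shot captures the highlights (the sky keeps its detail) while the overexposed shot captures the shadows; the HDR software later picks the best-exposed information from each.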

The dynamic range in photography is the number of stops between the darkest part of an image where you can still resolve detail and the lightest part. DSLRs generally have about 11 stops of dynamic range at low ISO values, and point-and-shoot cameras a stop or so less.

The external Viewsonic monitor I'm using with my MacBook Pro has almost 10 stops, which means that the photos I take with my camera already have twice the dynamic range that my screen can show.
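Because stops are a base-2 logarithmic scale, converting them to a linear contrast ratio makes the "twice the dynamic range" claim concrete. The 11-stop and 10-stop figures below are the estimates from the text:

```python
# n stops of dynamic range means the brightest resolvable level is
# 2**n times the darkest.

def contrast_ratio(stops):
    return 2 ** stops

camera = contrast_ratio(11)   # ~2048:1 for a DSLR at low ISO
monitor = contrast_ratio(10)  # ~1024:1 for the monitor

# One extra stop means the camera captures twice the linear range:
print(camera / monitor)  # 2.0
```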

Some cameras, especially DSLRs, have a mode whereby you can shoot three photos as a set, the other two bracketing the first in terms of exposure. The different exposures are regulated by the camera automatically.

On my Canon Rebel XTi (also known as the 400D), this is AEB mode (auto-exposure bracketing) and I can set the required +/- 2 stops there. If your camera shoots RAW instead of JPG and your HDR image processing application supports it, it's possible to just use one photo. The results won't be nearly as good though.

Once you have your three differently exposed photos, it's time to process them. The first stage is to analyse all three photographs in order to merge them as a single HDR image.
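A toy sketch of that merge step, for a single pixel, might look like the following. It assumes a linear sensor response and uses a simple "hat" weighting that trusts mid-tone pixels most and nearly clipped ones least; real HDR tools also recover the camera's response curve, which this skips entirely:

```python
# Estimate a pixel's relative radiance from three bracketed shots:
# average the (pixel value / exposure time) estimates, weighting
# well-exposed mid-tone values more than near-black or near-white ones.

def hat_weight(value, max_value=255):
    """Weight mid-tones highest; near-clipped pixels lowest."""
    return min(value, max_value - value) + 1  # +1 avoids zero weights

def merge_pixel(values, exposure_times):
    """Estimate relative radiance for one pixel across bracketed shots."""
    num = sum(hat_weight(v) * (v / t) for v, t in zip(values, exposure_times))
    den = sum(hat_weight(v) for v in values)
    return num / den

# The same pixel captured at 1/400 s, 1/100 s and 1/25 s:
radiance = merge_pixel([20, 80, 250], [0.0025, 0.01, 0.04])
# The estimate leans towards the well-exposed middle shot; the almost
# clipped value of 250 contributes very little.
```

Repeating this for every pixel yields a floating-point radiance map – the single HDR image – which is then tone-mapped back down so it can be shown on an ordinary screen.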
