HDR Image Editing

High dynamic range imaging has become a popular and useful technique. In this section we cover the basics of HDR imaging, when to use it, its limitations, and how to fold it into your shooting and post-production workflow.

HDR basics
How our eyes see the world
How digital sensors see the world
When to use HDR
HDR limitations
HDR file formats
Color management with HDR
Shooting HDR

HDR basics

High Dynamic Range Imaging (HDRI) is an imaging technology developed to capture the entire range of light present in a scene. HDRI can far surpass the dynamic range restrictions of traditional chemical and digital photography, and can recreate a scene in a way that more closely matches how human vision works. Specialized software (tone mapping operators, or TMOs) is used to compress the greater tonal range of HDR images into a tonal range that can be displayed on conventional monitors or printed.

Shooting HDR scenes with conventional digital cameras requires photographing a sequence of exposure-bracketed Low Dynamic Range (LDR) captures, and then using specialized software to merge them into a single HDR file. The HDR file is then “tone mapped” back into an LDR image that can be displayed and printed. These tone mapping applications can be stand-alone programs or plug-ins within another application such as Photoshop or Aperture.

Figure 1 The “HDR” images we see on the web and in print are really low dynamic range images that have been compressed from an HDR file using a tone mapping operator. In this case the HDR file was merged from a sequence of five separate exposures from 1/30 to 2 seconds, then tone mapped in Photomatix Pro.

How our eyes see the world

When we’re talking about photography, we measure light in terms of Exposure Values (EV). This is simply a measurement of how much light is coming through a lens and hitting a sensor, be it film, a CMOS chip, or your retinas. “EV” is often used interchangeably with “stop” or “f-stop,” though strictly speaking the latter terms refer specifically to the opening of the lens, while EV applies to both shutter speed and aperture. Our eyes are able to see upwards of 25 EVs of tonal range, depending on the lighting conditions. Film and digital imaging systems can capture anywhere from 1 to 15 EVs. Beyond sheer range, our eyes differ from these systems in two significant ways: adaptation and non-linear response.
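
If you like to see the arithmetic, each EV represents one doubling or halving of light, and camera settings map onto the scale with a base-2 logarithm. A minimal sketch in Python, assuming the conventional ISO 100 definition EV = log2(N^2/t), where N is the f-number and t the shutter time in seconds:

```python
from math import log2

def exposure_value(f_number: float, shutter_time: float) -> float:
    """EV at ISO 100 for a given f-number and shutter time in seconds."""
    return log2(f_number ** 2 / shutter_time)

print(round(exposure_value(8, 1 / 125), 1))  # ~13.0
print(round(exposure_value(8, 1 / 500), 1))  # ~15.0: a 2-stop faster shutter adds 2 EV
```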

Adaptation

In human vision, adaptation is our ability to adjust to dramatically different lighting conditions. Our brains can adjust so that we are able to see clearly on the brightest summer day and in a candlelit room. It’s a much more complicated, unconscious and organic version of ISO.

Local adaptation

Our ability to adjust different areas of our field of vision to accommodate different levels of brightness, different color temperatures, color casts, and so on is called local adaptation. Think about sitting at your desk and looking out a window at the magic hour. You probably have a 60-watt incandescent (orange) light bulb over your desk, while the late-evening daylight out the window is much brighter and much, much bluer. You aren’t aware of it, but there might be as many as 12 EVs difference between the light at your keyboard and the light outside, both of which your brain perceives as white light.

Non-linear response

Non-linear response is our ability to accommodate drastic changes in sensory input without overloading our brains. In terms of light, this means that doubling the brightness of a scene does not double our perception of it. Bright highlights or light sources might be 5,000-10,000 times brighter than their surroundings, but our excellent brains compress that to fit within our ability to perceive. The same process is at work in our other senses as well: think about talking to someone in a whisper at a rock show versus the band turning up to 11. The band can be hundreds of millions of times more intense than the whisper, but your brain doesn't overload at the extreme contrast in volume. At least if you're not over 30.

How digital sensors see the world

Digital sensors, on the other hand, “see” in terms of linear response and have a much more limited dynamic range.

Linear response

Each time the number of photons hitting a sensor doubles, the recorded value doubles. This is linear response. The practical effect is that most of the image’s information is gathered in the higher values, while the lower values get comparatively little information (this is why we “expose to the right”). Displayed without correction, a linear image would appear unusually dark and contrasty. To correct for this, gamma encoding is used.

Gamma encoding

Gamma encoding redistributes tonal values, lifting the darker and middle tones and compressing the highlights, so that the encoded image more closely represents how we see. This happens in the background of almost every imaging system.
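
A rough numerical sketch of that redistribution, assuming a plain 2.2 gamma (real encoding curves such as sRGB add a short linear toe, which is ignored here):

```python
import numpy as np

# Linear sensor values, normalized 0-1 (0 = black, 1 = clipping).
linear = np.array([0.01, 0.05, 0.18, 0.50, 1.00])

# Simple gamma 2.2 encoding: darker and middle tones are lifted, and the
# highlights are compressed into the remaining code values.
encoded = linear ** (1 / 2.2)

for lin, enc in zip(linear, encoded):
    print(f"linear {lin:.2f} -> encoded {enc:.2f}")
# A midtone gray of 0.18 lands near 0.46 after encoding.
```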

Dynamic range

Dynamic range (DR) is the ratio between the brightest and darkest parts of a scene; in photography it is expressed in exposure values. Unlike our eyes, digital cameras can usually capture only 8-10 EVs, much less than most natural lighting conditions contain. A typical outdoor scene lit by bright sunlight may have a contrast ratio as high as 100,000:1 (about 17 EVs). Compare that to your computer monitor, which might manage a 200:1 contrast ratio (about 7-8 EVs). Anyone who shoots interiors is familiar with the problem of windows “blowing out” (having no discernible detail), even in brightly lit rooms. Such a scene could cover as many as 15 EVs, far more than even the fanciest digital sensors can capture.
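
The EV figures above are just base-2 logarithms of the contrast ratios; a quick sketch:

```python
from math import log2

# Convert a contrast ratio to EVs (stops): one EV is one doubling of light.
for name, ratio in [("computer monitor", 200), ("sunlit exterior", 100_000)]:
    print(f"{name}: {ratio:,}:1 is about {log2(ratio):.1f} EVs")
# computer monitor: 200:1 is about 7.6 EVs
# sunlit exterior: 100,000:1 is about 16.6 EVs
```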

When to use HDR

Everyone who takes pictures can understand the difference between low dynamic range and high dynamic range scenes. An exterior scene that excludes the sky on a cloudy day typically covers around 4 EVs and will usually fit comfortably within the dynamic range of a digital camera sensor. A front-lit building with blue sky and white clouds might be a medium-high dynamic range scene (8-10 EVs), while a scene shot toward the sun or including the sun’s disc could get as high as 25 EVs. For what it’s worth, the difference between the disc of the sun and starlight is about 40 EVs.

Figure 2 From left to right, three types of scene: low dynamic range (cloudy day, no sky in the frame), medium dynamic range (front-lit subject and background with the sky in-frame), and a tone mapped high dynamic range scene (deep shadows with a bright background).

When do we need to shoot for HDR? It’s ultimately up to your preferences. Any image can be tone mapped, even a single LDR shot, so it’s a matter of taste. Not every situation benefits from HDR shooting. Scenes with a lot of movement are almost impossible to merge cleanly in most HDR software, and shooting the necessary bracketed exposures requires careful digital asset management, not to mention a sturdy tripod. This isn’t just a limitation when shooting people: windy trees and fast-moving clouds can be just as much of a problem, if not more. Many newer cameras can shoot very fast auto-bracketed sequences, making hand-held HDR shooting a possibility and making in-scene movement less and less of a limitation.

Figure 3 This image was tone mapped in Photomatix from a single exposure. When to use tone mapping software, even on images that aren't truly HDR, is a matter of the preferences and tastes of the photographer.

At present there are very few commercially available single-shot HDR solutions (there is a panoramic scanning camera that can shoot HDR in one go, but it takes upwards of five minutes and costs around $50k). Several companies are working on compact HDR sensors, but unless you’re an electrical engineer or obscenely wealthy, you’re not likely to be getting your hands on one.

HDR limitations

HDR files require more post-production time than LDR images. The raw captures must be merged into an HDR file and tone mapped, and the result almost always requires some time in Photoshop before it is ready for prime time, both to fix ghosting (subject movement between brackets) and to make other necessary local adjustments, such as sharpening.

There are very few devices able to display HDR images. “HDR” images you see printed or on the web are tone mapped LDR images created from HDR files. HDR display technology is not far off, though; BrightSide Technologies (bought by Dolby) developed a display with a contrast ratio of 200,000:1 (around 17 EVs) but its list price is $49k.

As mentioned before, shooting scenes with movement is very difficult. Because creating an HDR image requires multiple exposures, any change in the scene will create ghosts in the merged file. Some TMOs are better than others at reducing ghosting during the HDR merge. FDRtools in particular is very helpful for ghost reduction; however, there is still little that can be done when there is rapid movement in a scene.

Figure 4 In this image, the subject was moving rapidly, causing a series of ghost images. Ghost images can also throw off tone mapping in the surrounding areas of the image.

HDR file formats

There are a lot of HDR file formats, some open source and some proprietary. Each format evolved for a specific use; many are intended for scientific applications or computer-generated imaging. For still photography, Radiance or OpenEXR are the best choices.

Radiance (.hdr/.pic)

  • Open source
  • Most compatible between HDR tone mapping apps
  • Compromise between file size (30% compression) and color fidelity (~1% color error)
  • Suitable dynamic range for stills
  • Most common format for HDR stills
  • Fast read/write

OpenEXR (.exr)

  • Slightly better compression (40%) than Radiance
  • Designed for imaging, specifically HDR CGI
  • Allows for additional channels (other than RGB), such as alpha channels
  • "Open standard," but not fully open source
  • Not as widely supported as Radiance

Other formats worth noting

Floating Point TIFF

  • Very accurate
  • Largest dynamic range
  • Allows for layers and other TIFF features (Bear in mind that programs write TIFFs differently, so there are no guarantees that these features will be read.)
  • Huge files, very slow reads and writes
  • Floating point TIFF is not a bad choice if you don't have a lot of files but need very high color accuracy.

LogLUV TIFF

  • Very accurate
  • Not widely supported by HDR tone mappers
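
If you ever need to script reads and writes of the two recommended formats, one open-source route is OpenCV's Python bindings. This is only a sketch: it assumes a build with the Radiance and OpenEXR codecs enabled (some builds gate EXR support behind the OPENCV_IO_ENABLE_OPENEXR environment variable), and the file names are placeholders.

```python
import os
os.environ["OPENCV_IO_ENABLE_OPENEXR"] = "1"  # some OpenCV builds require this for EXR I/O

import cv2
import numpy as np

# Read a Radiance file as 32-bit floating point data (placeholder file name).
hdr = cv2.imread("scene.hdr", cv2.IMREAD_ANYDEPTH | cv2.IMREAD_ANYCOLOR)
print(hdr.dtype, hdr.shape)             # float32, (height, width, 3)
print("max value:", float(hdr.max()))   # values above 1.0 are normal in HDR data

# Write the same pixels back out as OpenEXR.
cv2.imwrite("scene.exr", hdr.astype(np.float32))
```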

Color management with HDR

HDR images behave differently from LDR images. Because of the wide tonal range, shadows and highlights often appear oversaturated. On film or in LDR digital capture, highlights are usually either overexposed or entirely blown out, so they end up mostly desaturated. HDR imaging, because it captures the entire dynamic range of a scene, preserves saturation in the highlights.

Figure 5 Because of the wide dynamic range being captured, the window in this scene appears too dark and saturated after tone mapping.

It is always advisable to shoot a color checker when possible. The checker shot should be photographed under the same conditions and with the same exposure bracket as the hero shot. Even so, an HDR checker shot may not always return accurate or desirable results, since tone mapping software can introduce color casts of its own. HDR tone mapping operators often exaggerate color casts in a scene and include some version of local adaptation, so the color checker is really just a starting point.

Figure 6 An HDR scene including a color checker, showing three different color balance results from the three lightest gray patches.

One of the major drawbacks of working with HDR is that there isn’t an easy or reliable way to preview images. Unlike JPEGs and other output-referred LDR files, HDR files cannot be displayed on a monitor until they are tone mapped. This is because HDR files contain a much greater dynamic range than a monitor can display.

Figure 7 Your monitor is not able to display the extremely wide dynamic range present in an HDR image. Compare Photoshop's rendering of an HDR file (left) vs. its final, tone mapped state (right).

Additionally, software that isn’t dedicated to working with HDR files often can’t read or display HDR images, or renders them incorrectly. Most tone mapping applications also have unreliable previews: the tone mapped image often doesn’t match the preview you were working from. It is often necessary to go back and redo the tone mapping several times before an acceptable result is achieved.

Figure 8 Compare the preview from the Photoshop Photomatix plug-in (left) to the tone mapped result (right).

Shooting HDR

There are three ways to create HDR files: CGI, single-shot capture using an HDR sensor, and multiple shot merging. Unless you’re an animator or have an extensive research and development department, you don’t have to worry about the first two. For our purposes, shooting HDR means capturing a sequence of bracketed LDR exposures and using software to merge them into an HDR file.
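
To make the merge step concrete before getting into how to shoot the brackets, here is a minimal sketch using the open-source OpenCV library rather than Photomatix or another commercial tool. The file names and exposure times are placeholders, and Debevec's method is only one of several merge algorithms.

```python
import cv2
import numpy as np

# Placeholder names and exposure times for a five-frame, 2-stop bracket
# (aperture fixed, shutter speed varied).
files = ["frame_a.tif", "frame_b.tif", "frame_c.tif", "frame_d.tif", "frame_e.tif"]
times = np.array([1/60, 1/15, 1/4, 1, 4], dtype=np.float32)  # seconds

images = [cv2.imread(f) for f in files]  # 8-bit LDR source frames

# Recover the camera response curve, then merge to 32-bit floating point HDR.
response = cv2.createCalibrateDebevec().process(images, times)
hdr = cv2.createMergeDebevec().process(images, times, response)

cv2.imwrite("merged.hdr", hdr)  # Radiance format; tone mapping comes later
```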

Bracketed sequences should begin at the darkest exposure necessary to prevent highlight clipping and end with the darkest shadows in the midtones. In other words, there needs to be clear detail in the brightest and darkest parts of the scene. There should be no highlight clipping in the first frame, and the darkest values should be in the middle of the histogram in the last frame. Shooting a sequence this way ensures that the entire dynamic range of the scene has been captured. Since changing the aperture will change the depth-of-field from frame to frame, we always bracket using shutter speed as the variable.

Depending on the scene and the method of shooting, the photographer may not be able or willing to capture the entire dynamic range. A shot including the sun or a bright light source will almost always have some clipping, so how far to go is at the discretion of the shooter.

Figure 9 An HDR scene shot with a tripod.

Below are two methods for determining exposure brackets for HDR shooting.

Determining darkest and brightest exposures for a manual bracket

  1. Set your frame, aperture, and focus.
  2. Using the in-camera histogram (or the on-screen histogram, if you’re tethered), determine the shortest exposure needed for the scene. This should be 1/3-1/2 stop darker than the shortest exposure at which the highlights still clip.
  3. Shooting in 2-stop steps, start slowing down the shutter speed (a sketch of this progression follows the list).
  4. Keep shooting until the darkest shadows are in the middle of the histogram.
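
A quick sketch of the arithmetic behind step 3, assuming a hypothetical shortest exposure of 1/500 second and a six-frame bracket; in practice you simply stop when the darkest shadows reach the middle of the histogram.

```python
# Hypothetical starting point: 1/500 s is the shortest exposure with no
# highlight clipping. Each frame quadruples the shutter time (2 stops).
shortest = 1 / 500
frames = 6

shutter = shortest
for i in range(frames):
    label = f"1/{round(1 / shutter)} s" if shutter < 1 else f"{shutter:g} s"
    print(f"frame {i + 1}: {label}")
    shutter *= 4
# frame 1: 1/500 s ... frame 6: ~2 s, spanning 10 stops in total
```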

Figure 10 Correct in-camera histograms for the shortest (top) and longest (bottom) exposures. The shortest exposure's histogram should have its lightest pixels just left of where they are clipping. The longest exposure should have a histogram where the darkest pixels are represented in the middle of the histogram.

Windows users can use Breeze DSLR Remote to shoot automated brackets while tethered to a computer, and can even link up with Photomatix to create HDR files on the fly.

Usually, HDR sequences are shot with the camera on a tripod, but many new DSLRs have the ability to shoot very fast auto bracket sequences, opening up the possibility of shooting hand-held.

Shooting hand-held

  1. Determine your camera’s settings for auto-bracketing. Some cameras will only allow three shots in a sequence, while others will allow five or more. This will determine how wide your bracket can be, and thus how much of the scene’s dynamic range you can capture. The exposure steps in an auto bracket may also be limited: some cameras allow for only one-stop bracket steps.
  2. Find the median exposure for the scene. This can be accomplished either with the camera’s built-in meter or a handheld meter.
  3. Speed is of the essence, so the camera should be set to shoot at its maximum continuous speed, and set to stop automatically at the end of a bracket.

Figure 11 HDR scene shot handheld using a Canon 1D Mark III set to a five-exposure auto-bracket.

Workflow

As with any other kind of photography, it is essential to have a consistent workflow for HDR images. The large number of exposures, the additional HDR files, and the greater computing time compared to a raw-only workflow all introduce important considerations. In addition to the basic difficulty of keeping all the files organized, metadata becomes a greater problem. Many tone mapping applications don’t pass metadata through to the tone mapped image, so it becomes the photographer’s responsibility to re-embed the correct information.

Aside from these additional files and the extra organization they require, files shot for an HDR bracket should be treated like any other files you shoot, and HDR image creation will most likely resemble an optimized image workflow.

HDR & raw workflow commonalities

HDR and raw workflows have a lot in common, largely because your HDR files will most likely be created from raw files. The similarities are:

  • Both processes are non-destructive: just as a raw file can be demosaiced over and over, an HDR file can be tone mapped to an LDR file an unlimited number of times, with as many variations as your software and imagination will allow.
  • Both processes are based on parametric image edits.
  • Similar to the raw file having everything the sensor captures, an HDR image is a record of the entire range of light in a scene. It is up to the photographer to determine how much or how little of that record to use.
  • Tone mapping, though it takes longer, is essentially the same process as making edits in your raw converter. Both processes take an image file (whether an HDR file, CR2, NEF, DNG, JPEG or whatever) and a set of rendering instructions and create a new image based on those instructions (see the sketch after this list).
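
Here is that idea as a sketch, reusing the hypothetical merged.hdr file from earlier and OpenCV's Drago operator as a stand-in for a commercial TMO. The parameter values are arbitrary illustrations, and the HDR file itself is never altered.

```python
import cv2
import numpy as np

hdr = cv2.imread("merged.hdr", cv2.IMREAD_ANYDEPTH | cv2.IMREAD_ANYCOLOR)

# Two different rendering recipes applied to the same HDR record.
recipes = {
    "flat":   cv2.createTonemapDrago(gamma=1.0, saturation=0.8),
    "punchy": cv2.createTonemapDrago(gamma=2.2, saturation=1.2),
}

for name, tmo in recipes.items():
    ldr = tmo.process(hdr)  # floating point result, roughly 0-1
    cv2.imwrite(f"tonemapped_{name}.jpg",
                np.clip(ldr * 255, 0, 255).astype("uint8"))
```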

File naming

The choice that must be made when naming HDR files is whether to keep your usual naming convention and rely on metadata to identify HDR brackets, or to use batch numbers that separate each bracket.

Sequential numbering

Sequential numbering is a fast approach, but makes it difficult to identify bracket sets. Sequential file numbers in a bracket sequence would look like Stack_090729_8456, 8457, 8458, and so on: the same scene at different exposures. Bracket sequences are identified visually or by metadata (added during ingest or editing), and the merged HDR files are named after either the first number or the "normal" exposure in the sequence (e.g., Stack_090729_8456.exr).

The main advantage of sequential numbering is that you can get right to work. Because merging bracket sequences to HDR and tone mapping are so time consuming, it may be to the photographer's advantage to start producing final images as soon as possible. However, this approach can easily lead to post-production confusion if the HDR scenes look similar to one another or if there were multiple versions of a shot. Do you really want to examine each frame to figure out which sequence has the fork on the correct side of the plate? Sequential numbering can also become a problem if files from a shoot mix low dynamic range (LDR) files and HDR bracket sequences. There is no way to identify which files are HDR sequences by name, thus requiring the use of an image browser, such as Bridge.

Batch Numbers

Using batch numbers to identify each bracket sequence will be more time consuming during image ingest and editing, but can help to identify the bracket sequences quickly. Each HDR sequence receives a sequence number, and discrete frames get a unique modifier added. File names look like: Stack_090729_029a, 029b, 029c. Merged HDR files would be named for the batch number (Stack_090729_029.hdr).

Batch numbering can speed your workflow up in post-production, but will take time up front as each set of brackets will have to be renamed separately. The file naming process can be sped up with a workflow tool capable of sophisticated renaming such as Photo Mechanic. Batch names can also solve the problem of mixing LDR and HDR in a single shoot; files with a letter or other suffix in their name are easily recognized as part of an HDR sequence.
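
If you end up doing this kind of renaming outside a dedicated tool, it is easy to script. A rough sketch, assuming each bracket sequence already sits in its own folder; the folder name, prefix, and file extension are all hypothetical.

```python
import string
from pathlib import Path

def rename_bracket(folder: Path, batch: int, prefix: str = "Stack_090729") -> None:
    """Rename the frames of one bracket sequence to prefix_NNNa, NNNb, ..."""
    frames = sorted(folder.glob("*.dng"))
    for frame, letter in zip(frames, string.ascii_lowercase):
        frame.rename(frame.with_name(f"{prefix}_{batch:03d}{letter}{frame.suffix}"))

# Files in ./bracket_029/ become Stack_090729_029a.dng, _029b.dng, and so on.
rename_bracket(Path("bracket_029"), batch=29)
```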

Figure 12 A folder containing mixed LDR shots and an exposure blend series for an HDR image. The files for HDR are easily identified because of their file name suffix. Using the same frame number for all five images also makes naming derivative files easier.

When shooting a large set of bracket sequences (such as for an HDR panorama), it is advisable to sort each set of images into its own folder. Images are kept in a job folder, with a shot folder for each bracket sequence, and the HDR file is then named for the shot folder. While separating the files into folders can be time consuming when capturing to card only, it is easy to set up separate capture folders for each bracket sequence when shooting tethered. This method is recommended, particularly when shooting panoramas or when the files will be handed off to a retoucher or other post-production person who wasn't at the shoot.

Figure 13 A job folder containing folders of exposure blend series for HDR images.

Breeze Systems offers a helpful solution for tethered HDR shooting on a Windows platform. Breeze DSLR Remote Pro allows the computer to control the camera to shoot an automated bracket sequence, which it can name with a batch number (123a, 123b, 123c, etc.). This can be used in conjunction with Photomatix's batch processing function to create HDR files on the fly.

Metadata

Files shot for HDR and the final, optimized images need to have embedded descriptive metadata just like any other image file. Metadata is also particularly valuable for HDR shooting because we have the ability to embed descriptive tags identifying bracket sets.

Many HDR tone mappers don’t pass a bracket sequence’s metadata through to the tone mapped image. For instance, Photomatix will preserve partial IPTC metadata in a tone mapped image, but only within a single session and only when rendered files were used to create the HDR file (i.e., if the HDR file is created, saved, closed, and then reopened later, there will be no metadata, even if the source files had metadata embedded; and there will be no metadata at all if the HDR is merged from raw files).
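
One common way to re-embed that information is ExifTool, which can copy IPTC and XMP tags from a source frame into the tone mapped file. A hedged sketch, run from Python only to keep the examples in a single language; ExifTool must be installed, and the file names are placeholders.

```python
import subprocess

# Copy IPTC and XMP metadata from the "normal" exposure of the bracket
# into the tone mapped TIFF (placeholder file names).
subprocess.run(
    ["exiftool", "-overwrite_original",
     "-TagsFromFile", "Stack_090729_029c.dng",
     "-iptc:all", "-xmp:all",
     "Stack_090729_029_tonemapped.tif"],
    check=True,
)
```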

It is essential to test the software you’re working with and maintain a consistent workflow. Constant vigilance!

When to merge to HDR

HDR files should be created after the source files have been edited, renamed, had their metadata embedded, and been converted to DNG. The greatest dynamic range is achieved by merging to HDR from raw files. However, depending on the HDR software you choose, it may be necessary to create merged files from rendered source files (TIFFs). With the exception of Photoshop, HDR software will not read parametric image edits from any raw converter. In order to preserve these edits (camera profiles, white balance, saturation or vibrance, lens corrections, and so on), it is necessary to create the HDR file from rendered source files. These rendered source files can then be discarded.

Archiving HDR

The primary difference between an HDR and an LDR workflow is the creation of the HDR files themselves. Due to the time involved in the creation of an HDR file (particularly when the source images were shot handheld or are parts of a panorama), HDR files should be archived in the same manner as raw files. Keeping these files in your archive means that an HDR image can be revisited and tone mapped repeatedly, without having to take time to recreate the HDR file. As this technology develops and as HDR displays become commercially available, preserving these files may become a boon.

Depending on the time put into their creation, tone mapped images may be saved as an intermediate step between HDR and optimized masterfile. Alternatively, the tone mapped result might be preserved as the base layer in a masterfile.

HDR post-production

Tone mapped files nearly always require some pixel editing to be finished. HDR tone mapping operators (TMOs) frequently produce images that require final black and white point adjustments, and often benefit from either a contrast boost or sharpening (such as an unsharp mask for local contrast). Tone mapped files also frequently contain ghosting artifacts because they are constructed from bracket sequences. Compare the finished image below (Figure 14) with the tone mapped result straight from the TMO.
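
As a rough illustration of those finishing moves, here is a sketch using OpenCV: a wide-radius, low-amount unsharp mask for local contrast, followed by a simple black and white point stretch. The file name and parameter values are placeholders; in practice these adjustments are usually made by eye in Photoshop.

```python
import cv2
import numpy as np

ldr = cv2.imread("tonemapped_punchy.jpg")  # placeholder tone mapped result

# Local contrast boost: wide-radius, low-amount unsharp mask.
blur = cv2.GaussianBlur(ldr, (0, 0), 30)
boosted = cv2.addWeighted(ldr, 1.2, blur, -0.2, 0)

# Black and white point adjustment: stretch the 1st-99th percentiles to 0-255.
lo, hi = np.percentile(boosted, (1, 99))
final = np.clip((boosted.astype(np.float32) - lo) * 255.0 / (hi - lo), 0, 255)
cv2.imwrite("final.jpg", final.astype("uint8"))
```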

Figure 14 The finished image compared with the tone mapped result straight from the TMO, before final black and white point adjustments, a contrast boost, and sharpening.

Last Updated September 22, 2015