Siggraph Part 3: Tonemapping Papers

If you want to see what's really happening on the frontier of imaging technology, you have to watch some of the technical paper sessions. The smartest people from universities all over the world present their latest research, and you get a glimpse of what's coming, way before it's incorporated into any product.

It's actually quite funny how HDR images are often used as test material for all kinds of new imaging algorithms, as if HDRI were already the standard. That's how far ahead these researchers are. Here I would like to highlight some new tonemapping approaches. Developers, please pay close attention; users, please refrain from drooling all over your keyboards.

Lischinski's Edge-Preserving Decompositions

The most important part of a local TMO is the separation of different detail levels. It's usually done in a preprocessing step, before you can adjust any settings. This is where small-scale details are isolated from large-scale contrasts, and the quality of this separation has a huge impact on the tonemapping result. If it's done poorly, you get halos.

Most programs use a bilateral filter for this extraction, which used to be the best way to preserve hard edges while smoothing global lighting changes. If you're a Photomatix user, you've seen this decomposition at work:
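To make the idea concrete, here is a minimal sketch of that decomposition in the style of Durand and Dorsey's bilateral tonemapping: smooth the log luminance with a bilateral filter to get a "base" layer, treat the residual as "detail", then compress only the base. This is my own 1D toy illustration (one scanline instead of a 2D image, and made-up parameter values), not code from any of the papers or products mentioned:

```python
import numpy as np

def bilateral_1d(signal, sigma_s=2.0, sigma_r=0.4, radius=5):
    """Edge-preserving smoothing: each sample is a weighted average of its
    neighbours, where the weight falls off with both spatial distance and
    intensity difference, so hard edges are not blurred across."""
    out = np.empty_like(signal)
    for i in range(len(signal)):
        lo, hi = max(0, i - radius), min(len(signal), i + radius + 1)
        window = signal[lo:hi]
        dist = np.arange(lo, hi) - i
        w = (np.exp(-dist**2 / (2 * sigma_s**2)) *
             np.exp(-(window - signal[i])**2 / (2 * sigma_r**2)))
        out[i] = np.sum(w * window) / np.sum(w)
    return out

# Toy "scanline": a bright region next to a dark one, plus fine texture.
log_lum = np.concatenate([np.full(20, 4.0), np.full(20, 0.0)])
log_lum += 0.1 * np.sin(np.arange(40))   # small-scale detail

base = bilateral_1d(log_lum)             # large-scale lighting
detail = log_lum - base                  # small-scale texture
compressed = 0.5 * base + detail         # squash only the base layer
```

A plain Gaussian blur would smear the 4-stop edge into the base layer; the residual "detail" would then contain that smeared edge, and recombining would produce exactly the halo artifacts described above.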


Example implementation of a tonemapper based on the new WLS preprocessing. Watch the movie to see this in action.

Now a project group at The Hebrew University, in collaboration with Microsoft Research, has developed a new method with significant improvements. They call it Weighted Least Squares (WLS) optimization, and it does a much better job of preserving edges and thus suppressing halos.
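The core idea is to pose the smoothing as an optimization: stay close to the input image, but penalize gradients in the output, with a per-pixel penalty weight that drops wherever the input has a strong edge. Below is a toy 1D version of that idea (the paper works in 2D and solves a large sparse system; the parameter values here are my own choices for illustration):

```python
import numpy as np

def wls_smooth_1d(g, lam=5.0, alpha=2.0, eps=1e-4):
    """1D sketch of WLS edge-preserving smoothing: find u minimising
    sum (u_i - g_i)^2 + lam * sum a_i * (u_{i+1} - u_i)^2, where the
    smoothness weight a_i is small across large gradients of g, so
    edges survive. Solves (I + lam * L) u = g with L a weighted
    graph Laplacian."""
    n = len(g)
    grad = np.abs(np.diff(g))
    a = 1.0 / (grad**alpha + eps)     # low weight across strong edges
    A = np.identity(n)
    for i in range(n - 1):
        A[i, i]     += lam * a[i]
        A[i+1, i+1] += lam * a[i]
        A[i, i+1]   -= lam * a[i]
        A[i+1, i]   -= lam * a[i]
    return np.linalg.solve(A, g)

# Step edge plus fine texture: the texture should be smoothed away,
# while the step survives almost untouched.
signal = np.concatenate([np.full(20, 0.0), np.full(20, 2.0)])
signal += 0.05 * np.sin(np.arange(40) * 1.7)
base = wls_smooth_1d(signal)
```

Because the edge weight is computed from the input rather than discovered during filtering, one factorized solve gives a whole family of decompositions at different coarseness levels, which is what makes it attractive as a tonemapping preprocess.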

Check out this project's website for example results, the full paper, and some code. Do not miss the movie presentation!

Tonemapping Quality Evaluation

How do you judge a tonemapping result?

Isn't it purely a matter of taste? Can this be quantified at all?

Sure it can. You just have to ask the right questions. Quality evaluation is not about 'good' or 'bad' results. Instead, it tells you how much of the dynamic range got lost in the conversion, how much detail you managed to extract, and where halo artifacts were introduced. And once you can clearly name a problem, you can go ahead and fix it.

Tunç Ozan Aydın from the Max Planck Institute presented a new algorithm that has all the answers. It can compare an HDR image with a tonemapped LDR image and clearly map out three problem areas:


Different tonemapping operators, compared via Aydın’s new image metrics.

  • Loss (green) - There are details in the HDR that didn't make it into the LDR. Typically this happens when the tonemapper compressed the range too much or clipped details.
  • Amplification (blue) - The tonemapped result shows contrasts that were barely visible, or not visible at all, in the HDR. Sometimes this is a desired effect of detail enhancement, and not necessarily a negative property of the result.
  • Reversal (red) - Identifies areas where a contrast in the LDR is the opposite of what was seen in the HDR. This should point out halos and put up warning signs of a surreal appearance.
You can check out several examples in an interactive online viewer.
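To get an intuition for the three categories, here is a deliberately crude stand-in: compare local gradients of the HDR and LDR log luminances against a fixed visibility threshold. The real metric replaces that fixed threshold with a psychophysical contrast-detection model, so treat this as my own simplification, not Aydın's algorithm:

```python
import numpy as np

def classify_contrast_changes(hdr_log, ldr_log, threshold=0.05):
    """Crude per-sample stand-in for the three categories, using raw
    gradient magnitude against a fixed visibility threshold instead of
    the paper's perceptual contrast-detection model."""
    gh = np.diff(hdr_log)            # local contrasts in the HDR
    gl = np.diff(ldr_log)            # local contrasts in the LDR
    vis_h = np.abs(gh) > threshold
    vis_l = np.abs(gl) > threshold
    loss          = vis_h & ~vis_l                       # visible -> invisible
    amplification = ~vis_h & vis_l                       # invisible -> visible
    reversal      = vis_h & vis_l & (np.sign(gh) != np.sign(gl))
    return loss, amplification, reversal

# Three sample pairs: a crushed edge, a boosted flat area, a flipped edge.
hdr = np.array([0.0, 1.0, 1.0, 1.2])
ldr = np.array([0.0, 0.01, 0.5, 0.4])
loss, amp, rev = classify_contrast_changes(hdr, ldr)
```

In the example, the first gradient (strong in HDR, flattened in LDR) is flagged as loss, the second (flat in HDR, boosted in LDR) as amplification, and the third (positive in HDR, negative in LDR) as reversal.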

In a real-world application I could see such a colored overlay being very useful for tonemapping previews, kind of like the clipping warnings in Photoshop or Lightroom that mark clipped pixels in red. The algorithms presented could also be used to build an iterative tonemapping operator that automatically minimizes artifacts.


Why doesn’t it pick up on the horrible halo artifacts from this Fattal TMO?

There is certainly some room left for improvement. I wish it were better at detecting halo artifacts, the arch-enemies of every tonemapper. Clearly, halos are a quality measure that should be accounted for.

Read more about this on the project's website, or evaluate your own images by uploading them to the Quality Assessment Web Interface.

Display-Adaptive Tonemapping

This one is also from the Max Planck Institute, this time in collaboration with Sharp. They call it a new tonemapping operator, but I'd call it a whole new way to think about tonemapping.

If you've been shopping for a new display lately, you might have noticed that contrast ratios are all over the place. They range from 1:60 for ePaper, over wallet-friendly 1:10,000 LCDs, up to 1:3,000,000 in high-end plasma screens. If you see a wall of 20 TVs in a store, you see 20 different images, even though they all show the same channel.

The idea is to make the tonemapping display-aware, so that all images are perceptually the same, or at least as close as the display technology allows. And then take it to the next level and even compensate for the ambient lighting conditions in the room.
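A back-of-the-envelope sketch shows why ambient light matters here: light reflected off the screen raises the effective black level, shrinking the range the display can actually reproduce, so the tone curve has to compress harder in a bright room. This is my own simplification for illustration; the paper instead optimizes a piecewise tone curve against a model of human vision:

```python
import math

def usable_compression_slope(scene_log_range, peak_cdm2, contrast_ratio,
                             ambient_reflected_cdm2=0.0):
    """Compute how many log10 units the display can actually show
    (reflected ambient light lifts the effective black level), then
    return the slope that maps the scene's log range onto it."""
    black = peak_cdm2 / contrast_ratio + ambient_reflected_cdm2
    usable_log_range = math.log10(peak_cdm2 / black)
    return min(1.0, usable_log_range / scene_log_range)

# Hypothetical display: 500 cd/m2 peak, 1:10,000 native contrast.
# A ~13-stop scene spans about 4 log10 units.
dark_room = usable_compression_slope(4.0, 500, 10000, 0.0)
lit_room  = usable_compression_slope(4.0, 500, 10000, 1.0)
```

In a dark room this display can render the full 4 log units one-to-one, but with just 1 cd/m² of reflected ambient light the usable range drops to roughly 2.7 log units, forcing noticeably stronger compression. A display-adaptive tonemapper would adjust for this continuously.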

Admittedly, this research is of limited use for us consumers right now. But it clearly shows where HDR is heading.

You can check out the paper, watch the demo movie, and if you know how to compile things you can even play with the tonemapper yourself in the latest pfstools package.
