Practical HDRI by Jack Howard

Still looking for the perfect companion volume to the HDRI Handbook?
Jack’s cover photo alone speaks volumes about his tonemapping skills.


Well, it might be just around the corner.

Jack Howard, renowned editor at PopPhoto and long-time friend of HDRLabs, just wrote a new book on HDRI. It's very slick looking, very rocky-nooky, very professional photography. And it's called Practical HDRI, hinting at Jack's hands-on approach to teaching.

It's just about out; pre-ordering from Amazon will ensure you get a copy as soon as it comes down the delivery pipe.


September update

The sIBL of the month is a real gem from my personal collection: the Milky Way in its full glory. So get your shiny spaceship ready and render away to new frontiers.

Also, there is now a German forum section, after I realized that the German version of the HDRI Handbook has become pretty popular. Not quite sure how many readers of the Czech HDRI Handbook are around here, but so far I haven't had any requests for a Czech forum...

Next stop: Photokina


Photokina is about to happen, and it will be great this year.

Nikon just got the new D90 out, with 24p HD movie mode and the low-noise CMOS sensor known from the D700 and D3. Bracketing is limited to 3 frames, though. Personally, I will wait for this DX-sized CMOS sensor to come to the successor of the D300... Also new are the Pentax 200D and the Canon 50D. There's even a rumor of a Canon 5D Mark II, and Photokina sounds like just the right launch event.


I will hold a workshop at Photokina as part of the fotoespresso series, organized by my publisher dpunkt. Seats are going fast, considering that this is a super slick deal: for 99 Euros you get a copy of the HDRI Handbook, a day pass for Photokina, and all the personal tutoring I can cram into three hours. The workshop will probably be split 70/30 into a presentation and a hands-on problem-solving part where I'll try to answer your questions.
Don't miss the show; sign up here!


Siggraph Part 3: Tonemapping Papers




When you want to see what's really happening on the frontier of imaging technology, you have to watch some of the technical paper sessions. The smartest people from universities all over the world present their latest research, and you get a glimpse of what's coming - way before it's incorporated into a product.

It's actually quite funny how HDR images are often used as test material for all kinds of new imaging algorithms, as if HDRI were already the standard. That's how far ahead these guys are. However, I would like to highlight some new tonemapping approaches. Developers, please pay close attention; users, please refrain from drooling all over your keyboards.

Lischinski's Edge-Preserving Decompositions




The most important part of a Local TMO is the separation of different detail levels. It's usually done in a preprocessing pass, before you can adjust any settings. This is the part where small-scale details are isolated from large-scale contrasts, and the quality of this isolation has a huge impact on the tonemapping result. If it's done poorly, you get halos.

Most programs use a bilateral filter for this extraction, which used to be the best way to preserve hard edges while smoothing global lighting changes. If you're a Photomatix user, you've seen this decomposition at work; the sketch below shows the basic idea.
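To make that a bit more concrete, here is a minimal Python sketch of the classic base/detail split behind a bilateral-filter tonemapper (in the spirit of Durand and Dorsey's operator). It is not the code of Photomatix or any other product, and the parameter values are just placeholders I picked:

# A rough sketch of the bilateral base/detail decomposition, assuming OpenCV
# (cv2) and NumPy are installed. Not the code of any particular product.
import cv2
import numpy as np

def tonemap_bilateral(hdr_rgb, compression=0.25):
    """hdr_rgb: float32 image in linear radiance. Returns an 8-bit LDR image."""
    eps = 1e-6
    lum = 0.2126 * hdr_rgb[..., 0] + 0.7152 * hdr_rgb[..., 1] + 0.0722 * hdr_rgb[..., 2]
    log_lum = np.log10(lum + eps).astype(np.float32)

    # Base layer: edge-preserving smoothing of the log luminance.
    # This is the step that decides whether halos show up later.
    base = cv2.bilateralFilter(log_lum, d=9, sigmaColor=0.4, sigmaSpace=16)

    # Detail layer: everything the smoothing left behind (small-scale contrast).
    detail = log_lum - base

    # Compress only the base layer, keep the details at full strength.
    out_lum = 10.0 ** (compression * (base - base.max()) + detail)

    # Reapply color and map back to display range.
    ratio = (out_lum / (lum + eps))[..., None]
    ldr = np.clip(hdr_rgb * ratio, 0.0, 1.0) ** (1.0 / 2.2)
    return (ldr * 255).astype(np.uint8)

Everything interesting happens in the one bilateralFilter call - if that smoothing bleeds across a strong edge, the compressed base and the full-strength detail no longer line up, and that mismatch is exactly what you see as a halo.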





Example implementation of a tonemapper based on the new WLS preprocessing. Watch the movie to see this in action.


Now a project group at The Hebrew University, in collaboration with Microsoft Research, has developed a new method with significant improvements. They call it the Weighted Least Squares (WLS) Optimization, and it does a much better job at preserving edges and thus suppressing halos.

Check out this project's website for example results, the full paper, and some code. Do not miss the movie presentation!
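For the developers in the audience, here is a compact Python sketch of how I read the core of the WLS idea: the smoothed image is the solution of one big sparse linear system, where the smoothness weights collapse wherever the input already has a strong edge. The parameter values are my own placeholders, not the authors' reference settings, and a production version would use a much faster solver:

import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def wls_smooth(log_lum, lam=1.0, alpha=1.2, eps=1e-4):
    """Edge-preserving smoothing of a log-luminance image via weighted least squares."""
    h, w = log_lum.shape
    n = h * w
    g = log_lum.ravel()

    # Smoothness weights: large in flat regions, tiny across strong input edges,
    # so those edges survive the smoothing instead of being averaged away.
    wx = 1.0 / (np.abs(np.diff(log_lum, axis=1)) ** alpha + eps)   # shape (h, w-1)
    wy = 1.0 / (np.abs(np.diff(log_lum, axis=0)) ** alpha + eps)   # shape (h-1, w)

    # Put the weights on the "right neighbour" (+1) and "down neighbour" (+w) diagonals.
    wx = np.pad(wx, ((0, 0), (0, 1))).ravel()
    wy = np.pad(wy, ((0, 1), (0, 0))).ravel()
    W = sp.diags([wx[:n - 1], wy[:n - w]], [1, w], shape=(n, n))
    W = W + W.T

    # Solve (I + lam * (D - W)) u = g, where D - W is the graph Laplacian of the weights.
    D = sp.diags(np.asarray(W.sum(axis=1)).ravel())
    A = sp.identity(n, format="csr") + lam * (D - W)
    return spsolve(A.tocsc(), g).reshape(h, w)

The result takes the place of the bilateral base layer from above, and running it with a few different values of lam gives you a whole stack of detail levels - which is essentially the multi-scale decomposition the paper builds on.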


Tonemapping Quality Evaluation





How do you judge a tonemapping result?

Isn't it purely a matter of taste? Can this be quantified at all?

Sure it can. You just have to ask the right questions. Quality evaluation is not about 'good' or 'bad' results. Instead, it tells you how much of the dynamic range got lost in the conversion, how much detail you managed to extract, and where halo artifacts got introduced. And when you can clearly name a problem, you can go ahead and fix it.

Tunç Ozan Aydın from the Max-Planck-Institute presented a new algorithm that has all the answers. It can compare an HDR image with a tonemapped LDR image and clearly map out three problem areas:



Different tonemapping operators, compared via Aydın’s new image metrics.

  • Loss (green) - When there are details in the HDR that didn't make it into the LDR. Typically, this happens when the tonemapper compressed the range too much or clipped details.
  • Amplification (blue) - When the tonemapping result shows contrasts that were barely or not at all visible in the HDR. Sometimes a desired effect of detail enhancement, not necessarily a negative property of the result.
  • Reversal (red) - This identifies areas where a contrast in the LDR is the opposite of what was seen in the HDR. It should point out halos and put up warning signs of surreal appearance.
You can check out several examples in an interactive online viewer.
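To show what such a classification boils down to, here is a deliberately dumbed-down Python sketch. The real metric models human contrast perception at multiple scales; this version just compares crude band-pass contrast maps, and the threshold is an arbitrary placeholder:

import numpy as np
from scipy.ndimage import gaussian_laplace

def classify(hdr_lum, ldr_lum, threshold=0.02):
    # Crude stand-in for "visible contrast": a band-pass response on log luminance.
    c_hdr = gaussian_laplace(np.log10(hdr_lum + 1e-6), sigma=2.0)
    c_ldr = gaussian_laplace(np.log10(ldr_lum + 1e-6), sigma=2.0)

    visible_hdr = np.abs(c_hdr) > threshold
    visible_ldr = np.abs(c_ldr) > threshold
    flipped = np.sign(c_hdr) * np.sign(c_ldr) < 0

    loss = visible_hdr & ~visible_ldr               # detail was there, got compressed away
    amplification = visible_ldr & ~visible_hdr      # detail was invisible, now it pops
    reversal = visible_hdr & visible_ldr & flipped  # contrast flipped its polarity
    return loss, amplification, reversal

Color those three masks green, blue, and red and you get an overlay much like the one in the screenshot above.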

In a real-world application, I can see such a colored overlay being very useful for tonemapping previews - kind of like the clipping warnings in Photoshop or Lightroom that mark clipped pixels in red. The algorithms presented could also be used to create an iterative tonemapping operator that automatically minimizes artifacts.




Why doesn’t it pick up on the horrible halo artifacts from this Fattal TMO?


It certainly has some room left for improvement. I wish it were better at detecting halo artifacts, the arch-enemies of every tonemapper. Clearly, halos are a quality measure that should be accounted for.

Read more about this on the Project's Website or evaluate your own images by uploading them to the Quality Assessment Web Interface.


Display-Adaptive Tonemapping


This is also from the Max-Planck-Institute, this time in collaboration with Sharp. They call it a new tonemapping operator, but I'd call it a whole new way to think about tonemapping.

If you were shopping for a new display lately, you might have noticed that contrast ratios are all over the place. They range from 1:60 for ePaper, over wallet-friendly 1:10,000 LCDs, up to 1:3,000,000 in high-end plasma screens. If you see a wall of 20 TVs in a store, you see 20 different images - even though they show the same channel.

The idea is to make the tonemapping display-aware, so that all images look perceptually the same, or at least as close as the display tech allows. And then take it to the next level and even compensate for the ambient lighting conditions in the room.
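This is not the authors' operator, but a back-of-the-envelope Python sketch of what "display-aware" means: the same scene gets mapped into whatever luminance range this particular screen, in this particular room, can actually reproduce. All the display numbers below are made-up examples:

import numpy as np

def display_adaptive_map(hdr_lum, peak_cd=500.0, black_cd=0.5,
                         ambient_lux=100.0, reflectivity=0.01):
    # Ambient light bouncing off the panel raises the effective black level.
    reflected = ambient_lux / np.pi * reflectivity      # cd/m^2 added by the room
    lo, hi = black_cd + reflected, peak_cd + reflected

    # Squeeze the scene's log-luminance range into the display's usable log range.
    log_l = np.log10(hdr_lum + 1e-6)
    t = (log_l - log_l.min()) / (log_l.max() - log_l.min())
    target_cd = 10.0 ** (np.log10(lo) + t * (np.log10(hi) - np.log10(lo)))

    # Convert the target luminance back into a pixel value for this display.
    return ((target_cd - lo) / (hi - lo)) ** (1.0 / 2.2)

Feed the same HDR frame through this with ePaper numbers and again with plasma numbers and you get two very different sets of pixel values - which is exactly the point: each display gets the rendition it can actually show.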



Admittedly, this research is of limited use for us consumers right now. But it clearly shows where HDR is heading.

You can check out the paper, watch the demo movie, and if you know how to compile things you can even play with the tonemapper yourself in the latest pfstools package.


Video manipulation on a whole new level

This is only borderline related to our HDR topic, but it's just too awesome to go unmentioned.




Not that any of these example clips would be impossible to create today. But they require a lot of manual work from a skilled VFX artist. In fact, the "We shot it wrong - you fix it!" category is the bread and butter of the VFX industry today. It's tedious, uninspiring, and sometimes even aggravating work. Any help from a push-button automatic is welcome. Currently Mokey is the closest production-ready tool, and at EdenFX we make extensive use of it. Mixed in with After Effects, Fusion, and some 3D reconstruction in LightWave, we get stuff done. But it's nowhere near as automatic as in the video shown above.

Read more about this amazing new algorithm on the project's homepage. The Spacetime Fusion technique described there, especially, is required reading for every developer making tonemapping software - it could be helpful for making After Effects plugins. (hint hint, nudge nudge)

Blochi

Siggraph Part 2: Everything goes Giga!




xRez shows off giant Yosemite panorama





A handsome bunch:
Greg Downing (left), Eric Hanson (right) and VFX-legend Cody Harrington (middle).

One thing is for sure: Their pano is bigger than yours.

The Yosemite Extreme Panoramic Imaging Project aims at nothing less than capturing the entire Yosemite Valley in one massive image: 70 volunteers captured 45 gigapixels of imagery, which were projected onto super-detailed geometry scans and then rendered into one long strip in Maya. That's crazy talk.

But the two masterminds behind xRez, Greg Downing and Eric Hanson, pulled it off, and their result is shown in the entrance hall at Siggraph. For some reason I was expecting them to wallpaper the entire outer walls of the conference center. Actually, what you see in the picture above is only one half of it, the North Rim. Eventually you'll be able to see it on a Microsoft Surface display in the National Park's Visitor Center.

More info on the xRez site, the HDView blog, and in this insanely cool movie:


But wait - there's more...

GigaPan is catching up






Gigapano photography set for less than $500.

This is another one of those projects I have been watching closely, but never got around to talking about. The GigaPan is a robotic panohead for regular snapshot cameras, and significantly cheaper than any other robotic head around. Still, it's incredibly stable and reliable, and the onboard software is a breeze to use. It was good enough for the xRez gang, so it will surely fit your needs as well.

Except that it's still in beta. I was fortunate enough to snatch a unit (here is my puny 0.5 GPix pano), but many others were not as lucky. So the GigaPan crew is digging through a backlog of 1000+ beta applications. The good news is that they have manufacturing almost sorted out and will soon expand into a second round of beta.

More eyecandy on gigapan.org, more info on the Global Connection Project site.


AutoPano 2 splits into Pro and Giga




Alexandre Jenny, original creator of AutoPano, made a totally unexpected appearance with a booth on the main exhibition floor. Way to go, Alexandre!

He's working really hard on v2.0, to be released in December. Then AutoPano will be split in three:
  • AutoPano Pro - the autopano we all learned to love.
  • AutoPano Server - automatic server-sided stitching and website display, especially useful for real estate agents in the field.
  • AutoPano Giga - this is where the money is in terms of HDR support.

Let me quickly elaborate on this:
You'll be able to stitch an HDR gigapixel panorama without shooting multiple exposures for each segment. Instead, you put your camera in auto-exposure mode, so every segment captures the most detail it can get in a best-shot fashion. AutoPano 2 Giga will compensate for the varying exposures by assigning each segment its proper luminance level. So the segment you shot of the dark ground will keep all its detail, but in the HDR it will end up much darker than the segment you shot of the sky. Actually, this is already possible with the current version of AutoPano Pro, but there are severe blending issues. Well, not anymore. The resulting gigapixel HDR will be the perfect feed for HDView.
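Here is my guess at what that exposure compensation amounts to, purely as an illustration in Python (nothing below is from the AutoPano code): each auto-exposed tile is scaled back onto a common relative-luminance scale using its EXIF exposure settings, so the stitcher can blend everything in linear light.

import math

def relative_luminance_scale(shutter_s, f_number, iso):
    # EV100 of the shot: the dimmer the exposure the camera chose, the larger
    # the factor needed to bring its linear pixel values back to scene luminance.
    ev100 = math.log2(f_number ** 2 / shutter_s) - math.log2(iso / 100.0)
    return 2.0 ** ev100

# A sky tile shot at 1/500s, f/8, ISO 100 gets scaled up far more than a ground
# tile shot at 1/30s, f/8, ISO 400 - which is what keeps the sky brighter than
# the ground in the assembled HDR, even though both tiles were exposed to mid-gray.
sky_scale    = relative_luminance_scale(1 / 500, 8.0, 100)   # ~32,000
ground_scale = relative_luminance_scale(1 / 30, 8.0, 400)    # ~480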

There is lots of other goodness across the board: GPU rendering, support for RAW and the Gigapan unit.... Check out the Feature Comparison and Upgrade Path, and if you feel experimental today you can also grab the first beta version. I know I will :)

Wow, that was a hell of a long blog post. Hope you enjoyed it all the same.

Blochi

Siggraph Part 1: Superimposing DR and Mayan Temples




Well, I'm still stuck in the office for the day, but that gives me a chance to write up some of my early discoveries.

Superimposing Dynamic Range


On the first page of the first chapter of the Handbook, I mentioned that you could expand the dynamic range of a book if you could somehow print a patch that is brighter than the paper it's printed on. Well, the smart researchers Bimber and Iwai from the Bauhaus-University Weimar and Osaka University did just that.




So, what is it that we're looking at?




Daisuke Iwai showing off the superimposed dynamic range.


They snap a picture and project it back onto the image itself. That may sound pointless, but it's real eye candy and could potentially have a huge impact on digital photo labs and medical imaging. It's also a little more sophisticated than I make it sound - there is realtime calibration going on (because camera and projector look at the surface from different angles), and instead of a book they have an ePaper display hooked up as the projection canvas.
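Just to illustrate the geometric half of that calibration, here is a minimal OpenCV sketch: the camera sees the surface from an off-axis angle, so its picture has to be warped into the projector's frame before it is thrown back. The point pairs and resolutions below are made up, and the real system does this continuously and adds photometric calibration on top:

import cv2
import numpy as np

# Where the projector's four corner markers land in the camera image
# (found by projecting a known calibration pattern and detecting it).
cam_pts = np.float32([[102, 88], [1180, 120], [1150, 790], [130, 820]])
proj_pts = np.float32([[0, 0], [1280, 0], [1280, 800], [0, 800]])

H, _ = cv2.findHomography(cam_pts, proj_pts)

def to_projector(camera_image):
    # Warp the captured photo into projector coordinates so the projected copy
    # lines up with the print it was taken from.
    return cv2.warpPerspective(camera_image, H, (1280, 800))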

Read their paper, or watch this movie (50 MB DivX).
Better yet, visit them in their corner in Hall H.


HDR timelapse panoramas with hotspots





Right around the corner is INSIGHT, a non-profit organization for heritage archival. They made some amazing interactive tours of Mayan and Egyptian temple ruins. It's really fascinating to see these bright people use the newest high-tech to research the oldest structures man ever made.

Two things are especially impressive about this. First, they have a novel Mac-based viewer application that links panos with hotspots and a map; it can pull content from online sources and even display panoramic timelapse videos. And you can pan in these videos. Totally awesome. The title is a bit misleading, though, because they do in fact show pre-tonemapped imagery. Not truly HDRI, but still awesome.


Just as awesome is their capturing device: They custom-built a robotic panohead with automatic exposure bracketing. Neato.
Check out the pano page from the Mayan Skies project, or the INSIGHT gallery.

Even better, go visit their booth and say hello! Both projects are in the back of the New Tech Showcase. Here's a map.

