Facebook 360 and RED reveal VR camera "Manifold"

While the Ricoh Theta V has become the workhorse for on-set HDR capture (another story to be told in a later blog post), it does sit squarely in the consumer segment of this rapidly expanding universe of VR cameras. The professional high-end segment, however, just got a new champion.


Behold the Manifold


This behemoth is the result of a collaboration between Facebook 360 and RED, and was unveiled yesterday at the Oculus Connect 5 conference. The Manifold contains 16 RED Helium 8K sensors, each behind a custom Schneider 8 mm f/4.0 180-degree fisheye lens. No price or release date has been announced yet.

With these specs there is so much overlap between all 16 lenses that it technically qualifies as a light field capture device. And that is exactly what it's aiming at. It's labelled a 6DoF VR camera and promises fully stereoscopic VR immersion, including positional parallax changes within the volume of the capture sphere. What that means for post-production professionals: you can extract a depth channel and point clouds, and insert all kinds of CG mayhem with minimal effort.

Here is a glimpse at the footage:


I'm sure we will hear much more from this camera in the future, and I can't wait to see what creative filmmakers will do with it. Hey RED - if you're looking for someone to give this baby a thorough VFX test drive, you know where to find me... ;)


First Look at the Prototype


In the meantime, Hugh Hou shared with us the very first look at the prototype, just as it is currently on display at the conference. Hugh runs the excellent YouTube channel CreatorUp, has tested pretty much all VR cameras on the market, and he's extremely giddy about this new toy!



Source: Announcement on Facebook 360 Blog


GoPro kills off Kolor

Alexandre Jenny and his company Kolor have always been a pillar of the panorama community. That era has just ended. Yesterday the mothership GoPro pulled the plug, as Alexandre announced in a heartbreaking Facebook post. The entire panorama community is in uproar.

That puts Autopano and Panotour – two of the most popular and pioneering software packages in this industry – into the dustbin of history. The download section is pretty much all that is left of the Kolor page, so for current users this is your last chance to download the final updates (also grab the free players before they're gone for good).

Kolor Download


Allow me a quick post mortem on these apps.


Autopano Pro / Giga

For the last 10 years there have been only two players in the professional stitching field: PTGui and Autopano. It was a friendly competition; almost by unspoken agreement they had different strengths. While PTGui lets you micromanage a single pano stitch to perfection, Autopano was the undefeated king when it came to making enormously huge and enormously many panoramas. There would be no such thing as gigapixel panoramas, had Autopano Giga not pioneered that field.
Here's a rundown of the unique features that no other stitcher (not even PTGui) has to offer:


Robot Import Wizard

To shoot extremely large panoramas you have to take many images with a long focal length. When the sky is visible, as it typically is in any outdoor shoot, this results in a very peculiar situation: you end up shooting tons of plain blue images that show no features at all and are impossible to align (as anyone who has ever assembled a puzzle knows - the last remaining pile is always the tiny snippets of blue sky). But since no sane person would ever attempt such a shoot without a robotic panorama head, Autopano Giga offered a sweet import wizard to define the initial alignment:

Autopano's Import Wizard allowed pre-alignment based on a robot's shooting pattern.


PTGui can't do that. Your best equivalent is to hack together an artificial template, or use Papywizard to create one.
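To make the idea concrete, here is a minimal sketch of what such a pre-alignment boils down to. This is a hypothetical stand-in (my own, not Kolor's code): given the robot's grid pattern and the lens FOV, every image receives an initial yaw/pitch, so the optimizer never has to guess where the featureless sky shots belong.

```python
def grid_prealign(n_cols, n_rows, hfov_deg, overlap=0.25):
    """Assign an initial (yaw, pitch) to each image of a robot's grid pattern.

    Assumes the robot shoots row by row, left to right, top to bottom,
    with a fixed overlap between neighboring frames.
    """
    step = hfov_deg * (1.0 - overlap)   # angular step between shots
    poses = []
    for row in range(n_rows):
        for col in range(n_cols):
            yaw = col * step
            pitch = (n_rows - 1) / 2.0 * step - row * step
            poses.append((yaw, pitch))
    return poses

# e.g. a 20x5 grid shot with a long lens covering ~10 degrees horizontally
for i, (yaw, pitch) in enumerate(grid_prealign(20, 5, 10.0)[:3]):
    print(f"image {i}: yaw={yaw:.1f}  pitch={pitch:.1f}")
```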

Auto-Detected Batch Stitching

When you spend a weekend shooting away, you come home with a folder of hundreds or thousands of images. Autopano always had the ability to chew through the whole folder automatically, detect which images belong together, and do an initial batch-stitch of all the panoramas it could find. Although not always perfect, it's an incredible timesaver. For most users this was the defining advantage.

Single click, and a few hours later it has stitched 32 panoramas.


Coincidentally, PTGui introduced a similar feature in its latest version 11 update with the Batch Builder. It's not quite the same yet, as PTGui is traditionally a single-project application and does not have such a slick way of presenting multiple stitching projects at a glance.
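How does a stitcher even figure out which images belong together? One plausible heuristic - and I should stress this is just my own illustration, not Kolor's actual algorithm - is to cluster by capture time: frames shot within a few seconds of each other almost certainly belong to the same panorama.

```python
from datetime import datetime, timedelta
from pathlib import Path
from PIL import Image  # Pillow

GAP = timedelta(seconds=20)   # arbitrary threshold between two panoramas

def capture_time(path):
    exif = Image.open(path).getexif()
    # 0x0132 = DateTime; real code would prefer DateTimeOriginal from the
    # Exif sub-IFD, but this keeps the sketch short.
    return datetime.strptime(exif[0x0132], "%Y:%m:%d %H:%M:%S")

def detect_panoramas(folder):
    files = sorted(Path(folder).glob("*.jpg"), key=capture_time)
    groups, current, prev = [], [], None
    for f in files:
        t = capture_time(f)
        if prev is not None and t - prev > GAP:
            groups.append(current)     # gap detected: start a new panorama
            current = []
        current.append(f)
        prev = t
    if current:
        groups.append(current)
    return groups
```

The real detection was surely smarter than that, but timestamps alone already get you surprisingly far on a weekend's worth of shooting.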

Local Optimizer

The optimizer is the algorithm that turns detected features into aligned images. And when you have more than 1000 images, that can take a long time and cause errors to accumulate. With Autopano's unique ability to run the optimizer on selected areas only, you could quickly solve the trickiest stitches by incrementally pushing the stitching errors out of your panorama. Just like pushing out the air bubbles when applying a large car sticker.
You could do all this in Autopano's super comfortable panorama editor.

Local Optimizer in Autopano's Editor
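Here's a toy numpy/scipy sketch of the concept, assuming nothing about Kolor's actual solver: a row of images with unknown yaw angles, linked by noisy pairwise matches. The "local" part simply means that only a chosen subset of images gets re-solved, while all the others stay pinned in place.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
true_yaw = np.arange(10) * 30.0                        # 10 images, 30 deg apart
links = np.diff(true_yaw) + rng.normal(0, 0.2, 9)      # noisy pairwise matches
yaw = true_yaw + rng.normal(0, 3.0, 10)                # rough initial alignment

def residuals(free_vals, free_idx, yaw_all):
    y = yaw_all.copy()
    y[free_idx] = free_vals
    return np.diff(y) - links      # mismatch between alignment and matches

# "Local optimizer": only re-solve images 4..7, everything else stays pinned.
free_idx = np.arange(4, 8)
fit = least_squares(residuals, yaw[free_idx], args=(free_idx, yaw))
yaw[free_idx] = fit.x
print("link errors:", np.round(residuals(fit.x, free_idx, yaw), 2))
```

Whatever error remains piles up at the edges of the solved region, which is exactly the air-bubble effect: the next local pass pushes it further out.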


Rest in peace, Autopano. You will be missed.


Panotour

This was the most sophisticated app for generating virtual tours. It had a slick nodal interface to combine panoramas with hotspots, fine-tune the online presentation, and even generate stand-alone runtimes for kiosk applications. Although I never used it myself, I hear that this was the app all competitors were trying to measure up against.

On the technical side, Panotour Pro was actually a backend user interface for the Krpano viewer. Klaus, the independent creator of Krpano, has some good advice in his forum:
- Panotour Pro users should update to the latest version, which exposes the templates.
- That way Klaus can deliver updated templates as Krpano evolves.
- You should export the bundled krpano license, as long as the Kolor servers are still online.

Myself, I do like hacking code directly and have always used the naked Krpano here on this website. And Klaus is in the process of making his own GUI for linking tours together. It's certainly no match for Panotour Pro yet, but it's usable and functional.

Other alternatives you might look into:

- Pano2VR is very much alive, super-powerful, and even offers some extra functionality over Panotour. This is what I would use (if I were into GUIs).
- 3DVista is another popular contender in this field, and they were very quick to announce a cross-upgrade deal.


The other Kolor apps

- Autopano Video was the pioneering app for stitching VR video. It has been superseded under GoPro ownership by GoPro Fusion Studio (although without any of the core coders, that one is likely just dying a slightly slower death).
- After Effects now has integrated VR tools that make the GoPro VR Plugins mostly obsolete.
- Nuke's Cara VR includes several nodes based on Autopano. You likely won't see any updates here, either.
- Not to speak of all the cloud services running Autopano Server, which, to my knowledge, power most of the OEM apps bundled with 360 VR cams.


It's really a disruptive blow to the panorama community and the VR industry. Kolor was a driving factor in both, always prominent at conferences, always pushing the boundaries and pioneering new fields. It's hard to think of a company that was more embedded in this industry. They even organized annual competitions and released gorgeous panobooks with the winning pictures (I was proud and deeply humbled to be asked to be a judge for the 2012 book).


My sincere thanks to Alexandre Jenny and his Kolor team. You had the best run possible.
Best of luck for all your future endeavors.
Keep innovating!


Neural Networks for Generating HDRIs

Deep Learning is the new wizard tech that turns previously unsolvable problems into merely tricky ones. It gives computers the ability to come up with a plausible guess - a feat that previously required human intuition and experience.

One problem that requires VFX artists to guess all the time is matching real-world lighting just by looking at the filmed footage (a.k.a. the plate). You're probably saying "Wait - that's exactly what on-set HDRIs are for!" and you would be correct. But the real world is messy. Maybe the director never got the memo that he should invite a VFX supervisor to the set, maybe said supervisor was too distracted chatting up the costume designer instead of shooting HDR images, or maybe the director is building his entire commercial clip out of stock footage. Maybe the HDRIs were even shot on set, but somehow got lost in the complicated jungle of client-vendor communication protocols, involving multiple levels of middlemen. The sad fact of life is that a scary number of CG artists still have to match lighting by hand, to this day.

Now what if... a neural network could automatically generate a plausible HDR image directly from the plate?

NeuralVFX



A rogue VFX supervisor under the pseudonym NeuralVFX figured this was worth a shot. He managed to train a neural network by creating a huge pile of randomized HDRIs and a corresponding render for each. Based on those example pairs, the magic black box learns to predict an HDR image when you show it just a render by itself.
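In code, that training recipe boils down to something like the following. This is my own bare-bones sketch in PyTorch with dummy data - NeuralVFX's actual architecture is surely more elaborate:

```python
import torch
import torch.nn as nn

# A tiny encoder-decoder that maps a render to a (low-res) lat-long HDRI.
class Render2HDRI(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),
        )

    def forward(self, render):
        return self.net(render)   # predicts log-radiance, see loss below

# Dummy stand-ins for the randomized (render, HDRI) example pairs.
pairs = [(torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64) * 100.0)
         for _ in range(8)]

model = Render2HDRI()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

for render, hdri in pairs:
    pred = model(render)
    # Compare in log space, so the sun doesn't drown out everything else.
    loss = nn.functional.mse_loss(pred, torch.log1p(hdri))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```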


Pretty remarkable, huh? It really picked up on the overall shape of the lighting. The hues are wildly off - his network seems to prefer neon green and pink, like a true 80's retro hipster. Well, that might just be a juvenile phase, possibly caused by feeding the algorithm too many psychedelic example HDRIs during training. But the principle itself looks promising.

The third step is the real deal: after training and validation, it's time to let it guess the lighting of a plate! And this is where Mr. NeuralVFX really caught my attention:


You can see a much better explanation of the process, along with a few more example images, on NeuralVFX's original blog post:

http://neuralvfx.com/lighting/neural-network-for-generating-hdris/


JFL at Université Laval


It's amazing what a dedicated tinkerer can do with modern AI tech.
But what about serious research professionals? Like, for example, Associate Professor Jean-François Lalonde from the Computer Vision and Systems Lab at Université Laval in Quebec:





This is a problem JFL et al. have been working on as far back as 2009. He started with algorithms that detect and analyze shadows in an image, taking cues from the brightness gradient in blue skies as well as from lit/unlit areas of vertical surfaces. His publication list contains several gems, all worth reading. But in his more recent work he takes the same brute-force approach as above: training a neural network to do its magic.
With a few differences:

  • Training is done with a database of 2,100 real HDRIs and 14,000 real LDR panos.
  • Instead of Render+HDRI pairs, he trains on cropped Plate+HDRI pairs.
  • RGB color and light intensity maps are predicted separately.
  • Optionally, an existing HDRI can be warped to compensate for spatial variations in lighting (this point certainly warrants a deeper examination in a later blog post).


Because the AI now has a very clear understanding of what real HDRIs look like, the predicted results are rather spectacular:



They even confirmed the stability of this trained network by downloading a bunch of random stock photos and putting stuff into them - lit fully automatically by the predicted HDRI:


Go check out the project page, it's filled with goodies! You get the paper, slides of the talk, many more examples - even the training set is made available for all the tinkerers.
In a more recent move, the group also published their code on GitHub. And for the artist types without coding experience of their own, there is also an Online Demo of their AI. Just upload your own plate and get a predicted HDRI for download. It's technically only certified for indoor scenes (as this is what the AI was trained on).

Let's give it a test drive


I figured the Thunderdome is technically still considered indoor, right? I mean, it has neither doors nor walls, but that's kinda the point of the Thunderdome!!!
So here's what it made for me:



Hm. Looks a bit potato quality, but that should only really matter for reflective materials. Smart IBL has shown that tiny blurred images are perfectly fine for diffuse lighting, so let's bring it into Modo. Turns out the HDRI ended up in camera space, of course. That means it takes a bit of counter-rotation to compensate for the camera angle, as well as a little extra intensity boost. But that's pretty standard procedure; we have to make such little tweaks with any HDRI.
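For the script-minded: both tweaks are trivial to apply outside of Modo as well. A quick numpy sketch, assuming a lat-long HDRI, an EXR-capable imageio backend, and made-up filenames:

```python
import numpy as np
import imageio.v3 as iio   # needs an EXR-capable plugin installed

hdri = iio.imread("predicted_hdri.exr")    # hypothetical file, float RGB

# Counter-rotation: in a lat-long map, a horizontal pixel shift is a yaw turn.
yaw_degrees = 45.0                # whatever compensates for the camera angle
shift = int(hdri.shape[1] * yaw_degrees / 360.0)
hdri = np.roll(hdri, shift, axis=1)

# Intensity boost: +1 stop of exposure is simply a factor of 2 in linear light.
hdri *= 2.0 ** 1.0

iio.imwrite("predicted_hdri_adjusted.exr", hdri.astype(np.float32))
```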



Not bad at all. No extra lights in the scene. The color tone and backlighting are pretty well represented. It's not a very scientific test, as my troll Mica is naturally a bit orange and his mossy coat does not take on diffuse light too well. Lighting the scene by hand I would probably go for starker contrasts and harsher shadows. But it would probably take me longer. For an automatic result, that's nothing to sneeze at.

Definitely good enough to run a quick comp:

Crushin' it at the Thunderdome

So yeah, I'm a believer now. Neural networks can indeed hallucinate useful HDRIs into existence.


Kevin Chen shows HDR Imaging in Microsoft Excel


This is a real gem!
Not only is Kevin Chen's talk at !!Con 2017 completely hilarious, it's also packed with an extremely clear explanation of some core concepts of HDR imaging. Watch it and you will learn about recovering the camera curve, why that is even necessary, and how to assemble pixel values at real-world brightness levels from multiple exposures. Altogether it's a pretty comprehensive look under the hood of an HDR software. Which, in this case, is just a Microsoft Excel spreadsheet!
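If you'd rather read code than spreadsheet cells, the merging step he builds up to looks roughly like this in numpy. A minimal sketch, assuming the camera curve has already been removed (i.e. the pixel values are linear):

```python
import numpy as np

def merge_to_hdr(images, exposure_times):
    """Weighted average of linearized exposures, scaled to radiance."""
    acc = np.zeros_like(images[0], dtype=np.float64)
    weight_sum = np.zeros_like(acc)
    for img, t in zip(images, exposure_times):
        w = 1.0 - np.abs(img - 0.5) * 2.0   # hat weight: distrust the extremes
        acc += w * (img / t)                # each shot's radiance estimate
        weight_sum += w
    return acc / np.maximum(weight_sum, 1e-8)

# Toy example: three fake exposures at 1/4s, 1s and 4s
times = [0.25, 1.0, 4.0]
imgs = [np.clip(np.random.rand(4, 4, 3) * t, 0.0, 1.0) for t in times]
hdr = merge_to_hdr(imgs, times)
```

The hat weighting is the classic Debevec-style trick: pixels near pure black or pure white carry no reliable information, so their vote counts less.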

Grab a beer and enjoy this 11-minute presentation:


Hats off to Kevin and his wonderful ways of demystifying the process. He actually makes this monster .xlsx spreadsheet available on his website (along with the slides).


Cryptomatte - OpenEXR's hot new de-facto feature

If you find it silly to be a fanboy of a file format, then count me guilty as charged: I love OpenEXR. It's elegant yet powerful, specifically tailored to the needs of the VFX industry; it's the unsung hero of the silver screen. If you read my book you already know about the numerous amazing extensions: tiled EXRs for working with huge images, stereoscopic SXR files, or unlimited embedded layers for extra flexibility in compositing. Truly remarkable. Cryptomatte is the latest - although somewhat unofficial - extension.

Cryptomattes solve the eternal compositing question: How can you select a particular material/object in a rendered image?

You see, up until now everybody was depending on Material ID or Object ID passes, more commonly known as the "clown pass". By rendering a flat color for each material, the hope was to extract a selection mask for targeted adjustments. That strategy works great for texturing (e.g. in Substance Painter), because there the mask only needs to provide a starting point for further refinement. But for compositing, such ID passes never worked reliably.

Example of a classic ID pass from Trollbridge


The trouble is, on the border of two surfaces you get antialiasing, and that naturally graduates through all the colors in between, which may happen to include the colors of other masks. This cross-talk between colors makes it nearly impossible to extract precise masks for fine structures and details. Even worse, in real production scenes you may well have hundreds of materials and objects, and then the ID colors end up very close to each other - completely defeating the purpose of easy selection.

Why traditional ID passes never really worked.
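You can watch the cross-talk happen in plain numbers. A tiny demo with made-up ID colors:

```python
import numpy as np

# Two adjacent ID colors, red and green, blended by antialiasing.
# The in-between pixels match neither mask cleanly, and at 50% coverage
# the pixel collides head-on with a hypothetical third ID of (0.5, 0.5, 0).
red = np.array([1.0, 0.0, 0.0])
green = np.array([0.0, 1.0, 0.0])
for coverage in (1.0, 0.75, 0.5, 0.25, 0.0):
    pixel = coverage * red + (1.0 - coverage) * green
    print(f"{coverage:5.0%} red -> pixel {pixel}")
```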


So the whole concept of rendering compact ID passes has always been questionable at best. In real life we either constrained ourselves to individual alpha masks on demand (only when the compositor asks for them), or resorted to manually bundling up 3 masks at a time in RGB layers. Tedious monkey work for the CG artist, which also inflates the file size, yet still leaves a non-zero chance of receiving panicked phone calls with matte requests from the comp department.

Cryptomatte solves all that.


And best of all, it does so fully automatically. By using an auto-generated combination of ID and coverage masks, it can deal with subpixel accuracy, motion blur, and transparencies, and it automatically packs all that data in the most efficient form. More importantly: it also includes a clever hashing mechanism to preserve the names of materials and objects. That means, instead of picking a color in compositing, you can literally pick materials from a list!
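Here's the gist of that name-hashing mechanism in Python, as I understand the published spec. The MurmurHash3 step and the exponent fix-up come straight from there; the rank_pairs input is my own simplification of the actual EXR channel layout:

```python
import struct
import mmh3          # pip install mmh3
import numpy as np

def name_to_id(name):
    """Hash an object/material name to the float ID stored in the EXR."""
    bits = mmh3.hash(name, signed=False)
    exponent = (bits >> 23) & 0xFF
    if exponent in (0, 255):      # avoid inf/nan/denormal float values
        bits ^= 1 << 23
    return struct.unpack("<f", struct.pack("<I", bits))[0]

def extract_matte(rank_pairs, name):
    """Sum coverage wherever the per-pixel ID matches the wanted name.

    rank_pairs: list of (id_image, coverage_image) numpy arrays,
    i.e. the decoded cryptomatte rank channels.
    """
    target = np.float32(name_to_id(name))
    matte = np.zeros_like(rank_pairs[0][1], dtype=np.float32)
    for id_image, coverage_image in rank_pairs:
        matte += np.where(id_image == target, coverage_image, 0.0)
    return matte
```

Because the ID is derived deterministically from the name, every app along the pipeline agrees on which float value means which object - no color codes ever need to be exchanged.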


Originally developed by Jonah Friedman & Andrew C. Jones at Psyop, this idea was just too good to remain a secret. They published their tech at SIGGRAPH 2015, proclaimed it an Open Source project, and it has quickly turned into a buzzing grassroots movement to integrate it everywhere. By now Cryptomatte is the de-facto standard for ID mattes; it is readily supported in V-Ray, Houdini, Blender Cycles, RenderMan, Redshift, Clarisse, Arnold, Nuke, Fusion, Flame, and After Effects. The latest addition is Lightwave 2018 support, via Michael Wolf's EXRTrader plugin:


Big thanks to all involved! Even though my beloved Modo is not on that list yet, I'm very happy for all my fellow CG artists and compositors, whose lives have just become a little bit easier. It's also a shining example of a good idea sweeping through the industry, fueled by the power of Open Source, and establishing a de-facto standard solution for a previously painful problem. Go Cryptomatte, Go!

Grab the plugins for your pipeline from https://github.com/Psyop/Cryptomatte.

And if you're in Vancouver for SIGGRAPH2018, don't miss your chance to meet the creators at the first ever Cryptomatte BOF meetup:
"Cryptomatte - Present and Future Uses" / Monday August 13, 3:30pm-5pm / Vancouver Convention Centre, East Building, Meeting Room 11


Soooo close!

Troll Bridge is racing to the finish line, oh sweet victory, it's so close I can almost touch it! We called pencils down for compositors last week, and now it's just me and my trusty comp supervisor Addison polishing up the last straggler shots. Of the 350 shots we're down to 5, and those only need final finishing touches before they get handed off to the Grading Department (that is, to Tim).

As a little teaser, here is a GIF of my favorite family scene.


If you're curious why we're doing all this, and what kind of person would volunteer for such a mad adventure - here is a rather extended Making-Of show. It was filmed at a Gnomon Event many many moons ago, back when we were still in the middle of animation.
