At the Technology Summit for Cinema, Frederik Zilly of Fraunhofer IIS told a very interesting story about the making of their first movie using light field production techniques. Over the last couple of years, his team has developed technology for light field capture and processing, using a 4×4 array of cameras to create short stop-motion films. For the latest project, they developed a script and shot a few minutes on a real set with actors, to emulate a more realistic movie production process. (We interviewed Zilly at IBC: see Fraunhofer Explains its Virtual Camera Technology at IBC 2014.)
Now, there are a number of light field visual effects that can be deployed, including the ability to change the depth of field, the focus point, the point of view and more. The goal was to achieve these desired effects within a normal post-production process, using filters.
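For readers curious about how a camera array enables refocusing after the fact, here is a minimal sketch of the classic shift-and-add approach, in which each camera's image is shifted in proportion to its position in the array and the results are averaged. This is an illustration of the general principle, not Fraunhofer's actual implementation; the integer-pixel shifting and the refocus parameter are simplifications.

```python
import numpy as np

def shift_and_add_refocus(views, positions, alpha):
    """Synthetically refocus a light field after capture.
    views:     list of HxWx3 float images, one per camera
    positions: list of (u, v) camera coordinates in the array plane
    alpha:     refocus parameter; varying it moves the plane that
               ends up in focus (0 keeps the original alignment)
    """
    acc = np.zeros_like(views[0], dtype=np.float64)
    for img, (u, v) in zip(views, positions):
        # Integer-pixel shift for simplicity; a real pipeline would
        # use sub-pixel interpolation and crop the wrapped borders.
        dx, dy = int(round(alpha * u)), int(round(alpha * v))
        acc += np.roll(np.roll(img, dy, axis=0), dx, axis=1)
    return acc / len(views)
```

Objects whose disparity matches the chosen shift reinforce each other across views and appear sharp; everything else is averaged into blur, which is how the depth of field and focus point can be changed in post.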
The short, called “Coming Home,” is the story of a man returning home from work and enjoying a nice pot of tea. The team was careful to build sequences into the film that showcased all the capabilities enabled by the light field capture and processing solution.
Zilly started by explaining that there are trade-offs in choosing how many cameras to use in the array and how densely to pack them. Large arrays offer more viewpoint flexibility and better image quality, but increase storage and processing demands; fewer cameras reduce those demands. The cameras can also be spread farther apart, but some image quality is lost. Ultimately, the team chose a 3×3 camera array, with each camera capturing at 1920×1080. The array was mounted in a rig with a beamsplitter, with a Sony F3 acting as the hero camera (emulating a typical production acquisition setup).
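To get a concrete sense of why storage is a constraint, a quick back-of-the-envelope calculation helps. The capture format parameters below (10-bit 4:2:2, 24 fps) are our own assumptions, not figures from the talk:

```python
# Rough uncompressed data-rate estimate for the 3x3 array.
# Assumed parameters (not from the talk): 10-bit 4:2:2, 24 fps.
cameras = 9
width, height = 1920, 1080
bits_per_pixel = 20          # 10-bit luma + 10 bits shared chroma (4:2:2)
fps = 24

bytes_per_sec = cameras * width * height * bits_per_pixel * fps / 8
print(f"{bytes_per_sec / 1e9:.1f} GB/s")   # ~1.1 GB/s for nine cameras
```

Even under these modest assumptions, nine HD streams approach 1.1 GB/s uncompressed, which is why adding cameras quickly strains storage and other resources.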
There are several steps in the capture process. The first is multi-camera rectification, which aligns all of the images to a common reference frame and includes geometric corrections. The second is the generation of disparity and depth maps from the overlapping images; the disparity parameters can be adjusted through a simple GUI.
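As an illustration of the disparity step, here is a minimal sketch using OpenCV's semi-global block matcher on one rectified pair from the array. The file names and matcher parameters are placeholders for illustration, not the values Fraunhofer used:

```python
import cv2

# Load one rectified pair from the array (hypothetical file names).
left = cv2.imread("cam_center.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("cam_right.png", cv2.IMREAD_GRAYSCALE)

# Semi-global matching; parameters here are illustrative defaults.
matcher = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=128,       # must be divisible by 16
    blockSize=5,
    P1=8 * 5 * 5,             # penalty for small disparity changes
    P2=32 * 5 * 5,            # penalty for large disparity jumps
)
# compute() returns fixed-point disparity scaled by 16.
disparity = matcher.compute(left, right).astype("float32") / 16.0

# Depth follows from disparity given the focal length f (in pixels)
# and the camera baseline b (in meters): depth = f * b / disparity.
```

With nine cameras, each view can be matched against several neighbors, which is what makes the resulting depth maps robust enough for the virtual camera renders that follow.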
Next came color grading, which required matching the color temperature and luminance of the nine cameras. Finally, the virtual cameras were rendered to produce stereo pairs for display.
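One simple way to perform that kind of matching, sketched below, is to align each satellite camera's per-channel statistics to a chosen reference camera. This is a crude stand-in for a full colorimetric calibration, shown only to make the idea concrete:

```python
import numpy as np

def match_colors(src, ref):
    """Match one camera's image to a reference camera by aligning
    per-channel mean and standard deviation (a rough approximation
    of matching color temperature and luminance)."""
    out = src.astype(np.float64)
    for c in range(3):
        s, r = out[..., c], ref[..., c].astype(np.float64)
        out[..., c] = (s - s.mean()) / (s.std() + 1e-6) * r.std() + r.mean()
    return np.clip(out, 0, 255).astype(np.uint8)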
One of the visual effects they wanted to demonstrate was interactive relighting, which allows virtual light sources to be positioned in the scene. This can only be done once a full 3D model with textured video elements is available, and it was successfully shown in a couple of sequences.
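In principle, once per-pixel 3D positions and surface normals have been recovered from the depth maps, a virtual light reduces to standard shading. Here is a minimal Lambertian sketch; the normals, 3D points, and light position are assumed inputs derived elsewhere, and this is not Fraunhofer's relighting method:

```python
import numpy as np

def relight(albedo, normals, points, light_pos, intensity=1.0):
    """Shade a reconstructed scene with one virtual point light.
    albedo:    HxWx3 texture recovered from the video
    normals:   HxWx3 unit surface normals (from the depth maps)
    points:    HxWx3 3D position of each pixel
    light_pos: 3-vector, virtual light position in scene space
    """
    to_light = light_pos - points
    dist = np.linalg.norm(to_light, axis=-1, keepdims=True)
    to_light /= np.maximum(dist, 1e-6)
    # Lambertian term with inverse-square distance falloff.
    ndotl = np.clip((normals * to_light).sum(-1, keepdims=True), 0, None)
    return albedo * intensity * ndotl / np.maximum(dist**2, 1e-6)
```

Moving `light_pos` between frames is what makes the relighting interactive: the geometry stays fixed while only the shading pass is recomputed.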
Zilly then showed the movie, running it a second time to point out where in each part of the film the various techniques were applied. (We managed to take our own video of the movie: see Coming Home from Fraunhofer.)