Light Field Lab, Inc. Taking Ecosystem Approach

NAB 2017 marked the official debut of a new company: Light Field Lab. It is composed of the top engineers from Lytro who led the light field cinema camera initiative that debuted at NAB 2016 with much fanfare and impressive performance. The team now has a very ambitious goal: to develop light field display solutions, including the surrounding ecosystem elements required for successful adoption of holographic technologies.

We spoke with Jon Karafin, who led the Lytro cinema camera development and is now the CEO of Light Field Lab. He is joined by co-founders Brendan Bevensee, CTO, and Ed Ibe, VP of Engineering – both from the Lytro cinema leadership team.

Karafin explained that they need to take a full ecosystem approach because all the components are still emerging and there are no standards for a functional light field capture-to-display solution. “This is our long-term goal and we will be working with many partners to help build up the ecosystem,” commented Karafin. He then laid out his vision for a holographic display experience, which consists of four key areas: Display, Format, Universal API and Interactivity.

The display activity is the clear focus of efforts at this time, with a prototype being developed on seed capital to prove the technology and help secure Series A funding. Karafin notes that they have filed over a dozen patents so far and have validated the physics, materials and components, so they are confident that the full system will work. He was reluctant to describe the display solution in any detail or provide specifications for the prototype, as they are still refining the architecture for the final spec, although the final form factor will be similar to that of a traditional flat panel television.

In the Lytro cinema camera, a large main lens defines the capture volume, followed by an image sensor with a microlens array. The display solution is essentially the inverse of this: a series of optical elements guides rays of light, with the human eye now acting as the counterpart to the “main lens.” The goal is to recreate the rays of light reflecting from the captured scene so that the displayed image appears “as real as the original object.”
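
To make the “recreate the rays” idea concrete, below is a minimal sketch of the two-plane light field parameterization commonly used in the literature; the function and parameter names are illustrative assumptions, not Light Field Lab’s actual interfaces. A capture system records the radiance along each such ray, and a light field display emits light back along the same rays.

```python
import numpy as np

def ray_from_two_planes(u, v, s, t, plane_gap=1.0):
    """Two-plane parameterization L(u, v, s, t) of a light field ray.

    (u, v) is the ray's intersection with one reference plane and
    (s, t) its intersection with a second, parallel plane a distance
    `plane_gap` away. A camera records radiance for each such ray;
    a light field display plays the same rays back so the scene
    appears "as real as the original object".
    """
    origin = np.array([u, v, 0.0])          # point on the plane z = 0
    through = np.array([s, t, plane_gap])   # point on the plane z = plane_gap
    direction = through - origin
    return origin, direction / np.linalg.norm(direction)
```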

Characterizing the performance of a light field display is also an area that needs development (FoVI 3D has an AFRL contract to develop some methodology). Karafin simplified the characterization into three metrics: Ray Density, View Volume and 2D Equivalent Resolution.

Ray density is the key to image fidelity: high density is needed for high image resolution and to eliminate any vergence-accommodation mismatch. Otherwise, the image degenerates into a multi-view image with low 2D resolution. View volume describes the range of depth that is possible, how much of the captured scene you can look at, and how far the viewer can move within the space; the concept is the same for a holographic display as for a VR headset, for example. The 2D equivalent resolution applies to any slice in space within the viewing volume and may vary with viewing position, but higher is clearly better than lower.
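
As a rough illustration of how the three metrics relate, consider the sketch below. The definitions are simplified assumptions for back-of-the-envelope comparison, not Light Field Lab’s published formulas, and the example numbers are made up (loosely based on the prototype figures mentioned later in this article).

```python
from dataclasses import dataclass

@dataclass
class LightFieldPanel:
    total_rays: int     # independently addressable rays
    width_mm: float     # emitting surface width
    height_mm: float    # emitting surface height
    views_x: int        # distinct horizontal ray directions per point
    views_y: int        # distinct vertical ray directions per point

    def ray_density(self) -> float:
        """Rays per square millimetre of panel surface."""
        return self.total_rays / (self.width_mm * self.height_mm)

    def equivalent_2d_resolution(self) -> float:
        """Pixels available to any one 2D slice of the view volume,
        assuming rays are divided evenly among the view directions."""
        return self.total_rays / (self.views_x * self.views_y)

# Example with assumed numbers, not Light Field Lab's specs:
panel = LightFieldPanel(total_rays=150_000_000,
                        width_mm=127, height_mm=76,   # ~5" x 3"
                        views_x=100, views_y=100)
print(f"{panel.ray_density():,.0f} rays/mm^2")        # ~15,541
print(f"{panel.equivalent_2d_resolution():,.0f} px")  # 15,000 per slice
```

Under this simplified model, the trade-off is immediate: for a fixed ray budget, adding view directions to enlarge the view volume reduces the 2D equivalent resolution of every slice, which is why ray density ultimately governs fidelity.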

“While most of our initial effort is on a light field holographic display solution, we are motivated by the vision of the Holodeck and believe the display cannot require head or eye-mounted accessories to achieve compelling entertainment experiences,” said Karafin.

The other components of the ecosystem are being worked on in parallel. In the format area, they believe a robust encode/decode solution is needed that delivers the information required by the display processor. Fortunately, there is industry activity here, with MPEG and the JPEG Pleno group working on a call for proposals to begin evaluating light field encode/decode methodologies.

A capture API is needed to gather the capture information for delivery to the encoder, while a display API will be needed to work with the display processor to create the light field image based upon the capabilities of that particular light field display. Together, these APIs would allow both content creators and device manufacturers simplified integration into their products. The recent Streaming Media for Field of Light Displays workshop, organized by Insight Media and Third Dimension Technologies, explored some of the efforts underway here.
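
The split between a capture-side and a display-side API might look something like the sketch below. Since no such standard exists yet, the class and method names here are purely hypothetical.

```python
from abc import ABC, abstractmethod

class CaptureAPI(ABC):
    """Capture side: gathers scene and ray metadata from a camera or
    renderer and hands a display-agnostic description to the encoder."""

    @abstractmethod
    def export_light_field(self) -> bytes: ...

class DisplayAPI(ABC):
    """Display side: exposes the capabilities of one particular panel
    so the display processor can build an image matched to it."""

    @abstractmethod
    def capabilities(self) -> dict:
        """E.g. ray count, view volume, 2D equivalent resolution."""

    @abstractmethod
    def present(self, encoded_light_field: bytes) -> None:
        """Decode the stream and drive the panel."""
```

The point of the split is that content is authored once against the capture side, while each display vendor implements the display side for its own hardware.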

The interactivity part of the ecosystem will take the most time to develop. Here, Karafin envisions the ability to project a localized sound field not only for audio information, but also to create haptic sensations.

Following our pre-NAB interview, Karafin presented the above information at the “Future of Cinema” event with some additional details. In particular, he showed a series of rendered video clips generated in multiple ways, specifically to highlight differences in rendering fidelity and to demonstrate how Light Field Lab’s technologies will address the limits of commercial Internet bandwidth. As a reference, he showed a scene generated with a dense camera array (137 light field samples) with deep image structures, serving as the ‘ground truth’ against which all the following methodologies were compared. The clip shows that as the virtual point of view changes (representing the exact image the viewer would see from either a VR/AR HMD or a light field display when moving within the view volume), differences in image quality emerge. In particular, Karafin pointed to challenging elements such as maintaining transparency, rendering specular highlights, accurate edges, reflections, refractions, freedom from artifacts and accurate depth.
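
The kind of fidelity comparison Karafin described, rendering the same virtual viewpoint from the dense reference array and from each candidate method and then scoring the difference, could be approximated with a simple per-view PSNR, sketched below. This is a generic stand-in metric, not necessarily the one used in the demonstration.

```python
import numpy as np

def view_psnr(ground_truth: np.ndarray, reconstruction: np.ndarray) -> float:
    """PSNR between a ground-truth rendering and a reconstructed view.

    Render the same virtual viewpoint from the dense reference array
    and from the method under test, then score the difference.
    Assumes float images in [0, 1] with identical shapes.
    """
    mse = np.mean((ground_truth - reconstruction) ** 2)
    if mse == 0:
        return float("inf")     # identical images
    return 10 * np.log10(1.0 / mse)
```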

The other approaches, like depth estimation algorithms or horizontal-only sparse arrays, do not match the fidelity of the reference. Karafin then showed their hybrid approach, which he described as “Deep Image Sparse Array with Material Rendering.” It significantly compresses the holographic data with minimal degradation to the resulting holographic imagery by decoupling the elements that are challenging for light field reconstruction from the more compressible samples, leveraging both real-time display processing and off-line encoding. The methodology is expected to be a viable way to stream holographic data over the emerging 5G networks, although more development is required on the real-time processing architecture.
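
In spirit, the decoupling works as sketched below. This is an illustrative reconstruction of the idea as described, not Light Field Lab’s actual codec; zlib merely stands in for a real off-line encoder, and the sample schema is assumed.

```python
import json
import zlib

def encode_hybrid(samples):
    """Split light field samples by how hard they are to reconstruct.

    `samples` is a list of dicts such as
        {"rgb": [r, g, b], "depth": z, "material": "diffuse"}.
    Diffuse content compresses well, so it goes through an off-line
    codec; view-dependent content (speculars, transparency, refraction)
    is kept as a sparse deep-image layer with material parameters that
    the display processor re-renders in real time.
    """
    diffuse = [s for s in samples if s["material"] == "diffuse"]
    view_dependent = [s for s in samples if s["material"] != "diffuse"]

    return {
        "diffuse_stream": zlib.compress(json.dumps(diffuse).encode()),
        "sparse_material_layer": view_dependent,   # rendered on-device
    }
```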

In a second session during NAB, Karafin recapped some of the above information and also tried to dispel some myths about light fields: a Pepper’s ghost display is not a light field or holographic display, nor are stitched VR images. Once images are stitched for presentation on a VR headset, all light field properties are lost. And today’s VR and AR headsets do not have true holographic displays, even if that term is sometimes used to describe them.

While Karafin can’t say much yet about their display, he did reveal that the first prototype will be a 5” x 3” flat panel offering 150 megapixels of resolution, built to validate all system and production components prior to starting production on the full-scale system. It is anticipated to be available for demonstrations in Q1’18. Later in 2018, they plan to start production of the “SxGP” development kits and display, which will have multiple gigapixels of resolution to meet the full requirements of a compelling holographic experience and be monitor/TV sized. This is anticipated to become the highest-resolution light field display developed to date. – CC