
Highlights of NAB 2017

Our special reports on NAB 2017 also include coverage of the SMPTE-organized Future of Cinema conference as well as a number of NAB-organized sessions. The focus of the coverage was on the AR/VR and HDR ecosystems, along with a few additional topics. There were a number of interesting developments, which I will highlight below.

First, let’s start with light field displays, where the big news was the debut of a new company called Light Field Labs, composed of key technologists from Lytro. Their focus will be on developing a flat-panel-style light field display, along with the processing, formatting and delivery of light field data to that display. They are proposing a formatting standard that delivers textures plus material properties to significantly decrease the bandwidth needed for light field data. Their proof-of-concept display prototype should be ready by the end of the year.
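To see why a textures-plus-materials representation can pay off, consider a rough back-of-envelope comparison. The numbers below are hypothetical, not Light Field Labs’ actual specification: transmitting every view of even a modest light field dwarfs the cost of sending one texture atlas plus a handful of material and geometry maps and synthesizing views at the display.

```python
# Hypothetical back-of-envelope comparison; none of these numbers
# come from Light Field Labs' proposal.
views_h = views_v = 8                  # an 8x8 grid of viewpoints
w, h, fps, bpp = 3840, 2160, 30, 3    # 4K RGB, 8 bits per channel

raw_rays = views_h * views_v * w * h * fps * bpp   # transmit every view
scene = (1 + 4) * w * h * fps * bpp                # one texture atlas plus
                                                   # ~4 material/geometry maps
print(f"raw light field:    {raw_rays / 1e9:5.1f} GB/s")  # ~47.8 GB/s uncompressed
print(f"textures+materials: {scene / 1e9:5.1f} GB/s")     # ~3.7 GB/s uncompressed
```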

Other aspects of the light field ecosystem are progressing as well: we heard about activities in the JPEG Pleno and MPEG-I groups to develop light field compression methods and VR packaging formats.

There is also a lot of activity in multi-camera capture techniques. Here, there are two main approaches: an inside-out approach typical of multi-camera VR rigs, and an outside-in approach with arrays of cameras surrounding the actors. As expected, there were numerous VR cameras announced or demonstrated at NAB, but the outside-in approach garnered some attention as well. Companies like Microsoft, OTOY, 8i, Fraunhofer HHI, 4DReplay, Digital Domain and others already have such capabilities.

These “stages” can be used to capture a volume of action so that a 3D model of the actors and props can be built and viewed from any angle. When that model is imported into a game engine and the actors and props are dropped into a computer-generated environment, a six-degrees-of-freedom (6 DoF) experience can be created that allows the user to walk around the capture space and view the action from many angles. Several demonstrations of content captured this way were shown on VR headsets.

(Image: 4DReplay)

Such capture techniques also offer the potential for light field processing of the images, but not all systems do this. The tools for light field processing are still evolving, but OTOY, Lytro, Light Field Labs, Fraunhofer IIS and others are working on them.

We also saw domed theaters at NAB this year, something I don’t recall seeing at past events. These are popping up as a way to show 360º VR content without having to wear a headset. Canon, Nokia and Zaxel were all there with domes.

In Augmented Reality, there were some interesting demos focused on how to use AR with a TV. The best was one from NHK that let you watch your TV through a tablet while dinosaurs came out of the TV and into your living room.

Epson and Brother also showed AR headsets with a focus on using them for camera operation or drone control.

In VR, there was a clear emphasis on live streaming this year. This requires real-time stitching and clever compression techniques for the typical 4K equirectangular video format. Digital Domain, Nokia and partners, Fraunhofer HHI, Orah, Ricoh and others all demonstrated or talked about this capability. New or improved inside-out VR rigs were introduced by Google, LucidCam, Ricoh, Orah, Nokia and Digital Domain.
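For readers unfamiliar with the format: an equirectangular frame simply maps each ray’s longitude and latitude to a pixel column and row, which is why the poles are heavily oversampled and clever compression helps. A minimal sketch of the mapping (the function name is mine, for illustration):

```python
import math

def equirect_xy(yaw, pitch, width, height):
    """Map a view direction to pixel coordinates in an equirectangular frame.

    yaw (longitude) in [-pi, pi], pitch (latitude) in [-pi/2, pi/2].
    An entire row of pixels collapses to a single point at each pole --
    the oversampling that tiled/adaptive schemes exploit.
    """
    x = (yaw / (2 * math.pi) + 0.5) * width   # longitude -> column
    y = (0.5 - pitch / math.pi) * height      # latitude  -> row
    return x, y
```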

Another new trend at NAB is called adaptive viewport distribution, an encoding approach with various flavors demonstrated by Ericsson, Nokia and Fraunhofer HHI. Ericsson used it to show four HD-resolution TV screens in a VR headset in a sports-bar scenario: only the screens in the field of view get allocated more bandwidth. HHI takes a 4K equirectangular 360º image and breaks it into 24 tiles, each encoded at low and high resolution and aggregated into an HEVC DASH stream; only the tiles in the current field of view are delivered to the headset at high resolution. Nokia takes a similar tiled approach.
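The tile-selection logic at the heart of these schemes is conceptually simple: intersect the viewport’s angular extent with a fixed tile grid and request only the intersecting tiles at high resolution. The sketch below uses HHI’s 24-tile count in an assumed 6x4 layout and treats the viewport as a simple latitude/longitude box, which glosses over real projection distortion:

```python
def overlaps(lo1, hi1, lo2, hi2, period=None):
    """Open-interval overlap test; handles longitude wrap-around when
    a period (360 degrees) is given."""
    if period is None:
        return lo1 < hi2 and lo2 < hi1
    return any(lo1 < hi2 + k and lo2 + k < hi1 for k in (-period, 0, period))

def viewport_tiles(yaw, pitch, fov_h=110.0, fov_v=90.0, tiles_x=6, tiles_y=4):
    """Return (col, row) indices of tiles intersecting the viewport.

    These tiles would be fetched at high resolution, the rest at low
    resolution. Angles are in degrees; yaw in [-180, 180], pitch in
    [-90, 90]. A 6x4 grid gives the 24 tiles HHI described.
    """
    tw, th = 360.0 / tiles_x, 180.0 / tiles_y
    lon_lo, lon_hi = yaw - fov_h / 2, yaw + fov_h / 2
    lat_lo = max(pitch - fov_v / 2 + 90.0, 0.0)    # shift latitude to [0, 180]
    lat_hi = min(pitch + fov_v / 2 + 90.0, 180.0)
    tiles = []
    for row in range(tiles_y):
        if not overlaps(row * th, (row + 1) * th, lat_lo, lat_hi):
            continue
        for col in range(tiles_x):
            if overlaps(col * tw, (col + 1) * tw, lon_lo, lon_hi, period=360.0):
                tiles.append((col, row))
    return tiles

# Looking straight ahead, only 4 of the 24 tiles need high resolution:
print(viewport_tiles(yaw=0.0, pitch=0.0))  # [(0, 1), (5, 1), (0, 2), (5, 2)]
```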

In HDR, the big news was the coming availability of 2000 cd/m², 1,000,000:1-contrast reference monitors. Panasonic has developed a dual LCD-panel design with a direct backlight that it will use in its own reference monitor, but the panel will also be offered to Eizo, TVLogic, Flanders Scientific, Ikegami and Hitachi for their own branded monitors. Only the Eizo monitor will limit peak luminance to 1000 cd/m². Canon is also readying a 2000 cd/m² reference monitor, apparently not based on this panel. All should be ready toward the end of the year or early 2018 and are likely to be in the $30K range.
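For context, those two headline numbers together dictate the black level such a monitor must hold, which is why a dual-panel light-modulating design is attractive:

```python
peak_nits, contrast = 2000.0, 1_000_000   # cd/m^2 and contrast ratio
black_nits = peak_nits / contrast
print(f"required black level: {black_nits} cd/m^2")  # 0.002 cd/m^2
```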

Other 4K HDR monitors for less critical on-set use, monitoring, editing and QC were released as well by companies like Flanders Scientific, TVLogic, Marshall Electronics, Boland Communications, Panasonic, Primeview, Sony, HP, Dell and Canon.

HDR workflows from Dolby and Technicolor were updated, as was equipment throughout the broadcast production ecosystem. It is now clear that a wide variety of equipment will be ready this year to support mixed HDR workflows – i.e., content that can be captured or archived in SDR or in at least three flavors of HDR: PQ, HLG or S-Log3. We were also surprised by the large number of SDR/HDR converter products intended to facilitate this mixed workflow. Most can do HDR-to-HDR conversion, and some can do HDR-to-SDR or SDR-to-HDR conversion. HDR-to-SDR conversion must shrink the color volume and change the OETF, so it is a little tricky. In SDR-to-HDR conversion, the luminance range is expanded and the colors are re-encoded using the BT.2020 coefficients and primaries, but converters do not expand the SDR colors beyond the BT.709 gamut boundaries.
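As a rough illustration of that SDR-to-HDR path, the sketch below linearizes a BT.709 signal, re-encodes the primaries with the standard BT.709-to-BT.2020 matrix (per ITU-R BT.2087), and wraps the result in PQ (SMPTE ST 2084). Pinning SDR peak white at 100 cd/m² is my simplifying assumption; real converters apply more sophisticated inverse tone mapping:

```python
import numpy as np

# Linear-light BT.709 -> BT.2020 primary conversion matrix (per BT.2087).
M_709_TO_2020 = np.array([
    [0.6274, 0.3293, 0.0433],
    [0.0691, 0.9195, 0.0114],
    [0.0164, 0.0880, 0.8956],
])

# SMPTE ST 2084 (PQ) constants.
M1, M2 = 2610 / 16384, 2523 / 4096 * 128
C1, C2, C3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32

def pq_encode(nits):
    """Encode absolute luminance (cd/m^2) into a PQ signal value."""
    y = np.clip(np.asarray(nits) / 10000.0, 0.0, 1.0)  # PQ tops out at 10,000 nits
    return ((C1 + C2 * y**M1) / (1 + C3 * y**M1)) ** M2

def sdr_to_hdr_pq(rgb709_prime, sdr_peak_nits=100.0):
    """Place a non-linear BT.709 R'G'B' signal into a PQ/BT.2020 container.

    The colors are re-encoded with BT.2020 primaries but never expand
    beyond the original BT.709 gamut, matching the converter behavior
    described above.
    """
    linear = np.maximum(np.asarray(rgb709_prime), 0.0) ** 2.4  # BT.1886-style EOTF
    linear_2020 = linear @ M_709_TO_2020.T                     # swap primaries
    return pq_encode(linear_2020 * sdr_peak_nits)

print(sdr_to_hdr_pq([1.0, 1.0, 1.0]))  # SDR white at 100 nits -> PQ code ~0.51
```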

The report also includes some coverage of 8K, LED screens and green screen replacement technologies.

Overall, it was a very interesting show. Hope you enjoy the full details in the report. – CC