A couple of years ago, one of the highlights of our visit to IBC was the demonstration of light field capture for video by Fraunhofer IIS. Since then, the institute has been developing its approach, which is based on post-processing of images from multiple cameras rather than the microlens approach used by companies such as Lytro. The group has been building tools such as Nuke plug-ins to make the technique easier to use, and it is also working with MPEG and JPEG on standardisation.
There are a lot of issues to be solved with multiple cameras, including differences in lens performance and alignment between the cameras. Disparity between the different views is detected, and that data then has to be refined and filtered to create a point cloud that stores 'what is where'.
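To give a flavour of what that step produces, here is a minimal sketch (our own illustration in Python with placeholder camera parameters, not Fraunhofer's code) of how a disparity map from a calibrated pair of cameras can be turned into a point cloud of 'what is where':

```python
import numpy as np

def disparity_to_point_cloud(disparity, f=1500.0, baseline=0.1, cx=960.0, cy=540.0):
    """Back-project pixels with valid disparity into 3D camera coordinates.
    f (focal length in pixels), baseline (metres), cx/cy (principal point)
    are placeholder values for illustration only."""
    h, w = disparity.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    valid = disparity > 0                      # skip pixels with no match
    z = f * baseline / disparity[valid]        # depth from disparity
    x = (u[valid] - cx) * z / f
    y = (v[valid] - cy) * z / f
    return np.stack([x, y, z], axis=1)         # N x 3 array of points

# Example with a synthetic, constant disparity map
points = disparity_to_point_cloud(np.full((1080, 1920), 20.0))
print(points.shape)  # (2073600, 3)
```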
Once you have the point cloud data, you can create synthetic apertures (to control depth of field), and the data allows final images to be rendered in 2D, in 3D or with different atmospheric effects. It also allows the creation of 360º content.
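The synthetic aperture idea can be illustrated with a simple shift-and-sum refocus over a camera array: each view is shifted according to its position and a chosen focus disparity, then the views are averaged, so objects at the focus plane stay sharp while everything else blurs. This is a generic sketch of the principle, not the institute's actual pipeline, and the offsets and disparity values are assumptions:

```python
import numpy as np

def refocus(views, offsets, focus_disparity):
    """views: list of HxW images; offsets: per-camera (dx, dy) in baseline units.
    Shifts each view by offset * focus_disparity and averages the results."""
    acc = np.zeros_like(views[0], dtype=np.float64)
    for img, (dx, dy) in zip(views, offsets):
        shift_x = int(round(dx * focus_disparity))
        shift_y = int(round(dy * focus_disparity))
        acc += np.roll(img, (shift_y, shift_x), axis=(0, 1))
    return acc / len(views)

# Example: a 2x2 array of identical synthetic views refocused at disparity 5
views = [np.random.rand(1080, 1920)] * 4
offsets = [(0, 0), (1, 0), (0, 1), (1, 1)]
image = refocus(views, offsets, focus_disparity=5)
```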
We also talked to the Fraunhofer group that has developed the Lici Mezzanine Codec, which is designed to offer visually lossless compression at ratios of 4:1 or 8:1. The codec is designed to use only a small number of gates and to operate quickly. Staff told us that UltraHD at 60fps can be compressed by 6:1 with a latency of just 16 or 20 video lines. A complete system approach keeps all processing within a single frame. Fraunhofer told us that it has designs that can use Altera (Intel) or Xilinx FPGA technology. One customer is IHSE, which is making professional KVM extenders using Lici.
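As a rough back-of-envelope check on those numbers (our own arithmetic, assuming 3840 x 2160 at 60fps with 10-bit 4:2:2 sampling and counting active lines only, not figures supplied by Fraunhofer):

```python
# Approximate data rates and latency implied by the quoted 6:1 ratio
width, height, fps = 3840, 2160, 60
bits_per_pixel = 20                      # 10-bit luma + 10 bits shared chroma (4:2:2)

raw_gbps = width * height * fps * bits_per_pixel / 1e9
compressed_gbps = raw_gbps / 6           # the 6:1 ratio quoted for UHD 60p

line_time_us = 1e6 / (fps * height)      # time to scan one active line
latency_us = (16 * line_time_us, 20 * line_time_us)

print(f"raw ~{raw_gbps:.1f} Gbit/s, compressed ~{compressed_gbps:.1f} Gbit/s")
print(f"16-20 line latency ~{latency_us[0]:.0f}-{latency_us[1]:.0f} microseconds")
```

On those assumptions the raw signal is around 10 Gbit/s, the compressed stream under 2 Gbit/s, and a 16 to 20 line latency works out at well under a fifth of a millisecond.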