We’ve posted an interview with Frederik Zilly of Fraunhofer IIS in Erlangen, who explained how the institute is using multiple camera arrays to capture many in-focus images of a scene with a deep depth of field. By calculating disparity maps, the technique yields a “cloud” of optical data points. That cloud can then be processed with “virtual cameras” to allow free choice of lens types, focus, zooming, panning and motion.
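The core idea of turning disparity maps into a point cloud can be sketched with standard stereo geometry. The snippet below is an illustrative assumption of how such a step might look, not Fraunhofer's actual pipeline: the focal length, baseline and disparity values are invented, and real camera arrays would use per-camera calibration and a proper stereo matcher.

```python
import numpy as np

# Illustrative sketch: back-projecting a disparity map from one
# calibrated camera pair into a 3D point "cloud". All parameters
# here are assumed values for demonstration only.

f = 1200.0         # focal length in pixels (assumed)
baseline = 0.06    # camera spacing in metres (assumed)
cx, cy = 320, 240  # principal point, taken as the image centre (assumed)

# A toy 480x640 disparity map; in practice this comes from
# stereo matching between neighbouring cameras in the array.
disparity = np.full((480, 640), 40.0)

# Depth from disparity: Z = f * B / d, valid only where d > 0.
valid = disparity > 0
Z = np.where(valid, f * baseline / np.maximum(disparity, 1e-6), 0.0)

# Back-project each pixel (u, v) to a 3D point (X, Y, Z).
v, u = np.indices(disparity.shape)
X = (u - cx) * Z / f
Y = (v - cy) * Z / f
cloud = np.stack([X, Y, Z], axis=-1)[valid]  # N x 3 array of scene points

print(cloud.shape)  # one 3D point per valid pixel
```

A “virtual camera” would then re-project this cloud through a chosen lens model and viewpoint, which is what allows focus, zoom and framing to be decided after capture.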
The technology has the potential, once commercialised, to allow live content to be captured in a single process, with 2D, stereo 3D or multiview 3D versions all created from that single master. This could overcome one of the key barriers to 3D adoption.