Light field displays can provide glasses-free 3D images, under the right circumstances. One of their big advantages is that they are free of "sweet spots": the 3D image can be seen from any location in the viewing area. With proper design, they can show both horizontal and vertical parallax, and they have "look around" capability, allowing you to see what is behind a foreground object by moving sideways in the viewing region.
Unfortunately, they are not problem-free. One solution involves multiple projectors, up to 40 or more, in a rear-projection configuration, plus special processing software and hardware to drive them. That is not a hand-held system by any means. Another issue has been their modest image quality: good enough for some digital signage applications, perhaps, but not good enough for TV.
At SIGGRAPH 2011, which continues through tomorrow in Vancouver, British Columbia, Gordon Wetzstein of the University of British Columbia and three co-authors from UBC and the MIT Media Lab presented a paper that may revolutionize 3D displays and bring light field technology to hand-held systems or flat-panel TVs. Someday, that is; don't hold your breath.
According to the authors, the approach is a "multi-layer generalization of conventional parallax barriers." Instead of two layers, the LCD and the parallax barrier, their demonstration system uses 5 layers plus a backlight. Rather than being black and clear like a parallax barrier, these "attenuation layers" selectively reduce the intensity of the light, producing a gray scale that runs from fully opaque to fully transparent.
Multi-layer 3D displays in the past have normally produced depth directly, with each layer in the display producing a different depth plane. The depth volume of such a system is limited to the thickness of the display, which would be a severe limitation for a hand-held display or a flat-panel 3D TV. That approach is more properly considered a volumetric display.
The UBC/MIT approach doesn't work this way. Instead, the attenuation layers interact with one another to control the intensity and color of the light in each direction. This reproduces the light field produced by light reflecting off the original object, hence the name light field reconstruction. In theory at least, this approach can produce both out-of-screen and behind-the-screen 3D effects.
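The core idea can be sketched in a few lines of code. This is an illustration, not the authors' actual algorithm: each ray from the backlight passes through one cell of each attenuation layer, and its final intensity is the product of the transmittances it crosses. Because rays leaving the same backlight point in different directions cross different cells, the stack can send different intensities in different directions. The function name and numbers below are hypothetical.

```python
import math

def transmitted_intensity(backlight, transmittances):
    """Intensity of light leaving the stack along one ray: the backlight
    value multiplied by the transmittance of each layer cell the ray
    crosses (0.0 = fully opaque, 1.0 = fully transparent)."""
    out = backlight
    for t in transmittances:
        out *= t
    return out

# Two rays from the same backlight point, heading in different directions,
# pass through different cells of the 5 layers and so carry different
# intensities -- the basis of a direction-dependent (light field) image.
ray_a = transmitted_intensity(1.0, [0.9, 0.5, 0.8, 1.0, 0.7])
ray_b = transmitted_intensity(1.0, [0.2, 1.0, 0.6, 0.9, 0.4])

# Taking logarithms turns the product into a sum of per-layer terms,
# which is what lets the layer patterns be found with linear, tomographic
# methods (hence "Tomographic Image Synthesis" in the paper's title).
log_a = sum(math.log(t) for t in [0.9, 0.5, 0.8, 1.0, 0.7])
assert abs(math.exp(log_a) - ray_a) < 1e-9
```

Finding 5 layer patterns whose products approximate a target light field for every ray at once is the hard inverse problem the paper's optimization actually solves; the sketch above only shows the forward model.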
What's the catch? First, SIGGRAPH isn't a display conference; it is concerned less with the hardware implementation of a display than with the algorithms needed to drive it. This shows up, for example, in the paper's title, "Layered 3D: Tomographic Image Synthesis for Attenuation-based Light Field and High Dynamic Range Displays." The demonstration shown at SIGGRAPH involved not electronic displays but film transparencies printed with test images. One goal of the paper was to determine how many attenuation layers are needed. The short answer: 3 aren't enough, and 5 do a pretty good job. If you want a lot of out-of-screen effects, you may need as many as 8.
Eventually, a practical display would need to replace these transparencies with LCDs or some other type of "attenuation layer" with a high enough aperture ratio that a stack of 5 (or more) of them would: a) let enough light through to be useful; and b) not produce severe moiré effects. There is hope, though, since the work was supported by both Dolby and Samsung Electronics. For more details, see the upcoming edition of Mobile Display Report, or the UBC or MIT Media Lab project website.