The Korea Institute of Science and Technology's (KIST) Imaging Media Research Centre was showing off its latest generation of hand gesture recognition, used here to drive a 3D sculpting system for creating high-quality 3D sculptures. The research work included development of a hand gesture library and an on-screen user interface. The core technology uses a synthesised depth database constructed from a 3D hand model; template-matching-based real-time detection, which allows for drift-free tracking; and a 3D visual recognition template that matches real-time depth images of the hand. The 3D sculpting application makes use of the Kinect v2.
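The template-matching idea behind this kind of hand tracking can be sketched very simply: a captured depth patch of the hand is compared against a database of depth templates rendered from a 3D hand model, and the closest template gives the pose. The sketch below is a minimal, hypothetical illustration (the function name, database layout, and sum-of-squared-differences metric are assumptions for demonstration, not KIST's actual pipeline):

```python
import numpy as np

def match_hand_pose(depth_patch, template_db):
    """Return the label of the best-matching synthesized depth template.

    depth_patch: 2D array of depth values for the detected hand region.
    template_db: dict mapping pose labels to 2D depth templates of the
    same shape, rendered offline from a 3D hand model (assumed layout).
    """
    best_label, best_score = None, float("inf")
    for label, template in template_db.items():
        # Sum of squared differences: lower means a closer depth match.
        score = np.sum((depth_patch - template) ** 2)
        if score < best_score:
            best_label, best_score = label, score
    return best_label

# Toy example: two synthetic "poses" and a noisy query near pose "open".
rng = np.random.default_rng(0)
db = {
    "open": np.full((8, 8), 0.5),
    "fist": np.full((8, 8), 0.9),
}
query = db["open"] + rng.normal(0, 0.01, (8, 8))
print(match_hand_pose(query, db))  # "open"
```

A real system would search a much larger synthesized database and exploit temporal coherence between frames, which is what makes drift-free real-time tracking feasible.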
Some of the more interesting exhibits in the Emerging Technologies part of SIGGRAPH included a multi-projector display system of arbitrary shape, size and resolution, shown by a group from the Interactive Graphics Visualization Lab at the University of California, Irvine. The goal of the work is to create low-cost VR environments using off-the-shelf (commodity) products. To get there, the team focuses on developing immersive software tools that can map content onto any surface.
The camera-based system automatically calibrates general Windows desktop content across displays of up to four projectors. The system provided geometric and colour registration, calibrated to deliver a seamless image, targeting non-planar (non-flat) surfaces. The group claimed this is the first time content has been delivered with this type of calibration on a Windows desktop machine.
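The heart of geometric registration like this is establishing a mapping between desktop coordinates and each projector's coordinates, as observed by the calibration camera. For a planar patch that mapping is a homography, which can be estimated from point correspondences with the standard direct linear transform (DLT). The sketch below is a minimal illustration of that idea under assumed, made-up correspondence data; it is not the UCI group's actual algorithm, which additionally handles non-planar surfaces and colour:

```python
import numpy as np

def homography(src, dst):
    """Estimate the 3x3 homography mapping src -> dst via the DLT method.

    src, dst: (N, 2) arrays of corresponding points, N >= 4.
    """
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The homography is the null vector of A (last row of V^T from the SVD).
    _, _, Vt = np.linalg.svd(np.asarray(A))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def warp_point(H, p):
    """Apply homography H to a 2D point p."""
    q = H @ np.array([p[0], p[1], 1.0])
    return q[:2] / q[2]

# Hypothetical calibration data: the camera observes the projector's
# corner markers at these pixels; solve for desktop -> projector mapping.
desktop = np.array([[0, 0], [1920, 0], [1920, 1080], [0, 1080]], float)
observed = np.array([[12, 8], [1890, 25], [1905, 1060], [5, 1075]], float)
H = homography(desktop, observed)
print(np.round(warp_point(H, (960, 540)), 1))
```

Once each projector's mapping is known, content is pre-warped so that the overlapping projections line up into one seamless image on the surface.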
Disney's Carnegie Mellon University lab was in the Emerging Technologies area with its 'Acoustruments', described as a "low-cost, passive and powerless" switching device that works much like a musical instrument (think of the horns or wind instruments that use chambers and valves to channel air). Ultrasonic sound is used as an I/O sensor, but beyond this the team developed a vocabulary of design primitives that it uses as building blocks to interface with smartphones through the speaker and microphone. For example, the group attached a modified Acoustrument device (made on a 3D printer) to a Google Cardboard headset, which connects to a smartphone. Such a pairing added any number of switch functions to the low-cost device. Neat.
One of the research students said that the above experiment was done to add to the functionality and usability of Cardboard, allowing direct interaction with the smartphone that you would not otherwise get. Interestingly, this uses sound waves rather than electrons, so there are several orders of magnitude of difference in speed. When asked about latency, the reply was that the speed of sound is fast enough for a switching device, plus it offers the benefit of zero power requirements. Pretty cool.
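Conceptually, a passive acoustic control like this works by letting the phone play a known sound through the device's chambers and then classifying the microphone recording by its spectral fingerprint: each valve or switch state attenuates different frequencies. The sketch below is a toy, assumed illustration of that classification step (the function names, the calibration-signature setup, and the cosine-similarity metric are hypothetical, not Disney's published method):

```python
import numpy as np

def classify_state(recording, signatures):
    """Classify a passive control's state by spectral fingerprint.

    recording: 1D audio samples captured by the phone microphone.
    signatures: dict mapping state names to reference magnitude spectra,
    recorded once per state during calibration (assumed setup).
    """
    spectrum = np.abs(np.fft.rfft(recording))
    spectrum /= np.linalg.norm(spectrum) + 1e-12
    best, best_sim = None, -1.0
    for state, ref in signatures.items():
        ref = ref / (np.linalg.norm(ref) + 1e-12)
        sim = float(spectrum @ ref)  # cosine similarity of spectra
        if sim > best_sim:
            best, best_sim = state, sim
    return best

# Toy demo: two "valve states" pass different frequency bands.
fs = 44100
t = np.linspace(0, 0.05, 2205, endpoint=False)
tone = np.sin(2 * np.pi * 1000 * t) + np.sin(2 * np.pi * 4000 * t)
open_sig = np.abs(np.fft.rfft(tone))                          # both tones pass
closed_sig = np.abs(np.fft.rfft(np.sin(2 * np.pi * 1000 * t)))  # 4 kHz blocked
sigs = {"open": open_sig, "closed": closed_sig}
print(classify_state(tone * 0.8, sigs))  # "open"
```

Because the sensing is purely acoustic, the attachment itself needs no electronics or battery, which is exactly the zero-power property the researchers highlighted.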
We found the SIGGRAPH show to be the perfect place to see the cutting edge of display technology, particularly in the AR and VR space. We can conclude that the technology stars are beginning to align to deliver better AR and VR (3D) content than ever seen, with supporting technologies in eye tracking and gesture recognition coming along as well. The space is growing at an amazing pace, and we think it will continue to change everything going forward. – Steve Sechrist