How Will Advanced Display Technology Drive AR/VR Adoption?

That is one of the key questions that will be addressed at the Streaming Media for Field of Light Displays (SMFoLD) workshop, to be held on Oct. 3, followed by Display Summit on Oct. 4-5.

Both will be held in Sterling, Virginia, and will feature leading technologists discussing the state and future of immersive displays and the infrastructure needed to deliver these compelling images.

At Display Summit, we will be looking at component technology, headset designs and application requirements for AR and VR. Most of the focus will be on non-consumer applications and on understanding how advances in consumer-facing products and technology can enable professional and commercial uses.
For example, one of the promising new display technologies for AR/VR applications is microLEDs. For this application, most developers are pursuing a monolithic approach in which a wafer of blue GaN LEDs is bonded to a CMOS active-matrix backplane that drives the individual emitters. These LED arrays are high density, with small (3-10 micron) emitters. The challenge is that this process creates only blue light, so some sort of color conversion technology is needed to reach full color. Phosphor conversion won't work, as phosphor particles are 100x to 1000x bigger than the emitter and no good green phosphor exists today. So how do you solve this problem?
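To make the scale mismatch concrete, here is a minimal back-of-the-envelope sketch; the particle and emitter sizes are representative assumptions drawn from the ranges above, not measurements from any vendor.

```python
# Rough scale comparison between a microLED emitter and candidate
# color-conversion particles. All sizes are illustrative assumptions.

emitter_um = 5.0                  # emitter size, mid-range of the 3-10 micron figure
phosphor_um = emitter_um * 100    # phosphor particle, low end of "100x to 1000x bigger"
quantum_dot_um = 0.008            # quantum dots are only a few nanometers across

print(f"Phosphor particle vs. emitter: {phosphor_um / emitter_um:.0f}x larger")
print(f"Quantum dots fitting across one emitter: {emitter_um / quantum_dot_um:,.0f}")
# A single phosphor grain dwarfs the pixel it would need to cover, while
# hundreds of quantum dots fit across one emitter -- which is why
# sub-micron converters are the focus of the talks below.
```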
At Display Summit, VeraLase will describe its approach to color conversion, called Chromover, which combines photoluminescent quantum wells with a novel resonator to enable bright, efficient, full-color microLED microdisplays.
Another approach will be described by Nanosys, which is developing quantum dot materials that absorb blue light and re-emit in the green or red. The talk will discuss the requirements for quantum dot color converters for microLED displays and the current development status of quantum dots for this application.
Market analysts from Yole will also be there to provide an overview of technology trends with microLEDs along with insight into their adoption in a number of applications including AR/VR.
Registration for Display Summit includes access to the SMFoLD workshop
Most agree that image quality in current AR/VR headset designs needs to improve significantly. Factors that impact the design include resolution per eye, field of view, latency, frame rate, color gamut, and motion and other artifacts. While building a high-quality headset is largely possible today, the resulting size, weight, power, ergonomics and cost would not be acceptable for most applications.
To address these trade-off challenges, VR/AR industry guru Karl Guttag will focus on headset designs that combine a wide field of view with high angular resolution. He will analyze this design space, the trade-offs we must make today, and what new technologies should allow in the near future.
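The core tension is easy to quantify: angular resolution (pixels per degree) is roughly the panel's pixel count divided by the field of view it is stretched across. The sketch below uses hypothetical design points and a flat-field approximation; the 60 pixels-per-degree benchmark corresponds to roughly one arcminute of human visual acuity.

```python
# Angular resolution (pixels per degree) for hypothetical headset
# design points, using a simple flat-field approximation.

ACUITY_PPD = 60  # ~1 arcminute, the usual "retinal" benchmark

designs = {
    "narrow FOV AR glasses": (1920, 40),   # (horizontal pixels, horizontal FOV in degrees)
    "consumer VR headset":   (1080, 100),
    "wide FOV VR headset":   (1920, 140),
}

for name, (pixels, fov_deg) in designs.items():
    ppd = pixels / fov_deg
    print(f"{name}: {ppd:.1f} ppd ({ppd / ACUITY_PPD:.0%} of visual acuity)")

# Matching acuity across a wide field takes enormous pixel counts:
print(f"Pixels/eye (horizontal) for {ACUITY_PPD} ppd at 140 deg: {ACUITY_PPD * 140}")
```

That gap between what today's panels deliver and what a wide field of view demands is exactly the design space this talk will map out.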
One of the key components of AR/VR systems is the waveguide optic. This device typically features a holographically-defined input optic to capture the image from a microdisplay and allow it to propagate inside the waveguide. To extract the image and present it to the user, more holographically-defined optics are used. Achieving full color and wide field of view can be a challenge, so how can this be addressed?
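The physics behind that challenge can be sketched with the grating equation: the input coupler must diffract light steeply enough to be trapped by total internal reflection, but the diffraction angle shifts with wavelength, which squeezes the usable field of view for full color. The index, pitch and wavelengths below are representative assumptions, not any vendor's design.

```python
import math

# Waveguide input-coupler sketch: first-order grating equation plus a
# total-internal-reflection (TIR) check. All values are illustrative.

n = 1.7            # waveguide refractive index
pitch_nm = 400.0   # grating period
critical_deg = math.degrees(math.asin(1.0 / n))  # TIR threshold

for wavelength_nm in (460, 530, 630):  # blue, green, red
    # Grating equation at normal incidence: n * sin(theta_d) = lambda / pitch
    theta_d = math.degrees(math.asin((wavelength_nm / pitch_nm) / n))
    status = "trapped (TIR)" if theta_d > critical_deg else "escapes"
    print(f"{wavelength_nm} nm -> {theta_d:.1f} deg "
          f"(critical {critical_deg:.1f} deg): {status}")

# Blue and red end up ~25 degrees apart inside the glass, illustrating why
# single-grating designs struggle to carry full color over a wide FOV.
```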
Luminit will provide an overview of the design principles of these holographic optical elements (HOEs), along with insight into how the company manufactures them and their performance characteristics.
Digilens will likewise describe its approach to full-color, wide-FOV waveguide design and profile its current use in head-up displays and future use in AR/VR headsets.
For those who actually design and build AR/VR headsets, the ability to characterize performance is always a challenge. Few standardized methods for device characterization exist and few metrology companies support this emerging area. Fortunately, Radiant Vision Systems is focusing on this area and will provide some insight into the tools they have developed to characterize optical performance of dense, high resolution microdisplays.
We are also seeing strong demand for AR and VR technology in many commercial, military and professional applications where training is required. The use of AR and VR is being explored vigorously across industries, so adapting tools and technology from consumer-facing products can be a cost-effective way forward.
But the needs and challenges of these non-consumer applications are different. For example, enterprises need to be able to train personnel locally or remotely in a VR or AR environment. EON Reality has stepped up to this need by developing their EON Enterprise Virtual trainer.
This solution provides a unique collaborative 3D virtual training environment that allows a Trainer to teach students either locally or remotely. The Trainer starts a 'Lesson' consisting of a 3D virtual model within a 3D virtual environment. The Student, immersed in this environment via a Head Mounted Display (HMD), receives direct instruction over VoIP from the Trainer, prompts from within the 3D environment, or a combination of both.
But hardware is not the full solution, of course. Careful choice of display content is essential to providing tangible operational value to wearers of these systems, and synergy and compatibility with other platform displays is another key design factor. Rockwell Collins will discuss these issues in the context of AR displays for training.
Another challenge is the fast pace of innovation in the AR/VR market. Many non-consumer applications need solutions that will last for years, and relying on consumer products means parts and support may not be available over that period. On the other hand, developers want to take advantage of the latest upgrades in technology. So how do you resolve this dilemma?
One way is via standards efforts such as OpenXR, with middleware platforms providing solutions today. Sensics CEO Yuval Boger will describe the problem of future-proofing training systems, review existing solutions, and discuss ongoing standards efforts. He will conclude with effective strategies to keep training systems current.
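In practice, future-proofing usually means isolating training content behind a thin hardware-abstraction layer so that headsets can be swapped without rewriting the application. The sketch below is a generic illustration of that pattern; the interface and class names are hypothetical, not the OpenXR API or any vendor's SDK.

```python
# Hardware-abstraction sketch for future-proofing a training system.
# HeadsetDriver and LegacyHeadset are hypothetical names for illustration,
# not the OpenXR API or any vendor's SDK.

from abc import ABC, abstractmethod

class HeadsetDriver(ABC):
    """Everything the training application needs from a headset."""

    @abstractmethod
    def get_head_pose(self) -> tuple:
        """Return (position_xyz, orientation_quaternion)."""

    @abstractmethod
    def submit_frame(self, left_eye, right_eye) -> None:
        """Present one stereo frame."""

class LegacyHeadset(HeadsetDriver):
    def get_head_pose(self):
        return ((0.0, 1.7, 0.0), (0.0, 0.0, 0.0, 1.0))

    def submit_frame(self, left_eye, right_eye):
        pass  # talk to the old vendor SDK here

# Training content is written against HeadsetDriver only, so a new
# device drops in without any changes to the lessons themselves.
def run_lesson(driver: HeadsetDriver):
    position, orientation = driver.get_head_pose()
    # ... render the lesson from this viewpoint ...
    driver.submit_frame(left_eye=None, right_eye=None)
```

An OpenXR-backed driver would then be just one more implementation of the same interface, which is how middleware platforms keep content decoupled from rapidly changing hardware.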
Finally, Rockwell Collins will describe its Integrated Digital Vision System (IDVS), an advanced combat helmet-mounted display system for warfighters that combines real-time mission data with multispectral vision sensors into one view for enhanced situational awareness.
The IDVS sensors include two low-light-level Visible/Near-InfraRed (VisNIR) sensors for binocular night vision, as well as a single Long-Wave InfraRed (LWIR) sensor for thermal imagery. On-board processing fuses the sensor video with incoming data from various sources (such as a command center, other warfighters or UAS) for low-latency (less than 5 ms) augmented vision, day or night.
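To see why the sub-5 ms figure matters, consider how far an augmented overlay drifts off its real-world target during head motion: the registration error is simply rotation rate times latency. The rotation rates below are assumed, illustrative values.

```python
# Registration error of an augmented overlay as a function of display
# latency during head rotation. Rotation rates are illustrative assumptions.

for rate_deg_s in (50, 100, 300):        # slow scan to a rapid head turn
    for latency_ms in (5, 20, 50):
        error_deg = rate_deg_s * latency_ms / 1000.0
        print(f"{rate_deg_s:>3} deg/s head motion, {latency_ms:>2} ms latency "
              f"-> overlay off target by {error_deg:.2f} deg")

# At 300 deg/s, 5 ms of latency keeps the error to 1.5 degrees, while
# 50 ms would smear it to 15 degrees -- useless for targeting or navigation.
```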
The first prototypes utilized two high resolution OLED microdisplays with see-through free-form prisms for near-eye display. The next-generation IDVS will incorporate high-definition waveguide displays for better see-through quality and higher brightness.
The SMFoLD workshop is designed to provide an overview of the light field ecosystem, from content creation and generation through distribution to advanced 3D displays. These displays can and will range from theater-sized screens to desktop monitors to mobile phones and AR/VR headsets. The workshop will also focus on the formatting, signaling and encoding of light field data for efficient distribution over networks.
Such a streaming standard could be useful for the distribution of all kinds of large data sets. This can include VR/entertainment files, medical data (CT, MRI), CAD data, geophysical and photogrammetry information, point cloud data, SAR data and much more. The data sources are out there, but the advanced visualization systems and delivery mechanisms need work. This workshop is designed to advance this discussion.
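The scale of the problem is easy to see with a rough calculation; the view count, resolution and frame rate below are representative assumptions, not a proposed format.

```python
# Raw data rate of an uncompressed light field stream, to motivate the
# need for dedicated encoding and signaling standards. All numbers are
# illustrative assumptions.

views_h, views_v = 8, 8          # angular views in the light field
width, height = 1920, 1080       # resolution per view
fps = 30
bits_per_pixel = 24              # 8-bit RGB

raw_bps = views_h * views_v * width * height * fps * bits_per_pixel
print(f"Raw light field rate: {raw_bps / 1e9:.1f} Gbit/s")
# 8x8 HD views at 30 fps is ~95 Gbit/s uncompressed -- far beyond typical
# network links, hence the interest in efficient distribution formats.
```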
