LG Display and a group of researchers from Sogang University in Seoul, South Korea, led by Professor Kang Seok-ju, have developed tools to measure and reduce motion-to-photon latency and motion blur in VR, both of which can cause nausea and motion sickness.
An AI solution developed by the team uses a deep learning algorithm to convert low-resolution images into higher-resolution versions in real time. The solution is designed to let HMDs generate smoother, higher-resolution images with lower latency, without the need for a powerful, expensive GPU.
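The article does not describe the team's network, but the general approach (single-image super-resolution with a small convolutional network, light enough to run on-device) can be sketched as below. This is a minimal illustration in the style of SRCNN, not LG Display's actual model; the weights here are random placeholders standing in for parameters that would be trained offline.

```python
import numpy as np

def upscale_nearest(img, factor):
    """Cheap nearest-neighbour upscaling; the learned layers then refine it."""
    return img.repeat(factor, axis=0).repeat(factor, axis=1)

def conv2d(img, kernels, bias):
    """Naive 'same'-padded 2-D convolution over an (H, W, C_in) image.
    kernels: (kH, kW, C_in, C_out), bias: (C_out,)."""
    kH, kW, c_in, c_out = kernels.shape
    pad_h, pad_w = kH // 2, kW // 2
    padded = np.pad(img, ((pad_h, pad_h), (pad_w, pad_w), (0, 0)))
    H, W = img.shape[:2]
    out = np.zeros((H, W, c_out))
    for i in range(H):
        for j in range(W):
            patch = padded[i:i + kH, j:j + kW, :]
            out[i, j] = np.tensordot(patch, kernels,
                                     axes=([0, 1, 2], [0, 1, 2])) + bias
    return out

rng = np.random.default_rng(0)
# Three-layer SRCNN-style pipeline. These weights are random placeholders;
# in a deployed system they would come from offline (e.g. cloud) training.
w1, b1 = rng.normal(0, 0.1, (9, 9, 1, 8)), np.zeros(8)
w2, b2 = rng.normal(0, 0.1, (1, 1, 8, 4)), np.zeros(4)
w3, b3 = rng.normal(0, 0.1, (5, 5, 4, 1)), np.zeros(1)

def super_resolve(low_res):
    x = upscale_nearest(low_res, 2)        # spatial upscaling
    x = np.maximum(conv2d(x, w1, b1), 0)   # patch extraction (ReLU)
    x = np.maximum(conv2d(x, w2, b2), 0)   # non-linear mapping (ReLU)
    return conv2d(x, w3, b3)               # reconstruction

low = rng.random((8, 8, 1))        # toy low-resolution frame
high = super_resolve(low)          # (16, 16, 1) reconstructed frame
```

The appeal of this structure for HMDs is that inference is a fixed, small number of multiply-accumulates per pixel, which maps well onto a dedicated neural processor rather than a full GPU.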
The team also created a tool that simulates an HMD user's head movements and field of view, in order to measure the latency and motion blur that can cause nausea.
We seem to have reported a lot on the use of AI to scale images. Of course, although the learning is done in the cloud, even 5G will not have low enough latency to be used on live video, so the algorithm can be developed and optimised in the cloud but downloaded to the kind of neural processor that is increasingly appearing on the market. (BR)