
User Experience Needs Context and Context Needs Sensors

I’ve had a busy week, with a visit to the Electronic Display (ED) conference in Nuremberg, where industrial and automotive applications are the hot topics. With the Mobile World Congress coming up this Sunday, I had to get my head down to finish the Electronic Display report before Mobile World starts. While in Germany, I also had a number of client meetings about desktop monitors and public displays, and did some work on our latest forecasts, which are being finalised. So there have been lots of things to think about across different areas of display applications.

I got to thinking about the similarities between the different applications. As we heard at Latin Display in November, automotive cockpit designs tend to follow on from aviation, and that connection was clear when I listened to the talk by Prashanth Halady of Robert Bosch at ED. Halady sees HUDs as important in automotive applications. However, when we discussed the influence of aviation over lunch, he acknowledged it, but pointed out that when he had the chance to look at a modern “glass cockpit” in an airliner, he found the experience overwhelming. Pilots get a lot of detailed training on particular aircraft types to help them cope with the potential overload; that’s not realistic for automotive applications.

Furthermore, as Halady pointed out in his talk (In Cars, Augmented Reality with HUDs is Next Step), display makers usually want you to be absorbed by the display, but when you put a display in a car, you need to be sure that it doesn’t draw too much attention away from the road. Aircraft can be, and are, flown by autopilots, and there are often multiple pilots, so the way the displays are used can be quite different from a sole driver in a car.

At CES, Intel had a section of its booth that was being used by Seeing Machines, a company that specialises in understanding operator fatigue and attention. The company uses a wide range of sensors, including gaze recognition, to monitor where the driver is looking and to gauge tiredness and attention. There’s a video showing how gaze recognition can be used to work out where the driver is looking and warn them accordingly.
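
To make the idea concrete (this is just an illustrative sketch, not Seeing Machines’ actual system), the loop below shows how gaze samples from a driver-facing camera might be turned into an “eyes off the road” warning. The thresholds and the sample format are assumptions made for the example.

```python
# Thresholds are illustrative, not values from any real driver-monitoring product.
ROAD_YAW_LIMIT = 20.0      # degrees either side of straight ahead
ROAD_PITCH_LIMIT = 15.0    # degrees above or below the horizon
OFF_ROAD_WARNING_S = 2.0   # warn if gaze stays off the road this long

def check_attention(samples):
    """samples: iterable of (timestamp_s, yaw_deg, pitch_deg, eyes_closed).

    Yields a timestamp whenever gaze has been off the road (or the eyes
    closed) for longer than OFF_ROAD_WARNING_S.
    """
    off_road_since = None
    for t, yaw, pitch, eyes_closed in samples:
        on_road = (not eyes_closed
                   and abs(yaw) < ROAD_YAW_LIMIT
                   and abs(pitch) < ROAD_PITCH_LIMIT)
        if on_road:
            off_road_since = None
        elif off_road_since is None:
            off_road_since = t
        elif t - off_road_since > OFF_ROAD_WARNING_S:
            yield t                # in a car: sound a chime, flash the HUD, etc.
            off_road_since = t     # reset so the alert is not repeated every frame

# Synthetic demo: the driver glances at the centre console from t=1.0s to t=4.0s.
demo = [(t / 10, 0.0 if t < 10 or t > 40 else 45.0, 0.0, False) for t in range(60)]
print(list(check_attention(demo)))  # warning fires once the glance exceeds 2 seconds
```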

Now, much of this week I was thinking about desktop monitors, and, as I have said before, when I’m using my PC I’m very involved in the screen; it should really be an immersive experience. Furthermore, the very wide 21:9 and curved desktop monitors are increasingly being referred to as “cockpit” displays. So while car makers are working hard to fully understand where the driver is looking, their mood and attention, and the situation, so that the car can advise them, PC makers are still, by and large, treating the display as a dumb device and waiting for the user to give instructions.

As we heard at ED, in the talk on the importance of user experience, it’s the sensors as much as the displays that have made tablets and smartphones so powerful and attractive. The question for me is who is going to revolutionise the desktop computing experience using sensors and input devices the way that Apple changed the mobile experience?

Regular readers will know that I have been banging on about gaze as an important technology for desktops for some time. At CES, I was impressed with the depth-sensing RealSense camera that Intel has developed, which was demonstrated in a Dell tablet. That might also be used to understand what the user is doing.
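
Purely as a sketch of where that could lead, and not anything Intel or the PC makers currently ship, here is how presence and gaze inputs (standing in for whatever a RealSense-style camera or eye tracker would actually report) might be mapped to desktop behaviour that anticipates the user rather than waiting for instructions. The context fields and policies are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class DesktopContext:
    user_present: bool      # e.g. from a depth camera detecting someone at the desk
    gaze_on_screen: bool    # e.g. from an eye tracker
    in_fullscreen_app: bool

def choose_behaviour(ctx: DesktopContext) -> list[str]:
    """Map sensed context to simple policy decisions (illustrative only)."""
    if not ctx.user_present:
        return ["lock screen", "pause media", "defer background updates"]
    if not ctx.gaze_on_screen:
        return ["dim display", "hold notifications until the user looks back"]
    if ctx.in_fullscreen_app:
        return ["suppress pop-ups", "prioritise the foreground app"]
    return ["normal operation"]

print(choose_behaviour(DesktopContext(user_present=True,
                                      gaze_on_screen=False,
                                      in_fullscreen_app=False)))
# -> ['dim display', 'hold notifications until the user looks back']
```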

This comes back to the idea of context. Systems can produce a much better user experience if they understand the context, whether that means helping a driver navigate road hazards when they may be tired and easily distracted, or running software more quickly and efficiently. However, without more inputs and sensors, there is no real way to understand the context of the interaction. Who is going to be the first to add the sensors and input devices that bring context to the computing experience?

Bob