Passthrough video is a feature of many mixed reality (MR) headsets that allows the user to see the real world around them through the headset’s cameras. This can be used for a variety of purposes, such as providing context for virtual objects, or enabling the user to safely navigate their environment while using the headset. In some cases, the passthrough video is also used to create a hybrid experience, where virtual objects are overlaid on top of the real world. In theory, one of the benefits of passthrough video is that it can help reduce cybersickness – the feeling of nausea or disorientation that many people experience when using VR headsets. By being able to see the real world around them, users can feel more grounded and are less likely to feel sick. The technology is nowhere near mature, but Apple and Meta are certainly willing to spend billions (yes, billions) of dollars to make it work.
I remain unconvinced that mixed reality applications are any kind of an engine for increasing headset sales. High quality pass through is great, but I just don’t see applications built around integrating rendering with your real world environment as any kind of a killer app. I…
— John Carmack (@ID_AA_Carmack) September 29, 2023
So, the smartest people in tech have gotten together and said, “Wouldn’t it be great if we covered your eyes so you couldn’t see the real world (amazing color gamut, wide field of view, and response times be damned) and instead gave you a video of it through a heavy headset and some very expensive displays, added some dubious image quality that requires technology we don’t really have right now, and spent a lot of money to get it all in front of your eyes on tiny displays?”
It is a very expensive way of making sure that people who have to wear a VR headset for a long time – no one quite knows who they are or why they would do this, by the way – don’t bump into the stuff around them, because they have a way of seeing where they are standing despite the dungeon on their head. Then there’s the even more expensive notion that you can add all kinds of proximity sensors, hardware, and software to place virtual objects in your real world, the one that is no longer actually accessible to you because you are, you guessed it, wearing a headset that doesn’t let you see it.
So, the headset people create a problem because the people paying their wages are telling them to build that problem – again, no one has a use case or compelling market data to back up the desire to create an MR headset. Then, the headset people are told to figure out how to solve the problem they were paid to create because, well, it’s a problem. The answer is to set common sense aside and keep making problems to solve, in what is turning out to be a perpetual cycle of development for the sake of development.
But MR headsets don’t need use cases, and they are not solving any problems, because, supposedly, we are entering a new era of spatial computing. This is a great term; it sounds like you are doing computing, a very functional activity with quite a large number of use cases, but you are doing it spatially, a word that means absolutely nothing in this context and probably sounded good to a bunch of marketing people desperately clinging to the last vestiges of their sanity.
Applications and use cases be damned because what we really need is spatial computing. This should matter to the Display Daily audience because it is essentially a display replacement theory. There is a big suggestion from the likes of Apple and Meta that people will be freed from sitting at a desk or carrying a display in their hands because one will be placed near their eyes, with the ability to deliver a giant screen experience with very high resolutions.
As long as people give up the real world and shove a bucket over their head.
Who knows, maybe there will come a time when you can have a gossamer headset that floats over your eyes like silk and the video passthrough is at such a level of quality that it is indistinguishable from what your eyes would see without these magic near-eye displays. But, some day, we will no doubt all be flying to work in our electric space cars.
There is no business case for MR headsets. It’s just a desperate need for an iPhone moment, or a deep desire by Mark Zuckerberg to find his own singular vision, one that isn’t just about Meta copying competitors and trying to drown them out. It’s weird to see it happening. The emperor has no clothes, and yet we kind of hem and haw about the tech behind these devices and how certain problems are being solved, as if there is a compelling case for the investments being made.
There isn’t. We need more displays, not fewer of them. We need less intrusive displays, light and foldable, not headsets. AR/VR/MR/XR is being turned into a beachhead for a new computing paradigm because it has failed to deliver on anything else that is useful.
By all means, invest in cool technologies; personal proximity sensors could be interesting, even though no one is thinking about using them to help the people who could really use them, like someone suffering from macular degeneration. Heads-up displays have value, especially in automotive and aviation, where you need to, you know, keep your head up and you have more information being thrown at you every day. Video passthrough as the defining function of an MR headset, the thing that makes it not VR, is a very niche feature for a very niche product. Reality is actually pretty good at delivering high-fidelity 3D images at zero cost and with no added bill of materials.
But it doesn’t look like the insanity of reality replacement theory stops there. Meta has this other technology, photorealistic avatars, that Zuckerberg was showing off with podcaster Lex Fridman last week. There’s a lot of tech in here to digest, very smart work, work that will eventually end up as a two- or three-minute phone scanning process that gives you a purely digital version of yourself. Way cool? Yes. Way useful? Uhm, me in digital instead of me on camera? A photorealistic version of me to replace me?
This is the jumping-the-shark moment for tech innovation. It shall pass, but not until a lot of money has been spent and someone steals the better ideas and uses them in a way that we haven’t even imagined yet. But the notion that you build all of this to replace displays and reality is ridiculous and wacky.
Move along, because there is nothing worthwhile to see here except a bunch of experimental technologies being built without a tether to actual use cases.