Watch out, there’s a new term on the block. Even as Oculus-primed virtual reality seems stuck in a perpetual state of prototyping after its initial flurry of excitement, and as other forms of augmentation hang about like costume options for Ready Player One, discussion is turning to enhanced reality. I know this not because of some online insight (Google Trends isn’t showing much), but because it has come up in conversation more than once with enterprise technology strategists.
So, what can we take from this? All forms of adjusted reality are predicated on a real-time feed of information that acts directly on our senses:
- At one end of the scale, we have fully immersive environments known as Virtual Reality (VR). These are showing themselves to be enormously powerful tools, with potential not just in gaming or architecture but also in areas such as healthcare: imagine being able to shrink to the size of a tiny cancer and control microscopic lasers to burn it away. At the same time, the experience is isolating and restricted, which is both a blessing and a curse.
- Augmented Reality (AR) rests on the fulcrum between virtual reality and, ahem, reality. The argument goes: why take a real-time video feed and add data to it if you can project data or images directly onto what you are seeing? It’s a good argument, but it demonstrates just how fraught and complex the debate quickly becomes. What counts as useful information? What’s in and what’s out? And is a computer really better at discerning the important stuff than our own senses?
- With its roots in passive information and heads-up displays, the notion of enhanced reality (ER) does away with the need to worry about such things. Yes, AR and VR follow a similar lineage, but ER does not try to be anything beyond a context-specific information delivery mechanism. So a motorbike rider can see speed and fuel, a surgeon can monitor vital signs, and so on.
The difference from other models is that ER starts from the perspective of the minimum necessary: that is, what information do I actually need right now? This approach is not dissimilar to the kinds of displays we are now seeing in cars, and it wrestles with the same question of how to add information whilst minimising distraction.
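To make the “minimum necessary” idea concrete, here is a hypothetical sketch in Python; the context names, sensor fields and function are my own illustration, not drawn from the article or any real product. The display layer asks only for the fields the current activity needs and drops everything else.

```python
# A minimal, hypothetical sketch of context-specific, minimum-necessary delivery.
# The contexts and sensor fields below are illustrative only.

# Which fields each activity context actually needs right now.
CONTEXT_FIELDS = {
    "motorbike": ["speed_kph", "fuel_percent"],
    "surgery": ["heart_rate_bpm", "blood_pressure", "spo2_percent"],
}


def enhanced_view(context: str, sensor_feed: dict) -> dict:
    """Return only the fields the current context calls for, dropping the rest."""
    wanted = CONTEXT_FIELDS.get(context, [])
    return {field: sensor_feed[field] for field in wanted if field in sensor_feed}


if __name__ == "__main__":
    feed = {
        "speed_kph": 87,
        "fuel_percent": 42,
        "heart_rate_bpm": 74,
        "ambient_temp_c": 19,  # available, but not needed while riding
    }
    print(enhanced_view("motorbike", feed))  # {'speed_kph': 87, 'fuel_percent': 42}
```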
So, is this a cop-out? It is not hard to see VR, AR and ER as a Venn diagram, with plenty of overlap between them. Right now, however, no one-size-fits-all solution addresses that little triangle in the middle; meanwhile, the different models let us focus on what problem we are trying to solve, rather than worrying about whether today’s technology options are sufficient for one thing or another.
In practical terms, for example, even as the wonks work on deeply immersive experiences, a pair of retinal-focused sunglasses that offer a feed of messages, and/or can link into a feed of data about whatever activity is being undertaken, would probably sell like hot cakes if the price were right. (Indeed, if I could put in an early feature request, a tap on the frame to turn the thing on and off would be most welcome.)
Enhanced Reality offers us more than just a lowest-common-denominator entry point. By focusing on useful data and how to deliver it succinctly, rather than on clever hardware and how to make it fit our daily lives, we are putting the horse before the cart and, as a result, perhaps advancing things faster, even if initial use cases appear mundane. If we want to reach deep levels of technological immersion, we would do well to start at the shallow end.
from Gigaom https://gigaom.com/2018/03/26/is-enhanced-reality-an-ar-vr-cop-out/