Google concluded its I/O presentation with a preview of its translation glasses

What They Say

At the end of its I/O event, Google showed a video of its glasses being used to perform real-time translation and transcription between languages, a feature it calls 'subtitles for the real world'.

What We Think

I have been watching this kind of voice recognition software since I first tried it on an Apple system in the early '80s. It was pretty terrible. It got somewhat better on PCs over the next two decades, but it was really only useful for applications with very limited and precise vocabularies, such as the transcription of medical reports. However, after a friend had trouble typing following a stroke, I was encouraged to try speech-to-text again, and with the backing of cloud-based processing it was impressive. I have tried dictating articles, but they do come out differently than when I type.

Translation adds an extra layer of complexity. I often have to look at content in languages other than English, and in recent years the quality of translation, especially from the Google Translate app, has become amazingly good.

The application really shows the power of good AR combined with vast computing power in the cloud. (BR)

Google Translate