
Updated Envision smart glasses add improved OCR, new languages, third-party app support

The device's expanded, AI-based capabilities are designed to help the visually impaired navigate their daily lives without external aid.
Written by Michael Gariffo, Staff Writer

Envision announced that its AI-powered smart glasses will soon be upgraded with improved Optical Character Recognition (OCR), better text recognition with contextual intelligence, support for additional languages, and the creation of a third-party app ecosystem. 

According to Envision, the new ecosystem will allow for the "easy integration of specialist services, such as indoor and outdoor navigation, to the Envision platform." 

Envision based its smart glasses on the Enterprise Edition of Google Glass, using the headset's built-in camera and processing power to capture and process visual data, helping the visually impaired recognize objects and their surroundings. While Google Glass failed to gain widespread consumer traction across its multiple releases, it has since found a home in niche use cases such as Envision's, which repurposes it as a hardware vehicle for its AI-based platform.

Other attempts have been made in the past to use AR (Augmented Reality) technology to help those with visual impairments. Still, they largely focused on dynamically altering zoom levels and focus to help users take advantage of whatever limited sight they had available to them. Envision instead uses AI to translate what it sees into audio cues played back through accompanying speakers. 

Also: Google: AI helps Google Translate offer these new languages spoken by millions

Google's own Google Lens AI was previously used to power a similar platform called Google Lookout, which could provide descriptive readings of whatever a connected camera was pointed at. A similar service offered by Facebook to help visitors hear descriptions of posted photos was also updated last year.

The original Envision model based on Google Glass debuted in 2020 and was designed to help users read documents, recognize individuals, find personal belongings, use public transit, and achieve greater personal freedom. 

Envision claims its updated smart glasses can help users read typed or handwritten text on documents, product packaging, screens, and a wide range of other surfaces; translate recognized text into 60 languages; and recognize individuals, colors, and other important visual cues. If the onboard AI system isn't able to complete a task, Envision also offers its "Ally function," which connects users via video call to a sighted individual who can assist.

Envision offers its platform via the aforementioned Google Glass implementation, as well as through iOS and Android apps. A video detailing the Google Glass version can be seen on Envision's website.
