Google had plenty of new updates to offer mobile and Android users during its recent I/O 2019 presentation. In addition to dropping new details about privacy and upcoming Google mobile hardware, the tech giant also unveiled its latest functionality intended to help users with speech disabilities. Called Project Euphonia, the new Google service is aimed squarely at helping users who have difficulty speaking communicate with the world around them.
As detailed in the video above, Project Euphonia makes use of Google's AI and voice communication technologies to better serve users with speech impediments or disabilities. A teaser video shown at I/O 2019 lays out the situation rather plainly:
"Google has very good speech recognition," says Google research scientist Dimitri Kanevsky, "but if you do not sound like most people, it will not understand you." People affected by medical conditions such as strokes, deafness, or even multiple sclerosis aren't typically represented in the company's speech recognition models. As such, those users are the key focus of Project Euphonia, which aims to revise the technology to handle communication that doesn't sound like typical, straightforward speech.
Apparently the process of training Project Euphonia involved Kanevsky voicing over 15,000 phrases to the device. Though it was initially unclear whether the approach would work, Project Euphonia was eventually able to "make all voice interactive devices be able to understand any person [who] speaks to them." The system can apparently pick out commands and other functions from people who can't speak at all, operating from something as simple as facial expressions or humming.
The system is designed such that typical Google voice-enabled functions can be utilized by those who can't speak clearly as well as those who can't speak at all. It's not just a way to perform mobile device operations, either: Project Euphonia is intended to help with all manner of communication, from expression of emotion to conveyance of messages created by something other than a text- or voice-based medium.
According to the Google I/O 2019 presentation, the scope of Project Euphonia is so large that Google is "not even scratching the surface yet of what is possible." The idea is that over time, Project Euphonia can help disabled users pool together data that will allow the technology to adapt to new circumstances and usage scenarios.
As for how things stand now, mobile users can expect voice models based around Project Euphonia data to become available through the Google Assistant in the near future. With that said, users can help speed up the adoption process by submitting their own voice samples. More details can be found over on the Project Euphonia website.
Kevin Tucker posted a new article, Google Project Euphonia voice accessibility revealed at I/O 19