A paralyzed woman has been able to communicate verbally using a novel brain-computer interface (BCI) system. The approach, developed by a team of researchers at the University of California, San Francisco (UCSF), “could form the basis for a speech neuroprosthesis for people with paralysis”, they write in their research paper, published in the prestigious science journal Nature.
The system captures the woman's neural activity, processes it, and then produces speech through a digital avatar, allowing her to communicate in near real-time. The authors write, “Our results demonstrate the feasibility of a speech BCI approach that could restore spoken communication in people with severe speech disabilities.”
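Conceptually, the pipeline described above has three stages: neural activity is captured, decoded into linguistic units, and handed to a synthesizer that voices the avatar. The toy sketch below illustrates only that flow; the lookup-table "decoder", the integer feature frames, and the phoneme codebook are all hypothetical stand-ins, whereas the real system trains deep neural networks on multichannel cortical recordings.

```python
# Hypothetical sketch of the capture -> decode -> synthesize flow.
# The codebook decoder below is a placeholder, not the study's method.
from typing import List

# Toy codebook: each neural "pattern" (here just an int) maps to a phoneme.
CODEBOOK = {0: "HH", 1: "EH", 2: "L", 3: "OW"}

def decode_phonemes(neural_frames: List[int]) -> List[str]:
    """Map each frame of recorded neural activity to a phoneme label."""
    return [CODEBOOK[f] for f in neural_frames]

def synthesize(phonemes: List[str]) -> str:
    """Stub for the speech synthesizer that would drive the avatar's audio."""
    return "-".join(phonemes)

if __name__ == "__main__":
    frames = [0, 1, 2, 3]  # pretend these frames came from the electrode array
    print(synthesize(decode_phonemes(frames)))  # HH-EH-L-OW
```

The point of the sketch is the staged structure, which is what allows each component (decoder, synthesizer, avatar) to be trained or personalized separately.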
A Scientific Breakthrough
This marks the first time that both speech and facial expressions have been synthesized from brain signals. Impressively, the system can decode these signals into text at a rate of nearly 80 words per minute, a significant improvement over existing technologies.
Edward Chang, MD, chair of neurological surgery at UCSF, has been working on this BCI technology for over a decade. He expressed hope that this latest research breakthrough will soon lead to an FDA-approved system that can convert brain signals into speech. Chang's team had previously demonstrated the possibility of decoding brain signals into text in another patient who had experienced a brainstem stroke. The current study, however, showcases a more ambitious goal: decoding brain signals into the richness of speech and the facial movements that accompany conversation.
253 Electrodes Connected to Speech Areas of the Brain
To achieve this, Chang implanted a paper-thin rectangle of 253 electrodes onto the surface of the woman's brain over areas critical for speech. These electrodes intercepted the brain signals that would have otherwise directed muscles in her tongue, jaw, larynx, and face. The participant then worked with the team for weeks to train the system's AI algorithms to recognize her unique brain signals for speech. This involved repeating phrases from a 1,024-word conversational vocabulary until the computer could identify the associated brain activity patterns.
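The calibration step described above amounts to supervised learning: repeated productions of known words yield labeled neural recordings from which the system learns each word's activity pattern. As a simplified, purely illustrative analogue (the actual study trains deep networks on 253-channel recordings; the two-dimensional feature vectors and nearest-template rule here are assumptions), one can average the recordings per word into a template and decode new activity by nearest template:

```python
# Toy nearest-template decoder: an illustrative stand-in for the study's
# AI models, showing how repeated trials per word support decoding.
from math import dist

def fit_templates(labeled_trials):
    """labeled_trials: {word: [feature_vector, ...]} -> {word: mean vector}."""
    templates = {}
    for word, trials in labeled_trials.items():
        n = len(trials)
        templates[word] = [sum(col) / n for col in zip(*trials)]
    return templates

def decode(features, templates):
    """Return the vocabulary word whose template is closest to the activity."""
    return min(templates, key=lambda w: dist(features, templates[w]))

# Two repetitions per word, as in the repeated-phrase training sessions.
trials = {
    "hello": [[1.0, 0.1], [0.9, 0.2]],
    "water": [[0.1, 1.0], [0.2, 0.9]],
}
templates = fit_templates(trials)
print(decode([0.95, 0.15], templates))  # hello
```

Averaging over repetitions is what makes the repeated-phrase sessions valuable: each extra repetition reduces the noise in a word's template.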
The team also devised an algorithm to synthesize speech, which they personalized to sound like her voice before her injury, using a recording of her speaking at her wedding. Furthermore, they animated the avatar with software that simulates facial muscle movements, allowing it to display expressions such as happiness, sadness, and surprise.
Potential and Future Applications
The potential applications of this technology extend beyond restoring speech. With further advances, it could help individuals with other forms of paralysis or motor impairment, and it could be integrated into other devices or systems to provide enhanced communication capabilities. A future goal for the team is a wireless version of the BCI, which would greatly enhance the user's independence and social interactions.