Speech Synthesized From Brain Activity
Please consider "Scientists Create Speech From Brain Signals."
Scientists are reporting that they have developed a virtual prosthetic voice: a system that decodes the brain's vocal intentions and translates them into mostly understandable speech, with no need to move a muscle, not even those in the mouth. (The physicist and author Stephen Hawking used a muscle in his cheek to type keyboard characters, which a computer synthesized into speech.)
“It’s formidable work, and it moves us up another level toward restoring speech” by decoding brain signals, said Dr. Anthony Ritaccio, a neurologist and neuroscientist at the Mayo Clinic in Jacksonville, Fla., who was not a member of the research group.
The new system, described on Wednesday in the journal Nature, deciphers the brain’s motor commands guiding vocal movement during speech — the tap of the tongue, the narrowing of the lips — and generates intelligible sentences that approximate a speaker’s natural cadence.
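The description above implies a two-stage pipeline: neural activity is first decoded into the motor commands driving the vocal tract, and those movements are then converted into sound. As a rough illustration only, here is a minimal NumPy sketch of that idea; the linear maps, array sizes, and feature names are all invented stand-ins, not the study's actual learned models.

```python
# Hypothetical two-stage decoder sketch. The real system in the Nature
# paper learned these mappings from intracranial recordings; here we use
# random linear maps purely to show the shape of the pipeline.
import numpy as np

rng = np.random.default_rng(0)

# Stage 1: neural features (e.g., per-electrode activity) ->
# articulatory kinematics (tongue, lip, jaw trajectories).
# 32 electrodes -> 12 kinematic traces (both sizes invented).
W_articulatory = rng.standard_normal((32, 12)) * 0.1

# Stage 2: articulatory kinematics -> acoustic features that a
# vocoder could turn into audio. 12 traces -> 25 spectral features.
W_acoustic = rng.standard_normal((12, 25)) * 0.1

def decode(neural_frames: np.ndarray) -> np.ndarray:
    """Map a (T, 32) array of neural frames to (T, 25) acoustic frames."""
    kinematics = neural_frames @ W_articulatory  # brain -> vocal-tract movement
    acoustics = kinematics @ W_acoustic          # movement -> sound parameters
    return acoustics

frames = rng.standard_normal((100, 32))  # 100 time steps of neural activity
audio_features = decode(frames)
print(audio_features.shape)  # (100, 25)
```

The point of the intermediate articulatory stage, as the article describes it, is that the brain's speech areas encode movements (the tap of the tongue, the narrowing of the lips) rather than sounds directly, so decoding movement first is the more natural fit for the recorded signals.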
Previous implant-based communication systems have produced about eight words a minute. The new program generates about 150 words a minute, the pace of natural speech.
The researchers also found that a synthesized voice system based on one person’s brain activity could be used, and adapted, by someone else — an indication that off-the-shelf virtual systems could be available one day.
The team is planning to move to clinical trials to further test the system. The biggest clinical challenge may be finding suitable patients: strokes that disable a person’s speech often also damage or wipe out the areas of the brain that support speech articulation.
This is fascinating research. Congrats to the researchers.
Mike "Mish" Shedlock