A brain implant the size of a postage stamp could restore communication to patients who have lost the ability to speak, by inferring what they are trying to say from their brain signals.
Developed jointly by neuroscientists, neurosurgeons, and engineers at Duke University, the "artificial speech device" translates brain signals into the words its wearer wants to say.
The new technology, described in the journal Nature Communications, may one day allow people who have lost the ability to speak due to a neurological disease to communicate through a brain-computer interface.
“There are many patients who have difficulty speaking as a result of a devastating motor disorder, such as amyotrophic lateral sclerosis,” said Gregory Cogan, a professor of neuroscience at Duke University School of Medicine and one of the project’s principal investigators. “But the current tools meant to help these patients communicate are usually far too slow and cumbersome.”
Imagine listening to an audiobook at half speed: that is the fastest speech decoding rate currently available, about 78 words per minute. People, meanwhile, speak at roughly 150 words per minute. The gap between normal and decoded speech stems partly from the fact that relatively few brain-activity sensors can fit on the thin, flexible film that is laid over the surface of the brain, and fewer sensors yield less decodable information.
Cogan and his team wanted to overcome this limitation, so they teamed up with Jonathan Viventi of the Duke Institute for Brain Sciences, whose biomedical engineering laboratory specializes in high-density, ultra-thin, flexible brain sensors. For the current project, Viventi and his colleagues packed 256 microscopic brain sensors onto a stamp-sized sheet of medical-grade plastic. Neurons only a grain of sand apart can show markedly different activity patterns when coordinating speech, so decoding intended speech requires sensors dense enough to pick up the signals of neighboring patches of neurons separately.
Modest but encouraging results
After fabricating the new implant, Cogan and Viventi contacted Duke University Hospital neurosurgeons Derek Southwell, Nandan Lad, and Allan Friedman, and with their help recruited four patients to test the device. For the experiment, the implant had to be placed temporarily on the surface of each patient’s brain during surgery for another condition, such as Parkinson’s disease or a brain tumor, so the research team had only a very limited window in the operating room. “It was like a pit crew changing tires in a Formula 1 race,” says Cogan. “We obviously didn’t want to prolong the surgery, so we had to complete the recording in fifteen minutes, including setup and removal. As soon as the surgeon and the medical team said, ‘Go!’, we got straight to work, and the patient performed the task.”
The task itself was simple speech repetition: a series of nonsense words was played to participants – e.g. “ava”, “kug”, “vip” – and they had to repeat each one aloud while the device recorded activity in the speech motor cortex, which coordinates nearly 100 muscles that move the lips, tongue, jaw, and larynx.
Next, Suseendrakumar Duraivel, a biomedical engineering doctoral student at Duke University and the article’s first author, fed the neural activity patterns and the corresponding spoken sounds into a machine-learning algorithm to see how accurately it could predict which speech sound was being uttered from the neural activity alone. For some sounds and participants, such as the “g” in the word “gak,” the decoding algorithm guessed correctly 84% of the time when the sound fell at the beginning of a three-letter nonsense word.
Prediction accuracy dropped, however, for sounds in the middle or at the end of a word. The algorithm also struggled with similar-sounding consonants such as “p” and “b”. Overall, the decoder was accurate 40% of the time. That may seem a fairly modest result, but whereas other brain-to-speech decoding algorithms require hours or days of audio data, Duraivel’s decoder worked from only 90 seconds of spoken material from the 15-minute test, which is an impressive feat.
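To make the decoding idea concrete, here is a minimal illustrative sketch, not the authors' actual method: it classifies phonemes from neural-activity feature vectors using a simple nearest-centroid decoder on synthetic data. The phoneme labels, noise levels, and sample counts are all assumptions; only the 256-channel count comes from the article.

```python
# Hypothetical sketch of phoneme decoding from neural features.
# All data here is synthetic; this is NOT the study's pipeline.
import numpy as np

rng = np.random.default_rng(0)
phonemes = ["g", "p", "b", "k"]   # assumed phoneme labels
n_channels = 256                  # sensor count mentioned in the article
samples_per_phoneme = 30

# Synthetic "neural activity": each phoneme gets a distinct mean pattern,
# and individual trials are that pattern plus noise.
means = {p: rng.normal(0.0, 1.0, n_channels) for p in phonemes}
X, y = [], []
for p in phonemes:
    for _ in range(samples_per_phoneme):
        X.append(means[p] + rng.normal(0.0, 0.8, n_channels))
        y.append(p)
X, y = np.array(X), np.array(y)

# Random train/test split, then nearest-centroid classification.
idx = rng.permutation(len(y))
split = int(0.75 * len(y))
train, test = idx[:split], idx[split:]
centroids = {p: X[train][y[train] == p].mean(axis=0) for p in phonemes}

def decode(x):
    # Predict the phoneme whose training centroid is closest in Euclidean distance.
    return min(phonemes, key=lambda p: np.linalg.norm(x - centroids[p]))

preds = [decode(x) for x in X[test]]
accuracy = float(np.mean([pr == tr for pr, tr in zip(preds, y[test])]))
print(f"decoding accuracy: {accuracy:.0%}")
```

Real decoders for this problem are far more sophisticated (the study's per-phoneme accuracies ranged from 84% for the easiest cases down to an overall 40%), but the core loop is the same: map a pattern of activity across many channels to the most likely speech sound.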
Duraivel and his mentors are now excited about building a wireless version of the device, thanks to a recent $4.2 million grant from the National Institutes of Health. “We are now developing the same kind of brain-recording device, but in a wireless design,” Cogan explained. “With these devices, patients could move around freely without being tethered to an outlet, which is a very exciting prospect.”
Although the results are encouraging, Viventi and Cogan’s speech prosthetic will not be on shelves any time soon. “We’re still at the point where the speech we produce is much slower than natural speech, but we can see the path to getting there,” Viventi said of the technology in Duke’s magazine.