Brain Implant’s Speech Decoding Feat Offers Hope to Neurological Disorder Patients


New York: In a groundbreaking development, a team of scientists has introduced an innovative brain implant that can decode signals from the brain's speech center, predicting what an individual intends to say. The breakthrough, documented in the journal Nature Communications, offers hope to people unable to communicate due to neurological disorders such as ALS (amyotrophic lateral sclerosis) or locked-in syndrome, by enabling communication through a brain-computer interface.

Lead researcher Gregory Cogan, a Professor of Neurology at Duke University’s School of Medicine, emphasized the importance of such advancements, stating, “There are many patients who suffer from debilitating motor disorders that can impair their ability to speak. The current tools available for communication are often slow and cumbersome.”

The innovative device is a small, postage stamp-sized piece of flexible, medical-grade plastic embedded with an impressive array of 256 microscopic brain sensors. To ensure precise predictions about intended speech, it is essential to distinguish signals from neighboring brain cells, as even neurons positioned closely can exhibit vastly different activity patterns during speech coordination.

The experiment required temporarily placing the device in four patients who were undergoing brain surgery for other conditions, such as treatment of Parkinson's disease or tumor removal. Participants performed a simple listen-and-repeat task: they heard a series of nonsense words and then spoke each one aloud. Meanwhile, the device recorded activity from each patient's speech motor cortex, the brain region that coordinates the nearly 100 muscles that move the lips, tongue, jaw, and larynx.

Subsequently, the researchers employed a machine learning algorithm to analyze the neural and speech data from the surgery suite and assess its accuracy in predicting the sounds based solely on brain activity recordings.
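To give a sense of what such a decoding step involves, here is a deliberately simplified sketch. The study's actual algorithm is not described in this article, so everything below is a hypothetical stand-in: it simulates 256-channel neural feature vectors (matching the implant's 256 sensors) and uses a basic nearest-centroid classifier to map each recording window to a phoneme label.

```python
# Illustrative sketch only -- NOT the study's actual decoder.
# Simulated 256-channel neural windows are classified into phonemes
# with a simple nearest-centroid rule.
import numpy as np

rng = np.random.default_rng(0)

PHONEMES = ["g", "a", "k"]       # e.g. the three sounds of the nonsense word "gak"
N_CHANNELS = 256                 # matches the implant's 256 sensors
TRIALS_PER_PHONEME = 30

# Simulate training data: each phoneme gets its own mean activity pattern.
centers = {p: rng.normal(0, 1, N_CHANNELS) for p in PHONEMES}
X_train = np.vstack([
    centers[p] + rng.normal(0, 0.5, (TRIALS_PER_PHONEME, N_CHANNELS))
    for p in PHONEMES
])
y_train = np.repeat(PHONEMES, TRIALS_PER_PHONEME)

# "Training" here is just computing one centroid per phoneme.
centroids = {p: X_train[y_train == p].mean(axis=0) for p in PHONEMES}

def decode(window: np.ndarray) -> str:
    """Predict the phoneme whose centroid is nearest to this neural window."""
    return min(centroids, key=lambda p: np.linalg.norm(window - centroids[p]))

# Decode a held-out simulated trial.
test_window = centers["g"] + rng.normal(0, 0.5, N_CHANNELS)
print(decode(test_window))
```

In a real system the features would come from recorded neural data rather than simulation, and the classifier would be far more sophisticated, but the overall shape of the problem — many-channel brain activity in, predicted speech sound out — is the same.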

The results revealed that for certain sounds and participants, the decoder achieved an impressive 84 percent accuracy when predicting the initial sound in a sequence of three sounds forming a given nonsense word, such as /g/ in “gak.” However, accuracy dropped when predicting sounds in the middle or at the end of a nonsense word, particularly when the sounds were similar, as in the case of /p/ and /b/.

Overall, the decoder achieved a 40 percent accuracy rate. While this might appear modest, it is a significant achievement, considering that similar brain-to-speech technologies typically require hours or even days of data for their predictions. Remarkably, the speech decoding algorithm achieved this level of accuracy with only 90 seconds of spoken data from a 15-minute test.

Despite this encouraging progress, the team emphasizes that there is still a long journey ahead before the speech prosthetic becomes readily available. Further research and refinement are necessary before this technology can reach the public.
