Scientists use AI to turn brain signals into speech


A recent research study could give a voice to those who no longer have one. Scientists used electrodes and artificial intelligence to create a device capable of translating brain signals into speech. This technology could help restore the ability to speak in people with brain damage or neurological disorders such as epilepsy, Alzheimer’s disease, multiple sclerosis, Parkinson’s disease and more.

The new system, developed in the laboratory of Edward Chang, MD, shows that it is possible to create a synthesized version of a person’s voice that can be controlled by the activity of the voice centers in their brain. In the future, say the authors, this approach could not only restore fluent communication in people with severe speech impairments, but could also replicate some of the musicality of the human voice that conveys the emotions and personality of the speaker.

The study recorded the brain activity of five epilepsy patients who had previously received brain implants as part of their treatment. When you speak, your brain sends signals from the motor cortex to the muscles of the jaw, lips, and larynx to coordinate their movements and produce sound. The patients were asked to read a list of sentences aloud while electrodes on the surface of the brain recorded their neural activity; an AI algorithm then decoded those signals and translated them into words spoken by a computer.

“For the first time, this study demonstrates that we can generate whole spoken sentences based on an individual’s brain activity,” said Chang, professor of neurological surgery and member of the UCSF Weill Institute for Neuroscience. “This is exhilarating proof of principle that with technology already at hand, we should be able to build a clinically viable device in patients with speech impairments.”

The system works in two stages. First, it maps the recorded brain signals to the movements of the lips, jaw, tongue, and larynx that people use to produce speech. A machine learning model then converts those predicted movements into synthesized words and sentences. Earlier brain-computer interfaces, which helped paralyzed people spell out words with their brain activity, managed a rate of just eight words per minute; the new technology can reach up to 150 words per minute, close to the pace of natural speech.
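The two-stage pipeline described above can be sketched in code. This is a toy illustration only: the actual system used recurrent neural networks trained on each patient's electrocorticography recordings, while here random linear maps, made-up feature counts, and simulated data stand in for every component to show the data flow.

```python
import numpy as np

rng = np.random.default_rng(0)

# All dimensions below are hypothetical, chosen only for illustration.
N_ELECTRODES = 256   # neural channels recorded from the cortex
N_ARTICULATORS = 33  # kinematic features of lips, jaw, tongue, larynx
N_ACOUSTIC = 32      # acoustic features driving a speech synthesizer

# Stage 1: map neural activity at each time step to articulator movements.
W_kinematics = rng.normal(size=(N_ELECTRODES, N_ARTICULATORS))

# Stage 2: map articulator movements to acoustic features for synthesis.
W_acoustics = rng.normal(size=(N_ARTICULATORS, N_ACOUSTIC))

def decode(neural_signals: np.ndarray) -> np.ndarray:
    """Two-stage decode: neural activity -> kinematics -> acoustics."""
    kinematics = neural_signals @ W_kinematics   # stage 1
    acoustics = kinematics @ W_acoustics         # stage 2
    return acoustics

# One second of simulated neural data sampled at 200 Hz.
ecog = rng.normal(size=(200, N_ELECTRODES))
features = decode(ecog)
print(features.shape)  # (200, 32): one acoustic feature vector per time step
```

The key design point the sketch preserves is the intermediate articulatory representation: decoding movements first, then sound, proved easier than mapping brain activity directly to audio.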

Results varied depending on the number of options listeners had to choose from, but on average they correctly identified 70% of the words. When given 25 options per word, listeners got 69% of the words right; with 50 options, accuracy fell to 47%.

This technology could help people who have lost the ability to communicate due to stroke or other illnesses talk with others again. There are concerns, however, that it could function as a “mind-reading device” and compromise people’s private thoughts. Scientists say, though, that we are still a long way from being able to accurately decode imagined speech.


About Wilhelmina Go
