Paralysed woman uses AI to speak again

Image: Ann sitting in her wheelchair next to a TV monitor, using a laptop

A severely paralysed woman has been able to speak through a digital avatar that translates her brain signals into speech and facial expressions.

Until now, non-verbal people have had to rely on slow speech synthesisers, spelling out sentences word by word with eye-tracking devices or small facial movements.

But a breakthrough in brain-computer interfaces (BCIs) is now helping people who have lost the ability to speak following strokes or amyotrophic lateral sclerosis (ALS).

The new technology uses small electrodes implanted on the surface of the brain to detect electrical activity in the region that controls speech and facial expressions. The signals are decoded into a digital avatar’s speech and facial movements, such as smiling, surprise or frowning.

Professor Edward Chang, who led the study at the University of California, San Francisco (UCSF), told The Guardian: “Our goal is to restore a full, embodied way of communicating, which is really the most natural way for us to talk with others.

“These advancements bring us much closer to making this a real solution for patients.”

Ann, 47, was paralysed after suffering a brainstem stroke 18 years ago, leaving her unable to talk or type.

To communicate she uses movement-tracking technology, but the process is slow and laborious: she can select only up to 14 words per minute.

A paper-thin rectangle of 253 electrodes was implanted on the part of her brain that controls verbal communication. The electrodes picked up signals from the brain cells which, before Ann’s stroke, controlled the muscles in her tongue, larynx, jaw and face.

Once the implant was in place, Ann worked with Chang’s team to train the system’s AI algorithm to recognise her unique brain signals for speech sounds by repeating various phrases.

The computer learned to recognise 39 distinctive speech sounds, and a ChatGPT-style language model turned the signals into sentences. These were used to animate an avatar whose voice was modelled on Ann’s own before her stroke, recreated from a recording of a speech she gave at her wedding.
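For readers curious about the mechanics, the sketch below shows, in a deliberately simplified form, how such a two-stage pipeline fits together: a classifier assigns each window of brain activity to a speech sound, and a language step groups those sounds into words. Every name, number and data value here is a toy stand-in of ours, not the UCSF team’s actual code or phoneme set.

    import random

    random.seed(0)

    # A tiny, hypothetical phoneme set; the real system distinguished 39 sounds.
    PHONEMES = ["HH", "AH", "L", "OW", "_"]  # "_" stands for silence

    # Fixed "template" feature vectors, one per phoneme. Real features would be
    # high-dimensional recordings from the 253-electrode array.
    TEMPLATES = {p: [random.random() for _ in range(4)] for p in PHONEMES}

    def classify_window(features):
        """Nearest-template classifier standing in for a trained neural network."""
        def dist(a, b):
            return sum((x - y) ** 2 for x, y in zip(a, b))
        return min(PHONEMES, key=lambda p: dist(TEMPLATES[p], features))

    # A lookup table standing in for the language model that assembles
    # recognised sounds into words and sentences.
    PHONEME_TO_WORD = {("HH", "AH", "L", "OW"): "hello"}

    def decode(windows):
        """Classify each window of brain activity, then group sounds into words."""
        sounds = [classify_window(w) for w in windows]
        words, current = [], []
        for s in sounds + ["_"]:  # a trailing silence flushes the last word
            if s == "_":
                if current:
                    words.append(PHONEME_TO_WORD.get(tuple(current), "<unknown>"))
                    current = []
            else:
                current.append(s)
        return " ".join(words)

    # Simulated input: noisy copies of the templates for the sounds in "hello".
    windows = [[x + random.gauss(0, 0.01) for x in TEMPLATES[p]]
               for p in ["HH", "AH", "L", "OW"]]
    print(decode(windows))  # prints: hello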

There were some blips along the way: 28% of the words across 500 phrases were decoded incorrectly, and the speech, at 78 words per minute, was slower than the 110-150 words per minute of a typical natural conversation.

But the scientists said that, even with these imperfections, the technology is ready for use by patients.

Professor Nick Ramsey, a neuroscientist at the University of Utrecht in the Netherlands, who was not involved in the research, said: “This is quite a jump from previous results. We’re at a tipping point.”

Dr David Moses, an assistant professor in neurological surgery at UCSF and co-author of the research, said: “Giving people the ability to freely control their own computers and phones with this technology would have profound effects on their independence and social interactions.”

A crucial next step is to create a wireless version of the brain-computer interface that could be implanted beneath the skull.

