Researchers at the University of California San Francisco and UC Berkeley are developing brain-computer technology to help people whose speech has been impaired communicate more naturally through a digital avatar. The system, the first to synthesise both speech and facial expressions from brain signals, decodes brain activity into text at nearly 80 words per minute. By implanting electrodes in the brain, researchers can intercept signals intended for the speech muscles and train AI algorithms to recognise the patterns of brain activity associated with speech, as ALEX SILVA explains.