Researchers at New York University have demonstrated that the human brain processes information using two distinct networks – a finding with significant implications for the design of AI systems, particularly speech translation tools.
“Our results show there are at least two brain networks that are active when we are manipulating speech and language information in our minds,” Bijan Pesaran told NYU News. Pesaran is an associate professor at New York University’s Center for Neural Science and the senior author of the study “Manipulating stored phonological input during verbal working memory” published in the journal Nature Neuroscience.
Past studies had suggested that a single, “central executive” network managed the manipulation of information stored in human memory. Pesaran said the distinction between one neural network and two is important because currently used AI systems that replicate human speech assume the computations involved in verbal memory are performed by a single brain network.
“Artificial intelligence is gradually becoming more human-like. By better understanding intelligence in the human brain, we can suggest ways to improve AI systems. Our work indicates that AI systems with multiple working memory networks are needed,” Pesaran added.
The researchers measured the neural activity of their test subjects’ brains by asking them to transform what they heard into what they needed to say. The results showed that verbal memory involves two brain networks – one guides the utterances the test subjects make, while the other handles how sounds are transformed into speech. The NYU research team said that translating what you hear in one language and speaking it in another similarly involves both networks. The researchers added that current AIs have trouble learning languages, much as people with impaired verbal memory find it difficult to learn new languages.