Advanced brain recording techniques have revealed how neurons in the human brain work together to produce speech.

The recordings provide a detailed map of how people think about what words they want to say and then speak them aloud, researchers report in the Jan. 31 issue of the journal Nature.

Specifically, the map shows how speech sounds such as consonants and vowels are represented in the brain well before they are spoken, and how the brain strings them together during language production.

“Although speaking usually seems easy, our brains perform many complex cognitive steps in the production of natural speech — including coming up with the words we want to say, planning the articulatory movements and producing our intended vocalizations,” said senior study author Dr. Ziv Williams, an associate professor in neurosurgery at Massachusetts General Hospital in Boston.

“Our brains perform these feats surprisingly fast — about three words per second in natural speech — with remarkably few errors,” Williams added in a hospital news release. “Yet, how we precisely achieve this feat has remained a mystery.”

The findings could form the basis of sophisticated brain-machine interfaces capable of producing synthetic speech, the researchers said. They also could provide insight into a wide array of disorders that hamper or prevent speech.

“Disruptions in the speech and language networks are observed in a wide variety of neurological disorders — including stroke, traumatic brain injury, tumors, neurodegenerative disorders, neurodevelopmental disorders and more,” said study co-author Dr. Arjun Khanna, a neurosurgeon at Stanford University.

“Our hope is that a better understanding of the basic neural circuitry that enables speech and language will pave the way for the development of treatments for these disorders,” Khanna added.

The study relied on a cutting-edge technology called Neuropixels probes, which researchers used to record the activity of individual neurons in the prefrontal cortex region of the brain.

The research team identified cells involved in language production that could underlie the ability to speak. They also found separate groups of neurons in the brain dedicated to speaking and listening.

“These probes are remarkable — they are smaller than the width of a human hair, yet they also have hundreds of channels that are capable of simultaneously recording the activity of dozens or even hundreds of individual neurons,” Williams said.

“Use of these probes can therefore offer unprecedented new insights into how neurons in humans collectively act and how they work together to produce complex human behaviors such as language,” he added.

As a result, researchers could observe how neurons in the brain fire to create the basic elements involved in constructing spoken words.

For example, the consonant sound "da," produced by touching the tongue to the hard palate behind the teeth, is needed to say the word "dog."

Researchers found that certain neurons become active before "da" is spoken aloud. Other neurons then fire to combine "da" with other sounds, creating the one-syllable word "dog."

Using this speech mapping, scientists can predict the combination of consonants and vowels a person plans to say before the words are actually spoken.

Researchers next plan to focus on more complex language processes that will allow them to investigate how people choose the words they intend to say, and how the brain assembles those words into sentences.

More information

The National Institutes of Health has more about communication disorders.

SOURCE: Mass General Brigham, news release, Jan. 31, 2024