Brain Scans Enable Scientists to Read Minds 

Brain scans from a recent study enabled scientists to decode what people were thinking. Find out how the researchers used artificial intelligence to read minds, and how this new technology might help people who can’t speak or even change how we all communicate.

Gordon Lightfoot died this week. He was a role model for me, so a bunch of his songs are rolling around in my head these days.

One of his biggest hits came out when I was just ten years old. It hit number one on the Canadian Singles Chart and number five on the Billboard Hot 100 in the US. 

The song was called If You Could Read My Mind, and there’s another reason that particular song is playing in my brain right now. This other source of my earworm has more to do with science than music, or even relationships.

Human Language Processing Using Functional MRI

Dr. Alexander Huth is an assistant professor of neuroscience and computer science at the University of Texas at Austin. He’s been working toward bringing mind reading closer to reality by using brain scans.

For the past ten years, he’s been studying human language processing with functional magnetic resonance imaging (fMRI). In the process, he’s become one of the world’s leading experts in cognitive neuroscience.

Professor Huth has also won several teaching awards for his cognitive neuroscience courses and classes on neuroimaging and data analysis. In his research work, he looks for ways to use fMRI and similar neuroimaging methods to understand how our brains process language, perception and cognition. 


The May issue of Nature Neuroscience includes a paper written by Professor Huth and his colleague, Jerry Tang from the university’s department of computer science. In it, they describe the results of a study in which the researchers used fMRI brain scans to produce streams of words directly from people’s thoughts.


Subjects Listened to Stories from Podcasts

The team had three subjects lie inside an fMRI machine for at least 16 hours. They listened to stories from sources like a podcast called The Moth.

The Moth is a non-profit storytelling organization. They offer podcasts, live storytelling events and workshops on the art of storytelling.

While the volunteers listened to these stories, the fMRI brain scans monitored their cerebral blood flow. Scientists can use circulation changes in the brain as broad indicators of brain activity.

Matching Brain Activities With Words and Ideas in Stories

Having collected this neural data, the team found ways to match brain scan patterns with story content. They accomplished this using the computer language model Generative Pre-trained Transformer (GPT), a forerunner of the popular ChatGPT chatbot (forgive me, natural language processor).

The team’s GPT-based language model helped them develop a brain activity decoder for the scans by relating neural activity patterns to the most likely sequences of words. Once they understood the brain activity patterns that matched words from the stories, they worked on reversing the process.

The reversal involved using the brain patterns to predict new words, phrases or ideas. Through a series of gradual iterations, the decoding software came up with ways to rank the likelihood of the next word in a series, a bit like the way our smartphones try to finish our sentences for us when we’re texting.
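
To make that decoding loop concrete, here’s a minimal sketch in Python. Everything in it is a stand-in: dummy_lm and dummy_encoding_model are invented placeholders for the study’s actual GPT language model and per-subject encoding model, and the beam search is a generic version of the candidate-ranking idea described above, not the researchers’ published code.

```python
# Illustrative sketch of scan-guided word decoding, assuming toy models.
import numpy as np

rng = np.random.default_rng(0)
VOCAB = ["i", "went", "upstairs", "to", "my", "bedroom", "bed", "room"]

def dummy_lm(prefix):
    """Stand-in language model: a probability for each candidate next
    word given the words decoded so far (the real system used GPT)."""
    logits = rng.normal(size=len(VOCAB))
    probs = np.exp(logits) / np.exp(logits).sum()
    return dict(zip(VOCAB, probs))

def dummy_encoding_model(words):
    """Stand-in encoding model: maps a word sequence to a predicted
    brain-activity vector (the real one is fitted to each subject)."""
    vec = np.zeros(16)
    for i, w in enumerate(words):
        vec[hash(w) % 16] += 1.0 / (i + 1)
    return vec

def score(words, observed_activity):
    """Higher when the predicted activity better matches the scan."""
    predicted = dummy_encoding_model(words)
    return -np.linalg.norm(predicted - observed_activity)

def decode(observed_activity, n_steps=5, beam_width=3):
    """Beam search: keep the few word sequences whose predicted brain
    activity best matches what the scanner actually recorded."""
    beams = [[]]
    for _ in range(n_steps):
        candidates = []
        for seq in beams:
            for word, p in dummy_lm(seq).items():
                new_seq = seq + [word]
                # Combine language-model plausibility with scan fit.
                candidates.append(
                    (score(new_seq, observed_activity) + np.log(p), new_seq))
        candidates.sort(key=lambda c: c[0], reverse=True)
        beams = [seq for _, seq in candidates[:beam_width]]
    return " ".join(beams[0])

observed = dummy_encoding_model(["i", "went", "to", "my", "room"])
print(decode(observed))
```

The key design idea is the division of labour: the language model keeps the guesses grammatical and plausible, while the encoding model keeps them consistent with what the scanner actually measured.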


Decoder Gets Words Wrong but Gets the Ideas

Strictly speaking, the system gets a lot of words wrong. In fact, if our standard is a verbatim transcript of a person’s thoughts, the error rate is between 92 and 94 percent. 

Even so, that’s only if we take the content literally. “It definitely doesn’t nail every word,” Professor Huth explained. “But that doesn’t account for how it paraphrases things. It gets the ideas.”

For example, “That night, I went upstairs to what had been my bedroom” came out as, “We got back to my dorm room. I had no idea where my bed was.” The decoder understands the gist of the idea even though it phrases it differently and loses some details in “translation.”
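
To see why a verbatim standard produces such high numbers, here’s a short sketch that scores that example with word error rate (WER), the standard edit-distance metric for transcripts. I’m assuming the conventional WER definition here; the paper’s exact scoring may differ.

```python
# Word error rate: edit distance between the actual and decoded word
# sequences, divided by the length of the actual one.

def word_error_rate(reference, hypothesis):
    ref, hyp = reference.lower().split(), hypothesis.lower().split()
    # Dynamic-programming edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / len(ref)

actual = "that night i went upstairs to what had been my bedroom"
decoded = "we got back to my dorm room i had no idea where my bed was"
# Note: WER can exceed 100% when the decoded text inserts extra words.
print(f"WER: {word_error_rate(actual, decoded):.0%}")
```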

Decoder Has Trouble with Pronouns, No One Knows Why

One of the decoder’s biggest challenges is getting pronouns right. As Professor Huth put it, “It doesn’t know who is doing what to whom.” The researchers aren’t sure why the decoder struggles with that so much.

On the positive side, there were two scenarios where the decoder was especially successful. One was when people silently recited a story they already knew in their heads.

The decoder also did well when subjects watched silent movies. So, although the decoder doesn’t come up with identical words, under the right conditions, it successfully reads thoughts from brain scans.

“We’re Getting at the Idea of the Thing”

“It meant that what we’re getting at with this decoder, it’s not low-level language stuff,” Professor Huth said. “We’re getting at the idea of the thing.” In a sense, the decoder knows what we’re thinking, even if it has trouble putting brain scan data into words.

People have always had mixed feelings about the notion of mind reading. Like Gordon Lightfoot, we’ve all wished we knew what a loved one was really thinking from time to time.

On the other hand, we also want to keep our thoughts to ourselves. We have enough privacy issues with computers already, without brain scans letting them invade our minds directly.

“We Know That This Could Come Off as Creepy”

“We know that this could come off as creepy,” Professor Huth conceded. “It’s weird that we can put people in the scanner and read out what they’re kind of thinking.” Even so, the process isn’t as invasive as it might seem.

For one thing, the decoders only work for the person they were trained on. The brain scan patterns they learn are distinctive to each individual, so scanners or decoders can’t just tap into some random stranger’s ideas.

More importantly, the subjects have to cooperate for the technology to work. If subjects want to resist, all they have to do is start thinking about something else to stop the system from eavesdropping.

Hope for People Who Can’t Speak

This new technology offers hope for people who can’t speak in typical ways. For example, people with hearing impairments might someday be able to use this technology to supplement or even replace sign language.

Also, some people are born with a condition called congenital vocal cord paralysis. Their cognition and hearing are fine, but they can’t use their throats to make sounds in the usual way. The same can be true for stroke survivors.

It’s possible that a more advanced decoder could use artificial intelligence to translate thoughts from a brain scan into language. Conceivably, in the distant future, a more advanced form of this technology might replace everyone’s speech with a more accessible and inclusive way to communicate.

Practical Applications Are a Long Way Off

Those practical applications are a long way off, but this research is helpful in other ways as well. Jonathan Gottschall famously called humanity “the storytelling animal.”

Understanding how our minds form language for communication sheds light on the power of storytelling. Stories are the way we all understand the world around us and our place in it.

And Another Thing…

The more we can learn about the role storytelling plays in how we perceive, understand and relate to our world and the universe beyond, the better. It’s our last best hope for creating a better world.

The researchers’ next step is to test their decoders with another, more portable type of brain scanner that uses functional near-infrared spectroscopy (fNIRS). Professor Huth wrapped up the conversation saying, “fNIRS measures where there’s more or less blood flow in the brain at different points in time, which, it turns out, is exactly the same kind of signal that fMRI is measuring. So, our exact kind of approach should translate to fNIRS.”

We always have more to learn if we dare to know.


Learn more:

Brain Activity Decoder Can Reveal Stories in People’s Minds
Semantic reconstruction of continuous language from non-invasive brain recordings

Alcoholism’s Brain Pathway Located by MRI Scans
Brain Cells Have Mysterious Self-Organizing Ability
Can a Contest Solve the Mind/Brain Puzzle?
