How AI Models Are Helping to Understand — and Control — the Brain

Martin Schrimpf wants to use AI to learn more about how human brains work.
Samuel Rubio for Quanta Magazine
For Martin Schrimpf, the promise of artificial intelligence is not in the tasks it can accomplish. It’s in what AI might reveal about human intelligence.
He is working to build a “digital twin” of the brain using artificial neural networks — AI models loosely inspired by how neurons communicate with one another.
That end goal sounds almost ludicrously grand, but his approach is straightforward. First, he and his colleagues test people on tasks related to language or vision. Then they compare the observed behavior or brain activity to results from AI models built to do the same things. Finally, they use the data to fine-tune their models to create increasingly humanlike AI.
The process works best with more data and more models, so Schrimpf built an open-source platform called Brain-Score that contains nearly a hundred human neural and behavioral data sets. Researchers have tested thousands of AI models against the human data since Schrimpf first developed the platform in 2017, back when he was still in graduate school.
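In outline, a Brain-Score-style comparison shows the same stimuli to a model and to the brain, then asks how well the model's internal activations predict the recorded neural responses. The sketch below illustrates one common version of that step — cross-validated linear regression scored by correlation. The array shapes and the ridge-regression choice are illustrative assumptions, not Brain-Score's actual code.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold
from scipy.stats import pearsonr

# Hypothetical data: activations from one model layer and recorded
# neural responses to the same 200 stimuli (shapes are illustrative).
rng = np.random.default_rng(0)
model_features = rng.normal(size=(200, 512))   # stimuli x model units
neural_responses = rng.normal(size=(200, 96))  # stimuli x recording sites

# Cross-validated linear mapping: can the model's features predict
# each recording site's response to held-out stimuli?
scores = []
for train, test in KFold(n_splits=5, shuffle=True, random_state=0).split(model_features):
    mapping = Ridge(alpha=1.0).fit(model_features[train], neural_responses[train])
    predicted = mapping.predict(model_features[test])
    # Correlate prediction with measurement, site by site.
    site_scores = [pearsonr(predicted[:, i], neural_responses[test][:, i])[0]
                   for i in range(neural_responses.shape[1])]
    scores.append(np.mean(site_scores))

print(f"mean predictivity across folds: {np.mean(scores):.3f}")
```

In practice, scores like this are compared against a noise ceiling — how well one set of brain measurements predicts another — to judge how close a model comes to the best achievable fit.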
Schrimpf originally planned to work in the tech industry, but after co-founding a pair of software startups during his early academic career, he felt unfulfilled. “I thought I could ask neuroscientists how the brain works, and that would help me build better AI,” he said. “But I realized there’s a huge opportunity in the opposite direction: prototyping ideas in silico [on a computer] and using AI models to explain the brain.”
He moved from his native Germany to the U.S. to get a doctorate in brain and cognitive sciences at the Massachusetts Institute of Technology. In 2023, he moved back to Europe to start the NeuroAI Lab at the Swiss Federal Institute of Technology Lausanne.

Schrimpf’s experiments have shown that the brain’s visual and language systems process information in a similar way.
Samuel Rubio for Quanta Magazine
That year he also co-authored a study showcasing how AI models could transform neuroscience. Schrimpf and his colleagues trained a model to generate sentences that, when read, would activate or suppress neural activity in the reader’s brain. When they tested it with human subjects, brain scans confirmed that the AI-generated sentences really did alter neural activity in the way the model predicted. The study marked the first time that researchers in any field had exerted noninvasive control over high-level brain activity. Using this approach, scientists could potentially use AI-generated stimuli to help treat depression, dyslexia and other brain-related conditions.
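In spirit, such "drive" and "suppress" sentences can be found by searching candidate sentences with an encoding model that maps sentence features to a predicted brain response. Below is a toy sketch of that search; the embed() stand-in and the pre-fitted linear readout are hypothetical placeholders, not the study's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)

def embed(sentence: str) -> np.ndarray:
    # Stand-in for a real sentence encoder (e.g., features from a
    # language model); here just a deterministic pseudo-embedding.
    seed = abs(hash(sentence)) % (2**32)
    return np.random.default_rng(seed).normal(size=128)

# Pretend this readout was fit beforehand on (embedding, brain response) pairs.
readout_weights = rng.normal(size=128)

def predicted_response(sentence: str) -> float:
    # Encoding model: predicted language-network activation for a sentence.
    return float(embed(sentence) @ readout_weights)

candidates = [
    "The committee reviewed the unusual proposal twice.",
    "Green ideas sleep furiously beneath the bridge.",
    "She opened the door.",
]

# Pick the sentences predicted to most drive or suppress activity.
ranked = sorted(candidates, key=predicted_response)
print("predicted suppressor:", ranked[0])
print("predicted driver:   ", ranked[-1])
```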
Quanta spoke with Schrimpf about what artificial neural networks reveal about intelligence, the future of neuroscience, and the ethical considerations of predicting — and influencing — human thought. The interview has been condensed and edited for clarity.
You study vision and language systems in the brain. Why these?
I want to build a model of the brain. Starting with vision was a practical decision: that field had produced the most data in neuroscience, largely because screens are good at showing many stimuli in rapid succession. Moving to language was a way to see whether the techniques we were developing for sensory systems, like vision, would translate.

Schrimpf on the campus of the Swiss Federal Institute of Technology Lausanne.
Samuel Rubio for Quanta Magazine
And did they?
Yes, it seems the language system in humans can be considered an encoder of features, just like the visual system. That might mean the way the brain builds mental representations of words or objects is shared more widely across cognitive systems than we assumed.
It’s famously hard to understand artificial neural networks, so when a model does seem to align with real neural data, how do you know it’s not just a superficial correlation?
We have all the information with these models. It’s just very difficult to parse. We try to let the data speak for itself. You could compare two human brains and find their activity patterns are similar. That’s basically what we’re doing for models with all the neural and behavioral data we have on the Brain-Score list.
If there’s a lot of data, and the models just keep approaching the ceiling — which I think is the situation we’re in for vision and language — then they might not be perfect, but they’re starting to be aligned.
How similar are these AI systems to the brain?
Artificial neural networks have a neuron-level similarity to the neuronal processing units in the brain. They can reflect activity that’s reasonably consistent with the brain and can even mimic human behavior.

Schrimpf holds his doctoral graduation hat from MIT, which includes a 3D model of his brain and other mementos of his time there.
Samuel Rubio for Quanta Magazine
We might never perfectly explain the brain with simplified models. But the much more interesting question to me is how useful the current models already are for brain science. And I think they are much better than most people give them credit for.
Some neuroscientists have said your approach doesn’t account for psychological data. Is that fair?
It’s true, in many ways, we’re throwing out the classic neuroscience approaches. We’re saying, “Let’s get more data and build models where we might not actually understand the internal mechanisms.”
Classical neuroscientists tend to react to our research with a sort of positive disbelief. I think many aren't aware of how good these models already are at mimicking brain function.
I don’t see this as just one approach being correct. They both have their successes and limitations. It’s just different bets on which will be more effective in the end, and I’m betting on this AI modeling approach.
How close are we to a digital twin of the brain? Do you ever expect to see one?
That’s exactly what Brain-Score is trying to quantify! I’m optimistic we can get close, and I hope it only takes a few decades. If we get there, I’d think, “Cool, we did that. Now let’s see what we can do with it.”

Schrimpf has trained AI models to create sentences that precisely alter a reader’s brain activity.
Samuel Rubio for Quanta Magazine
And what would you do with it?
One of our dreams is to generate a font that will help people with dyslexia parse sentences. If we have a model of dyslexia, we can probe it and find changes to the text that make it easier for someone to read.
Or if we had a digital twin of a patient’s brain undergoing treatment for depression, we could use it to identify the most effective therapy. There’s also invasive stimulation — you could ask the model how to directly change the brain state into a less depressive one.
Are there any limits to influencing brain activity with AI?
In areas like decision-making or general memory, we’re still far from influencing neural activity. But if we can accurately model cognition, we should also be able to induce specific perceptual experiences we can measure.
This is an ethical minefield, though. How do you develop AI models that can responsibly influence thought?
We need to work with experts on that, and we’re exploring this as we move toward things that might become products someday. Creating legislative frameworks is critical, but it’s not obvious to policymakers what’s possible even today. There’s already a lot we can do with the brain that doesn’t have any kind of legal framework around it.
I do worry the timelines are moving too fast. As we’re seeing with AI, by the time something comes into public focus, there’s a lot of retroactive work needed to ensure everything is done properly. It seems that whatever society develops, security is an afterthought.

“Artificial neural networks have a neuron-level similarity to the neuronal processing units in the brain,” Schrimpf said. “They can reflect activity that’s reasonably consistent with the brain and can even mimic human behavior.”
Samuel Rubio for Quanta Magazine
You’ve found that AI models trained purely on computational tasks can still closely predict human neural responses. Does this imply that human intelligence is reducible to computation?
If you look at current AI systems, they look pretty darn intelligent. They have flaws, but I would certainly start to call the kinds of reasoning they are able to do intelligent. So, I do think that intelligence is reducible. We have a particular implementation that is biological, but we’re already seeing evidence that it’s not the only implementation.
Does that evidence change what you think it means to be human?
If we accept that human behavior arises from physical processes, then there’s no inherent limitation to building such processes artificially. AI models forgo biochemical synapses and use simple unit-level processing rather than complex cellular machinery. And yet, we’re seeing behavior emerge that is reminiscent of human cognition.
So, I think the intelligence we see in humans is not exclusive to us. It’s a pattern of information processing that can arise elsewhere. Personally, I’m not unsettled by this. I view it as an opportunity to learn more about ourselves. What makes the human experience unique, in my opinion, is not the underlying building blocks but the collection of experiences accumulated over a lifetime.