Computational model discovers new types of neurons hidden in decade-old dataset

In 2014, a team of neuroscientists, including Dr. Earl Miller, the Picower Professor of Neuroscience at MIT, gave macaque monkeys a carefully standardized task: categorize visual dot patterns into one of two groups. As the animals learned, the researchers recorded brain activity, hoping to understand how learning changes neural activity.

Nearly a decade later, Miller — alongside researchers from Dartmouth, including Dr. Anand Pathak and Prof. Richard Granger — gave the same task to a very different subject. It wasn’t a primate at all, but a computational model that the team wired to work like the real brain circuits that control learning and decision-making. Dr. Miller and his colleagues hoped it would produce patterns of neural activity similar to what they observed in the macaques. What they didn’t expect was that the model’s output would point them to something they had missed the first time around.

“We saw some peculiar brain activity in the model,” Miller says. “There was a group of neurons that predicted the wrong answer, yet they kept getting stronger as the model learned. So we went back to the original macaque data, and the same signal was there, hiding in plain sight. It wasn’t a quirk of the model — the monkeys’ brains were doing it too. Even as their performance improved, both the real and simulated brains maintained a reserve of neurons that continued to predict the incorrect answer.”

The new work, published in Nature Communications, gives these overlooked signals a name, incongruent neurons (ICNs), and explores why a primate brain might want to keep alternate options in mind, even when they're not the right ones at the moment.

Beyond identifying a previously unrecognized class of neurons involved in learning, the study shows that the model behaves like a brain and generates realistic brain activity, even without being trained on neural data. The findings could have major implications for testing potential neurological drugs and for using computational models to investigate how cognition emerges and functions.

Built like a brain

Computational models use mathematical equations to express the electrical and chemical activity of neurons. In that sense, the model is “wired” to behave like a brain.
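To make that concrete, here is a minimal sketch of what those equations can look like: a classic leaky integrate-and-fire neuron, written in Python. It is a deliberately simple illustration, not the study's model, and every parameter value below is an assumption chosen for demonstration.

```python
import numpy as np

# Minimal leaky integrate-and-fire neuron (illustrative only; the study's
# corticostriatal model is far more detailed). The membrane voltage v leaks
# back toward rest, climbs under input current, and emits a spike whenever
# it crosses a threshold.
def simulate_lif(input_current, dt=1e-4, tau=0.02, v_rest=-70e-3,
                 v_thresh=-54e-3, v_reset=-80e-3, resistance=1e7):
    v = v_rest
    spike_times = []
    for step, i_in in enumerate(input_current):
        # Euler step of: tau * dv/dt = -(v - v_rest) + R * I
        v += dt * (-(v - v_rest) + resistance * i_in) / tau
        if v >= v_thresh:            # threshold crossing: fire and reset
            spike_times.append(step * dt)
            v = v_reset
    return spike_times

# A constant 2 nA input for one second yields a regular spike train.
current = np.full(10_000, 2e-9)
print(f"{len(simulate_lif(current))} spikes in 1 s")
```

Realistic models stack thousands of such units with far richer dynamics, but the principle is the same: the physiology is written down as equations and integrated forward in time.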

Most existing models fall into one of two camps: those that are biologically accurate and those designed to perform cognitive tasks like learning and decision-making. Biologically detailed models are built to mimic a brain, and they can reproduce physiological activity such as neurons spiking and oscillating. But they don’t typically include the more complex circuitry involved in cognitive tasks like learning or decision-making. 

On the other hand, cognitive models, including the neural networks that run AI, can reliably perform cognitive tasks like learning and categorization, but the underlying architecture is much simpler than a real brain. That means that they can’t tell you how a real brain performs these tasks — they just perform the task using other machinery. 

So, if researchers want to use these models to predict how the brain performs cognitive tasks, they need models that are both built like a brain and able to perform cognitive tasks. That was the gap Miller and his colleagues set out to fill.

In this case, Pathak and Granger built a model of the corticostriatal circuit, a loop connecting the brain's cortex, involved in perception, planning, and memory, with the brain's striatum, which helps select actions and learn from feedback. The circuit is central to decision-making and learning, and it's exactly what the macaque monkeys rely on during the visual categorization task. The corticostriatal circuit is also implicated in disorders ranging from Parkinson's disease to schizophrenia.

If the team could build a model that was biologically realistic and capable of learning, they could begin to understand how that circuit works and what happens when it goes wrong.
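As a rough picture of the kind of loop being described, a toy sketch (not the team's actual model) can wire "cortical" stimulus units to "striatal" action units and let a reward signal, standing in for dopamine-like feedback, adjust the connections during a dot-categorization task. All sizes and rates below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy corticostriatal loop (an illustrative assumption, not the paper's
# model): "cortex" encodes a noisy dot pattern, "striatum" scores the two
# category choices, and a dopamine-like reward signal adjusts the wiring.
n_cortex, n_actions = 20, 2
prototypes = rng.normal(0, 1, (n_actions, n_cortex))  # two dot-pattern categories
weights = rng.normal(0, 0.1, (n_actions, n_cortex))   # cortex -> striatum synapses

for trial in range(500):
    category = int(rng.integers(n_actions))
    cortex = prototypes[category] + rng.normal(0, 0.5, n_cortex)  # noisy stimulus
    striatum = weights @ cortex                   # each action's evidence
    choice = int(np.argmax(striatum))
    reward = 1.0 if choice == category else -1.0  # feedback on the choice
    # Reward-modulated Hebbian update: strengthen (or weaken) the synapses
    # onto the striatal unit that drove the chosen action.
    weights[choice] += 0.05 * reward * cortex

# After learning, accuracy on fresh noisy stimuli should sit well above chance.
correct = sum(
    int(np.argmax(weights @ (prototypes[c] + rng.normal(0, 0.5, n_cortex)))) == c
    for c in rng.integers(n_actions, size=200))
print(f"accuracy after learning: {correct / 200:.0%}")
```

Even this cartoon learns the task from feedback alone, which is the behavior the real circuit, and the team's far more detailed model, has to produce.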

No training, no problem

Crucially, the researchers never fit the model to any neurophysiological data.

“Many models are tested by fitting them to one part of a dataset and seeing if they generalize to the rest,” Miller said. “Our model was zero-trained, which means it never saw any brain data. Instead, we built it to follow the same biological rules as the brain. In this case, any behavior has to come from the structure itself, not from fitting the answers in advance.”

After building the model, the researchers gave it the same visual categorization task the monkeys had performed years earlier and let it run. Only afterward did they compare its internal activity to the original macaque recordings.

If the patterns matched, they could be confident that the behavior emerged from the biological architecture itself.

Monkey see, monkey do

When Miller and his colleagues first looked at the model's neural data, it was, he says, a "wow" moment.

“It was eerily similar to what we saw happening in the macaques’ brains,” he tells Big Think. “The model and the monkeys improved at the same pace; the spikes and waves looked like the monkeys’ spikes and waves. And as both learned, their brains showed more activity in the same relevant areas. All of it suggested that the simulated circuit was capturing something real about how this brain system works.”

The model also produced synchronized brain waves (rhythmic patterns of activity across populations of neurons) that changed with learning. Rather than being added by design, the waves emerged naturally from the model's biologically realistic circuitry and played a functional role in how the model went about the categorization task. For Miller, who has spent much of his career arguing that the rhythms of brain activity are central to cognition and consciousness, that was especially striking.

“The model independently surfaced patterns that decades of experimental data have been pointing to,” he said.
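As an aside on how such rhythms are typically detected (a generic analysis sketch, not the study's method), one can take a population signal and inspect its power spectrum; the synthetic 20 Hz beta-band signal below is an assumption standing in for the model's simulated activity.

```python
import numpy as np

# Generic rhythm check: a population signal's power spectrum reveals any
# dominant oscillation. Here a synthetic ~20 Hz (beta-band) sine plus noise
# stands in for the model's summed activity.
fs = 1000                                    # sampling rate, Hz
t = np.arange(0, 5, 1 / fs)                  # 5 s of "population activity"
rng = np.random.default_rng(1)
signal = np.sin(2 * np.pi * 20 * t) + rng.normal(0, 1, t.size)

power = np.abs(np.fft.rfft(signal)) ** 2
freqs = np.fft.rfftfreq(t.size, 1 / fs)
peak = freqs[np.argmax(power[1:]) + 1]       # skip the DC component
print(f"dominant rhythm near {peak:.1f} Hz") # ~20 Hz
```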

A new class of neurons

Miller used slightly stronger language to describe the second major finding. “It was jaw-dropping,” he recalled.

In the model’s output, the researchers noticed a group of neurons that consistently signaled the wrong response. Instead of fading as learning improved, these neurons grew stronger, and occasionally even nudged the model toward an incorrect decision.

“It’s counterintuitive,” Miller said. “You’d think neurons that signal the wrong pathway would go away with learning.”

When the team went back to the macaque data, they saw the same pattern. “No one had noticed them before, probably because no one was looking for them,” Miller says. “The model itself made a genuine discovery.”

The fact that these signals appeared in both the model and the real-brain data strongly suggests they reflect genuine neural activity, not noise or modeling artifacts. The team gave them a name that highlights their counterintuitive function: incongruent neurons, or ICNs for short.
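To picture how such units might be flagged, note that in a model the wiring is known, so each unit's "driven" category can be compared with the correct answer on every trial. The criterion below is a hypothetical illustration, not the paper's actual analysis.

```python
import numpy as np

def label_units(rates, correct_category, driven_category, min_gap=0.5):
    """Flag congruent vs. incongruent units (hypothetical criterion).

    rates: (n_units, n_trials) firing rates
    correct_category: (n_trials,) correct answer on each trial, 0 or 1
    driven_category: (n_units,) category each unit pushes the decision toward
    """
    labels = []
    for unit_rates, driven in zip(rates, driven_category):
        match = unit_rates[correct_category == driven].mean()     # "right answer" trials
        mismatch = unit_rates[correct_category != driven].mean()  # "wrong answer" trials
        if mismatch - match > min_gap:
            labels.append("incongruent")  # fires for the answer it would get wrong
        elif match - mismatch > min_gap:
            labels.append("congruent")
        else:
            labels.append("unselective")
    return labels

# Tiny synthetic check: both units drive category 0, but unit 1 fires
# when category 1 is correct, so it should come out "incongruent".
rng = np.random.default_rng(0)
answers = rng.integers(2, size=400)
unit0 = 5.0 + 2.0 * (answers == 0) + rng.normal(0, 0.2, 400)
unit1 = 5.0 + 2.0 * (answers == 1) + rng.normal(0, 0.2, 400)
print(label_units(np.stack([unit0, unit1]), answers, driven_category=[0, 0]))
# expected: ['congruent', 'incongruent']
```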

Staying flexible

Miller says more research is needed to understand why ICNs exist, but the working theory is that they keep the brain flexible in changing circumstances. In a world where the rules change, and the wrong answer sometimes becomes the right one, it pays to keep alternative options in mind. ICNs might help the brain avoid locking into rigid habits when a situation changes suddenly, letting us explore alternatives, update decisions, and preserve the option to choose differently.

It’s a bit like driving the same route to work every day. You know it by heart, could do it with your eyes closed. But if the road is suddenly closed, you don’t just turn around and drive home. Instead, you call upon alternative routes in the back of your mind (or on your smartphone).

I used that driving metaphor as a way to ask Miller why the results really mattered — it seems like common sense that our brains don’t just forget about other options, even if they’re “incorrect” at the moment. He reminded me that much of neuroscience isn’t about “discovering” that the brain can do something, but rather about understanding how it does it.

“It’s not surprising that the brain has this capacity,” he says. “What’s revealing is that we now have more information about where and how it arises. Understanding how the brain supports flexible learning could help us treat neuropsychiatric disorders and learning disabilities. And it brings us closer to understanding cognition as a whole.”

Implications for drug development

Discovering a new class of neurons is jaw-dropping, but the broader significance of the work lies in what the experiment demonstrates: a strong proof of concept that this model doesn't merely simulate brain activity, but actually behaves like a brain. And once you can model a brain on a computer, you can run experiments that would be impractical, prohibitively expensive, or simply impossible to perform in living animals or people.

The model used in this study is one component of a larger brain-modeling platform called Neuroblox. The platform takes a modular approach, breaking the brain into functional “blocks,” or mathematical models that correspond to real neural systems, and allowing researchers to assemble and study them in biologically grounded ways. Scientists who understand how a given circuit works can build it, let it operate on its own, and observe what emerges.

For drug developers, Neuroblox offers a way to test how a drug might act before animal trials even begin. "Today, most neurological drugs are tested in mice before moving to humans, but only a small fraction end up working in people," explains Miller. "Models could offer an intermediate step: a way to explore how drugs might affect complex, brain-like circuits before advancing to animal or human trials."

Researchers can sit at a computer, simulate the effects of a drug on a particular neural circuit, then test the model with a relevant cognitive task to see how those changes affect performance. Experiments in living subjects, mice included, will always be necessary, but computational modeling makes those efforts smarter and more precise by narrowing the field of possibilities before testing goes forward.
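In spirit, such a virtual experiment can be as simple as turning one circuit parameter, treated as the "drug," and measuring task performance at each dose. The sketch below does that with a toy learning loop like the one earlier, scaling a dopamine-like feedback signal; it is a cartoon of the workflow, not Neuroblox's actual interface.

```python
import numpy as np

# Cartoon "virtual drug trial" (hypothetical setup, not Neuroblox): treat a
# drug as a gain on the dopamine-like learning signal in a toy category-
# learning loop, then read out task accuracy at each simulated dose.
def task_accuracy(dopamine_gain, n_train=500, n_test=200, seed=0):
    rng = np.random.default_rng(seed)
    prototypes = rng.normal(0, 1, (2, 20))   # two stimulus categories
    weights = rng.normal(0, 0.1, (2, 20))    # cortex -> striatum synapses
    for _ in range(n_train):
        cat = int(rng.integers(2))
        stim = prototypes[cat] + rng.normal(0, 0.5, 20)
        choice = int(np.argmax(weights @ stim))
        reward = 1.0 if choice == cat else -1.0
        weights[choice] += 0.05 * dopamine_gain * reward * stim  # drugged update
    correct = sum(
        int(np.argmax(weights @ (prototypes[c] + rng.normal(0, 0.5, 20)))) == c
        for c in rng.integers(2, size=n_test))
    return correct / n_test

for dose in [0.0, 0.5, 1.0, 2.0]:            # simulated dose-response curve
    print(f"gain {dose}: accuracy {task_accuracy(dose):.0%}")
```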

A fresh pair of eyes

Beyond its practical value for drug development, the model points to something more subtle: how biologically grounded simulations can surface meaningful insights on their own.

“Most of the time, you only find what you’re already looking for,” Miller says. “If something is counterintuitive, it’s easy to miss.”

In this case, building a system constrained by the brain's own rules helped the researchers spot patterns they had previously overlooked — not because the signals weren't in the data, but because they hadn't thought to look for them. After all, it was real brain data from a study they thought they understood. There was no reason to scour it for anomalies until the model showed them something strange. Fittingly, as the model revealed, our brains may be hardwired to keep alternative ideas in mind, even if we don't notice them the first time around.
