It is well known that artificial neural networks are inspired by their biological counterparts. Yet these algorithms are extremely simplified, even cartoonish, when compared to human brains.
How do they work? And can studying them teach us anything about the brain?
The answer is yes, according to a panel at this month's Society for Neuroscience annual meeting. Deep learning was never intended to model the brain, and a biologically faithful match is unlikely, if not impossible. But that isn't the point, the panelists argued. By studying how deep learning algorithms perform, we can derive high-level theories about the brain's processes: inspirations to test further in the lab.
Using simplified models is not wrong, argued panelist Dr. Sara Solla, an expert in computational neuroscience at Northwestern University's Feinberg School of Medicine. Deciding what to include, and what to exclude, is an enormously powerful way to work out which features are critical for our neural networks and which are evolutionary junk.
Dr. Alona Fyshe at the University of Alberta agreed. Although AI algorithms are not faithful models of physiology, they have already been useful for understanding the brain. The key, she said, is that they provide mathematical representations of how neurons assemble into the circuits that drive cognition, memory, and behavior.
What, if anything, are deep learning models missing? A lot, according to Ulster University panelist Dr. Cian O'Donnell. We often call the brain a biological computer, but it runs on both electrical and chemical information. Incorporating molecular data into artificial neural networks could bring AI closer to a biological brain, he argued, because the brain uses several computational strategies that deep learning doesn't yet touch.
“The future is already here” when it comes to using AI to inspire neuroscience, according to Fyshe.
Language and the ‘Little Brain’
Fyshe used a recent study about language neuroscience as an example.
The cortex is commonly thought of as the brain's central processing hub for deciphering language. A new and surprising contender is the cerebellum, the "little brain" tucked at the back of the head, best known for its role in movement and balance. How, or whether, it processes language has been unclear to neuroscientists.
Enter GPT-3, a deep learning model that generates strikingly human-like text. The algorithm works by predicting the next word in a sequence. GPT-3 and its successors have produced poetry, essays, songs, and even computer code convincing enough to stump human judges.
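That training objective, stripped to its bare essence, is next-word prediction. A toy sketch using simple bigram counts conveys the idea (a stand-in for illustration only; GPT-3's transformer internals are vastly more sophisticated):

```python
from collections import Counter, defaultdict

# Tiny toy corpus standing in for GPT-3's web-scale training data.
corpus = "the brain fires the neuron fires the brain learns".split()

# Count how often each word follows each other word (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # "brain" follows "the" twice, "neuron" only once
```

Scale the corpus to a large slice of the internet and replace the counting with a neural network, and you have the rough shape of the task GPT-3 is trained on.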
In the highlighted study, led by Dr. Alexander Huth at the University of Texas, volunteers listened to podcasts while their brains were scanned with fMRI. From these data, the team trained AI models to predict brain activity, each model based on one of five language features. One feature captured how our mouths move when we speak; another captured whether a word was a noun or a verb; another captured the surrounding language context. Together, the features spanned the main levels of language processing, from low-level acoustics to high-level comprehension.
On a new dataset, only the contextual model, based on GPT-3, accurately predicted neural activity. The conclusion? The cerebellum favors high-level processing, especially language related to people and social interactions.
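A standard way to build such "encoding models" is linear regression: fit weights mapping each stimulus feature to each voxel's activity, then test the predictions on held-out data. A minimal numpy sketch with synthetic data (the dimensions, regularization, and setup here are illustrative assumptions, not the study's actual pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 200 time points, 5 language features, 10 fMRI voxels.
n_time, n_feat, n_vox = 200, 5, 10
X = rng.standard_normal((n_time, n_feat))                    # stimulus features
true_w = rng.standard_normal((n_feat, n_vox))                # "ground truth"
Y = X @ true_w + 0.1 * rng.standard_normal((n_time, n_vox))  # noisy voxel data

# Ridge regression: one linear encoding model per voxel, fit in closed form.
lam = 1.0
W = np.linalg.solve(X.T @ X + lam * np.eye(n_feat), X.T @ Y)

# Evaluate on held-out data: correlate predicted and actual voxel activity.
X_test = rng.standard_normal((50, n_feat))
Y_test = X_test @ true_w
r = np.corrcoef((X_test @ W).ravel(), Y_test.ravel())[0, 1]
print(round(r, 2))
```

The logic of the study follows directly: a feature set (like GPT-3's contextual embeddings) "explains" a brain region if its fitted model keeps predicting that region's activity on data it has never seen.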
In order to understand what the cerebellum does, we needed a neural network model, said Fyshe. “Without deep neural networks, this wouldn’t have been possible.”
Bio-Learning at the deepest level
Despite being inspired by the brain, deep learning is only loosely based on the wiggly bio-hardware inside our heads. AI is also unbound by biological constraints, letting it process information far faster than humans.
How can we bring this alien intelligence closer to its creators?
One place to start, according to O'Donnell, is to revisit the aspects of the brain that neural networks abstract away. The brain, he says, is organized at multiple levels, from genes and molecules to cells, which form circuits that lead to cognition and behavior.
It's easy to spot biological features that deep learning models lack. Take astrocytes, a type of brain cell that plays an increasingly appreciated role in learning. Or dendrites, the twisting branches of a neuron, which harbor "mini-computers" of their own, suggesting that a single neuron is far more powerful than previously thought. Molecules can even package themselves into fatty bubbles that float from one neuron to another, changing the activity of the receiver.
Which of these biological details is important?
Adding biological details on computation, learning, and physical constraints could move deep learning toward greater biological plausibility, according to O'Donnell.
One example: molecular machinery inside a neuron actively reshapes the synapse, the physical bump connecting adjacent neurons that computes data and stores memories at the same time. Chemical reactions within a single neuron can also spread through its dendrites, triggering biochemical computations far slower than electrical signals: a sort of "thinking fast and slow," but within the neuron itself.
The brain also learns differently than deep learning algorithms do. Deep neural networks excel at a single task, while the brain is flexible across many. And where deep learning leans heavily on supervised learning, the brain's main computational modes are unsupervised and reward-based.
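The contrast can be sketched with a toy one-weight learner. The supervised learner receives the exact error signal; the reward-based learner (here a crude perturbation scheme, loosely in the spirit of reinforcement learning, and not a model of any specific brain mechanism) only sees a scalar reward telling it how well a noisy trial went:

```python
import random
random.seed(0)

# One weight, one scalar prediction: y = w * x. Goal: make y hit the target.
w_supervised = 0.0
w_reward = 0.0
x, target = 1.0, 2.0

for _ in range(100):
    # Supervised: the exact error (target - prediction) is handed to us.
    pred = w_supervised * x
    w_supervised += 0.1 * (target - pred) * x

    # Reward-based: perturb the weight, observe only a scalar reward
    # (higher when the trial lands nearer the target).
    noise = random.gauss(0, 0.5)
    trial = (w_reward + noise) * x
    reward = -(target - trial) ** 2
    baseline = -(target - w_reward * x) ** 2
    # Nudge w along the perturbation if it beat the unperturbed baseline.
    w_reward += 0.05 * (reward - baseline) * noise

print(round(w_supervised, 2), round(w_reward, 2))
```

Both learners drift toward the target, but the reward-based one gets there slowly and noisily, from far less information per trial, which is part of why the brain's reward-driven flexibility is so striking.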
Physical constraints on the brain may matter too. Neurons are normally silent, in sleepy mode, firing only when needed to conserve energy. And when they do chatter, they're noisy, relying on built-in redundancy to cope with our messy world.
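That silent-by-default strategy can be pictured with a thresholded population of model units, a toy illustration (not a biophysical model) of how sparse firing keeps most of the network quiet at any moment:

```python
import numpy as np

rng = np.random.default_rng(1)

# 1,000 model neurons, each receiving a random input drive.
drive = rng.standard_normal(1000)

# Dense code: every unit reports its activity, all the time.
dense = drive

# Sparse code: units stay silent unless their drive crosses a threshold,
# a crude stand-in for neurons that fire only when needed.
threshold = 1.5
sparse = np.where(drive > threshold, drive, 0.0)

active = np.count_nonzero(sparse)
print(active, "of", dense.size, "units active")
```

Only a small fraction of units end up active, so most of the population costs nothing to "run" on this input, which is the energy argument in miniature.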
Last but not least, the brain minimizes wasted space: its input and output cables look like mashed spaghetti, but their 3D locations influence how they work, and neurons near one another can share chemical modulators even when they aren't directly connected. This local yet diffuse form of regulation is missing from deep learning models.
These may seem like nitty-gritty details, but they could steer the brain's computational strategies in a direction quite different from deep learning's.
The bottom line? According to O'Donnell, there are many known and unknown biological mechanisms that aren't typically incorporated into deep neural networks.
Regardless of any hesitancy, AI and neuroscience are clearly converging. The deep learning crowd is now increasingly interested in aspects of neuroscience it has typically ignored, like motivation and attention.
According to Solla, it is critical to keep deep learning models close to actual brains, but not too close. She noted that a faithful model might not be useful if it is as detailed as the system itself.
In order for deep learning and neuroscience to be synergistic, the panelists agreed that the next challenge is finding the sweet spot between abstraction and neurobiological accuracy.
The question of which brainy details to include is yet to be answered. Solla believes that there is no silver bullet in neuroscience, despite the possibility of testing ideas systematically. “It will depend on the problem,” she said.
O'Donnell sees it differently. Deep learning networks can be trained to perform tasks that were previously impossible for conventional brain-inspired models. Comparing and uniting the two could deliver the best of both worlds: an algorithm that performs well while also squaring with biological accuracy.