Scientists have uncovered striking parallels between how large language models (LLMs) and the human brain process language, despite their vastly different architectures and energy requirements.
A collaborative study by Google Research, Princeton University, NYU, and the Hebrew University of Jerusalem found that neural activity in the human brain aligns linearly with the internal contextual embeddings of LLMs during natural conversations. The researchers identified three fundamental computational principles shared by both systems: they predict upcoming words before hearing them, compare those predictions against the actual input to compute surprise, and rely on contextual embeddings to represent the meaning of each word in its context.
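The linear alignment the study describes is typically measured with an encoding model: a regularized linear regression is fit from the language model's contextual embeddings to the recorded neural signal, then scored on held-out data. The sketch below only illustrates that idea; the array shapes, the random placeholder data, and the ridge settings are assumptions, not the study's actual pipeline.

```python
# Illustrative encoding-model sketch (assumed setup, not the study's pipeline):
# predict per-word neural activity at each electrode from LLM contextual
# embeddings with ridge regression, then score held-out correlation.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
n_words, emb_dim, n_electrodes = 1000, 768, 64

embeddings = rng.standard_normal((n_words, emb_dim))   # one contextual embedding per word (placeholder)
neural = rng.standard_normal((n_words, n_electrodes))  # e.g. per-word neural response per electrode (placeholder)

scores = np.zeros(n_electrodes)
for train, test in KFold(n_splits=5).split(embeddings):
    model = RidgeCV(alphas=np.logspace(-2, 4, 13)).fit(embeddings[train], neural[train])
    pred = model.predict(embeddings[test])
    # correlation between predicted and recorded activity, per electrode
    for e in range(n_electrodes):
        scores[e] += np.corrcoef(pred[:, e], neural[test, e])[0, 1] / 5

print("mean held-out correlation across electrodes:", scores.mean())
```

With real data, electrodes whose held-out correlation is reliably above chance are the ones said to "align" with the model's embeddings.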
"We demonstrate that the word-level internal embeddings generated by deep language models align with the neural activity patterns in established brain regions associated with speech comprehension and production," the researchers noted in their findings published in Nature Neuroscience.
However, significant differences exist. Whereas LLMs process hundreds of thousands of words in parallel across their context window, the human brain processes language serially, one word at a time. More importantly, the brain performs complex cognitive tasks with remarkable energy efficiency, running on only about 20 watts, compared to the massive energy required to train and run modern LLMs.
"Brain networks achieve their efficiency by adding more diverse neuronal types and selective connectivity among various types of neurons in distinct modules within the network, rather than simply adding more neurons, layers and connections," explains a study published in Nature Human Behaviour.
In a surprising development, researchers using BrainBench, a benchmark of neuroscience experimental outcomes, found that LLMs now surpass human experts at predicting study results. Their specialized model, BrainGPT, achieved 81% accuracy compared to 63% for neuroscientists. Like human experts, the LLMs were more accurate when they expressed greater confidence in their predictions.
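One common way to turn an LLM into such a forecaster, broadly in the spirit of benchmarks like BrainBench, is to compare how plausible the model finds two versions of a result statement, one real and one altered, and to treat the gap between the two scores as a confidence signal. The snippet below is a hedged sketch of that idea using GPT-2 as a stand-in; the model choice, example sentences, and confidence proxy are illustrative assumptions, not the study's exact protocol.

```python
# Sketch: score two candidate result statements by language-model likelihood
# (an illustration of perplexity-style forecasting, not BrainBench's exact setup).
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def neg_log_likelihood(text: str) -> float:
    """Average per-token negative log-likelihood (lower = model finds text more plausible)."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean cross-entropy over tokens
    return loss.item()

# Hypothetical example sentences, not taken from any real abstract.
original = "Blocking the receptor increased firing rates in hippocampal neurons."
altered  = "Blocking the receptor decreased firing rates in hippocampal neurons."

nll_a, nll_b = neg_log_likelihood(original), neg_log_likelihood(altered)
choice = original if nll_a < nll_b else altered
confidence = abs(nll_a - nll_b)  # larger gap ~ more confident prediction
print(f"model prefers: {choice!r} (confidence proxy: {confidence:.3f})")
```

The same accuracy-tracks-confidence pattern reported for human experts can then be checked by binning the model's choices by this score gap.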
These findings suggest a future where brain-inspired computing could dramatically improve AI efficiency. Researchers are exploring spiking neural networks (SNNs) that more closely mimic biological neurons, potentially enabling applications from energy-efficient search and rescue drones to advanced neural prosthetics.
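At the core of most SNNs is a neuron model such as the leaky integrate-and-fire unit, which accumulates input current, leaks back toward a resting voltage, and emits a discrete spike whenever it crosses a threshold. The minimal sketch below shows that dynamic; all parameter values are illustrative assumptions rather than figures from any cited system.

```python
# Minimal leaky integrate-and-fire (LIF) neuron, the basic unit of many spiking
# neural networks; parameter values are illustrative only.
import numpy as np

def simulate_lif(input_current, dt=1e-3, tau=20e-3, v_rest=-65e-3,
                 v_thresh=-50e-3, v_reset=-70e-3, r_m=1e7):
    """Integrate membrane voltage over time; emit a spike when it crosses threshold."""
    v = v_rest
    spike_times = []
    for step, i_in in enumerate(input_current):
        dv = (-(v - v_rest) + r_m * i_in) * dt / tau  # leak toward rest + input drive
        v += dv
        if v >= v_thresh:          # threshold crossing -> spike, then reset
            spike_times.append(step * dt)
            v = v_reset
    return spike_times

current = np.full(1000, 2.0e-9)    # 1 second of constant 2 nA input at 1 ms resolution
print(f"{len(simulate_lif(current))} spikes in 1 s")
```

Because such neurons only communicate through sparse, event-driven spikes, hardware built around them can stay idle most of the time, which is where the hoped-for energy savings come from.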
As LLMs continue evolving toward more brain-like processing, the boundary between artificial and biological intelligence grows increasingly blurred, raising profound questions about the nature of cognition itself.