Why You Shouldn’t Outsource Your Brain: The Memory Paradox in an AI World

In the chapter "The Memory Paradox: Why Our Brains Need Knowledge in an Age of AI," Barbara Oakley, Michael Johnston, Ken-Zen Chen, Eulho Jung and Terrence Sejnowski challenge a widespread assumption of our time: that memorization is outdated. With smartphones in our pockets and AI tools at our fingertips, many educators and learners alike have come to believe that it is more important to know how to find information than to know the information itself. But this way of thinking, the authors argue, may be quietly undermining our ability to think, learn and reason in meaningful ways.

At the heart of the chapter lies the observation that as digital technologies have made information more accessible, we have started to rely on them not just occasionally, but habitually. This reliance feeds a pattern known as cognitive offloading: instead of storing knowledge in our own minds, we push it onto external devices. The paradox is that in doing so, we may be weakening the very mental architecture that allows us to make sense of the world. Evidence from neuroscience, psychology and education points to a worrying trend: as we offload more, we train less, and the result is a loss of cognitive depth.

The authors ground their argument in the workings of human memory. Our brains operate through two major systems: declarative memory, which holds facts and concepts we can recall consciously, and procedural memory, which handles routines, skills and patterns that become automatic over time. Real learning involves more than exposure. It requires moving information from the declarative to the procedural system through practice and repetition. This transition is what allows a chess master to instantly recognize strategies, or a pianist to play without thinking about every note. When knowledge remains external, that transition never occurs. Learners may pass exams or find answers online, but they will not develop fluency or intuition.

Underlying these memory systems are two important concepts: engrams and schemata. Engrams are the physical traces that learning leaves in the brain. Schemata are mental frameworks that organize related information into meaningful patterns. The richer our schemata, the faster and more flexibly we can think. But building schemata requires effort and internalization. When we constantly look things up, our brains do not form these frameworks. We end up with fragments of information rather than connected knowledge, which weakens our ability to apply what we’ve learned in new contexts.

Rather than blaming technology, the chapter highlights how educational models have contributed to this problem. Many classrooms have shifted away from explicit instruction and practice in favor of student-centered, discovery-based learning. The idea is that students will develop deeper understanding if they construct knowledge themselves. But the evidence suggests this does not work well for abstract academic subjects like mathematics or science. These areas require guidance, structure and repetition. Without them, learners often fail to build strong internal representations and instead remain dependent on external cues.

What makes the argument particularly compelling is the connection between these cognitive processes and principles from artificial intelligence. Both human and machine learning rely on a mechanism called prediction error: the gap between what is expected and what actually happens. When our expectations are violated, we adjust our mental models. But to benefit from this, we need prior knowledge. A student who has memorized multiplication tables will notice immediately if a calculator gives the wrong result. A student who has never internalized the facts will not notice the error at all. The brain’s learning systems are only activated when there is something to compare, and that something must come from memory.
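To make the parallel with machine learning concrete, here is a minimal sketch in Python of an error-driven update in the spirit of the classic delta rule. It is my own illustration rather than code from the chapter, and the multiplication example, numbers and variable names are invented for the purpose; the point it captures is that without a stored prediction there is no error signal, and therefore nothing to learn from.

```python
# Minimal sketch of prediction-error-driven learning (a delta-rule-style update).
# Illustrative only; not taken from the chapter.

def update_estimate(estimate: float, observed: float, learning_rate: float = 0.2) -> float:
    """Nudge an internal estimate toward an observed outcome by a fraction
    of the prediction error (observed minus predicted)."""
    prediction_error = observed - estimate               # the "surprise" signal
    return estimate + learning_rate * prediction_error

# A learner who has memorized 7 x 8 = 56 carries an internal prediction.
# A faulty calculator showing 63 produces a large prediction error, which
# both flags the mistake and drives an adjustment of the mental model.
internal_prediction = 56.0
calculator_output = 63.0
error = calculator_output - internal_prediction          # 7.0 -> "something is off"
print(error, update_estimate(internal_prediction, calculator_output))

# A learner with no stored fact has no prediction to compare against,
# so there is no error signal and nothing for the learning rule to act on.
no_prior = None                                          # no memory -> no comparison
```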

Rather than rejecting AI and digital tools, the authors call for a recalibration of how we think about knowledge. Technology should support learning, not replace it. We need to preserve the conditions that allow our brains to build and refine knowledge through effort. Looking things up is not the same as learning. The goal is not to return to rote learning for its own sake, but to recognize that repetition, memorization and guided practice play a critical role in forming the cognitive infrastructure that higher-order thinking depends on.

In a world where nearly everything is searchable, knowing things ourselves becomes more valuable, not less. A strong internal knowledge base enhances creativity, supports reasoning and protects against error. This is not nostalgia for old methods, but a call to respect how the brain actually works. The chapter concludes with a vision of education that combines the benefits of digital tools with a renewed emphasis on memory, practice and internal knowledge. If we want to develop minds that are both flexible and informed, we cannot afford to abandon the cognitive training that real learning requires.

The full chapter by Oakley, Johnston, Chen, Jung and Sejnowski will appear in The Artificial Intelligence Revolution: Challenges and Opportunities (Springer Nature, forthcoming). A preprint version is already available online.