This short piece, as always, is born out of my passion for studying how theories can help us use Artificial Intelligence more effectively. I believe now more than ever that without interdisciplinary research, we won’t be able to logically face the challenges of the Cognitive Age.
Systematically speaking, the key to identifying challenges lies in examining fundamental issues, not just their consequences. For example, if we want to fix the flaws in the learning process, we must first return to the roots of deep learning and its underlying mechanics. We may even need to revisit them repeatedly to understand how to solve the problems arising from mind-based technologies.
Let me explain what I mean through one of the most debated topics of our time: the mental laziness caused by the way AI is rewriting our brain’s habits. To understand this, we need to look at the dynamics of deep learning in the brain. By grasping this process through interdisciplinary research, we might find ways to make AI learning feel more like natural deep learning.
The goal isn’t just to know the biochemistry of cells. Before looking at what happens inside an organism, we should ask:
Why do we usually prefer learning through AI over the effortful, traditional human way?
You might say the answer is obvious: because learning with technology is effortless and fast.
As a learning specialist, I’d like to answer this from a theoretical perspective.
First, we must accept a reality: human deep learning is naturally a challenging process. It is fundamentally different from the effortless consumption of vast amounts of information that characterizes today’s formal and informal education assisted by LLMs.
The Logic of Immediate Reward: From Skinner to the Present
There is strong research showing that learners prefer a small, immediate reward over a larger, delayed one. This was first highlighted by B.F. Skinner (1953), the pioneer of operant conditioning. (I’ve previously written about how this connects to AI.)
Later, others expanded on this preference for effortless rewards. In short, in the behavioral-economics reading that grew out of Skinner’s work, humans look for shortcuts.
AI is currently the ultimate shortcut, giving the best answer in seconds without any real struggle. From this view, it’s not just about the mind; it’s about behavioral economics.
A behavior that leads to a quick reward is far more likely to be repeated.
Richard Herrnstein (1961), a student of Skinner’s, formalized this in a mathematical relationship known as the Matching Law. He showed that organisms don’t just look at one reward; they choose between options. Given two choices, a living being will allocate its effort in proportion to how often, how fast, and how directly each one pays off.
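To make the relationship concrete, here is a minimal sketch of the strict two-alternative Matching Law, in which the share of behavior devoted to an option equals its share of the total reinforcement: B1 / (B1 + B2) = R1 / (R1 + R2). The reinforcement rates below are hypothetical numbers chosen purely for illustration.

```python
def matching_allocation(r1: float, r2: float) -> tuple[float, float]:
    """Strict two-alternative matching law: the share of behavior devoted to
    each option equals that option's share of the total reinforcement obtained."""
    total = r1 + r2
    return r1 / total, r2 / total

# Hypothetical reinforcement rates (payoffs per hour), chosen only for illustration:
# an AI answer pays off almost instantly and often; effortful study pays off rarely.
ai_share, study_share = matching_allocation(r1=60.0, r2=2.0)
print(f"Predicted share of effort: AI {ai_share:.0%}, effortful study {study_share:.0%}")
# -> Predicted share of effort: AI 97%, effortful study 3%
```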
In behavioral economics, this phenomenon is known as temporal discounting (Ainslie, 1975). The value of a reward drops the longer you have to wait for it. Simply put, the reward loses its shine in the organism’s mind because it requires patience.
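Ainslie’s account is usually formalized as hyperbolic discounting: V = A / (1 + kD), where A is the reward’s amount, D is the delay, and k is a person-specific impatience parameter. The sketch below is only an illustration; the k value and the two rewards are invented, not taken from any study.

```python
def discounted_value(amount: float, delay_days: float, k: float = 0.1) -> float:
    """Hyperbolic discounting: subjective value falls the longer the wait."""
    return amount / (1 + k * delay_days)

# Invented example: a small, instant AI answer "worth" 10 versus hard-won
# understanding "worth" 100 that only pays off after four months of study.
instant = discounted_value(amount=10, delay_days=0)     # 10.0
delayed = discounted_value(amount=100, delay_days=120)  # ~7.7
print(f"instant: {instant:.1f}, delayed: {delayed:.1f}")  # the smaller-but-now option wins
```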
We observe this phenomenon every day with AI users, particularly those utilizing ChatGPT. Students, for instance, might feel that spending hours writing a thesis is stupid or inefficient when they can get an answer in a split second. They don’t just feel productive; they feel smart for bypassing the effort.
Even if you tell them that the struggle is what actually builds their brain, they often won’t listen. They choose the immediate payout over the long-term value.
Evolutionary psychology explains this too: an immediate reward is guaranteed, while a future one is uncertain. Since we are wired for survival, we grab what’s available now.
Brain Biochemistry and the Deep Learning Process
When we learn something deeply, three key things happen at a neurological level:
1. Exposure to New Information: The nervous system makes its first contact with data for which it has no existing pattern.
2. Cognitive Load: This is that stuck feeling when a mental process is harder than expected. It’s the effort the brain needs to process unfamiliar data (Sweller, 1988). This friction is essential.
3. Processing and Protein Synthesis: If the information is processed correctly, chemical signals trigger the creation of proteins that physically change the brain’s structure to store that knowledge (Kandel, 2001).
This is why sleep is so vital. Most of this protein synthesis happens while we rest.
One of the most beautiful parts of learning is when we stop thinking about a problem, but our brain keeps working on it.
Through the Default Mode Network or DMN (Raichle, 2015), the brain makes random, creative connections. This is where true creativity is born.
Toward Friction-Based AI
If deep learning is the result of protein synthesis triggered by challenge, then the paradox of modern AI is clear: By removing the friction, technology is removing the learning.
We are facing a biological crisis in which human brains, instead of producing insight and problem-solving skills, are becoming mere terminals for receiving quick hits of dopamine.
My proposal is simple: How can we turn AI from a passive answer-giver into a Cognitively Challenging Provocateur?
We need to design models that don’t bypass cognitive load but manage it in a personalized way.
I call this Friction-based AI: a model where algorithms are programmed not for the shortest path, but for the most effective learning path. This is an open invitation to researchers, neuroscientists, and AI architects to collaborate on this new paradigm. My ideas are ready to be turned into actionable proposals.
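To show what “managing cognitive load instead of bypassing it” could look like in code, here is a minimal sketch of a friction-based tutoring loop. Everything in it, the FrictionTutor class, the attempt counter, and the escalating-hint policy, is my own illustrative assumption rather than a specification of the proposal; a real system would model the learner’s state far more carefully.

```python
from dataclasses import dataclass, field

@dataclass
class FrictionTutor:
    """Illustrative sketch: withhold the direct answer and escalate support
    gradually, so the learner carries a manageable amount of cognitive load."""
    max_scaffolds: int = 3
    attempts: dict = field(default_factory=dict)

    def respond(self, learner_id: str, question: str, full_answer: str) -> str:
        n = self.attempts.get(learner_id, 0)
        self.attempts[learner_id] = n + 1
        if n == 0:
            return f"Before I answer: what do you already know about '{question}'? Make a first attempt."
        if n < self.max_scaffolds:
            return f"Good try. Hint {n}: break '{question}' into one smaller sub-problem and try again."
        # Only after sustained effort does the system reveal the worked answer.
        return f"Here is the full explanation: {full_answer}"

tutor = FrictionTutor()
for _ in range(4):
    print(tutor.respond("student-1", "temporal discounting", "V = A / (1 + k * D) ..."))
```

The one design choice worth keeping from this toy is that the answer is the last thing the system gives, not the first.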
As a final note, I believe the way we interact with AI is a skill in itself. Even if everyone has the same tools, the results aren’t equal. Efficiency depends on the how.
I am currently developing a startup idea to address these exact challenges in EdTech. It’s EdTechx.
Dr. Atefeh F.
References
• Ainslie, G. (1975). Specious reward: A behavioral theory of impulsiveness and impulse control. Psychological Bulletin.
• Herrnstein, R. J. (1961). Relative and absolute strength of response as a function of frequency of reinforcement. Journal of the Experimental Analysis of Behavior.
• Kandel, E. R. (2001). The molecular biology of memory storage: A dialogue between genes and synapses. Science.
• Raichle, M. E. (2015). The brain’s default mode network. Annual Review of Neuroscience.
• Skinner, B. F. (1953). Science and Human Behavior. Macmillan.
• Sweller, J. (1988). Cognitive load during problem solving: Effects on learning. Cognitive Science.


