Surrendering to AI
Reflections on when to hold on to our cognitive powers and when to let go
There are two recurring discourses that I keep coming across, and both make me uneasy.
The first is the idea that there is no intelligence like ours, no creativity like ours, and that AI will never be like us. If we are so special, why is our history packed with so many mistakes, and why haven’t we solved all of humanity’s problems yet?
The second is that our ability to think and create can be destroyed if we continue using AI. The worry about losing cognition to our tools is not new, and it’s one of my biggest concerns. But it overlooks a very important fact: it has already happened. Gen Z is the first generation in modern history to score lower on standardised tests than the previous one, and that is not because of AI, which has only been in wide use for the past couple of years, if that.
I recognise that neither discourse is entirely wrong; both have something real in them. But the point I want to make is that they are dogmatic, and they are leading us to the wrong places.
A recent Wharton School study, “Thinking, Fast, Slow, and Artificial” by Steven D. Shaw and Gideon Nave, has attracted attention for showing that when AI is available, people tend to adopt its outputs uncritically, bypassing deliberative reasoning. The researchers call this “cognitive surrender.” The most common reaction has been a torrent of commentary arguing that we should not surrender to AI.
But I want to point out something that people are missing. In some instances, surrendering to AI is exactly the right thing to do. The more important question is not whether to surrender, but when, and how to protect our cognitive capacity while we do. That is the decision we all need to make.
Underlying my research and recommendations is a personal belief that AI could also help make us smarter.
At the end, I’ll share how I slowly and manually researched and wrote down my convoluted thoughts, how I got AI to help, and how I both loved and hated the process of pushing my brain.
Cognitive surrender is increasing due to System 3 (AI)
To understand what is at stake, it helps to understand both the recent Wharton School research and the conceptual framework I developed in 2022: the idea of System 3. Both build on Daniel Kahneman’s dual-process model of human thinking.
In Kahneman’s “Thinking, Fast and Slow”, System 1 is fast, automatic, and intuitive. It covers the mental processes that happen below the surface, driven by pattern recognition and past experience. System 2 is slow, deliberate, and analytical, the effortful reasoning we engage when a problem demands it.
The new concept I introduced in my essay back in 2022 is System 3: the artificial intelligence we access to help us solve problems and augment our intelligence. System 3 is external, algorithmic, and data-driven. It operates outside the boundary of the brain but increasingly shapes what happens inside it.
Whilst in my previous work on System 3 I suggested that when System 2 surrenders to System 1, the latter grows (through the neurological effect of pruning, the brain reinforces the pathways it uses and lets the unused ones wither), Wharton’s research provides the first rigorous evidence base for how System 3 actually operates in human decision-making, and shows that System 1 surrenders to it too.
Their key findings are striking. When AI was available, participants followed its recommendations the vast majority of the time, not just when it was right but also when it was wrong. This uncritical adoption of AI outputs is what the researchers call “cognitive surrender”: the decision-maker no longer constructs an answer but adopts one generated by an external system, without critical evaluation.
The study also found that cognitive surrender is not primarily a time-pressure phenomenon. Even when participants had ample time, they still followed AI advice at similarly high rates. Surrender, it turns out, is not an emergency response. It is a default state.
But cognitive surrender is not a new problem introduced by AI. It is how human cognition has always worked.
Even without AI, when System 2 is suppressed, whether by time pressure, fatigue, or a lack of motivation, we default to System 1.
We conserve mental energy by using simple, low-effort shortcuts (heuristics) rather than engaging in complex, effortful thought. The brain is metabolically expensive, and System 2 reasoning is its most demanding operation. Evolution did not build us to deliberate about everything. It built us to act quickly and survive.
So in reality, System 2 has always been surrendering to System 1 whenever conditions made it convenient.
The Wharton researchers also noted that AI’s fluent, authoritative outputs lower the threshold for scrutiny. When the AI communicated confidence, participants believed it more.
I would argue that this is exactly how a persuasive human works on us. Think about how we defer in everyday life. We follow influencers. We accept the expert’s opinion. We go along with the confident, articulate person in the meeting.
Experiments show large majorities uncritically accepting faulty answers. From AI, yes, but also from politics, from misinformation, from health advice, from the confident people we encounter every day.
What has changed is not the existence of surrender. What has changed is that we now have a third system to surrender to. One with vastly greater knowledge and computational capacity than System 1 could ever possess.
This reframes the problem. The question is not whether we surrender. We always did. The question is: what should we be surrendering to, and when?
AI can be much better than our instincts
In simple terms, instinct is embedded experience. But AI’s knowledge is the experience of all humans, made available to you in an instant.
Here is something else Shaw and Nave’s research found, which the headlines missed.
Under time pressure, participants working without AI answered only 32.6% of reasoning questions correctly. Participants who used AI and received accurate outputs answered 71.3% correctly. That is more than double the accuracy, achieved not by thinking harder, but by surrendering to a system that was right.
The research confirmed that System 3 adoption can buffer the cognitive demands of time pressure and reduce its adverse performance effects, specifically when System 3 is accurate (which happens more and more often as models evolve). System 1, which is often clouded by pressure, emotion, and recent experience, can be meaningfully corrected by System 3.
Think about a doctor with two minutes to act on a collapsing patient. Or a national security decision that must be made before a window closes. In both cases, our System 1 (intuition) is operating under conditions that degrade its reliability. But System 3 does not panic; it can act logically, with accurate and extensive knowledge. In structured, well-defined, high-stakes domains like these, the case for deferring to AI is empirically grounded.
AI over-reliance is a known risk, in particular because AI is not always right. But neither are we. The choice is rarely between a perfect system and a fallible one. It is usually between two fallible systems with different failure modes.
Now I would like to add that knowing which system to trust, and when, is not just the responsibility of users; it should also be incorporated into how AI is designed and trained. With ethical design thinking, for example, AI tools could offer confidence levels that are accurate and do not mislead users. But that is a whole new topic for a different essay.
The real risk is cognitive atrophy, not cognitive surrender
If the case for surrendering wisely (or not surrendering at all) is strong when it comes to System 1 decisions, the case for protecting human capacity is equally strong, but for different reasons.
The problem with cognitive surrender is not, primarily, about getting things wrong or losing autonomy. Those are real concerns, but they are not the deepest ones. The deepest concern is cognitive atrophy: the slow, largely invisible decline of our capacity to think independently.
The question “at what point does cognitive surrender become cognitive atrophy?” is a long-term issue that researchers have not yet investigated.
And if we set aside the long-term consequences, in the short term we do get better results using AI.
An example I like to think about comes from education. When we measure whether an AI-assisted essay is better, the answer is usually yes. When we measure whether the student learned anything, the answer is almost always no.
The output improves. The human does not. So the question is not whether today’s essay is better; it is whether, carried forward over time, we produce a generation of people with a widening gap between what they can output and what they actually understand.
This is especially important for children and young people, whose brains are still developing, but it applies to adults too, if we value sustaining our capacity as we grow older. The same way we should be protecting our muscle mass throughout adulthood, we should protect our grey mass too.
In summary, System 3 can drive a surrender of System 2, and we are often tempted to let it, because the short-term result is great. But System 2, just like a muscle, atrophies when it is not used, and must be actively protected.
On this, I could offer endless advice: selecting carefully what we read, reading deeply rather than skimming AI summaries, taking handwritten notes. But I will choose one:
Think before using AI, and use AI to think.
Think before using AI: When my children ask me how plants grow, I do not Google the answer. I ask them first: what do you think? I make them engage their own reasoning before I offer an external source. The answer they arrive at may be wrong. But the act of forming a hypothesis, of committing to a thought, is precisely what builds the cognitive infrastructure that makes them better thinkers over time.
I’m not alone. Parents and educators have known this for decades. The fact that we now live in the age of an on-demand, always-available System 3 does not change it. But it does make it more urgent and more widespread.
Deskilling is a phenomenon that has been observed for a while. A multicentre study found that gastroenterologists who relied on AI to detect polyps during colonoscopies became measurably worse at finding them when the AI was switched off. Younger doctors entering the profession may never develop the skill at all, becoming what researchers call the “never skilled” generation.
Now, are those the skills we need to cultivate? Can we even call them skills, rather than just knowledge? And if AI can master them better than we can, why should we invest in learning them?
The gastroenterologist may reasonably delegate looking for polyps to AI. But they should not delegate their clinical judgement, their diagnostic reasoning, their ability to recognise that something is wrong even when they cannot name it. These are called constitutive competencies: not specific skills, but the core capacities that enable flexibility, learning, and judgement across novel situations.
And those are the capacities we must protect ruthlessly.
Use AI to think: Using AI to challenge your thinking, asking it to argue against your position and to find the holes in your reasoning, can make you sharper rather than weaker.
There are many ways you could do this, which I won’t dwell on, but I will give one example. As you work on a project, instruct the AI to respond as a hostile critic: to identify three ways your argument could fail, two counterarguments, and the blind spots you have missed.
The experience can be uncomfortable. It can be deflating. But learning involves effort and challenge; we all know lunches are never free.
In short, use System 3 not as a replacement for System 2, but as a sharpener of it.
There is nothing more powerful than a brain doing what it loves.
The Wharton School research found something hopeful in its third study. When participants were financially incentivised for accuracy and given immediate feedback on each answer, cognitive surrender decreased. They became more likely to override faulty AI advice. The effect of caring about the outcome was real.
This points to something that the commentary on the study overlooked: motivation is not incidental to thinking. It is constitutive of it. We develop expertise more easily in what we care about, and we memorise and learn what we pay more attention to, as reflected in a 2024 review on the topic, “The Relation Between Attention and Memory”.
If we like something, we pay more attention; if we pay more attention, we memorise it better and learn better. What does this mean for working with AI tools? Use them selectively, for the things you enjoy less and the things you are worse at.
Do not choose based on AI’s capability; choose according to YOUR capability.
If you are better at speaking than writing, start with the speech and let AI help with the writing. If the thing you love is the medical diagnosis, not the polyp detection, protect the diagnosis.
The goal is not to resist AI across the board. It is to identify the cognitive capacities that are genuinely yours, that matter to you, that you want to keep, and to be intentional about not surrendering those.
Keeping ourselves thinking (either fast or slow) requires intention
I want to conclude with a call to intentionality, to decide when to use System 1, 2 or 3. When to surrender and when to take charge.
Cognitive surrender is not new. It was always there, happening every time System 2 stood down and System 1 took over. What is new is that we now have a third option, available at any time, with access to more knowledge than any human has ever possessed. Used wisely, it can make us better. Used as a default mode, it can leave us AI-dependent and joyless.
The most challenging change we must address is that keeping ourselves thinking is no longer automatic. It requires intention. It requires caring. It requires the willingness to choose friction when the path of least resistance leads somewhere we do not want to go.
It’s System 2 like never before. Slow and steady, just to stay in the race.
A note on the process
This text took eighteen hours to write: eight reading the research and gathering sources, seven writing, arguing with myself, and pushing through the convoluted thoughts that kept growing the more I read. I forced myself to pause at times, and managed to go for a long walk in the fields I photographed. Of course I used AI throughout my writing process: to explain statistical findings, to challenge my arguments, to make the text clearer, to make edits that I then edited again. As much as Claude knows my writing, it is not able to write like me, and it keeps erasing my flawed grammar and soul. I did not use it to find new angles, to suggest new ideas, or to draft the flow. Those are the things I love doing, and I don’t want to delegate them.
Still, I consider this a work in progress; I will edit it a few more times before I surrender to the saying “done is better than perfect”.
References
Shaw, S. D., & Nave, G. (2025). Thinking — Fast, Slow, and Artificial: How AI is Reshaping Human Reasoning and the Rise of Cognitive Surrender. Wharton School, University of Pennsylvania.
Kahneman, D. (2011). Thinking, Fast and Slow. Penguin.
Kissinger, H. (2018). How the Enlightenment Ends. The Atlantic. https://www.theatlantic.com/magazine/archive/2018/06/henry-kissinger-ai-could-mean-the-end-of-human-history/559124/
OECD (2022). PISA 2022 Results (Volume I). https://www.oecd.org/en/publications/pisa-2022-results-volume-i_53f23881-en.html
Farahany, N. (2025). Conversation on AI and cognition. Exponential View podcast.
Pires, S. (2022). I Believe System 3 Will Bring Us to Our Senses. IPA.
Cowan, N., Bao, C., Bishop-Chrzanowski, B. M., Costa, A. N., Greene, N. R., Guitard, D., Li, C., Musich, M. L., & Ünal, Z. E. (2024). The relation between attention and memory. Annual Review of Psychology, 75, 183–214.


