A Question Beneath the Hype
Much of the public conversation about artificial intelligence is framed around productivity, efficiency and progress. We are told AI will save time, remove friction and free humans to focus on what truly matters. These claims echo familiar narratives from past technological revolutions, especially the Industrial Revolution, which mechanised labour and ultimately expanded opportunity.
But there is a deeper question we are not asking—perhaps because it is uncomfortable:
What happens psychologically and existentially when humans no longer experience themselves as the primary thinking agents on the planet?
This is not a question about jobs alone. It is a question about agency, meaning, identity and mental health. And it may be the most important question of the AI era.
Intelligence as the Foundation of Human Dominance
Humans did not become the dominant species on earth because of physical superiority. We are not the fastest, the strongest, or the largest. Many animals far exceed us in strength, speed, endurance, or size.
Humans thrived because of intelligence.
Our ability to reason abstractly, plan across time, cooperate symbolically, use language and accumulate culture allowed us to shape environments rather than submit to them. Intelligence gave humans agency: the felt sense that we can understand, decide and influence outcomes.
This mattered not only for survival, but for meaning.
Human identity became organised around cognition:
- thinking
- problem-solving
- storytelling
- moral reasoning
- authorship of action
Other animals became submissive to humans not because they recognised our moral authority, but because they were cognitively outmatched. Humans could anticipate, coordinate and dominate through intelligence.
For the first time in history, that cognitive asymmetry is being challenged.
Why AI Is Psychologically Different From Past Technologies
The comparison to the Industrial Revolution is common but misleading.
The Industrial Revolution mechanised muscle. Artificial intelligence mechanises mind.
This distinction is not technical; it is existential.
Replacing physical labour did not threaten the core of human identity. Humans still thought, decided, reasoned, judged, created meaning, and took responsibility. Machines extended effort, but they did not pre-empt cognition.
AI is different. It does not merely assist thinking; it can replace entire cognitive sequences:
- drafting language
- solving problems
- synthesising ideas
- generating plans
- offering judgments
When thinking itself becomes externalised, the psychological consequences are fundamentally different.
Cognitive Offloading and the Deskilling of the Self
From a mental health perspective, one of the most concerning dynamics introduced by AI is cognitive offloading at scale. When humans consistently rely on external systems to:
- remember
- decide
- articulate
- reason
those skills weaken through non-use. This is not speculative; it is how learning and neuroplasticity work.
Consider writing. Writing is not just a functional skill. It is how humans:
- organise thought
- process emotion
- articulate identity
- create coherence
If individuals no longer write letters, applications, reflections, or arguments themselves—because AI can do it “better”—they may lose confidence in their own voice. Over time, they stop trusting their capacity to express themselves without assistance.
This leads to a subtle but profound shift: “I can’t do this without help”. That belief is fertile ground for anxiety.
Dependency and Anxiety: A Self-Reinforcing Loop
Clinically, we see that anxiety increases when people feel dependent on safety behaviours. AI risks becoming a cognitive safety behaviour. The pattern looks like this:
- AI reduces effort and discomfort
- Human skill weakens through non-use
- Confidence erodes
- Anxiety increases when unaided
- Reliance deepens further
This mirrors patterns seen in:
- reassurance-seeking
- learned helplessness
- avoidance-based coping
Confidence is built through mastery, not convenience. When AI removes struggle rather than scaffolding it, it undermines the very experiences that build psychological resilience.
The Existential Shock: No Longer the Smartest Species
Humans have always understood themselves as the primary thinking agents on the planet. Intelligence was our survival advantage, our moral authority and our justification for agency.
So, what happens when humans feel they are no longer the most intelligent entities in existence?
Whatever the hype would have us believe, evolutionary psychology suggests that loss of dominance does not produce calm adaptation. It produces threat responses.
The question is therefore not whether humans will react, but how.
Three Predictable Human Responses to Existential Displacement
When a core identity is threatened, humans respond in archetypal ways. In the context of AI, three responses are particularly likely.
Fighting: Conflict as a Substitute for Agency
When people feel outpaced, replaced, or rendered irrelevant, aggression often turns sideways: toward other humans. Conflict restores a sense of:
- power
- identity
- agency
- meaning
If intelligence and competence can no longer ground self-worth, ideology and dominance often take their place. Polarisation intensifies. “Us vs them” thinking hardens.
AI does not need to cause war directly. It only needs to destabilise identity and perceived usefulness. Conflict becomes a compensatory assertion of agency.
Fawning: Submission to Systems That “Know Better”
Another response to perceived inferiority is submission: the voluntary surrender of agency.
If AI is believed to:
- think better
- decide better
- optimise better
then deferring judgment can feel rational, even responsible.
Over time, this produces:
- loss of independent decision-making
- moral outsourcing
- obedience to opaque systems
- anxiety in the absence of guidance
This is psychologically soothing in the short term. Responsibility is heavy, and letting something else decide reduces stress.
But the long-term cost is loss of agency. A population that no longer trusts its own thinking becomes passive, dependent, and easily controlled.
Self-Destruction: Withdrawal and Meaning Collapse
The most dangerous response is not rebellion or submission: it is withdrawal.
When people feel “My thinking is unnecessary”, “My effort is redundant”, or “My contribution doesn’t matter”, motivation collapses.
This begins quietly:
- disengagement
- apathy
- loss of curiosity
And deepens into:
- depression
- nihilism
- despair
- self-destructive behaviours
Humans require a sense of necessity to thrive: they need to be seen as ‘useful’. When intelligence, and therefore usefulness, becomes externalised, life can begin to feel hollow.
Why Humans Cannot Simply “Accept” This Shift
Some argue that humans should simply accept a new place in the hierarchy, just as animals once did. This misunderstands human psychology.
Other animals did not define themselves by intelligence. Humans do.
Intelligence is not a peripheral trait. It is our:
- organising principle
- survival strategy
- source of meaning
- basis of agency
Remove or devalue it without offering a replacement and you do not get adaptation. You get identity collapse.
Fragmentation, Not Unity, Is the Likely Outcome
The most realistic future is not one unified human response, but fragmentation.
Some will fight. Some will submit. Some will withdraw.
This fragmentation itself is destabilising. Shared narratives dissolve. Collective agency weakens. Social trust erodes.
A society that no longer agrees on what humans are for becomes psychologically brittle.
The Most Dangerous Configuration of All
Perhaps the greatest risk is this combination:
- AI is perceived as smarter
- Humans still hold power
- Humans no longer feel responsible
When decision-making is outsourced but authority remains, moral accountability collapses.
“The system decided” is one of the most dangerous sentences a society can normalise.
Conclusion: The Risk Is Not AI—It Is Human Abdication
The true danger of artificial intelligence is not that it will dominate humanity; it is that humans may:
- stop initiating thought
- stop trusting judgment
- stop practising agency
- stop seeing themselves as necessary
Humans do not flourish when thinking is removed from their lives. They deteriorate.
Struggle, effort, authorship and uncertainty are not inefficiencies. They are psychological nutrients.
If AI replaces them rather than supports them, the result will not be liberation but anxiety, paralysis, and quiet withdrawal from agency.
The central question of the AI era is not what machines can do; it is whether humans will continue to believe that thinking, deciding, and meaning-making still belong to them.