We are living through a moment in which technological acceleration is being narrated as inevitability.
Artificial intelligence is presented not merely as a tool, but as a threshold: a force that will surpass human judgement, replace human labour, and redefine value itself. In professional settings, in education, and increasingly in private life, people are being told that machines will think faster, diagnose better, decide more rationally, and even care more consistently than we can.
What we see repeatedly is not only anxiety about job loss or market disruption. It is something deeper: a quiet erosion of human worth.
Many people now speak as though they are provisional: temporarily useful until systems improve. Some genuinely believe they will be replaced not simply in task, but in value. This is not a technical development. It is a psychological and political shift.
And it demands serious attention.
The Danger Is Not the Technology
The issue is not that AI systems are powerful. Nor is it that automation reshapes labour markets; it always has.
The risk is that we begin to internalise the narrative that the machine is cognitively superior, ethically neutral, and historically inevitable: that it represents something larger than the human mind that produced it.
This matters because AI is not an autonomous intelligence emerging from nowhere. It is a human invention: designed, coded, trained, financed, and governed by people situated within particular economic systems, cultures, and power relations.
To treat AI as if it exceeds the human is to erase the human labour, imagination, bias, fear, ambition, and unconscious fantasy embedded within it.
And that erasure has consequences.
When a technology is framed as transcendent, it becomes difficult to question. When it is framed as inevitable, it becomes difficult to resist. When it is framed as superior to human judgement, it becomes easy to diminish ourselves.
Why Psychoanalysis Matters Now
In moments of technological transformation, technical literacy is not enough. We also require psychological literacy.
Psychoanalysis, at its core, is a discipline concerned with unconscious desire, projection, fear, dependency, and the stories we tell ourselves about power. It asks not only what we are doing, but why. Not only how systems function, but how we relate to them.
This is not an abstract concern.
Consider the growing reliance on “therapy bots” and AI systems positioned as emotional companions. Many people now disclose intimate fears, loneliness, and distress to automated interfaces. They report feeling heard, validated, stabilised.
Rather than dismiss this as naïve, psychoanalysis would ask: what conditions make this form of attachment compelling? What does it mean to prefer a system that cannot truly respond, cannot be affected, cannot misunderstand in human ways? What anxieties about dependency, vulnerability, or disappointment are being negotiated through this turn towards machinic care?
This is not merely about individual wellbeing. It is about how we understand relationship, reciprocity, and recognition.
If care becomes something we download, what happens to the relational labour that sustains societies?
AI as a Site of Projection
Across CTDC’s work, we often see how institutions project competence, neutrality, or authority onto systems that appear technical. The machine becomes the rational arbiter; the human becomes the source of error.
Psychoanalysis offers language for this dynamic. Projection allows us to disown parts of ourselves (uncertainty, aggression, ambition, dependency) by attributing them elsewhere. In the case of AI, we may be projecting fantasies of omniscience and control onto the machine, while reserving fallibility and error for ourselves.
The result is paradoxical: we diminish the human and elevate the tool.
Yet AI systems are built by people who carry unconscious biases, prejudices, interests, fears, and hopes. They are trained on historical data that reflect unequal social arrangements. They are funded by economic actors with their own incentives. They are deployed within political systems that distribute risk unevenly.
To treat these systems as neutral is not sophistication. It is avoidance.
Agency in an Automated World
Technological acceleration can create a sense of inevitability, as though resistance is futile and adaptation is the only rational response.
But inevitability is a narrative, not a law of nature.
Reclaiming agency does not mean rejecting technology. It means situating it. It means recognising that tools are extensions of human intention, not replacements for human value.
Psychoanalysis is important here because it insists on examining intention. Why are we building these systems? What fantasies of efficiency, perfection, or invulnerability drive their development? What anxieties about fallibility or limitation are we attempting to escape?
If we do not interrogate these motivations, we risk reproducing them at scale.
This is not merely about safeguarding jobs. It is about safeguarding meaning.
Work has always been more than productivity; it is bound up with identity, recognition, contribution, and belonging. When individuals begin to believe that they are interchangeable with systems, something fundamental shifts in how they relate to themselves and to others.
The question is not whether AI will change work; it is already doing so.
The question is whether we will allow it to redefine human value without scrutiny.
The Human Behind the Machine
AI is an extraordinary achievement of human ingenuity. It reflects decades of research, creativity, mathematical abstraction, and collaborative labour. To acknowledge this is not to minimise its power; it is to locate that power correctly.
The danger lies in mythologising the invention and erasing the inventor.
When we speak of AI as if it were a superior intelligence emerging beyond humanity, we obscure the ethical responsibility of those who design, deploy, and govern it. We also obscure our collective responsibility as users.
Psychoanalysis reminds us that responsibility cannot be outsourced. Tools may mediate action, but they do not dissolve intention.
If we turn to AI for wellbeing, judgement, or validation, we must ask what this reveals about our social arrangements. If we feel replaceable, we must interrogate the economic systems that measure value narrowly. If we fear obsolescence, we must question the narratives that equate speed with worth.
Technology does not generate these anxieties alone. It amplifies the existing structures that produce them.
A Different Starting Point
Rather than asking whether AI will surpass us, we might ask:
- What aspects of being human are we too quick to discard?
- What forms of care, interpretation, and ethical judgement cannot be automated without loss?
- What does it mean to build systems that reflect our highest values rather than our deepest fears?
Psychoanalysis does not provide technical solutions. It offers something more difficult: a method for confronting ourselves.
In times of acceleration, reflection can feel indulgent. It is not.
It is the condition for agency.
If AI is a human creation, then its trajectory is not predetermined. It will reflect the intentions, investments, and imaginaries we embed within it.
The task, therefore, is not to compete with the machine. It is to refuse the diminishment of the human.
That refusal begins with understanding ourselves — our projections, our dependencies, our ambitions — and insisting that technology remain accountable to human value rather than the other way around.
Continuing the Work
Across CTDC’s work, we create learning spaces that help people think seriously about power, technology, responsibility, and the psychological dimensions of institutional life. These are not courses about software proficiency or digital trends. They are structured inquiries into how systems are shaped by human intention and how we remain accountable within them.
CTDC Academy courses are built around this premise: that technological literacy without psychological and political literacy leaves us vulnerable to narratives of inevitability and diminishment.
We invite those navigating AI-driven workplaces, ethical uncertainty, or professional self-doubt to engage with these questions in depth. Not to resist technology reflexively. Not to embrace it uncritically. But to understand the forces, conscious and unconscious, shaping its development and use.
Serious engagement requires time, discipline, and collective reflection.
If these questions resonate, explore whether CTDC Academy offers the right space for your next stage of thinking.