Real Learning in the Age of AI — Part 4
Before the rise of AI, the primary concern for learners was whether a course was relevant, rigorous, or well-designed. Today, an entirely new question has emerged — one that would have seemed unimaginable even five years ago:
Was this course created by actual educators, or is it synthetic content assembled by AI?
This question matters not because AI-generated material is inherently problematic, but because unexamined, unguided, and uncontextualised AI output undermines learning in fields where ethics, power, and lived experience are essential.
In our work at the Centre for Transnational Development and Collaboration (CTDC), we increasingly encounter organisations confused by curricula that appear professional but collapse under minimal scrutiny. AI has enabled an unprecedented degree of surface-level coherence (polished texts, sleek visuals, tidy frameworks) while masking fundamental gaps in knowledge, methodology, and ethics.
This blog, the fourth in our series Real Learning in the Age of AI, explores how learners can recognise synthetic content, how thin learning design manifests, and why this matters deeply for safeguarding, DEI, gender justice, organisational governance, trauma, and other sensitive areas.
🌱 Synthetic Content: When AI Mimics Expertise Without Having Any
AI can generate content that looks impressively structured:
- neat modules
- tidy lists
- “global best practice” language
- motivational tone
- conceptual vocabulary
- fabricated case examples
- generic frameworks
But beneath the polish, synthetic content lacks:
- intellectual lineage
- conceptual consistency
- political analysis
- contextual relevance
- ethical reasoning
- methodological coherence
- lived experience
- relational understanding of harm
AI can replicate the form of expertise, but not its substance.
This distinction is critical in education, because the appearance of coherence can falsely signal reliability.
🧠 Six Signs a Course Is Largely AI-Generated
The following patterns, when observed together, strongly suggest synthetic content.
1. The Language Is Perfect — and Empty
AI-generated text often uses:
- overly balanced sentences
- polished but repetitive phrasing
- broad claims that apply to any field
- cliché-filled motivational language
Example:
“Leaders today must be ethical, dynamic, and visionary to create impactful transformation across diverse contexts.”
This says everything and nothing.
The absence of conceptual anchors — authors, traditions, histories, debates — is a hallmark of AI-generated writing.
2. The Curriculum Could Apply to Any Topic
Many synthetic course descriptions follow a template:
Module 1 — Introduction
Module 2 — Core Principles
Module 3 — Tools and Techniques
Module 4 — Implementation
Module 5 — Leadership and Change
Swap “safeguarding” for “marketing” or “trauma” and the curriculum still makes sense.
This is a signal that no real educator shaped the learning design.
3. No Clear Methodology
AI cannot produce a learning methodology rooted in pedagogy.
Thus synthetic courses lack:
- epistemological grounding
- facilitation plans
- reflection practices
- learning outcomes tied to context
- exercises that require interaction or critical thinking
A course without methodology is not a course.
It is a content repository.
4. Generic Case Studies
AI-generated case studies have indicative features:
- ambiguous settings
- culturally neutral environments
- unrealistic names
- simplified conflicts
- tidy resolutions without structural analysis
Real case studies contain:
- tension
- complexity
- politics
- contradictions
- discomfort
AI avoids this because it cannot navigate emotionally or politically sensitive terrain.
5. No Evidence of Revision or Community Practice
Educational materials evolve through:
- feedback loops
- facilitator notes
- community engagement
- adaptation over time
Synthetic courses rarely reference:
- previous versions
- learning from practice
- field-based insights
- collaborative development processes
AI-generated materials arrive fully formed — a key warning sign.
6. Identical Tone Across All Modules
AI struggles to shift register.
Thus the tone of synthetic courses is:
- uniform
- upbeat
- uncritical
- slightly detached
- politically flat
In contrast, real educators shift tone depending on:
- audience
- context
- sensitivity of material
- emotional complexity
- ethical considerations
Uniformity is a red flag.
⚠️ The Risks of Thin Learning Design in Sensitive Fields
Synthetic content does not only lead to shallow learning. In fields involving harm, justice, inequality, and power, it can also reproduce or deepen harm.
1. Safeguarding and PSEA
AI-generated frameworks often ignore:
- affect and lived experience
- positionality
- structural violence
- survivor-centred approaches
- confidentiality and risk
- local power dynamics
This leads to training that misclassifies harm or reinforces proceduralism.
2. DEI and Gender Justice
AI draws heavily on dominant Western discourse.
Without human guidance, it:
- erases local histories
- misrepresents marginalised groups
- recycles neoliberal diversity language
DEI cannot be automated.
3. Trauma-Informed Practice
AI cannot handle discussions of trauma ethically or safely.
Synthetic content risks retraumatisation or misinformation.
4. Governance and Organisational Culture
AI-generated leadership courses often mimic corporate positivism, ignoring relational, cultural, and ethical dimensions of governance.
5. “Holistic Therapy” and “Healing” Courses
AI can now generate entire healing curricula that mix spiritual language, pseudo-psychology, neuroscience jargon, and motivational quotes.
This is dangerous.
When a course touches people's lives, vulnerabilities, and experiences, thin educational design is not a benign error; it is an ethical failure.
🧩 How Responsible Educators Use AI (and Why Transparency Matters)
Responsible learning providers use AI as:
- a drafting assistant
- a translation tool
- a brainstorming catalyst
- a way to expand examples or variations
- a support mechanism, not the foundation
The difference lies in intentionality, transparency, and oversight.
Educators must articulate:
- what AI contributed
- what humans revised
- what frameworks guide interpretation
- what ethical considerations shape decisions
- how methodology and pedagogy remain human-led
Transparent integration builds trust.
Unacknowledged dependence erodes it.
🔍 How Learners Can Recognise Thin Learning Design
Here are practical indicators for evaluating a course:
1. Does the course articulate a learning philosophy?
If all you see are outcomes (“become a confident leader!”) and no articulated methodology, proceed with caution.
2. Does the provider show contextual knowledge?
Real educators can speak about conditions, constraints, politics, and nuance.
3. Are there identifiable humans behind the work?
AI cannot substitute for positionality, identity, or lived experience.
4. Does the course critically engage with power?
Synthetic content avoids structural analysis.
5. Are concepts clearly defined?
Ambiguity and overgeneralisation are common in AI outputs.
6. Does the material show internal coherence?
Fragmented content signals lack of real authorship.
7. Are examples grounded and specific?
Generic case studies indicate thin design.
These checks help learners resist the lure of polished but shallow educational offerings.
🌍 At the Centre for Transnational Development and Collaboration
CTDC’s approach to learning is grounded in:
- critical pedagogy
- feminist and decolonial analysis
- methodological rigour
- interpretive and relational approaches
- field-based experience
- careful ethical design
- context-specific adaptation
- transparent and responsible integration of AI
As we prepare to launch CTDC Academy and our new practice camps, we hold ourselves accountable to the same standards we recommend to learners.
In an environment where AI can generate infinite content, the real markers of credibility remain human judgment, methodological depth, and ethical responsibility.
Reach Out to Us
Have questions or want to collaborate? We'd love to hear from you.