
Artificial intelligence is often framed either as a threat to human distinctiveness or as a neutral productivity tool. Rarely is it discussed as something that may meaningfully support neurodivergent people, and equally rarely do we ask what AI could become if it were shaped by neurodivergent ways of thinking.

The issue is not simply whether AI can “help”.
The issue is: help according to whose norms?

Across education, employment, and public services, neurodivergent individuals are routinely required to adapt to systems that were not designed with them in mind. Communication styles are standardised. Attention is regulated. Social expectations are implicit and unspoken. Administrative processes reward particular forms of executive functioning.

In that context, AI tools can offer real support.

Where AI Can Support Neurodivergent People

Used carefully, AI can function as a cognitive scaffold rather than a corrective device. It can:

  • Translate dense information into structured formats
  • Assist with drafting when starting is difficult
  • Provide rehearsal space for communication
  • Support time planning and task breakdown
  • Offer alternative modalities (text, audio, visual summarisation)

For someone navigating executive functioning challenges, social processing differences, or sensory overload, these supports can reduce friction.

This matters because many institutional barriers are procedural, not intellectual.

However, the ethical line is important. AI should not be positioned as a tool to make neurodivergent people appear “more typical”. When technology is framed as a way to smooth difference into acceptability, we reproduce the very normativity that excludes in the first place.

The goal is not normalisation.
It is accessibility and agency.

The Risk: Automation of Norms

AI systems are trained on large datasets that reflect dominant communication patterns and behavioural expectations. That means they often encode assumptions about what clarity looks like, what professionalism sounds like, or what counts as appropriate interaction.

If these systems are left uninterrogated, they risk reinforcing neurotypical standards as the default.

What happens, for example, when hiring tools privilege certain linguistic patterns?
What assumptions are embedded in AI-driven assessments of “fit”?
Whose cognitive style is treated as neutral?

This is not merely a design oversight. It is a political and structural question.

What Neurodivergent Perspectives Offer AI Development

The conversation must move beyond assistance and towards co-creation.

Neurodivergent individuals often bring:

  • Pattern recognition that diverges from conventional heuristics
  • Deep-focus specialisation
  • Non-linear problem solving
  • Heightened sensitivity to system inconsistencies
  • Alternative interpretations of social data

These are not deficits to be mitigated; they are epistemic resources.

When neurodivergent people are meaningfully involved in AI design, testing, governance, and auditing, the resulting systems can become more adaptable and less normatively rigid.

Rather than retrofitting accessibility, inclusion becomes embedded at the level of architecture.

This is what relational accountability looks like in technological development: those most affected by systems must shape them.

Moving From Tool Use to Structural Design

There are two parallel conversations happening:

  1. How AI can reduce barriers for neurodivergent people navigating existing institutions.
  2. How neurodivergent people can reshape the assumptions embedded in AI systems themselves.

The first is about access.
The second is about power.

Without the second, the first remains partial.

Across CTDC’s work, we see repeatedly that technology debates collapse into functionality: what works, what scales, what saves time. What is less frequently examined is whose cognitive style becomes the default architecture of the future.

AI is not cognitively neutral.
It encodes design choices, training priorities, and assumptions about “good” reasoning.

If neurodiversity is understood not as deviation but as variation, then technological systems must reflect that variation structurally.

Questions Institutions Should Be Asking

  • Are AI accessibility tools positioned as optional support, or as substitutes for institutional adaptation?
  • Who is present in AI design and governance processes — and who is absent?
  • Do performance metrics penalise cognitive difference?
  • Are neurodivergent users treated as testers, or as architects?

These are governance questions, not usability tweaks.

A Different Framing

The dominant discourse asks: Can AI accommodate neurodiversity?

A more serious question is: Can neurodiversity improve AI?

If artificial intelligence is to become a public infrastructure rather than a narrow optimisation machine, it must be shaped by diverse cognitive logics. Not as an afterthought. Not as compliance. But as a design principle.

That requires institutions to move from inclusion rhetoric to structural redesign.

And it requires us to understand neurodiversity not only as a matter of individual support, but as a source of epistemic strength.

These questions cannot be resolved at the level of personal productivity or tool preference. They sit at the intersection of cognition, power, governance, and design.

If AI is becoming part of our shared infrastructure, then understanding how cognitive norms are encoded and how they can be challenged is no longer optional. It requires conceptual literacy, ethical clarity, and institutional courage.

Across CTDC Academy’s programmes, we explore precisely these tensions:

  • how technological systems embed political and economic assumptions
  • how accountability operates when decisions are automated
  • how governance can move beyond compliance towards relational responsibility
  • how individuals and institutions can act responsibly within unequal and rapidly shifting systems

These are not technical training spaces. They are learning environments for people who want to think carefully about the infrastructures shaping their work and lives, and who want to participate in shaping them differently.

If you are grappling with the role of AI in your organisation, or reflecting on how cognitive difference is recognised, supported, or marginalised within technological systems, you may wish to explore our Academy offerings.

Serious technological change requires serious learning.
 
