CTDC Responsible AI Use Policy
Effective Date: 15 December 2025
This policy outlines how the Centre for Transnational Development and Collaboration (CTDC) engages with artificial intelligence (AI) technologies across its educational, research, facilitation, consultancy, digital infrastructure, and internal systems. It reflects CTDC’s commitment to justice, equity, transparency, and ethical innovation, and applies across all affiliated CTDC group entities.
1. Purpose and Scope
CTDC integrates AI to support the effectiveness, scalability, and accessibility of its services. This policy:
- Applies to all AI-enabled tools, platforms, and practices used or developed by CTDC internally or externally;
- Covers CTDC Academy (e-learning, diagnostics, content generation, analytics);
- Covers consulting, research, facilitation, and operational work involving AI systems;
- Covers internal systems (e.g. document processing, knowledge management, CRM automation);
- Applies to all CTDC personnel, consultants, learners, clients, and users of CTDC tools or services;
- Includes CTDC’s AI Integration for Businesses service, which supports clients in ethically and contextually embedding AI into their operations.
2. Consent and Data Use Safeguards
CTDC does not use AI in any context involving personal or client data without clear, prior, and informed consent. We do not process personal data via AI systems or use client data in AI workflows unless all of the following apply:
- We have a valid legal basis (e.g. contract or legal obligation);
- The client or user has explicitly consented in writing;
- The purpose, method, and scope of AI use have been clearly communicated;
- Appropriate data protection agreements are in place.
Learners, clients, and partners always retain the right to decline or opt out of AI-enabled features or services. Where AI is used, it is transparently disclosed and never substitutes for human judgment in critical or safeguarding matters.
3. Guiding Principles
CTDC’s approach to AI use is grounded in its feminist, decolonial, and justice-oriented values. We commit to:
- Transparency — Disclosing where AI is used, what it does, and who oversees it.
- Human Oversight — Ensuring no critical decisions are made by AI without contextual and accountable human involvement.
- Fairness and Non-Discrimination — Minimising algorithmic bias and proactively identifying risks to marginalised groups.
- Accountability — Holding CTDC legally and ethically responsible for AI tools used in its name.
- Privacy and Data Sovereignty — Respecting user consent, minimising data use, and never feeding personal data into third-party AI models without legal basis.
- Purpose Limitation — Designing AI applications for clear, justifiable use cases aligned with CTDC’s values and service objectives.
- Reflexivity — Regularly interrogating the impact of AI on power, access, equity, and CTDC’s institutional purpose.
4. Categories of AI Use Across CTDC
A. In CTDC Academy
- Learning Analytics: Monitoring learner engagement, completion patterns, and dropout risks to inform course design.
- AI-Enhanced Learning Tools: Writing support, reflective prompts, and summarisation tools, available to learners with clear opt-in and disclaimers.
- Generative Prototypes: Internal testing of generative AI to assist in drafting course materials or custom learning outputs, always human-reviewed.
- Feedback Automation: Optional use of AI-generated quiz feedback or automated milestone alerts.
- Accessibility Tools: AI-enabled transcription, subtitling, and translation to improve access to course content.
B. In Consultancy, Facilitation, and Research
- Qualitative Data Processing: AI-assisted analysis of anonymised transcripts, surveys, and textual data for trends or pattern detection.
- Document Drafting: Human-led drafting supported by AI summarisation, synthesis, and prompt generation in complex reports.
- Workshop Preparation: Use of AI for slide generation, facilitation prompt design, or discussion maps, always reviewed by a human facilitator.
- Translation/Language Support: AI-powered translation of client-facing content or multilingual resources (subject to human quality assurance).
C. Internal Systems and Infrastructure
- Workflow Optimisation: AI-assisted scheduling, ticketing, task routing, inbox filtering, and internal comms categorisation.
- Knowledge Management: Indexing, summarising, and tagging archival material or shared resources for organisational learning.
- Data Hygiene and Automation: Error flagging, duplicate detection, or data validation in CRM and internal databases.
- Monitoring and Insights: Tracking website performance, analytics trends, and LMS engagement.
D. AI Integration for Businesses
CTDC supports external clients through an AI Integration for Businesses service, providing ethical, contextual, and justice-informed strategies for embedding AI into internal workflows, service delivery, knowledge systems, or communications. These services are delivered only with:
- Full disclosure of tools and techniques involved;
- Consent and contract-based agreement with the client;
- Explanation of justification, necessity, and safeguards;
- Ongoing review and client-led control of outcomes.
All AI systems are subject to internal approval, testing, and oversight before being deployed.
5. Boundaries and Prohibited Uses
CTDC will not:
- Use AI for any form of covert surveillance or behavioural scoring of users, staff, or learners.
- Delegate safeguarding decisions, hiring outcomes, or contractual decisions to AI systems.
- Feed identifiable client, learner, or staff data into external AI tools (e.g. ChatGPT, Gemini, Copilot) without legal basis, clear user knowledge, and safeguards.
- Use personal data in any AI system for training, modelling, or prediction purposes.
- Deliver AI-enabled services to clients or learners without their explicit and informed permission, including justification of use and necessity.
- Use AI to simulate human interactions (e.g. AI-generated coaches, facilitators, or researchers) without clear labelling and consent.
- Deploy AI in ways that undermine meaningful engagement, structural analysis, or reflexive practice.
6. Data Protection and Consent
All AI use is governed by CTDC’s Data Protection and Information Security Policy. In addition:
- CTDC never processes personal data via AI tools without a lawful basis and explicit consent.
- No personal data is used to train or fine-tune third-party models.
- CTDC does not use client data in AI systems unless the client has provided informed permission and a legal agreement is in place.
- All third-party AI tools are reviewed for data protection compliance (UK GDPR, DPA 2018).
- Learners and users must opt in before using optional AI-enabled learning supports.
- Data used to develop or monitor AI performance is anonymised and aggregated wherever possible.
- If AI-generated insights are used to influence programme changes, they are validated by domain experts and declared in evaluation reports.
7. Oversight, Audit, and Redress
CTDC ensures ethical governance through:
- Designated leads in each service area accountable for AI use and review;
- Mandatory documentation of AI systems, functions, data inputs, and risks;
- Annual internal audits of AI tools and workflows for risk, bias, and effectiveness;
- Accessible feedback channels for users to contest or flag AI use or impact;
- Termination or retraining of tools where bias, harm, or inaccuracy is discovered.
8. Training and Internal Capacity
CTDC invests in:
- Training staff and consultants in the ethical and strategic use of AI;
- Providing teams with guidance on prompt design, tool selection, and bias mitigation;
- Fostering critical AI literacy aligned with CTDC’s justice and power analysis.
9. Legal and Ethical Alignment
CTDC’s AI practices are designed to comply with current and emerging legal and ethical standards, including:
- The UK General Data Protection Regulation (UK GDPR) and Data Protection Act 2018, including provisions on consent, data minimisation, and automated decision-making safeguards;
- The UK government’s cross-sector AI governance principles, covering safety, security and robustness; transparency and explainability; fairness; accountability and governance; and contestability and redress;
- The EU AI Act (which entered into force in 2024, with its obligations applying in stages through 2026–2027), especially requirements for educational, research, and professional development tools classified as limited-risk or high-risk systems;
- The OECD Principles on AI and the UNESCO Recommendation on the Ethics of Artificial Intelligence, ensuring human rights, fairness, and democratic oversight in AI deployment.
CTDC’s AI policy will be updated proactively as legal requirements evolve. Its current provisions reflect international best practices for responsible, rights-respecting AI in education, consulting, and organisational development.
10. Policy Review and Future Development
CTDC will update this policy in line with:
- Changes in the AI regulatory environment (e.g. UK/EU legislation);
- Developments in CTDC’s Innovation & Development portfolio;
- Learning from pilot use cases and user feedback.
Major updates will be reviewed by the CTDC Directors and announced publicly via the website or platform updates.
Contact
Questions or concerns about CTDC’s AI practices can be directed to:
[email protected]