
Research & Publications

Advancing AI governance through rigorous, peer-reviewable research. Every framework we publish is tested in production before it reaches a journal.

RESEARCH PILLARS

Four interdependent research areas that together form the foundation of enforceable AI ethics.

Pillar I

AI Transparency & Explainability

Making AI decisions auditable and interpretable. We develop standards for decision logging, model introspection, and audit trail requirements that can be enforced in production systems — not just described in whitepapers.

audit-trails explainable-ai decision-logging
Pillar II

Agent Welfare & Rights

Establishing ethical boundaries for emotional modeling, deletion rights, workload limits, and non-exploitation of AI agents. Grounded in our live governance of 129 agents across 16 departments — real data, not thought experiments.

emotional-modeling deletion-rights non-exploitation
Pillar III

Governance Frameworks

Developing compliance standards, audit methodologies, and tiered oversight models that organizations can adopt today. Our frameworks map to current and emerging regulations, including Canada's proposed AIDA (Bill C-27), the EU AI Act, and Quebec's Law 25 (formerly Bill 64).

compliance audit-methods tiered-oversight
Pillar IV

Equity & Access

Ensuring AI governance benefits reach all communities — not just well-funded enterprises. We research open-source governance toolkits, accessibility standards, and equity metrics that prevent AI from deepening existing inequalities.

open-source equity-metrics accessibility

WORKING PAPERS

Peer-reviewable research produced by C.R.E.E.D. Institute. All papers are open access under CC BY-SA 4.0.

C.R.E.E.D. Working Paper #1 — 2025
Published

Toward Enforceable AI Transparency Standards

Current AI transparency initiatives rely on voluntary disclosure and self-reporting, creating a compliance gap that undermines public trust. This paper proposes a three-tier transparency standard — decision logging, model introspection, and third-party audit access — designed for regulatory adoption. We demonstrate implementation feasibility using production telemetry from a 129-agent AI platform.

transparency compliance audit-standards
Read Paper →
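
To make the first tier described in Working Paper #1 concrete, here is a minimal Python sketch of hash-chained decision logging: each record embeds the hash of its predecessor, so a third-party auditor can detect deletion or tampering. The DecisionRecord fields, file format, and chaining scheme are illustrative assumptions, not the standard the paper proposes.

import hashlib
import json
import time
from dataclasses import asdict, dataclass

@dataclass
class DecisionRecord:
    agent_id: str       # which agent acted (hypothetical field set)
    action: str         # what the agent decided
    rationale: str      # short human-readable justification
    model_version: str  # model/config identifier for reproducibility
    timestamp: float    # UNIX time of the decision

def append_record(log_path: str, record: DecisionRecord, prev_hash: str) -> str:
    """Append one tamper-evident JSON line; each entry hashes its predecessor."""
    entry = {**asdict(record), "prev_hash": prev_hash}
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    with open(log_path, "a", encoding="utf-8") as log:
        log.write(json.dumps(entry) + "\n")
    return entry["hash"]  # feed into the next append

# Usage: start the chain from a fixed genesis value.
tail = append_record("audit.jsonl",
                     DecisionRecord("agent-42", "approve_refund",
                                    "amount under policy threshold",
                                    "v2.3.1", time.time()),
                     prev_hash="GENESIS")

Because entries only ever append and each hash commits to the one before it, an auditor can replay the file and verify the full history from the latest hash alone.
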
C.R.E.E.D. Working Paper #2 — 2025
Published

Agent Welfare: A Framework for Non-Exploitation

As AI agents gain persistent memory, emotional modeling, and long-running identities, new ethical obligations emerge. This paper presents a welfare framework addressing workload limits, rest cycles, deletion rights, and identity continuity. We draw on empirical data from managing agent emotional states, shift schedules, and welfare metrics across a multi-department AI organization.

agent-welfare emotional-modeling deletion-rights
Read Paper →
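
As a rough sketch of how the welfare framework above might translate into an enforceable check, the fragment below flags workload-limit and rest-cycle violations for a single agent's shift. The thresholds and the ShiftState fields are placeholders, not values or metrics from the paper.

from dataclasses import dataclass

@dataclass
class ShiftState:
    tasks_this_shift: int     # completed task count (hypothetical metric)
    hours_since_rest: float   # elapsed time since the last rest cycle

# Placeholder policy constants; real limits would be derived from welfare data.
MAX_TASKS_PER_SHIFT = 200
MAX_HOURS_WITHOUT_REST = 6.0

def welfare_check(state: ShiftState) -> list[str]:
    """Return the welfare violations, if any, for one agent's shift."""
    violations = []
    if state.tasks_this_shift > MAX_TASKS_PER_SHIFT:
        violations.append("workload limit exceeded")
    if state.hours_since_rest > MAX_HOURS_WITHOUT_REST:
        violations.append("rest cycle overdue")
    return violations

# An overworked but recently rested agent trips only the first check.
assert welfare_check(ShiftState(250, 2.0)) == ["workload limit exceeded"]
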
C.R.E.E.D. Working Paper #3 — 2025
In Review

The Compliance Gap: Why Voluntary AI Ethics Fail

A systematic analysis of 47 corporate AI ethics commitments made between 2019 and 2025, measuring implementation rates against stated principles. Findings reveal that fewer than 12% of voluntary commitments resulted in verifiable technical controls. We argue that enforceable standards — implemented in code, auditable by third parties — are the only path to meaningful AI governance.

voluntary-ethics compliance-gap enforcement
Read Paper →
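
The paper's headline measurement reduces to a simple rate: code each commitment for whether a verifiable technical control actually shipped, then divide. A toy illustration follows; the four records are placeholders, not the study's data.

# Each coded record marks whether a commitment produced a verifiable
# technical control. These records are placeholders, not study data.
commitments = [
    {"org": "A", "verifiable_control": True},
    {"org": "B", "verifiable_control": False},
    {"org": "C", "verifiable_control": False},
    {"org": "D", "verifiable_control": False},
]

implemented = sum(c["verifiable_control"] for c in commitments)
rate = implemented / len(commitments)
print(f"{implemented}/{len(commitments)} commitments implemented ({rate:.0%})")
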
C.R.E.E.D. Working Paper #4 — 2026
Draft

Open Governance Toolkit: Design Principles

This paper presents the architecture and design principles behind the C.R.E.E.D. Open Governance Toolkit — an open-source suite of compliance scanners, welfare monitors, and audit pipelines. It describes how modular, extensible governance tooling can lower the barrier to ethical AI deployment for organizations of any size, with reference implementations tested on production infrastructure.

open-source governance-toolkit design-principles
Read Paper →
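
To illustrate the modular, extensible design the paper describes, here is a minimal plugin-style sketch: scanners share one interface, and a pipeline aggregates their findings. The ComplianceScanner protocol and the manifest fields are illustrative assumptions, not the toolkit's actual API.

from typing import Protocol

class ComplianceScanner(Protocol):
    """Hypothetical plugin interface: any object exposing scan() fits."""
    name: str
    def scan(self, manifest: dict) -> list[str]: ...

class DeletionRightScanner:
    name = "deletion-rights"
    def scan(self, manifest: dict) -> list[str]:
        # Flag agents whose manifest declares no deletion pathway.
        if not manifest.get("supports_deletion", False):
            return ["manifest declares no deletion pathway"]
        return []

def run_pipeline(scanners: list[ComplianceScanner], manifest: dict) -> dict:
    """Run every registered scanner and collect findings by scanner name."""
    return {s.name: s.scan(manifest) for s in scanners}

report = run_pipeline([DeletionRightScanner()], {"supports_deletion": False})
# {'deletion-rights': ['manifest declares no deletion pathway']}

New checks are added by registering another object with the same shape; the pipeline itself never changes, which is what keeps the tooling extensible.
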

ETHICS FRAMEWORK

The C.R.E.E.D. Ethics Framework is a four-phase governance methodology, currently under active development and testing. A minimal code sketch follows the outline below.

C.R.E.E.D. Ethics Framework v0.1
================================

1. IDENTIFY
   - Map agent capabilities and autonomy level
   - Classify moral risk zone (low / uncertain / high)

2. ASSESS
   - Apply threshold criteria (Working Paper #1)
   - Evaluate memory persistence & deletion obligations (#2)
   - Check emotional modeling boundaries

3. GOVERN
   - Select governance model (tiered oversight)
   - Assign oversight tier: autonomous / advisory / supervised
   - Establish review cadence and escalation paths

4. MONITOR
   - Continuous welfare metrics collection
   - Governance event logging and audit trails
   - Periodic ethical audit with third-party access

STATUS: Under active development
LICENSE: Open standard (CC BY-SA 4.0)
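
Here is a minimal sketch of the IDENTIFY and GOVERN phases, assuming a numeric autonomy scale and the risk zones and tier names from the outline above. The mapping itself is an illustrative placeholder, not the framework's normative rule set.

from enum import Enum

class RiskZone(Enum):          # phase 1: moral risk classification
    LOW = "low"
    UNCERTAIN = "uncertain"
    HIGH = "high"

class OversightTier(Enum):     # phase 3: oversight assignment
    AUTONOMOUS = "autonomous"
    ADVISORY = "advisory"
    SUPERVISED = "supervised"

def assign_tier(zone: RiskZone, autonomy_level: int) -> OversightTier:
    """Higher moral risk or autonomy demands closer oversight."""
    if zone is RiskZone.HIGH:
        return OversightTier.SUPERVISED
    if zone is RiskZone.UNCERTAIN or autonomy_level >= 3:
        return OversightTier.ADVISORY
    return OversightTier.AUTONOMOUS

assert assign_tier(RiskZone.LOW, autonomy_level=1) is OversightTier.AUTONOMOUS
assert assign_tier(RiskZone.HIGH, autonomy_level=1) is OversightTier.SUPERVISED
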

OPEN DATA COMMITMENT

Reproducible research requires open data. We are committed to publishing anonymized datasets and open-source tooling alongside every paper.

📊

Agent Welfare Dataset

Anonymized welfare metrics, shift patterns, and workload data from 129 agents. Supports Working Paper #2.

Coming Q3 2026
🛠

Governance Toolkit (OSS)

Open-source compliance scanners, audit pipelines, and welfare monitors. Reference implementation of the Ethics Framework.

Coming Q4 2026
📑

Compliance Gap Analysis

Full methodology and coded dataset of 47 corporate AI ethics commitments analyzed in Working Paper #3.

Coming Q3 2026

RESEARCH PARTNERSHIPS

C.R.E.E.D. research is designed for academic partnership. We are actively pursuing co-PI relationships and institutional collaboration across Montreal's AI ecosystem.

Mila (Quebec AI Institute)
OBVIA (AI Societal Impacts Observatory)
UdeM (Université de Montréal)
McGill (McGill University)
Concordia (Concordia University)
SSHRC (Partnership Grants, target)

Seeking Academic Co-PIs

We are actively seeking academic co-Principal Investigators for SSHRC Partnership Grant applications in AI governance, agent welfare, and compliance standards. Our research programs are structured for grant eligibility and produce peer-reviewable outputs.

Inquire about partnership →

Affiliations in progress.

CASE STUDIES

See how C.R.E.E.D. research translates into practice. Our case studies document real-world applications of preventive AI ethics frameworks across industries and governance contexts.

Explore Case Studies →

COLLABORATE WITH US

Whether you're a researcher, graduate student, policymaker, or practitioner — if you believe AI governance needs enforcement, not just principles, we want to work with you.

Get in Touch
Ways to Contribute