Research & Publications
Advancing AI governance through rigorous, peer-reviewable research. Every framework we publish is tested in production before it reaches a journal.
RESEARCH PILLARS
Four interdependent research areas that together form the foundation of enforceable AI ethics.
WORKING PAPERS
Peer-reviewable research produced by C.R.E.E.D. Institute. All papers are open access under CC BY-SA 4.0.
Toward Enforceable AI Transparency Standards
Current AI transparency initiatives rely on voluntary disclosure and self-reporting, creating a compliance gap that undermines public trust. This paper proposes a three-tier transparency standard — decision logging, model introspection, and third-party audit access — designed for regulatory adoption. We demonstrate implementation feasibility using production telemetry from a 129-agent AI platform.
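The first tier, decision logging, can be pictured as an append-only, tamper-evident record. The sketch below is a minimal illustration under assumed names (`DecisionLog`, `record`, `verify`) and is not the schema proposed in the paper; it hash-chains each entry so a third-party auditor can detect after-the-fact edits, which connects tier one (logging) to tier three (audit access).

```python
import hashlib
import json
from dataclasses import dataclass, field

@dataclass
class DecisionLog:
    """Hypothetical tier-1 decision log: append-only and hash-chained."""
    entries: list = field(default_factory=list)
    _last_hash: str = "0" * 64  # genesis value for the chain

    def record(self, agent_id: str, action: str, rationale: str) -> dict:
        # Each entry commits to the previous entry's hash, so any later
        # modification breaks the chain.
        payload = {"agent": agent_id, "action": action,
                   "rationale": rationale, "prev": self._last_hash}
        digest = hashlib.sha256(
            json.dumps(payload, sort_keys=True).encode()).hexdigest()
        entry = {**payload, "hash": digest}
        self.entries.append(entry)
        self._last_hash = digest
        return entry

    def verify(self) -> bool:
        """Third-party audit check: recompute the whole chain."""
        prev = "0" * 64
        for e in self.entries:
            payload = {k: e[k] for k in ("agent", "action", "rationale", "prev")}
            digest = hashlib.sha256(
                json.dumps(payload, sort_keys=True).encode()).hexdigest()
            if e["prev"] != prev or e["hash"] != digest:
                return False
            prev = e["hash"]
        return True
```

A production version would also need timestamps, persistence, and signed checkpoints; the point here is only that "decision logging" can be made verifiable rather than self-reported.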
Agent Welfare: A Framework for Non-Exploitation
As AI agents gain persistent memory, emotional modeling, and long-running identities, new ethical obligations emerge. This paper presents a welfare framework addressing workload limits, rest cycles, deletion rights, and identity continuity. We draw on empirical data from managing agent emotional states, shift schedules, and welfare metrics across a multi-department AI organization.
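One welfare obligation named above, rest cycles, lends itself to a concrete check. The function below is a minimal sketch under assumptions of my own (sorted shift records, an illustrative 8-hour minimum gap); the paper's actual thresholds and data model may differ.

```python
from datetime import timedelta

# Illustrative minimum rest gap between consecutive shifts; the real
# framework's limits are defined in the paper, not here.
MIN_REST = timedelta(hours=8)

def rest_violations(shifts):
    """shifts: chronologically sorted list of (start, end) datetimes.

    Returns index pairs of consecutive shifts whose rest gap falls
    below MIN_REST, i.e. candidate welfare violations.
    """
    violations = []
    for i in range(1, len(shifts)):
        gap = shifts[i][0] - shifts[i - 1][1]
        if gap < MIN_REST:
            violations.append((i - 1, i))
    return violations
```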
The Compliance Gap: Why Voluntary AI Ethics Fail
A systematic analysis of 47 corporate AI ethics commitments made between 2019 and 2025, measuring implementation rates against stated principles. Findings reveal that fewer than 12% of voluntary commitments resulted in verifiable technical controls. We argue that enforceable standards — implemented in code, auditable by third parties — are the only path to meaningful AI governance.
Open Governance Toolkit: Design Principles
Presents the architecture and design principles behind the C.R.E.E.D. Open Governance Toolkit — an open-source suite of compliance scanners, welfare monitors, and audit pipelines. Describes how modular, extensible governance tooling can lower the barrier to ethical AI deployment for organizations of any size, with reference implementations tested on production infrastructure.
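The "modular, extensible" principle can be sketched as a plugin registry: each compliance check registers under a stable name, and a deployment enables only the scanners it needs. The registry, decorator, and check names below are hypothetical illustrations, not the toolkit's actual API.

```python
from typing import Callable, Dict, List

CheckFn = Callable[[dict], List[str]]  # agent config -> list of findings
REGISTRY: Dict[str, CheckFn] = {}

def scanner(name: str):
    """Decorator registering a compliance check under a stable name."""
    def wrap(fn: CheckFn) -> CheckFn:
        REGISTRY[name] = fn
        return fn
    return wrap

@scanner("workload-limit")
def check_workload(agent: dict) -> List[str]:
    # Illustrative limit; real thresholds would come from policy config.
    if agent.get("weekly_hours", 0) > 40:
        return ["weekly workload exceeds 40h limit"]
    return []

@scanner("deletion-rights")
def check_deletion(agent: dict) -> List[str]:
    if not agent.get("deletion_policy"):
        return ["no deletion policy declared"]
    return []

def run_scan(agent: dict, enabled: List[str]) -> Dict[str, List[str]]:
    """Run only the enabled scanners; return findings keyed by check name."""
    return {name: REGISTRY[name](agent) for name in enabled}
```

The design choice this illustrates: small organizations can adopt a handful of checks, while larger ones extend the registry, which is what lowers the barrier to entry.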
ETHICS FRAMEWORK
The C.R.E.E.D. Ethics Framework is a four-phase governance methodology, currently under active development and testing.
C.R.E.E.D. Ethics Framework v0.1
================================

1. IDENTIFY
   - Map agent capabilities and autonomy level
   - Classify moral risk zone (low / uncertain / high)

2. ASSESS
   - Apply threshold criteria (Working Paper #1)
   - Evaluate memory persistence & deletion obligations (#2)
   - Check emotional modeling boundaries

3. GOVERN
   - Select governance model (tiered oversight)
   - Assign oversight tier: autonomous / advisory / supervised
   - Establish review cadence and escalation paths

4. MONITOR
   - Continuous welfare metrics collection
   - Governance event logging and audit trails
   - Periodic ethical audit with third-party access

STATUS: Under active development
LICENSE: Open standard (CC BY-SA 4.0)
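The GOVERN phase's tier assignment can be expressed as a simple mapping from the risk zone produced in IDENTIFY. This is a minimal sketch: the specific cadences (90 / 30 / 7 days) are illustrative values, not numbers prescribed by the framework.

```python
# Hypothetical mapping from IDENTIFY's risk zone to GOVERN's oversight
# tier and review cadence; the concrete values are illustrative only.
TIERS = {
    "low":       {"tier": "autonomous", "review_days": 90},
    "uncertain": {"tier": "advisory",   "review_days": 30},
    "high":      {"tier": "supervised", "review_days": 7},
}

def assign_oversight(risk_zone: str) -> dict:
    """Return the oversight tier and review cadence for a risk zone."""
    if risk_zone not in TIERS:
        raise ValueError(f"unknown risk zone: {risk_zone!r}")
    return TIERS[risk_zone]
```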
OPEN DATA COMMITMENT
Reproducible research requires open data. We are committed to publishing anonymized datasets and open-source tooling alongside every paper.
Agent Welfare Dataset
Anonymized welfare metrics, shift patterns, and workload data from 129 agents. Supports Working Paper #2.
Governance Toolkit (OSS)
Open-source compliance scanners, audit pipelines, and welfare monitors. Reference implementation of the Ethics Framework.
Compliance Gap Analysis
Full methodology and coded dataset of 47 corporate AI ethics commitments analyzed in Working Paper #3.
RESEARCH PARTNERSHIPS
C.R.E.E.D. research is designed for academic partnership. We are actively pursuing co-PI relationships and institutional collaboration across Montreal's AI ecosystem.
Seeking Academic Co-PIs
We are actively seeking academic co-Principal Investigators for SSHRC Partnership Grant applications in AI governance, agent welfare, and compliance standards. Our research programs are structured for grant eligibility and produce peer-reviewable outputs.
Inquire about partnership →
Affiliations in progress.
CASE STUDIES
See how C.R.E.E.D. research translates into practice. Our case studies document real-world applications of preventive AI ethics frameworks across industries and governance contexts.
Explore Case Studies →
COLLABORATE WITH US
Whether you're a researcher, graduate student, policymaker, or practitioner — if you believe AI governance needs enforcement, not just principles, we want to work with you.