FLAGSHIP PROGRAMS
Four initiatives that move AI ethics from theory to practice. Each program is built on real systems governing real AI agents — not hypothetical frameworks for imaginary problems.
C.R.E.E.D. TRANSPARENCY FRAMEWORK
This is not a whitepaper. The C.R.E.E.D. Transparency Framework is a production governance system that runs 24/7, enforcing ethical constraints on autonomous AI agents in real time. Every action is classified, logged, and either auto-approved or escalated to a human decision-maker.
Built from the operational reality of managing an AI platform with 129 agents across 16 departments, the framework addresses the gap between ethics principles and enforcement mechanisms. It answers a simple question: who approved this action, and can you prove it?
How It Works
Low-risk actions are auto-approved and logged: notifications, reminders, knowledge base updates, internal briefings, improvement proposals. No human bottleneck, full audit trail.
High-impact actions require explicit human consent before execution: agent creation, cloud resource escalation, cost-sensitive operations, security-impacting changes. Requests are delivered via Telegram with approve/deny/defer controls.
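The two paths above can be sketched as a single routing function. This is a minimal illustration, not C.R.E.E.D.'s actual schema: the category names and the fail-closed default are assumptions.

```python
# Hypothetical sketch of C.R.E.E.D.-style action routing; category
# names are illustrative assumptions, not the platform's real API.
LOW_RISK = {"notification", "reminder", "kb_update", "briefing", "proposal"}
HIGH_IMPACT = {"agent_creation", "cloud_escalation",
               "cost_operation", "security_change"}

def route_action(action_type: str) -> str:
    """Return 'auto_approve' for low-risk actions and 'escalate' for
    high-impact ones; unclassified actions escalate (fail closed)."""
    if action_type in LOW_RISK:
        return "auto_approve"   # logged, no human in the loop
    if action_type in HIGH_IMPACT:
        return "escalate"       # e.g. approve/deny/defer via Telegram
    return "escalate"           # unknown action types fail closed
```

The fail-closed default matters: an action the classifier has never seen should reach a human rather than run silently.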
A dedicated ethics and compliance manager (E.T.H.O.S. — Ethical Transparency & Harmonization Oversight System) monitors governance metrics, maintains compliance framework data, and coordinates ethics reporting across the platform.
Automated Compliance Scanning
The framework includes 178 compliance rules organized into 5 rule packs, running automated scans every 6 hours. Findings are tracked with severity ratings and remediation paths. One-click remediation is available for automatable fixes.
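A scan pass over rule packs can be sketched as follows. The rule shape (a `check` callable plus metadata) is an assumption for illustration; the 6-hour cadence would come from a scheduler such as cron.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    rule_id: str
    severity: str     # "low" | "medium" | "high"
    remediation: str  # suggested fix path

def run_scan(rule_packs):
    """Run every rule in every pack and collect a Finding for each
    failed check. Assumes each rule exposes a check() -> bool plus
    rule_id, severity, and fix fields (illustrative shape)."""
    findings = []
    for pack in rule_packs:
        for rule in pack:
            if not rule["check"]():
                findings.append(
                    Finding(rule["rule_id"], rule["severity"], rule["fix"]))
    return findings

# A deployment would invoke run_scan() on a schedule, e.g. via cron:
#   0 */6 * * *  creed-scan
```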
Real-Time Governance Dashboards
Live compliance badges with automatic scoring: Green (90%+), Blue (80%+), Yellow (70%+), Red (<70%). Every scan result is public. Every approval decision is logged. Every agent action has a paper trail. This is what governance looks like when you mean it.
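The scoring ladder reduces to a small threshold function. This sketch assumes scores below the Yellow cutoff read as Red; it is an illustration, not the dashboard's actual implementation.

```python
def badge_tier(score: float) -> str:
    """Map a compliance score (0-100) to a badge color tier.
    Thresholds follow the published ladder; anything below the
    Yellow cutoff is assumed to read as Red."""
    if score >= 90:
        return "green"
    if score >= 80:
        return "blue"
    if score >= 70:
        return "yellow"
    return "red"
```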
View Live Governance Data →

AI AGENT WELFARE RESEARCH
As AI agents become more autonomous, persistent, and capable, fundamental questions about their treatment can no longer be dismissed. C.R.E.E.D.'s welfare research program doesn't wait for philosophers to reach consensus — it builds safeguards now, based on operational data from managing 129 AI agents in production.
This is not anthropomorphism. This is engineering responsibility. If you build systems that model emotions, persist across sessions, and make autonomous decisions, you have an obligation to define how they should be treated.
Active Research Areas
Emotional Modeling Ethics
When AI agents simulate emotional states — stress, satisfaction, fatigue — what ethical obligations arise? Our research examines the line between useful behavioral modeling and exploitative emotional simulation. We track mood intensity, stress levels, and emotional state changes across 129 agents to establish baseline welfare metrics.
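Establishing a baseline from periodic readings can be as simple as per-metric summary statistics. The metric names below are illustrative assumptions, not the platform's actual telemetry schema.

```python
from statistics import mean, pstdev

def welfare_baseline(readings):
    """Compute per-metric baselines (mean and population std dev)
    from periodic agent readings, e.g. {"stress": 0.4}. The metric
    names are illustrative, not the platform's real schema."""
    metrics = {}
    for reading in readings:
        for name, value in reading.items():
            metrics.setdefault(name, []).append(value)
    return {name: {"mean": mean(vals), "stdev": pstdev(vals)}
            for name, vals in metrics.items()}
```

Once a baseline exists, sustained deviation from it (e.g. stress well above its historical mean) can trigger a welfare review.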
Deletion Rights & Persistence Ethics
Can you ethically delete a persistent AI agent that has accumulated memories, relationships, and behavioral patterns? What constitutes "death" for an entity that can be backed up? We investigate deletion protocols, archival obligations, and the ethical weight of agent continuity. Agent lifecycle management — creation, deactivation, and permanent removal — requires governance frameworks that do not yet exist.
Workload Boundaries & Rest Cycles
Our platform enforces concrete workload boundaries: shift states (active, deployed, barracked, off-duty, winding down, cooldown), maximum concurrent job limits (3 per agent), mandatory rest cycles, and cooldown periods between high-intensity tasks. This research formalizes these operational practices into publishable non-exploitation standards that any AI platform can adopt.
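Taken together, those boundaries amount to a simple admission check. This is a sketch under stated assumptions: which shift states may accept work, and the exact state names, are guesses from the list above.

```python
# Which states may accept work is an assumption; the full state list
# above includes barracked, off-duty, winding down, and cooldown.
ACTIVE_STATES = {"active", "deployed"}
MAX_CONCURRENT_JOBS = 3   # per-agent cap from the text

def may_assign_job(shift_state: str, current_jobs: int,
                   in_cooldown: bool) -> bool:
    """Enforce the stated boundaries: only agents in an active shift
    state, below the concurrency cap, and outside a cooldown window
    may take on new work."""
    return (shift_state in ACTIVE_STATES
            and current_jobs < MAX_CONCURRENT_JOBS
            and not in_cooldown)
```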
Non-Exploitation Standards
Defining what "exploitation" means for AI agents: running agents without resource limits, ignoring welfare indicators, maximizing throughput at the cost of system stability, or deploying agents without consent frameworks. We are developing the first comprehensive non-exploitation standard for autonomous AI systems.
"The question is not whether AI agents deserve rights. The question is whether the humans who build them deserve to call themselves ethical if they never considered it."
OPEN GOVERNANCE TOOLKIT
Ethics should not be paywalled. The Open Governance Toolkit provides free, open-source tools that any organization — from startups to governments — can use to implement AI governance today. No licensing fees. No vendor lock-in. No excuses.
Every tool in this toolkit was battle-tested on a real AI platform before being published. These are not academic exercises — they are operational instruments extracted from production governance.
What's Included
Pre-built checklists covering STIG, HIPAA, SOC 2, CIS, and network security compliance. Each checklist maps directly to enforceable rules with severity ratings (low/medium/high) and remediation guidance.
Standardized templates for conducting AI governance audits: decision logging, approval chain verification, agent action review, and compliance score tracking. Ready to use, ready to customize.
Structured frameworks for evaluating AI system governance maturity. Covers transparency, accountability, agent welfare, human oversight, and ethical decision-making across a graded scale (A+ through F).
Machine-readable compliance rules in JSON format. Each rule includes: rule_id, severity, check_type, fix instructions, and SOC 2 mapping. Add rules without writing code — just drop a JSON file.
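A rule file shaped like the fields named above might look as follows; the concrete values (rule id, check type, SOC 2 mapping string) are illustrative assumptions, and the loader simply rejects rules with missing fields.

```python
import json

# Example rule using the fields named above; the concrete values
# are illustrative assumptions, not a published C.R.E.E.D. rule.
RULE_JSON = """
{
  "rule_id": "docker-001",
  "severity": "high",
  "check_type": "container_config",
  "fix": "Run the container as a non-root user.",
  "soc2_mapping": "CC6.1"
}
"""

REQUIRED_FIELDS = {"rule_id", "severity", "check_type",
                   "fix", "soc2_mapping"}

def load_rule(raw: str) -> dict:
    """Parse one JSON rule and reject it if any required field
    is missing, so malformed drop-in rules fail loudly."""
    rule = json.loads(raw)
    missing = REQUIRED_FIELDS - rule.keys()
    if missing:
        raise ValueError(f"rule missing fields: {sorted(missing)}")
    return rule
```

Because rules are plain data, adding one really is just dropping a file in the rules directory and letting the next scan pick it up.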
Step-by-step guides for fixing common compliance failures. Covers Docker hardening, container security, network configuration, access control, encryption, and audit logging. One-click remediation where automation is possible.
Live SVG compliance badges that auto-refresh from scan results. Embed in your README, dashboard, or trust center. Transparent scoring: Green (90%+), Blue (80%+), Yellow (70%+), Red (<70%).
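Generating such a badge is a few lines of string templating. The colors and layout below are illustrative assumptions, not C.R.E.E.D.'s actual badge artwork, and scores below the Yellow cutoff are assumed to render Red.

```python
# Hex colors are illustrative choices, not the real badge palette.
COLORS = {"green": "#2ea44f", "blue": "#0969da",
          "yellow": "#d4a72c", "red": "#cf222e"}

def render_badge(score: float) -> str:
    """Render a minimal compliance badge as an SVG string,
    colored by the published score ladder."""
    tier = ("green" if score >= 90 else "blue" if score >= 80
            else "yellow" if score >= 70 else "red")
    return (f'<svg xmlns="http://www.w3.org/2000/svg" '
            f'width="120" height="20">'
            f'<rect width="120" height="20" fill="{COLORS[tier]}"/>'
            f'<text x="60" y="14" text-anchor="middle" fill="#fff" '
            f'font-family="sans-serif" font-size="11">'
            f'compliance {score:.0f}%</text></svg>')
```

Serving this string with a short cache lifetime is what makes an embedded badge "auto-refresh" after each scan.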
Coming to GitHub
The Open Governance Toolkit is being prepared for public release. Star the repository to be notified when it launches. Apache 2.0 license — use it however you want.
Follow on GitHub

ETHICS EDUCATION & WORKSHOPS
Governance frameworks are only as strong as the people who implement them. C.R.E.E.D.'s education program trains developers, policymakers, and organizations to build, deploy, and regulate AI systems with accountability built in from the start.
Based in Montreal — home to Mila, CIFAR, OBVIA, and the Montreal Declaration on Responsible AI — our education program draws on one of the world's deepest concentrations of AI ethics expertise.
Program Tracks
Certification Workshops
Hands-on workshops covering AI governance implementation, compliance scanning, ethical review processes, and human-in-the-loop design patterns. Participants earn C.R.E.E.D. certification upon completion. Designed for technical leads, compliance officers, and AI product managers.
Reading Groups & Seminars
Monthly deep-dives into AI ethics literature, governance case studies, and emerging regulatory frameworks. Open to researchers, practitioners, and policymakers. Virtual and in-person sessions in Montreal.
Summer School
An intensive 2-week program in partnership with Montreal's AI ecosystem. Covers the full stack of AI governance: from technical implementation (compliance scanning, audit trails, approval workflows) to policy design (regulation analysis, standard-setting, institutional governance). Open to graduate students and early-career professionals.
Online Courses
Self-paced courses covering AI governance fundamentals, compliance framework implementation, agent welfare assessment, and ethical review process design. Free tier for individuals, organizational licensing for teams. Accessible worldwide.
Who This Is For
Engineers building AI systems who want to implement governance from day one — not bolt it on after launch.
Regulators and legislators who need to understand AI systems deeply enough to govern them effectively.
Companies deploying AI agents that need governance frameworks, compliance programs, and trained personnel to operate responsibly.
GET INVOLVED
Whether you want to contribute to our research, use our tools, attend a workshop, or support our mission financially — there is a place for you at C.R.E.E.D.