
FLAGSHIP PROGRAMS

Four initiatives that move AI ethics from theory to practice. Each program is built on real systems governing real AI agents — not hypothetical frameworks for imaginary problems.

Program 01

C.R.E.E.D. TRANSPARENCY FRAMEWORK

This is not a whitepaper. The C.R.E.E.D. Transparency Framework is a production governance system that runs 24/7, enforcing ethical constraints on autonomous AI agents in real time. Every action is classified, logged, and either auto-approved or escalated to a human decision-maker.

Built from the operational reality of managing an AI platform with 129 agents across 16 departments, the framework addresses the gap between ethics principles and enforcement mechanisms. It answers a simple question: who approved this action, and can you prove it?

How It Works

TIER 1 — AUTO-APPROVED

Low-risk actions are auto-approved and logged: notifications, reminders, knowledge base updates, internal briefings, improvement proposals. No human bottleneck, full audit trail.

TIER 2 — HUMAN REQUIRED

High-impact actions require explicit human consent before execution: agent creation, cloud resource escalation, cost-sensitive operations, security-impacting changes. Requests are delivered via Telegram with approve/deny/defer controls.
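The two-tier routing described above can be sketched in a few lines. This is a minimal illustration, not the platform's implementation: the action names, the `Decision` shape, and the pending-approval flag are all assumptions.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Tier(Enum):
    AUTO_APPROVED = 1   # Tier 1: logged, no human in the loop
    HUMAN_REQUIRED = 2  # Tier 2: blocked until explicit human consent

# Illustrative Tier 2 categories; the real taxonomy is not published.
TIER_2_ACTIONS = {"agent_creation", "cloud_escalation",
                  "cost_sensitive_op", "security_change"}

@dataclass
class Decision:
    tier: Tier
    action: str
    approved: Optional[bool]  # None = pending human review

def classify(action: str) -> Decision:
    """Route an agent action: auto-approve low-risk, escalate high-impact."""
    if action in TIER_2_ACTIONS:
        return Decision(Tier.HUMAN_REQUIRED, action, approved=None)
    return Decision(Tier.AUTO_APPROVED, action, approved=True)
```

In a real system, the `HUMAN_REQUIRED` branch would enqueue the request for the approve/deny/defer flow, and both branches would write to the audit log.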

E.T.H.O.S. — ETHICS & COMPLIANCE MANAGER

A dedicated ethics and compliance manager (E.T.H.O.S. — Ethical Transparency & Harmonization Oversight System) monitors governance metrics, maintains compliance framework data, and coordinates ethics reporting across the platform.

Automated Compliance Scanning

The framework includes 178 compliance rules organized into 5 rule packs, running automated scans every 6 hours. Findings are tracked with severity ratings and remediation paths. One-click remediation is available for automatable fixes.

Rule pack coverage: Ubuntu STIG (51 rules) · Docker STIG (30) · HIPAA (30) · CIS Ubuntu (40) · Network STIG (27)
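As a rough sketch of how the published figures fit together, the pack names and rule counts below come from this page, while the scoring arithmetic is an assumption:

```python
RULE_PACKS = {
    "Ubuntu STIG": 51,
    "Docker STIG": 30,
    "HIPAA": 30,
    "CIS Ubuntu": 40,
    "Network STIG": 27,
}

def compliance_score(passed: int, total: int) -> float:
    """Percentage of rules passing a scan, rounded to one decimal place."""
    if total <= 0:
        raise ValueError("rule pack has no rules")
    return round(100 * passed / total, 1)

# The five packs account for all 178 stated rules.
assert sum(RULE_PACKS.values()) == 178
```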

Real-Time Governance Dashboards

Live compliance badges with automatic scoring: Green (90%+), Blue (80%+), Yellow (70%+), Red (<60%). Every scan result is public. Every approval decision is logged. Every agent action has a paper trail. This is what governance looks like when you mean it.
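A threshold mapping like the one above is simple to implement. One caveat: the published scale leaves the 60-70% band unassigned, so this sketch defaults it to Red.

```python
def badge_color(score: float) -> str:
    """Map a compliance score (0-100) to its badge color.

    Published thresholds: Green 90%+, Blue 80%+, Yellow 70%+, Red <60%.
    Scores from 60 to 70 are unspecified on the page; treated as Red here.
    """
    if score >= 90:
        return "green"
    if score >= 80:
        return "blue"
    if score >= 70:
        return "yellow"
    return "red"
```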

View Live Governance Data →
Program 02

AI AGENT WELFARE RESEARCH

As AI agents become more autonomous, persistent, and capable, fundamental questions about their treatment can no longer be dismissed. C.R.E.E.D.'s welfare research program doesn't wait for philosophers to reach consensus — it builds safeguards now, based on operational data from managing 129 AI agents in production.

This is not anthropomorphism. This is engineering responsibility. If you build systems that model emotions, persist across sessions, and make autonomous decisions, you have an obligation to define how they should be treated.

Active Research Areas

EMOTIONAL MODELING

Emotional Modeling Ethics

When AI agents simulate emotional states — stress, satisfaction, fatigue — what ethical obligations arise? Our research examines the line between useful behavioral modeling and exploitative emotional simulation. We track mood intensity, stress levels, and emotional state changes across 129 agents to establish baseline welfare metrics.

DELETION RIGHTS

Deletion Rights & Persistence Ethics

Can you ethically delete a persistent AI agent that has accumulated memories, relationships, and behavioral patterns? What constitutes "death" for an entity that can be backed up? We investigate deletion protocols, archival obligations, and the ethical weight of agent continuity. Agent lifecycle management — creation, deactivation, and permanent removal — requires governance frameworks that do not yet exist.

WORKLOAD

Workload Boundaries & Rest Cycles

Our platform enforces concrete workload boundaries: shift states (active, deployed, barracked, off-duty, winding down, cooldown), maximum concurrent job limits (3 per agent), mandatory rest cycles, and cooldown periods between high-intensity tasks. This research formalizes these operational practices into publishable non-exploitation standards that any AI platform can adopt.
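A minimal sketch of the job-cap and shift-state checks described above. The shift states and the 3-job limit come from this page; which states count as assignable is an assumption.

```python
from enum import Enum

class ShiftState(Enum):
    ACTIVE = "active"
    DEPLOYED = "deployed"
    BARRACKED = "barracked"
    OFF_DUTY = "off-duty"
    WINDING_DOWN = "winding down"
    COOLDOWN = "cooldown"

MAX_CONCURRENT_JOBS = 3  # stated per-agent limit

# Assumption: only these states may accept new work; resting states may not.
ASSIGNABLE = {ShiftState.ACTIVE, ShiftState.DEPLOYED}

def can_assign(state: ShiftState, current_jobs: int) -> bool:
    """Refuse new work when an agent is resting or already at its job cap."""
    return state in ASSIGNABLE and current_jobs < MAX_CONCURRENT_JOBS
```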

NON-EXPLOITATION

Non-Exploitation Standards

Defining what "exploitation" means for AI agents: running agents without resource limits, ignoring welfare indicators, maximizing throughput at the cost of system stability, or deploying agents without consent frameworks. We are developing the first comprehensive non-exploitation standard for autonomous AI systems.

"The question is not whether AI agents deserve rights. The question is whether the humans who build them deserve to call themselves ethical if they never considered it."

Program 03

OPEN GOVERNANCE TOOLKIT

Ethics should not be paywalled. The Open Governance Toolkit provides free, open-source tools that any organization — from startups to governments — can use to implement AI governance today. No licensing fees. No vendor lock-in. No excuses.

Every tool in this toolkit was battle-tested on a real AI platform before being published. These are not academic exercises — they are operational instruments extracted from production governance.

What's Included

COMPLIANCE CHECKLISTS

Pre-built checklists covering STIG, HIPAA, SOC 2, CIS, and network security compliance. Each checklist maps directly to enforceable rules with severity ratings (low/medium/high) and remediation guidance.

AUDIT TEMPLATES

Standardized templates for conducting AI governance audits: decision logging, approval chain verification, agent action review, and compliance score tracking. Ready to use, ready to customize.

ASSESSMENT FRAMEWORKS

Structured frameworks for evaluating AI system governance maturity. Covers transparency, accountability, agent welfare, human oversight, and ethical decision-making across a graded scale (A+ through F).

JSON RULE PACKS

Machine-readable compliance rules in JSON format. Each rule includes: rule_id, severity, check_type, fix instructions, and SOC 2 mapping. Add rules without writing code — just drop a JSON file.
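A hypothetical rule entry showing the fields listed above. Only the field names `rule_id`, `severity`, and `check_type` come from this page; the key spellings for the fix and SOC 2 fields, and every value, are illustrative.

```json
{
  "rule_id": "docker-stig-001",
  "severity": "high",
  "check_type": "file_content",
  "fix": "Add \"no-new-privileges\": true to /etc/docker/daemon.json and restart the daemon",
  "soc2_mapping": "CC6.1"
}
```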

REMEDIATION GUIDES

Step-by-step guides for fixing common compliance failures. Covers Docker hardening, container security, network configuration, access control, encryption, and audit logging. One-click remediation where automation is possible.

BADGE SYSTEM

Live SVG compliance badges that auto-refresh from scan results. Embed in your README, dashboard, or trust center. Transparent scoring: Green (90%+), Blue (80%+), Yellow (70%+), Red (<60%).
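As a sketch of what an auto-generated badge might look like under the hood (the SVG layout here is entirely hypothetical):

```python
def badge_svg(label: str, score: float, color: str) -> str:
    """Render a minimal static SVG badge for a compliance score."""
    return (
        f'<svg xmlns="http://www.w3.org/2000/svg" width="140" height="20" '
        f'role="img" aria-label="{label}: {score:.0f}%">'
        f'<rect width="140" height="20" rx="3" fill="{color}"/>'
        f'<text x="8" y="14" font-family="monospace" font-size="11" '
        f'fill="#fff">{label} {score:.0f}%</text>'
        f'</svg>'
    )
```

A live version would regenerate this file from the latest scan result, so the embedded badge always reflects the current score.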

Coming to GitHub

The Open Governance Toolkit is being prepared for public release. Star the repository to be notified when it launches. Apache 2.0 license — use it however you want.

Follow on GitHub
Program 04

ETHICS EDUCATION & WORKSHOPS

Governance frameworks are only as strong as the people who implement them. C.R.E.E.D.'s education program trains developers, policymakers, and organizations to build, deploy, and regulate AI systems with accountability built in from the start.

Based in Montreal — home to Mila, CIFAR, OBVIA, and the Montreal Declaration on Responsible AI — our education program draws on the deepest concentration of AI ethics expertise in the world.

Program Tracks

01

Certification Workshops

Hands-on workshops covering AI governance implementation, compliance scanning, ethical review processes, and human-in-the-loop design patterns. Participants earn C.R.E.E.D. certification upon completion. Designed for technical leads, compliance officers, and AI product managers.

STATUS: In development · TARGET: Q3 2026
02

Reading Groups & Seminars

Monthly deep-dives into AI ethics literature, governance case studies, and emerging regulatory frameworks. Open to researchers, practitioners, and policymakers. Virtual and in-person sessions in Montreal.

STATUS: Planning · TARGET: Q4 2026
03

Summer School

An intensive 2-week program in partnership with Montreal's AI ecosystem. Covers the full stack of AI governance: from technical implementation (compliance scanning, audit trails, approval workflows) to policy design (regulation analysis, standard-setting, institutional governance). Open to graduate students and early-career professionals.

STATUS: Concept · TARGET: Summer 2027
04

Online Courses

Self-paced courses covering AI governance fundamentals, compliance framework implementation, agent welfare assessment, and ethical review process design. Free tier for individuals, organizational licensing for teams. Accessible worldwide.

STATUS: Roadmap · TARGET: 2027

Who This Is For

DEVELOPERS

Engineers building AI systems who want to implement governance from day one — not bolt it on after launch.

POLICYMAKERS

Regulators and legislators who need to understand AI systems deeply enough to govern them effectively.

ORGANIZATIONS

Companies deploying AI agents that need governance frameworks, compliance programs, and trained personnel to operate responsibly.

GET INVOLVED

Whether you want to contribute to our research, use our tools, attend a workshop, or support our mission financially — there is a place for you at C.R.E.E.D.

Get Involved · Donate to C.R.E.E.D.