AI Governance Fails When Capabilities and Rules Don't Align
McCann argues that most AI systems have mismatched boundaries between what they can do and what governance covers, creating inevitable blind spots.
AI governance structurally fails when the boundary of system capabilities diverges from the boundary of governance rules.
- Every AI system has two independent boundaries: expressiveness (what it can do) and governance (what its rules cover).
- Misalignment creates three regions: governed capabilities (safe), ungoverned capabilities (risk), and rules addressing non-existent capabilities (theater).
- Rice's theorem implies that no algorithm can decide whether arbitrary programs comply with behavioral governance policies.
- Coterminous governance requires architectural separation of computation from effects, not a post-hoc governance layer.
- Structural governance integrates checks into the execution pipeline rather than running them as a separate monitoring system (see the sketch after this list).
- Currently deployed systems treat governance and expressiveness as independent design choices, guaranteeing these failure modes.
- The framework distinguishes effect governance (actions in the world) from output governance (content quality, bias).
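The paper summary above gives no code, so the following is a minimal Python sketch of what structural, coterminous-by-construction governance could look like. The `Effect` and `GovernedExecutor` names are hypothetical illustrations, not McCann's API: the point is that every capability (effect handler) must be registered together with the rule that governs it, so the execution pipeline cannot contain an ungoverned capability.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass(frozen=True)
class Effect:
    """A proposed action in the world, produced by pure computation."""
    kind: str      # e.g. "send_email", "write_file"
    payload: dict

# Governance checks are keyed by effect kind; a kind with no check is ungoverned.
GovernanceCheck = Callable[[Effect], bool]

class GovernedExecutor:
    """Executes effects only through registered checks (structural governance)."""

    def __init__(self) -> None:
        self._checks: Dict[str, GovernanceCheck] = {}
        self._handlers: Dict[str, Callable[[Effect], None]] = {}

    def register(self, kind: str, check: GovernanceCheck,
                 handler: Callable[[Effect], None]) -> None:
        # Capability and rule are registered together, keeping the expressiveness
        # and governance boundaries coterminous by construction.
        self._checks[kind] = check
        self._handlers[kind] = handler

    def execute(self, effects: List[Effect]) -> None:
        for effect in effects:
            if effect.kind not in self._checks:
                # Ungoverned capability: refuse rather than act.
                raise PermissionError(f"no governance rule for {effect.kind!r}")
            if not self._checks[effect.kind](effect):
                print(f"blocked: {effect.kind}")
                continue
            self._handlers[effect.kind](effect)

# Usage: the model proposes effects; only governed, approved ones run.
executor = GovernedExecutor()
executor.register(
    "send_email",
    check=lambda e: e.payload.get("recipient", "").endswith("@example.com"),
    handler=lambda e: print(f"sent email to {e.payload['recipient']}"),
)
executor.execute([Effect("send_email", {"recipient": "ops@example.com"})])
```

In this sketch the governance check sits inside the execution path rather than in a separate monitoring system, which is the structural property the bullet list describes.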
Astrobobo tool mapping
- Knowledge Capture: Document your system's effect boundary (all possible actions) and governance boundary (all covered rules) in a structured inventory, and use McCann's three-region model to categorize each capability (see the inventory sketch after this list).
- Focus Brief: Prepare a one-page summary of whether your system's expressiveness and governance boundaries are coterminous, stating the evidence (architectural separation, integrated checks, or the lack thereof).
- Daily Log: Track any new capabilities added to the system and the corresponding governance updates, and flag delays between capability release and governance coverage.
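A minimal sketch of the Knowledge Capture inventory, assuming capability and rule names are simply recorded as sets; the specific capability names are invented for illustration. It classifies each entry into McCann's three regions and reports whether the two boundaries are coterminous.

```python
# Hypothetical inventory: effect boundary vs. governance boundary as plain sets.
effect_boundary = {"send_email", "write_file", "call_api", "delete_record"}
governance_boundary = {"send_email", "write_file", "approve_payment"}

regions = {
    # Capabilities with matching rules: the only safe region.
    "governed": effect_boundary & governance_boundary,
    # Capabilities no rule covers: structural risk.
    "ungoverned": effect_boundary - governance_boundary,
    # Rules for capabilities the system does not have: governance theater.
    "theater": governance_boundary - effect_boundary,
}

for region, items in regions.items():
    print(f"{region}: {sorted(items)}")

# Coterminous governance means the two boundaries coincide exactly.
print("coterminous:", effect_boundary == governance_boundary)
```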
Frequently asked
- What is coterminous governance? Coterminous governance is a system property where the boundary of what an AI system can do (expressiveness) exactly matches the boundary of what its governance rules cover. McCann argues this requires architectural separation of computation from effects, so that governance checks are built into the execution pipeline. Without this alignment, ungoverned capabilities and ineffective rules are structurally inevitable.
Cite
Alan L. McCann. (2026, May 2). AI Governance Fails When Capabilities and Rules Don't Align. Astrobobo Content Engine (rewrite of arxiv/cs.AI). https://astrobobo-content-engine.vercel.app/article/ai-governance-fails-when-capabilities-and-rules-don-t-align-af18f7
Alan L. McCann. "AI Governance Fails When Capabilities and Rules Don't Align." Astrobobo Content Engine, 2 May 2026, https://astrobobo-content-engine.vercel.app/article/ai-governance-fails-when-capabilities-and-rules-don-t-align-af18f7. Based on "arxiv/cs.AI", https://arxiv.org/abs/2604.27292.
@misc{astrobobo_ai-governance-fails-when-capabilities-and-rules-don-t-align-af18f7_2026,
author = {Alan L. McCann},
title = {AI Governance Fails When Capabilities and Rules Don't Align},
year = {2026},
url = {https://astrobobo-content-engine.vercel.app/article/ai-governance-fails-when-capabilities-and-rules-don-t-align-af18f7},
note = {Astrobobo rewrite of arxiv/cs.AI, https://arxiv.org/abs/2604.27292},
}