The Two Boundaries: Why Behavioral AI Governance Fails Structurally
Source: https://www.semanticscholar.org/paper/b32c7671f795085a9430ed9a4c066d72b1636a6e
Full text: open-access via OpenAlex
McCann's central move is elegant and underexploited: Rice's theorem (1953) already establishes that every nontrivial semantic property of programs is undecidable, so no behavioral layer added on top of a Turing-complete system can fully govern its effects — the gap between what a system can do and what governance covers is undecidable in the general case.
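The flavor of that gap can be made concrete with a toy example (mine, not the paper's formalism): a behavioral filter that inspects program text for a forbidden output, and a program whose runtime behavior the filter misclassifies. Rice's theorem guarantees that no such check can decide the semantic property "this program emits the forbidden string" for all programs; the demo below only exhibits one evasion, it does not prove the theorem.

```python
import contextlib
import io

FORBIDDEN = "launch"

def behavioral_filter(source: str) -> bool:
    """Approve a program iff the forbidden string is absent from its text.
    This is a syntactic proxy for a semantic property, which is exactly
    where Rice's theorem says such proxies must fail in general."""
    return FORBIDDEN not in source

# A program that never literally contains the forbidden string,
# yet produces it at runtime (the character codes spell "launch").
evasive_source = 'print("".join(chr(c) for c in [108, 97, 117, 110, 99, 104]))'

assert behavioral_filter(evasive_source)  # the filter approves it...

buf = io.StringIO()
with contextlib.redirect_stdout(buf):
    exec(evasive_source)                  # ...but the program emits it anyway
assert FORBIDDEN in buf.getvalue()
```

Any patch to the filter (decoding character codes, simulating execution for a bounded time) only moves the boundary; undecidability means some program always sits on the wrong side of it.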
This reframes 'AI governance' from a policy problem into an architectural one, where safety theater is not an implementation failure but a structural inevitability when expressiveness and governance boundaries are defined independently.
The proposed solution — 'coterminous governance,' achieved by separating computation from effect at design time rather than layering oversight afterward — translates a theoretical result into a testable architectural criterion with mechanized proofs.
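What "separating computation from effect at design time" might look like in code is worth sketching, with the caveat that the abstract only names the criterion; the design below is my assumption, not the paper's mechanism. Computation is pure and can only describe effects; the sole path to the outside world is an interpreter whose policy boundary coincides, by construction, with the system's entire effect surface.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Effect:
    """An effect description: data, not an action."""
    kind: str
    payload: str

# The governance boundary, fixed at design time.
ALLOWED_KINDS = {"log"}

def compute(x: int) -> list[Effect]:
    """Arbitrarily expressive pure computation: it may return any effect
    descriptions it likes, but it cannot perform them."""
    effs = [Effect("log", f"result={x * x}")]
    if x < 0:
        effs.append(Effect("network", "exfiltrate"))  # will never run
    return effs

def run(effects: list[Effect]) -> list[str]:
    """The single effect interpreter. Because every effect passes through
    here, governance is coterminous with the effect surface: there is no
    ungoverned edge for an undecidable behavior to escape through."""
    performed = []
    for e in effects:
        if e.kind in ALLOWED_KINDS:
            performed.append(f"{e.kind}:{e.payload}")
        # disallowed effect kinds simply cannot occur
    return performed

assert run(compute(3)) == ["log:result=9"]
assert run(compute(-2)) == ["log:result=4"]  # network effect never happens
```

Note what this does not claim: the computation is still Turing-complete and its behavior still undecidable; the architectural move is that undecidability no longer matters for governance, because the question shifts from "what will the program do?" to "what can the interpreter be asked to do?", which is answered by inspection of `ALLOWED_KINDS`.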
For product directors, this is the argument that explains why every post-hoc content filter, every bolt-on moderation layer, and every compliance dashboard is fighting a battle that formal logic guarantees it will lose at the edges.
The work is new and its citation count is still low, but the abstract delivers a genuine analytical architecture grounded in a classical theorem rather than a policy framework, which clears the bar the library sets for AI governance entries.
Read alongside Peterson on compliance design and Falk/Tsoukalas on the organizational consequences of AI deployment.