AI Governance under Political Turnover: The Alignment Surface of Compliance Design
Source: https://arxiv.org/abs/2604.21103v1
Full text: arXiv preprint
Peterson frames a problem that most AI governance literature ignores: compliance layers built to make algorithmic decisions reviewable can also be gamed by successive administrations that learn to satisfy the form of oversight without its substance. The "alignment surface" concept, the stable approval boundary that political actors navigate, is a genuine analytical contribution rather than a restatement of existing principal-agent or accountability frameworks.

For product directors working with government clients or building platforms that touch regulated decisions, this offers a structural explanation for why "compliant" systems can still produce politically variable outcomes across administrations. The truncated abstract is a weakness, and the citation count is unknown, but the problem space is precisely the governance gap the library needs to fill, and the framing is conceptually original enough to justify inclusion.