Library · paper

Understanding the Affordances of Control in AI Reasoning for Human-AI Decision-Making

Jiwon Moon, Chien-Ming Huang & Ziang Xiao
2026

Source: https://www.semanticscholar.org/paper/6c94c6d875a588d112712379d11d8841618d6ffa

The paper's central finding is unsettling in a precise way: giving users the ability to edit AI reasoning increases their sense of control, but it also increases over-reliance when the AI is wrong, producing an illusion of control that is worse, in practice, than no explanation at all. Read-only chain-of-thought explanations, the current industry default, perform worst of all, inducing agreement without engagement. This inverts the standard explainable-AI design rationale, which assumes that more transparency reliably improves human judgment. For product directors building AI-assisted decision tools, the result is structurally important: it is empirical evidence, not just intuition, that transparency mechanisms can undermine the agency they appear to restore. The work belongs alongside Simon's bounded-rationality canon and the behavioral-economics thread in the library, grounding abstract concerns about human-AI collaboration in a controlled experiment with a clear psychological mechanism.

ai · decision-making · cognition · design