Human Trust in AI Search: A Large-Scale Experiment
Source: https://www.semanticscholar.org/paper/1c62d4f8f028874e8c8e21e3e7340f3ad3a7f927
Aral and Li treat the generative search interface not as a neutral conduit but as an architecture that actively shapes the epistemic dispositions of its users — a framing that connects squarely to the library's concern with how technology redesigns attention and judgment.
The causal finding that hallucinated citations increase trust while explicit uncertainty signals suppress it inverts naive assumptions about transparency: making AI limitations visible does not straightforwardly improve decisions.
This is a preregistered randomized experiment run at realistic scale (12,000 queries, seven countries, a U.S.-representative sample), which gives its mechanism claims unusual empirical standing.
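To make the causal logic concrete, here is a minimal sketch, not the authors' analysis, of why random assignment licenses the mechanism claims: when queries are randomly assigned to interface arms, a simple difference in mean trust between arms is an unbiased estimate of the average treatment effect. All condition names, effect sizes, and the simulated trust scale below are illustrative assumptions, not values from the paper.

```python
# Hypothetical sketch: difference-in-means ATE under random assignment.
# None of this is the paper's code or data; condition names, effect
# sizes, and the trust scale are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(0)
n = 12_000  # query-level observations, matching the study's scale

# Randomly assign each query to an interface arm (uniform assignment
# is an assumption; the actual design may differ).
arms = np.array(["baseline", "citations", "uncertainty"])
assignment = rng.choice(arms, size=n)

# Simulated trust ratings on a 0-1 scale (made-up parameters, present
# only so the estimator below has something to run on).
base_trust = rng.normal(0.55, 0.15, size=n)
trust = np.clip(
    base_trust
    + 0.08 * (assignment == "citations")      # hypothetical lift
    - 0.06 * (assignment == "uncertainty"),   # hypothetical suppression
    0.0, 1.0,
)

def ate(treated: str, control: str = "baseline") -> tuple[float, float]:
    """Difference-in-means ATE estimate and its standard error."""
    t, c = trust[assignment == treated], trust[assignment == control]
    est = t.mean() - c.mean()
    se = np.sqrt(t.var(ddof=1) / len(t) + c.var(ddof=1) / len(c))
    return est, se

for arm in ("citations", "uncertainty"):
    est, se = ate(arm)
    print(f"{arm:>11} vs baseline: ATE = {est:+.3f} (SE {se:.3f})")
```

Randomization is what permits reading these between-arm differences causally; without it, trust differences across interfaces could reflect which users select which interfaces rather than the interfaces themselves.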
For product directors, the deeper implication is that interface choices (reference links, confidence markers, social feedback) are policy decisions about which populations bear the epistemic risk of AI error. Because vulnerability to that error differs by education and other demographics, this is as much a governance question as a design one.