
Who Gets Flagged? The Pluralistic Evaluation Gap in AI Content Watermarking

Alexander Nemecek, Osama Zafar, Yuqiao Xu, Wenbiao Li & Erman Ayday
2026·arXiv

Source: https://arxiv.org/abs/2604.13776v1

Full text: preprint on arXiv

Watermarking is positioned as neutral infrastructure for AI content authentication, but Nemecek et al. show that its effectiveness varies systematically across cultural and demographic lines — what they term the "pluralistic evaluation gap." The technical appears neutral, but the social is embedded in the statistical: watermark robustness depends on content properties that correlate with language, visual tradition, and group membership. For product directors this is essential reading on how seemingly objective technical solutions reproduce existing inequalities at scale. The paper demonstrates that infrastructure choices are never neutral: they encode assumptions about whose content matters, whose signals are detectable, and whose authenticity gets verified. Read alongside Zuboff's The Age of Surveillance Capitalism and Noble's Algorithms of Oppression for the broader pattern of how technical systems embed social hierarchies.

ai · ethics · organizations · complexity