Library · paper

LLMorphism: When humans come to see themselves as language models

Valerio Capraro
2026

Source: https://www.semanticscholar.org/paper/30113c39f29d9058084e874993422f663ba2639b

Capraro names something real and undertheorised: the reverse inference problem, where LLMs that speak like humans lead people to conclude that humans think like LLMs.

This is not a restatement of anthropomorphism or computationalism — it runs in the opposite direction, flattening human cognition downward to match the machine rather than elevating the machine to human status.

The argument that this spreads through analogical transfer and metaphorical availability gives it genuine analytical traction, not just rhetorical alarm.

For product directors, the implications are practical: if the populations you design for increasingly understand their own reasoning through LLM vocabulary, every assumption you hold about how users model themselves is in motion.

The final framing — that the public debate has been attending to only half the problem — is the kind of inversion that reorients a field.

Citation count is unknown and the paper is new, but the conceptual architecture is original enough, and the library gap it fills (AI critique meeting philosophy of mind and cultural cognition) real enough, to warrant inclusion.