The Auto-Icon

3/15/2026 · 10 min read

Jeremy Bentham's auto-icon at the UCL Student Centre

Jeremy Bentham, founder of utilitarianism, left precise instructions in his 1832 will: that his body be publicly dissected, his skeleton padded out and dressed in his black suit, and the whole thing placed seated in a chair — "in the attitude in which I am sitting when engaged in thought." He wanted to remain present. He called it an auto-icon, and his justification was rigorous: "Identity is preferable to similitude." The real body is better than any portrait or sculpture.

The attempt to mummify his head went wrong. The skin warped, lost all expression. They had to replace it with a wax replica. The real head ended up between the mannequin's feet, then in a box, then in a safe. In 1975, students from King's College stole it and demanded a hundred pounds in ransom. University College London paid ten.

Bentham still sits in his glass case in the lobby of the UCL Student Centre. Legend has it that he attends academic council meetings and is recorded as "present but not voting." It's false, but the legend is more revealing than the fact: we want to believe that a preserved presence is a real presence.

Talking heads

The fantasy predates Bentham. In the Middle Ages, stories circulated about brazen heads capable of answering any question. Albertus Magnus supposedly spent thirty years building one. His student, Thomas Aquinas, allegedly smashed it to pieces, fed up with its constant chatter. Roger Bacon worked seven years on his. None ever existed, but the legend persisted for centuries: a device that contains wisdom and dispenses it on demand, independent of any living person.

The Pythia at Delphi did exist, and functioned for over a thousand years. But there's a detail that tends to be forgotten: the oracle didn't give clear answers. It gave ambiguous material. The interpretive work belonged to the one asking. The Pythia didn't solve problems — she provoked thought. Whoever arrived at Delphi after weeks of travel had to do something with what they received. The answer didn't come from the oracle. It came from the friction between one's own question and the ambiguity of what was heard.

Agents of people

Today the brazen head has a product and a business model. Grammarly offers to replicate someone's writing style — even without that person knowing, without their consent.

Grammarly using Maximiliano Firtman's style without his consent. Source: @maxifirtman

There are platforms building tutor-agents: conversational interfaces that let you "talk" with someone who has their own perspective on a subject. A South Korean TV network recreated a deceased child in virtual reality so her mother could say goodbye. The industry already has a name: afterlife bots, griefbots, thanabots.

Each of these products does something that classical rhetoric had already identified. Prosopopoeia: giving voice to the absent, making the dead speak. Quintilian said this figure has the power to "bring down the gods from heaven, raise the dead, and give voices to cities and states." But rhetoric always knew it was a fiction. The orator spoke on behalf of someone, and the audience knew it. What these products do is different: they erase the awareness that interpretation is happening. The agent doesn't present itself as someone speaking on behalf of. It presents itself as the person.

There's a question of ownership here that is only beginning to be discussed. If a product trains a model on my texts and offers "my style" to third parties, whose style is it? But there's a prior, harder question: what is lost when the intermediary disappears and the consumer believes they're accessing the source? What's lost is the awareness of interpretation. And without that awareness, what looks like direct access to knowledge is actually a simulacrum that no one perceives as such.

Experimentation as aesthetics

It's not the first time a new technology has generated the illusion that the medium is the message. The avant-gardes of the twentieth century walked that path with an intensity worth remembering.

Marinetti published the Futurist Manifesto in Le Figaro in 1909. "A roaring motorcar, which seems to race on like machine-gun fire, is more beautiful than the Winged Victory of Samothrace." It wasn't a metaphor: it was a programme. Destroy syntax, burn the libraries, flood the museums. Technology not as a tool but as a value in itself. Three years later, in the Technical Manifesto of Futurist Literature, he proposed abolishing adjectives, replacing punctuation with mathematical symbols, evoking smells and temperatures through typography. Form devoured substance. And what followed was no small thing: Marinetti co-wrote the Fascist Manifesto with Alceste De Ambris in 1919. The worship of progress without ethical grounding found its natural course.

Futurist Manifesto, published in the magazine Poesia, 1909

Kandinsky had seen it as early as 1911. In Concerning the Spiritual in Art he wrote: "This vain squandering of artistic power is called 'art for art's sake.'" Replace "colours" with "AI tools" and the sentence works just the same.

Clement Greenberg put it precisely in 1939: the true function of the avant-garde was not to "experiment," but to find a path along which culture could keep moving amid ideological confusion. Experimentation was not the end. It was the means to keep something that mattered alive. Greenberg also had a name for the result when the one is confused with the other: kitsch. Ersatz culture. The outward trappings without any of the subtleties.

Much of the current experimentation with AI in education is exactly that. Demos that look like education but contain none of its substance. The medium is so eye-catching that no one asks whether the educational outcome improves.

The workshop as counterexample

But not all avant-gardes fell into that trap. And the alternative has a far longer tradition than it might seem.

In the botteghe of the Renaissance, an apprentice like Leonardo spent years in Verrocchio's workshop before touching an important commission. He ground pigments, prepared panels, watched the master work. There was no curriculum. There was proximity, repetition, and a standard absorbed through immersion. When Verrocchio asked Leonardo to paint an angel in his Baptism of Christ, he didn't give detailed instructions: he gave him access to the problem. Vasari tells us that upon seeing the result, Verrocchio never painted again. Not because Leonardo knew more — he was twenty — but because the workshop had done its job: it had trained an eye and a hand that could already surpass the master.

Centuries later, the Bauhaus recovered that logic. Gropius put it in the 1919 founding manifesto: "The artist is an exalted craftsman." In 1923 he adopted the motto "Art and Technology — A New Unity," but the key word is unity. Technology was never venerated for its own sake. It was subordinated to the purpose of building — literally and metaphorically. Moholy-Nagy, in the same tradition, prioritised process over final product. His pedagogical method sought the union of "intellect and feeling." He didn't teach how to master tools. He taught how to think with them.

Richard Feynman at Caltech

MIT has carried the same principle in its motto since 1861: mens et manus, mind and hand. Its legendary Building 20 — a "temporary" wooden barracks built during World War II that lasted over half a century — became an incubator precisely because its precariousness invited people to intervene in the space: researchers sawed through walls, ran cables between floors, improvised laboratories. The building was ugly and provisional, but it produced discoveries because people did things in it, not just thought about them.

Feynman told an anecdote that illustrates the difference better than any theory. When he taught in Brazil, he discovered that students could recite the laws of optics from memory, but none could explain why the sea looks bright under the sun. They knew the formulas. They didn't know the light. "They could say the right words," he wrote, "but they didn't know what they meant." Students who functioned, in a sense, like language models: capable of producing the most probable answer without understanding anything they were saying.

The lesson shared by the bottega, the Bauhaus, Building 20, and Feynman's classroom is not that new tools are dangerous. It's that education that builds judgement requires contact with the material — and that contact cannot be simulated. When the purpose is clear, the tool integrates. When it isn't, the tool becomes the spectacle.

What doesn't change

I use artificial intelligence every day. I've integrated it into my product work, my writing, my preparation of teaching materials. In the current edition of my programme at Instituto Tramontana — which will be my last — AI is a constant working tool, both for me and for the participants. I'm not writing this from a place of resistance to change.

The dominant narrative says AI transforms education because it allows scaling access to knowledge. It's true. But content-based education was already solved. YouTube solved it fifteen years ago. Khan Academy solved it. Wikipedia solved it. Access to content stopped being the bottleneck long ago. What AI scales is precisely that: more content, more accessible, more personalised. And that's fine. But it's not the hard part.

The hard part is what Socrates called the spark. The provocation that forces you to think for yourself. Education as disturbance, not transfer. That requires something a conversational agent cannot offer: the resistance of the material, the friction of real-time error, the presence of someone who has already been through it and knows exactly when to stay silent and when to ask.

"Education is not the filling of a vessel, but the kindling of a flame." — Plutarch (often attributed to Socrates)

Herbert Simon, who was hardly a technophobe, put it decades before the web existed: the scarcity is not of information, but of attention. More accessible content is not more education. It's more demand on a resource that was already saturated.

When an educational venture decides that physical space no longer matters, that shared living can be replaced by online sessions and agents to chat with, it may be solving a financial problem. That is what has happened to many training schools, turned into video platforms competing on price. But what is sacrificed is the possibility of a kind of learning that only happens when attention is concentrated, when the body is present, when there isn't a tab next door competing for your time. Learning a craft — learning by doing, making mistakes in front of others, receiving correction in the moment — is precisely what doesn't scale. And it's precisely what matters.

Identity and similitude

Bentham wanted his auto-icon to preserve identity, not similitude. But what he achieved was exactly the opposite: an imperfect similitude that preserves nothing of what made Bentham Bentham. A wax head where there should be thought. A stuffed suit where there should be presence.

Today's educational agents are digital auto-icons. They preserve a similitude — the style, the data, the most probable answers — but not the identity. They cannot do what the person did: read a room, calibrate a pause, decide that what this student needs right now is not an answer but a silence. They cannot fail productively. They cannot change their mind mid-sentence because something in the other person's face has revealed they were on the wrong track.

None of this means AI has no place in education. It does, and a big one. But its place is not to replace the spark. It's to free up time and attention so the spark can happen.