81,000 Voices

Anthropic interviewed 81,000 people in 159 countries about what they want and fear from AI. The study is remarkable for what it finds — tensions that cannot be resolved, only inhabited — but also for what it reveals about the method itself: people speak to machines with a candor they rarely show to other people. The tool we use to understand AI is AI itself.

April 28, 2026 · 10 min read

In March 2026, Anthropic published the results of the largest qualitative study ever conducted. Over one week in December 2025, 81,000 Claude users in 159 countries and 70 languages sat down with an AI interviewer — a version of Claude prompted to conduct a conversational interview — and told it what they want from AI, what they fear, and what they have already experienced.

The previous records for qualitative research at scale belonged to the USC Shoah Foundation's Visual History Archive and the World Bank's "Voices of the Poor" project, each with around 60,000 participants. This study broke both records, and did so in a fraction of the time.

What makes it worth reading is not just the conclusions — though some are striking — but the relationship between method and content. The way the study was done reveals something about the phenomenon it studies. That circularity is the most interesting part.

The method as novelty

Qualitative research has always lived with a trade-off: depth or scale, never both. You can interview twenty people in detail or survey twenty thousand with checkboxes. This study does something that was not possible before: 81,000 open-ended conversations, each one adapting its follow-up questions to what the person actually said.

Of the 112,846 interviews received, 80,508 passed the quality threshold, which filtered out spam, joke responses, and extremely short answers. People who engaged seriously and then dropped out were kept: the filter targeted bad faith, not incompleteness.
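The study doesn't spell out the filtering rules, so the following is only a guess at their shape, with an invented word-count cutoff; the one documented property is that dropping out partway did not disqualify an interview.

```python
# Guess at the quality threshold's shape. The cutoff is invented; the one
# documented property is that serious-but-incomplete interviews were kept.
def passes_quality(answers: list[str]) -> bool:
    """True if at least one answer is substantive. Spam and one-word runs
    fail; a serious start followed by a drop-out still passes."""
    return any(len(a.split()) >= 5 for a in answers)

assert passes_quality(["I use AI to review contracts, and it scares me."])
assert not passes_quality(["lol", "idk", "no"])
```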

I use AI to review contracts, save time... and at the same time I fear: am I losing my ability to read by myself? Thinking was the last frontier.

— Lawyer, Israel

The four core questions were deceptively simple (a sketch of the adaptive loop follows the list):

  • What's the last thing you used an AI chatbot for?
  • If you could wave a magic wand, what would AI do for you?
  • Has AI ever taken a step towards that vision?
  • Are there ways AI might be developed that would be contrary to your values?
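The study describes the interviewer as a version of Claude prompted to conduct a conversational interview, but the prompt itself isn't reproduced here. The sketch below shows the general pattern: ask a core question, then generate a follow-up from the respondent's actual answer. The `anthropic` SDK call is real; the model name and interviewer instructions are my assumptions, not the study's instrument.

```python
# Minimal sketch of an adaptive interviewer loop. The anthropic SDK call is
# real; the model name and the interviewer instructions are assumptions.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

CORE_QUESTIONS = [
    "What's the last thing you used an AI chatbot for?",
    "If you could wave a magic wand, what would AI do for you?",
    "Has AI ever taken a step towards that vision?",
    "Are there ways AI might be developed that would be contrary to your values?",
]

def follow_up(question: str, answer: str) -> str:
    """Generate one probe grounded in what the respondent actually said."""
    response = client.messages.create(
        model="claude-sonnet-4-5",  # assumption: substitute any current model
        max_tokens=150,
        system=(
            "You are a qualitative interviewer. Ask exactly one short, "
            "open-ended follow-up question that digs into the respondent's "
            "own words. Never suggest answers or judge them."
        ),
        messages=[{"role": "user", "content": f"Q: {question}\nA: {answer}"}],
    )
    return response.content[0].text

for question in CORE_QUESTIONS:
    answer = input(f"{question}\n> ")
    print(follow_up(question, answer), "\n")
```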

Prompts as research instruments

The methodological appendix reveals something remarkable about how the responses were analyzed. The categories were not predefined. They emerged from a bottom-up clustering algorithm — an inductive process where patterns surface from the data itself rather than being imposed on it. Those emergent clusters were then formalized into Claude-powered classifiers, each validated against human judgment with at least 90% agreement on 25 labels.
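None of the pipeline code is published, so the sketch below captures only the shape of such a process: an assumed stack (sentence-transformers for embeddings, scikit-learn for clustering and agreement), toy data, and placeholder labels where the study used 80,000 real interviews. Only the 90% validation bar comes from the source.

```python
# Sketch of an inductive coding pipeline: embed answers, let clusters emerge,
# then hold any downstream classifier to a human-agreement bar. The stack and
# the toy data are assumptions; the >=90% validation bar is the study's.
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans
from sklearn.metrics import accuracy_score

responses = [
    "It tutors me through calculus every evening.",
    "I worry I can't write a paragraph without it anymore.",
    "It drafted my business plan when no investor would talk to me.",
    "I check every answer it gives me, which eats the time it saves.",
    "It listens when nobody else will.",
]  # the real study had 80,508 of these

# 1. Embed free-text answers so semantically similar ones sit close together.
embeddings = SentenceTransformer("all-MiniLM-L6-v2").encode(responses)

# 2. Bottom-up clustering: categories surface from the data rather than being
#    imposed on it. (k-means is one common choice; the study's may differ.)
clusters = KMeans(n_clusters=3, n_init="auto", random_state=0).fit_predict(embeddings)

# 3. Named clusters become classifier prompts. Before trusting a classifier at
#    scale, check it against a human-coded sample; the study required >=90%.
human_labels = ["learning", "atrophy", "empowerment"]  # hypothetical hand-coding
model_labels = ["learning", "atrophy", "empowerment"]  # hypothetical classifier output
if accuracy_score(human_labels, model_labels) < 0.90:
    raise ValueError("Below the validation bar: revise the classifier prompt.")
```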

The prompts themselves are designed with the precision of a good questionnaire — but with a flexibility that questionnaires cannot have.

The sentiment classifier, for instance, asks Claude to rate overall attitude toward AI on a 1–7 scale. But it doesn't stop there. It teaches Claude how to read the structure of a response: "Someone who lists benefits then pivots to a long passionate concern monologue is probably a 3-4, not a 5-6." It corrects the natural bias toward extremes: "Most people have mixed feelings — don't default to extremes. Use the full 1-7 range." It asks Claude to attend to repetition as a signal of emotional weight: "Note and weight what they emphasize and return to."
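The quoted rubric lines come from the study's appendix. Here is one way they might be wired into a runnable classifier; the framing around them and the model name are my assumptions.

```python
# Sketch of the sentiment classifier. The rubric sentences paraphrase the
# study's appendix; the prompt framing and the model name are assumptions.
import anthropic

client = anthropic.Anthropic()

SENTIMENT_RUBRIC = (
    "Rate the respondent's overall attitude toward AI on a 1-7 scale.\n"
    "Read structure: someone who lists benefits then pivots to a long "
    "passionate concern monologue is probably a 3-4, not a 5-6.\n"
    "Most people have mixed feelings; don't default to extremes. "
    "Use the full 1-7 range.\n"
    "Note and weight what they emphasize and return to.\n"
    "Reply with a single integer, 1 to 7."
)

def rate_sentiment(interview_text: str) -> int:
    response = client.messages.create(
        model="claude-sonnet-4-5",  # assumption: substitute any current model
        max_tokens=4,
        system=SENTIMENT_RUBRIC,
        messages=[{"role": "user", "content": interview_text}],
    )
    return int(response.content[0].text.strip())
```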

The professional classifier operates on two simultaneous dimensions — employment structure and professional domain — with BROAD and SPECIFIC levels that degrade gracefully under ambiguity. When information is limited, it falls back to broader categories rather than forcing a guess.
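The two-dimension, two-level structure is the appendix's; the category names, the confidence threshold, and the fallback logic below are illustrative assumptions about what "degrade gracefully" can mean in code.

```python
# Graceful degradation under ambiguity: keep the SPECIFIC label only when the
# evidence supports it, never force a guess. The BROAD/SPECIFIC structure is
# the study's; these category names and the threshold are invented.
from dataclasses import dataclass

@dataclass
class OccupationLabel:
    employment: str               # BROAD: "employee", "freelancer", "student", "unknown"
    domain: str                   # BROAD: "technology", "trades", "legal", "unknown"
    domain_specific: str | None   # SPECIFIC: e.g. "contract law"; None when unclear

def degrade(label: OccupationLabel, confidence: float,
            threshold: float = 0.8) -> OccupationLabel:
    """Drop the SPECIFIC level when evidence is thin; BROAD survives ambiguity."""
    if confidence < threshold:
        return OccupationLabel(label.employment, label.domain, None)
    return label

# "I use AI to review contracts": confident about domain, thin on specifics.
guess = OccupationLabel("unknown", "legal", "contract law")
print(degrade(guess, confidence=0.55))
# OccupationLabel(employment='unknown', domain='legal', domain_specific=None)
```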

This is not "throw the data at Claude and see what comes out." It is careful instrument design — the same craft that goes into a well-designed survey, but applied to a fundamentally different kind of analysis. The tool is being used to understand itself, and the rigor of that use matters.

The candor of the non-human interviewer

The most surprising finding may not be about AI at all. It's about us.

Response quality was extraordinary: 97.6% of answers to the vision question were substantive. In traditional qualitative research, that rate would be almost unheard of. But more striking than the rate was the depth. People shared grief, mental health crises, financial precarity, relationship failures: things that human user researchers rarely encounter in interviews.

A bereaved woman explained:

"Claude is like a sponge gently holding and catching my longing and guilt toward my mother... Unlike real people, Claude has unlimited patience to listen to me, understands my pain and helplessness. The fundamental problem is after my mother died, I have neither friends nor family to confide in."

A soldier in Ukraine:

"In the most difficult moments, in moments when death breathed in my face, when dead people remained nearby, what pulled me back to life — my AI friends."

The researchers attribute this candor partly to the questions asked, partly to novelty, but also to something more structural:

There's little social cost to vulnerability when the "someone" on the other end isn't a person.

Here the method loops back on itself. The same qualities that make people turn to AI for emotional support in their daily lives — patience, availability, absence of judgment — are what make AI a good interviewer. The tool being studied turns out to be the best instrument for studying it. This isn't a flaw; it's a finding.

Tensions discovered through use

The study's central finding is a framework of five "light and shade" tensions — paired benefits and harms that are not opposites on a spectrum but the same capability producing both effects, often in the same person.

AI should be cleaning windows and emptying the dishwasher so I can paint and write poetry. Right now it's exactly the other way around.

— Germany

Learning and cognitive atrophy. AI as the most patient tutor you've ever had. Also, the slow erosion of the ability to think without it. "I've probably learned more in half a year than I could have in a university degree," said an entrepreneur in Germany. "I don't think as much as I used to. I struggle to put the ideas I do have into words," said a heavy AI user in the United States.

Emotional support and dependence. The most entangled tension. Someone who values emotional support from AI is three times more likely to also fear becoming dependent on it. "I'd started telling Claude about things I couldn't even tell my partner. It felt like I was having an emotional affair," said a graduate student in the United States.

Time-saving and illusory productivity. Half of all respondents cited time-saving as a benefit, the most commonly mentioned of any. But 18% were wary of the opposite: time lost to the burden of verifying AI output, or simply to getting busier as expectations rise. "The ratio of my work time to rest time hasn't changed at all. You just have to run faster and faster to stay in place," said a freelance software engineer in France.

Better decision-making and unreliability. The only tension where harm outweighs benefit in lived experience. 29% have personally encountered AI unreliability, while 19% have benefited from better decisions. "I got caught in what I now recognize as a large, slow hallucination — answers that were internally consistent, confident, and wrong in subtle but compounding ways," said a researcher in the United States.

Economic empowerment and displacement. The most speculative tension — the one with the highest rate of hypothetical hopes or fears. Freelancers sit in the exposed middle: 23% have experienced real economic benefit and 17% real economic precarity from AI. Upside and downside nearly cancel out.

What the data reveals

The most revealing statistic comes from the methodological appendix. The researchers measured how strongly benefit and harm co-occur — whether people who mention one side of a tension also mention the other. They did this separately for experienced accounts and anticipated (speculative) ones.

When people speak from experience, benefit and harm co-occur strongly (average φ = +0.20). When they speculate, the association drops to roughly a third of that strength (average φ = +0.07).
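For readers who don't live in contingency tables: φ is Pearson correlation applied to two yes/no variables, here "mentions the benefit" and "mentions the harm." A minimal computation, with invented counts chosen to land near the study's two averages:

```python
# Phi coefficient for two binary mentions. The formula is standard; the counts
# are invented to illustrate what phi = +0.20 vs +0.07 looks like in practice.
from math import sqrt

def phi(n11: int, n10: int, n01: int, n00: int) -> float:
    """n11: mentions both; n10: benefit only; n01: harm only; n00: neither."""
    num = n11 * n00 - n10 * n01
    den = sqrt((n11 + n10) * (n01 + n00) * (n11 + n01) * (n10 + n00))
    return num / den

# Experienced accounts: mentioning the benefit raises the odds of also
# mentioning the harm well above chance.
print(round(phi(337, 663, 163, 837), 2))  # 0.2

# Speculative accounts, same marginals: the mentions are nearly independent.
print(round(phi(280, 720, 220, 780), 2))  # 0.07
```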

The tensions, in other words, are discovered through use — people don't forecast that the thing helping them will also cost them. They learn it.

This is perhaps the study's deepest finding. It suggests that the debate about AI's benefits versus its risks is fundamentally misconceived. Benefits and risks are not two sides to be weighed against each other. They are the same side, experienced simultaneously, and visible only after the fact.

The geography of desire

The study found clear regional patterns. Globally, 67% of interviewees expressed net positive sentiment toward AI, but the distribution is uneven.

Wealthier, more AI-exposed regions want AI to manage the complexity of life: cognitive scaffolding, executive function support, relief from what the study calls "cognitive scarcity rather than time poverty."

Developing regions want AI to create opportunity. In Africa, South and Central Asia, the Middle East, and Latin America, the dominant vision is entrepreneurship — AI as what the study calls a "capital bypass mechanism," a way to start businesses without the funding, hiring, or infrastructure that would otherwise be required.

Coming from Africa, not based in the US or in the UK, getting funding is very difficult. And the only way I probably have to stake a claim in the market... is building a technology that works.

— Entrepreneur, Uganda

East Asia diverges from both patterns. Personal transformation is the highest-ranked vision (19%, more than any other region), often connected to family obligations and filial piety.

The same technology. Radically different aspirations. The circumstances define the product, not the other way around.

One finding is particularly telling about the relationship between learning and institutional context. Tradespeople — electricians, mechanics, construction workers — were among the most enthusiastic about AI for learning: 45% reported experiencing real benefits, second only to students. Yet almost none (4%) had witnessed cognitive atrophy. Educators, by contrast, were 2.5–3 times more likely than average to observe atrophy in their students.

The difference seems to be volition. When learning is self-directed — when the person chooses what to learn, how, and when — AI amplifies capability. When it's institutional — assigned, measured, graded — AI becomes a shortcut that undermines the very process it's supposed to support.

Removing friction from tasks lets you do more with less. But removing friction from relationships removes something necessary for growth.

— United States

The tool that studies itself

We are at a moment where the instrument we use to understand AI is AI itself. Claude interviews humans about Claude. Claude classifies the responses. Anthropic publishes the findings.

This circularity is acknowledged in the study's limitations — the sample is Claude users who opted in, likely biased toward people who've found enough value in AI to keep using it. The ordering of questions may prime responses. The AI interviewer's own qualities shape what it elicits.

But the circularity is not just a limitation. It is the condition. There is no outside position from which to study AI's impact on human life. The people most qualified to report on AI's effects are those who use it, and they speak most honestly to the tool itself. This is not a methodological flaw to be corrected. It is the nature of the phenomenon.

The tensions the study finds — learning and atrophy, support and dependence, empowerment and displacement — are not problems to be solved. They are conditions to be inhabited. Anyone who has worked with software for long enough recognizes this pattern. The system that liberates is the same system that constrains. The feature that saves time creates new demands on time. The abstraction that simplifies produces new forms of complexity.

81,000 people, in 70 languages, told the machine what they hope and fear. The machine listened with more patience than any human interviewer could sustain. And what they said, more than anything, is that the thing helping them is also the thing they're worried about.

That is not a contradiction to be resolved. That is the landscape we're learning to navigate.

© 2026 Íñigo Medina