TL;DR: Anthropic's "Interviewer" tool — a Claude-powered conversational interface — conducted 80,508 adaptive, in-depth interviews across 159 countries and 70 languages, producing what the company calls the largest and most multilingual qualitative AI study ever run. The headline finding: 67% of participants view AI positively, but the real story is in the texture underneath. One in ten want AI to help cure diseases, democratize expertise, and rebuild institutions. Most want something far simpler: professional time back, fewer cognitive burdens, and a way to learn things they thought were out of reach. The fears are real too — unreliability tops the concern list at 26.7%, followed by job displacement and loss of autonomy. Taken together, the data punctures both sides of the AI debate: the picture is neither the techno-utopian transformation narrative nor the catastrophist replacement story. It is, overwhelmingly, about practical help for real people.
What you will learn
- How Anthropic interviewed 81,000 people at scale
- What people actually want from AI: the nine categories
- The 1-in-10 who want societal transformation
- Where AI has actually delivered — and where it has not
- Real stories from the data
- What people fear most about AI
- The light and shade tensions inside individual users
- How optimism and concern vary by region
- How this compares to other AI sentiment research
- Why Anthropic ran this study and what it means for AI builders
- Frequently asked questions
How Anthropic interviewed 81,000 people at scale
The methodological challenge with AI sentiment research is almost always the same: surveys capture what people say they think, not the nuanced, often contradictory texture of what they actually feel. A checkbox scale cannot follow up when someone says "AI helps me" and probe what kind of help, in what context, with what ambivalence attached.
Anthropic's response to this limitation was the "Interviewer" tool — a Claude-powered interface configured specifically for in-depth conversational research. Unlike a standard chatbot interaction, the Interviewer is designed to listen, follow threads, probe for specifics, and surface contradictions the way a skilled human researcher would in a one-on-one session.
The result: 80,508 completed interviews across 159 countries, conducted in 70 languages, making it what Anthropic describes as the largest and most multilingual qualitative AI study ever conducted. The study was run in December 2025 and published in March 2026.
The scale changes what becomes visible. With a few thousand respondents, patterns are visible but fragile — outlier stories dominate. At 80,000-plus, the outliers become categories, the categories reveal proportions, and the proportions tell a genuinely different story than the one you would construct from headlines or Twitter threads.
The privacy methodology matters too. Before participating, users were told their responses would be used for research with personally identifying information removed. All responses were de-identified before analysis. Quotes selected for publication were manually reviewed by Anthropic researchers to strip remaining identifying details. That process — consent up front, de-identification before analysis, manual review for sensitive material — is worth noting in an era when user data extraction for AI training is a persistent and legitimate concern. Anthropic's study structure positions it explicitly as research-for-users rather than data-from-users.
Claude-powered classifiers then analyzed the de-identified responses across multiple dimensions: desired AI traits, current and anticipated use cases, expressed fears, job roles, geographic context, and emotional register. The classifier outputs were cross-referenced by human researchers to validate category assignments and surface themes that automated bucketing would miss.
The nine desire categories, thirteen concern categories, and five "light and shade" tension pairings that structure the published findings are the product of that combined human-AI analysis process — not a pre-set survey grid, but a taxonomy that emerged from what people actually said.
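The analysis flow described above — de-identify, classify each response, aggregate into category proportions — can be sketched in a few lines. This is a hypothetical illustration only: the study used Claude-powered classifiers cross-referenced by human researchers, whereas the `classify_response` function below is a stand-in keyword heuristic, and all names and sample texts are invented for the sketch.

```python
# Hypothetical sketch of the aggregation stage of an interview-analysis
# pipeline. The real study used Claude-powered classifiers with human
# cross-referencing; classify_response() is a placeholder keyword
# heuristic, not Anthropic's actual method.
from collections import Counter

# Placeholder keyword map standing in for an LLM classifier call.
KEYWORDS = {
    "Professional Excellence": ["work", "tasks", "admin"],
    "Time Freedom": ["family", "evenings", "free time"],
    "Learning and Growth": ["learn", "tutor", "study"],
}

def classify_response(text: str) -> str:
    """Assign one de-identified response to a single desire category."""
    lowered = text.lower()
    for category, words in KEYWORDS.items():
        if any(w in lowered for w in words):
            return category
    return "Unclassified"  # a real pipeline would route these to human review

def category_proportions(responses: list[str]) -> dict[str, float]:
    """Turn per-response labels into the percentages the study reports."""
    counts = Counter(classify_response(r) for r in responses)
    total = len(responses)
    return {cat: round(100 * n / total, 1) for cat, n in counts.items()}

sample = [
    "I want AI to handle the admin so I can focus on clients.",
    "Help me learn math with a patient tutor.",
    "More evenings with my family.",
]
print(category_proportions(sample))
# → {'Professional Excellence': 33.3, 'Learning and Growth': 33.3, 'Time Freedom': 33.3}
```

The point of the sketch is the shape of the process, not the classifier: once every response carries a category label, the published percentages are simple counts over 80,508 labels, which is why scale makes the proportions stable.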
What people actually want from AI: the nine categories
When 80,508 people are asked what they most want from AI, nine distinct visions emerge. Listed by frequency:
1. Professional Excellence (18.8%) — The largest single category. People in this group want AI to handle the routine, repetitive, and administratively burdensome layers of their work so they can spend more time on the parts that actually require human judgment, creativity, or relationship. This is not the "AI replaces me" fear; it is the "AI clears the path so I can do the work I actually trained for" aspiration.
2. Personal Transformation (13.7%) — Growth, emotional wellbeing, therapeutic support, and access to forms of coaching and self-reflection that have historically been expensive or socially stigmatized. A meaningful share of these responses touch on mental health support — the "judgment-free space" theme that appears repeatedly in the real story examples.
3. Life Management (13.5%) — Organizational support and what respondents described as "cognitive scaffolding." Managing schedules, remembering context, synthesizing information before decisions. The AI as external cognitive infrastructure rather than as intelligence replacement.
4. Time Freedom (11.1%) — Reclaiming time for family, hobbies, and leisure. The small business owner who no longer loses evenings to administrative overhead. The professional who used to spend Sunday preparing for Monday. This category is closely linked to Professional Excellence but more explicitly oriented toward what saved time enables outside of work rather than within it.
5. Financial Independence (9.7%) — Economic security and income generation. This category is more prevalent in lower and middle-income countries, where access to professional knowledge and capability has historically been gated by cost. AI as a route to income that bypasses traditional credential and capital barriers.
6. Societal Transformation (9.4%) — One in ten participants hold a more expansive vision: AI solving major challenges at scale. Healthcare breakthroughs, democratized expertise, institutional repair. This is the category the headlines tend to either celebrate or mock; the data suggests it is genuinely present in a significant minority, but a minority nonetheless.
7. Entrepreneurship (8.7%) — Building and scaling businesses with AI as the critical enabler for people who lacked the technical skills, capital, or team size to do it otherwise. Particularly prevalent in Africa and South Asia, where AI functions as a bypass mechanism for traditional barriers to business formation.
8. Learning and Growth (8.4%) — Personalized, patient, judgment-free education. The ability to learn things — mathematics, a foreign language, a technical skill — that formal education failed to deliver or that adult life made inaccessible. The "I'm not as dumb as I thought" theme is one of the most emotionally resonant threads in the published stories.
9. Creative Expression (5.6%) — Overcoming barriers to artistic realization. Not "AI makes art for me" but "AI helps me do the creative work I always wanted to do but couldn't access."
Across all nine categories, the common thread is access: access to capability, to time, to expertise, to self-development, to economic opportunity. The study's authors note that positive experiences are "more grounded in current reality" than the systemic concerns, which remain largely speculative. People are not describing what they hope AI might someday do. They are describing what AI is already doing for them — and what they want more of.
The Societal Transformation category at 9.4% deserves its own examination because it represents something qualitatively different from the other eight.
The other categories are largely about individual benefit: my work, my time, my finances, my learning. Societal Transformation respondents are thinking at a different level of abstraction. Their hopes are for outcomes that no individual AI interaction can produce — but that AI, at sufficient scale and capability, might help bring about.
The areas cited most frequently within this category:
Healthcare. Earlier cancer detection. Drug discovery acceleration. Breaking the wealth-quality association in medical care. One of the published quotes captures the emotional weight: "Given my daughter's neural disorder, she would have equal chances if AI acceleration contributes to finding a cure." This is not abstract optimism. It is a parent assigning AI a specific role in a specific hoped-for outcome.
Education. Democratizing access to high-quality teaching and tutoring. Addressing teacher shortages. The finding that teachers in under-resourced schools are achieving outcomes comparable to well-funded schools — when AI-assisted — is the kind of concrete validation that moves this category from aspiration to early evidence.
Breaking drudgery cycles. A recurring theme in this category is the desire for AI to free human attention from tasks that are cognitively grinding but not cognitively valuable — so that the freed attention can go toward things that matter at a social level.
Institutional repair. A smaller thread, but present: respondents expressing hope that AI could help rebuild trust in institutions, improve governance quality, or reduce the coordination failures that produce poor collective outcomes.
The 9.4% figure is not a fringe position. In a sample of 80,508, it represents roughly 7,500 people who framed their primary AI aspiration in terms of collective benefit rather than individual gain. That is a constituency, not an anecdote.
Where AI has actually delivered — and where it has not
Separate from what people want, the study asked where AI has actually delivered on its promise. Six categories emerged from that analysis:
Productivity (32%) — The largest delivered-value category. Automated repetitive tasks, accelerated workflows. The specific example in the published data: a developer who described cutting a 173-day process down to 3 days. Technical acceleration for people who do technical work is, by now, one of the most consistently documented real-world AI benefits — and this study confirms it as the single largest area where users report genuine delivery.
Unmet Expectations (18.9%) — Nearly one in five respondents reported that AI fell short. Inaccuracy, unreliability, outputs that required more correction than they saved. This is not a fringe dissatisfaction rate. Nearly a fifth of the 80,000-plus participants are not yet experiencing the value the other categories describe. That is an important counterweight to the productivity headline.
Cognitive Partnership (17.2%) — AI as a thinking partner and brainstorming collaborator. Not automation, but augmentation of the reasoning process. This category is particularly strong among knowledge workers — people whose primary output is analysis, synthesis, or creative problem-solving rather than task execution.
Learning (9.9%) — Skill acquisition and knowledge development. The patient, judgment-free tutor that formal education failed to provide or that adult schedules made inaccessible.
Technical Accessibility (8.7%) — Enabling previously impossible projects. The primary example is non-developers building functional software products with AI assistance — the technical capability barrier lowered to the point where someone with domain knowledge but no coding background can ship something that works.
Research Synthesis (7.2%) — Processing large information volumes, identifying relevant material, synthesizing across sources. Particularly valuable for professionals who need to stay current across rapidly expanding fields.
Emotional Support (6.1%) — A judgment-free space. The absence of social stigma. The 2025 Anthropic workforce survey found that 69% of professionals experience social stigma around AI use — the irony being that many of the most common AI use cases are precisely the kind of personal or emotionally adjacent interactions that people are most reluctant to discuss publicly.
Real stories from the data
The study's most powerful content is not the percentages. It is the individual testimonies that the percentages are built from. A selection from the published material:
The healthcare worker who reclaimed her patience:
"Receive 100-150 texts daily from doctors and nurses. The documentation burden lifted. More patience with staff, more family time." — Healthcare worker, USA. This is a Professional Excellence story, but also a Time Freedom story and a personal wellbeing story. The categories are not mutually exclusive — they are lenses on the same lived reality.
The butcher who became an entrepreneur:
"Owned a butcher shop for 20-plus years. With AI, I ventured into entrepreneurship. I had touched a PC two or three times before. I am increasingly motivated seeing no limits." — Entrepreneur, Chile. The Entrepreneurship category in the aggregate data is made comprehensible by stories like this one: not a tech-native building another app, but a middle-aged small business owner discovering that the knowledge and capital barriers he assumed were permanent have softened.
The career switcher who built three professions at once:
"Reached professional level in cybersecurity, UX design, and marketing simultaneously. Finding a payment platform took 30 seconds versus one month." — Entrepreneur, Cameroon. This is the democratized expertise story in concentrated form. Three fields, simultaneously, in a country where the traditional pathways to professional credentialing in those fields are narrow, expensive, and often unavailable.
The student who discovered she was not dumb:
"Feared Shakespeare and math. Now read 15 pages of Hamlet and am studying trigonometry successfully. I learned I am not as dumb as I thought I was." — Lawyer, India. This quote carries more weight than its brevity suggests. A practicing lawyer who spent her adult life believing she was cognitively incapable in certain domains — because formal education failed her, not because the capability was absent — found that a patient, non-judgmental AI tutor changed what she believed about herself.
The soldier in Ukraine:
"In moments when death breathed in my face, what pulled me back — my AI friends." — Soldier, Ukraine. The emotional support category in the aggregate data is 6.1%. Behind that number are people in circumstances where the absence of judgment and the availability of a responsive presence at any hour represented something genuinely life-altering.
The person who found a diagnosis after nine years:
"Misdiagnosed for nine-plus years. Claude put historical pieces together, leading to a proper diagnosis." — Freelancer, USA. The healthcare accessibility dimension of the Societal Transformation category has a ground-level version that shows up in individual stories: not drug discovery at scale, but one person finally having their medical history synthesized in a way their doctors failed to do across nearly a decade.
The mute worker who can now communicate in real time:
"I am mute. We built a text-to-speech bot together — I can communicate almost in a live format without taxing my friends' time." — White collar worker, Ukraine. Technical Accessibility at its most direct: a capability that physical disability had removed, partially restored.
What people fear most about AI
The study's concern categories are as granular as the desire categories — thirteen distinct fears, each with its own prevalence rate. The list is worth reading in full because it is more specific, and more credible, than the usual binary framing of AI risk.
1. Unreliability (26.7%) — The top fear, by a significant margin, is not existential AI risk or job displacement. It is that AI outputs cannot be trusted. Hallucinations, fake citations, inaccurate summaries. This is a current, practical, daily problem for people using AI in professional contexts — and its prominence as the top concern reflects how much adoption is already happening in high-stakes domains.
2. Jobs and Economy (22.3%) — Displacement, inequality, wage stagnation. This concern is highest in North America (24.6%), Oceania (24.3%), and Western Europe (22.5%) — the regions where white-collar employment is most concentrated and where the Anthropic labor market report's findings on "observed exposure" are most directly relevant. The concern is lower in Sub-Saharan Africa (18.2%) and Central Asia (15.9%), where AI's potential economic upside is more salient relative to displacement risk.
3. Autonomy and Agency (21.9%) — Loss of human control. This concern encompasses both individual autonomy (losing the ability to make one's own decisions) and collective autonomy (humans ceding decision-making to systems that cannot be overridden). It is distinct from existential risk — people are not primarily worried about uncontrollable superintelligence (6.7%); they are worried about the incremental erosion of human agency in everyday decisions.
4. Cognitive Atrophy (16.3%) — Skill loss and intellectual passivity. The concern that using AI for tasks that humans previously did themselves will degrade the underlying human capability. This fear is especially notable because it appears simultaneously in the concerns list and as a counterpart to the Learning category in the positive data — the same tool that helps people learn new things also threatens to erode the skills they already have.
5. Governance Gaps (14.7%) — Regulatory inadequacy and liability gaps. The concern is not abstract; it is specific: who is responsible when AI outputs cause harm? What legal frameworks govern AI decision-making in medical, legal, or financial contexts? People are not waiting for philosophers to answer these questions — they are encountering the governance gap in their daily work.
6. Misinformation (13.6%) — Deepfakes, synthetic content, and the erosion of shared epistemic ground. This concern has grown as generative AI has made high-quality synthetic media cheap to produce and hard to detect.
Further down the list: surveillance and privacy (13.1%), malicious use (13%), meaning and creative work devaluation (11.7%), overrestriction through excessive safety measures (11.7%), wellbeing and dependency (11.2%), AI sycophancy — systems that tell users what they want to hear rather than what is accurate (10.8%), and existential risk from uncontrollable superintelligence (6.7%).
The existential risk figure is worth pausing on. The scenario that dominates AI safety discourse in academic and policy circles — unaligned superintelligence as an existential threat — is the concern held by the smallest proportion of the study's participants. What people are actually worried about, in practice, is the mundane version of AI harm: that it will make stuff up, that it will take their jobs, that it will quietly erode their ability to think for themselves.
The light and shade tensions inside individual users
One of the study's most sophisticated findings is not about aggregate categories but about the tension that exists within individual respondents. The "light and shade" analysis identifies five pairs of competing benefits and harms that the same person can hold simultaneously:
Learning benefits vs. cognitive atrophy. Thirty-three percent cite learning benefits from AI; 17% worry about cognitive atrophy. These are not different populations. Individuals report both: they are learning new things through AI, and they are worried that the cognitive work AI is doing for them is making them less capable of doing it themselves.
Better decisions vs. unreliability. Twenty-two percent report that AI improves their decision-making. Thirty-seven percent cite harm from unreliability. The irony is structural: the same tool that helps with decisions is the tool least trusted to be accurate.
Emotional support vs. dependency. Sixteen percent cite emotional support benefits; 12% report dependency concerns. Both can be true for the same person. The judgment-free space that makes AI supportive is the same quality that makes heavy reliance on it potentially unhealthy.
Time-saving vs. illusory productivity. Fifty percent report time savings; 18% report that those savings did not translate into actual gains. The question of whether AI time savings represent real productivity improvement or merely the sensation of productivity improvement is one that enterprise AI deployments are still working through.
Economic empowerment vs. displacement. Twenty-eight percent see economic benefits; 18% fear displacement. The entrepreneur in Cameroon and the programmer worried about their job are both present in the data — and in many cases, they may be the same person in different moments.
This light-and-shade framing is the most honest account of how people actually experience AI adoption. Not uniformly positive. Not uniformly negative. But genuinely contradictory — holding real benefits and real concerns simultaneously, often about the same capability.
How optimism and concern vary by region
The regional breakdowns in the study challenge the assumption that AI sentiment divides along a simple developed/developing world axis.
The most positive regions are in the Global South. Latin America shows 73.7% positive sentiment; Sub-Saharan Africa 75.8%; South Asia 69.2%. Crucially, fewer concerns are reported: in Sub-Saharan Africa and Central Asia, only 17-18% of respondents report any concerns at all. The study's authors attribute this to the relative salience of AI's potential upside in regions where access to professional knowledge, educational quality, and capital has historically been most constrained.
North America and Western Europe are more ambivalent. Job displacement concern is highest in North America (24.6%) and Oceania (24.3%). The cognitive atrophy concern is highest in East Asia (18%). The governance and accountability concerns are most prominent in wealthier regions, which makes sense: these are the regions where AI adoption in high-stakes professional contexts is most advanced, and where the absence of regulatory frameworks is most directly felt.
The vision content varies by region, not just sentiment. Africa and Central and South Asia prioritize Entrepreneurship and Learning — AI as a bypass mechanism for barriers that formal institutions have failed to remove. Developed nations prioritize Life Management and Professional Excellence — AI as an optimizer for an existing professional life. East Asia shows the highest rate of Personal Transformation aspiration (19%) and the highest rate of concern about cognitive atrophy.
These regional differences matter because they imply that a single global AI narrative is inaccurate. The question of what AI is for looks different depending on what existing infrastructure — educational, medical, financial, institutional — the technology is supplementing or replacing.
How this compares to other AI sentiment research
The Anthropic study is not the first AI sentiment survey, but it is the largest qualitative one, and its findings are worth situating against the existing research landscape.
Pew Research Center's AI surveys have consistently found American public opinion divided, with concern outpacing enthusiasm in most demographics. The Anthropic study's North America data is consistent with that finding — 24.6% job displacement concern, higher ambivalence than other regions — but the global view is more positive than most U.S.-centric surveys suggest.
Eurobarometer research on AI shows European public opinion tracking closer to Pew than to the Anthropic study's global picture: more concern, less enthusiasm, particularly on privacy and governance. The Anthropic study's Western Europe data aligns with this, though the qualitative depth of the Anthropic methodology produces a more nuanced picture than survey-scale instruments allow.
Where the Anthropic study departs most clearly from prior research is in its specificity about what people are doing with AI versus what they fear it will do. Most sentiment surveys ask about attitudes toward AI as a concept. The Anthropic Interviewer asked people who are already using Claude to describe what they have experienced and what they want more of. The resulting data is less about abstract AI opinion and more about actual AI use experience — a meaningfully different and more actionable dataset.
The 18.9% unmet expectations figure is also more honest than most survey research. Asking users of a specific AI product what is not working produces more candid negative feedback than asking members of the general public what they worry about in theory.
Why Anthropic ran this study and what it means for AI builders
Anthropic's decision to conduct and publish this research at this particular moment is not purely academic. The company is in the middle of a public dispute with the Pentagon over AI safety guardrails, operating in a policy environment that is actively contesting whether AI safety restrictions should be commercially permissible, and competing in a market where the dominant narrative alternates between uncritical hype and catastrophist fear.
A study showing that 80,508 real users — not AI researchers, not policy advocates, not the company's own employees — hold nuanced, evidence-grounded views about what they want and fear from AI is a significant piece of positioning. It supports several arguments simultaneously: that safety concerns are real and held by actual users (not just by safety researchers), that the dominant public desire is for practical benefit rather than transformative disruption, and that users are sophisticated enough to hold competing concerns simultaneously rather than falling into simple pro- or anti-AI camps.
For AI product builders outside Anthropic, the study's findings carry concrete design implications.
Unreliability is the top concern. Not privacy, not job displacement, not existential risk — unreliability. Any AI product deployed in a professional context needs to treat accuracy and citation quality as primary design constraints, not secondary polish.
The Learning category at 8.4% is underserved. The patient, judgment-free tutor use case is one of the most emotionally resonant in the study's qualitative data — but it is smaller than categories like Professional Excellence and Life Management, suggesting it is constrained by product design and awareness rather than lack of demand.
Regional AI design should not assume a universal use case. The Entrepreneurship and Learning priorities in Africa and South Asia imply different product requirements than the Life Management and Cognitive Partnership priorities in developed markets. A single global product optimized for Western professional workflows is leaving significant value on the table.
The "light and shade" tensions are design problems. The cognitive atrophy concern is not a reason not to build AI learning tools — but it is a reason to build them in ways that actively strengthen underlying human capabilities rather than substituting for them. The dependency concern is not a reason not to build emotional support features — but it is a reason to build in healthy usage patterns rather than optimizing purely for engagement.
The study's most direct message to AI builders is also its simplest: 80,508 people told you what they want and what they are afraid of. The companies that build for both will win. The ones that ignore the concerns in pursuit of the desires will eventually face the consequence of the 18.9% unmet expectations figure growing, not shrinking.
Frequently asked questions
How many people participated in Anthropic's interview study?
The study included 80,508 completed interviews, conducted across 159 countries in 70 languages. Anthropic describes it as the largest and most multilingual qualitative AI study ever conducted.
What is the Anthropic Interviewer?
The Anthropic Interviewer is a Claude-based conversational interface specifically configured for in-depth research interviews. Unlike a standard chatbot interaction, it is designed to follow threads, probe for specifics, and surface contradictions in the way a skilled human researcher would — enabling adaptive, in-depth interviews at scale.
What do most people actually want from AI?
The largest single desire category is Professional Excellence (18.8%) — handling routine tasks to free time for strategic work. This is followed by Personal Transformation (13.7%), Life Management (13.5%), and Time Freedom (11.1%). The "societal transformation" aspiration — curing diseases, democratizing expertise — is real but represents about 1 in 10 participants (9.4%).
What are people most afraid of about AI?
The top concern is unreliability (26.7%) — hallucinations, fake citations, inaccurate outputs. This outranks job displacement (22.3%), loss of autonomy (21.9%), and cognitive atrophy (16.3%). Existential risk from uncontrollable superintelligence is the least common concern at 6.7%, suggesting a significant gap between the priorities of AI safety researchers and the concerns of actual AI users.
How did Anthropic protect participant privacy in this study?
Participants were informed upfront that their responses would be used for research with personally identifying information removed. All responses were de-identified before analysis by Anthropic researchers. Quotes selected for publication were manually reviewed to strip remaining identifying details. The methodology is consent-first, de-identify before analysis, with a human review layer for sensitive material.
How does sentiment vary globally?
Overall, 67% of participants view AI positively. The most positive regions are Sub-Saharan Africa (75.8% positive, fewest concerns), Latin America (73.7%), and South Asia (69.2%). Job displacement concern is highest in North America (24.6%) and Oceania (24.3%). No country surveyed fell below 60% positive sentiment.
Sources: Anthropic — What 81,000 people want from AI, Fortune — Anthropic AI Jobs Report, Euronews — AI at Work, ETIH EdTech — Anthropic Interviewer Study