Anthropic's latest economic research landed quietly at an Axios summit in Washington D.C. last week, but the implications it carries are anything but quiet. The company's March 2026 Economic Index report, subtitled "Learning Curves," documents something researchers and executives have been speculating about for two years: the gap between workers who know how to use AI well and those who don't is growing fast — and it is starting to map onto pre-existing divisions of wealth, geography, and professional status in ways that are difficult to reverse.
The headline finding is precise and striking. Users who have been working with Claude for six months or more show a 10% higher task-success rate than those who are newer to the platform. And the effect compounds. "The longer you've been using it, the stronger this effect," Anthropic's head of economics Peter McCrory told the Axios AI Summit audience on March 24. The company's data, drawn from its ongoing economic tracking work, shows this isn't a temporary learning curve that flattens out — it is a widening gap between early adopters who have internalized how to work with AI systems and the broader population that hasn't yet had the time or access to develop those instincts.
The report also offers a reassurance that will land differently depending on where you sit in the labor market: there is, as of now, little evidence of widespread job displacement driven by AI adoption. But the word "yet" hangs heavily over that finding. McCrory and the Anthropic team are explicit that the current stability could change rapidly, and that the skills-sorting happening right now is laying the groundwork for disruptions that won't announce themselves with much warning.
What the Data Actually Shows
The Economic Index report draws on Anthropic's internal usage data, supplemented by external labor market analysis, to paint a picture of how Claude is actually being deployed across different worker populations.
The 10% success-rate advantage for experienced users is the sharpest quantitative finding, but it sits within a broader pattern. Anthropic's data shows that Claude adoption clusters heavily among knowledge workers — professionals in software engineering, financial analysis, legal research, content strategy, consulting, and adjacent fields. These are workers who already occupied the higher-earning, more economically resilient positions in pre-AI labor markets. They are also, not coincidentally, the workers who had both the professional access and the personal time to begin experimenting with AI tools when they first became available in 2022 and 2023.
The geographic concentration reinforces this picture. Claude usage concentrates in high-income countries — the United States, the United Kingdom, Germany, Canada, Australia — and within those countries, it concentrates further in knowledge worker hubs: technology corridors, financial centers, major university cities. This is not a tool that has, so far, meaningfully penetrated lower-income labor markets or the economies of middle- and lower-income countries.
The compounding effect that McCrory flagged is worth dwelling on. A 10% task-success advantage after six months sounds modest until you consider what task success translates to in a professional context: it means experienced AI users are completing work faster, with fewer errors, and with a higher rate of usable output. Over a workday, a workweek, a quarter — that advantage accumulates into a productivity differential that becomes visible to employers, reviewers, and clients. Workers who have been at this for a year are not just 10% better than newcomers; they are materially, measurably more productive in ways that register in performance reviews and hiring decisions.
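To make the accumulation concrete, here is a back-of-the-envelope sketch. Every number below other than the 10% relative advantage is an invented assumption for illustration; none of it comes from Anthropic's data.

```python
# Back-of-the-envelope illustration of how a 10% relative
# task-success advantage accumulates. Every figure except the
# 10% advantage itself is an assumption made up for this sketch.

def completed(success_rate: float, attempts_per_day: int, days: int) -> float:
    """Expected count of successfully completed tasks."""
    return success_rate * attempts_per_day * days

newcomer_rate = 0.60               # assumed baseline success rate
experienced_rate = 0.60 * 1.10     # 10% relative advantage -> 0.66

# Over one quarter (~65 working days) at 10 task attempts per day:
gap = completed(experienced_rate, 10, 65) - completed(newcomer_rate, 10, 65)
print(round(gap))  # -> 39 more finished tasks per quarter

# Chained work compounds the edge further: a deliverable needing
# 5 dependent subtasks succeeds end-to-end with probability rate**5.
print(round(newcomer_rate ** 5, 3), round(experienced_rate ** 5, 3))  # -> 0.078 0.125
```

The point of the toy arithmetic is the second print: on single tasks the edge is 10%, but on multi-step deliverables the experienced user's end-to-end success rate is roughly 60% higher under these assumed numbers.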
Skills-Biased Technology, Amplified
Peter McCrory's framing of Claude and its peers as "skills-biased technology" is doing precise work. The term comes from labor economics, where it describes technological change that disproportionately benefits workers who already have higher skills or education levels — in contrast to general-purpose productivity gains that lift all boats. The canonical example is the computer revolution of the 1980s and 1990s, which raised productivity broadly but raised the wages and employment prospects of college-educated workers far more than those without degrees, contributing materially to the income divergence that has defined the US economy since.
Anthropic's data suggests AI is exhibiting this pattern with greater speed and concentration. The difference between AI and prior skills-biased technology cycles is partly the pace — the performance gap between experienced and inexperienced AI users opened in months, not years — and partly the mechanism. Previous skills-biased technology rewarded people who could operate and maintain increasingly complex systems. AI rewards something more diffuse and harder to teach: the ability to communicate effectively with probabilistic systems, to decompose complex problems into AI-legible sub-tasks, to evaluate AI output critically rather than accepting it at face value, and to iterate rapidly on prompts and workflows.
These skills are learnable, but they are not taught in most formal educational settings, and they are not uniformly distributed across the population. They tend to correlate with the kinds of cognitive habits that higher education and certain professional environments cultivate — analytical decomposition, comfort with ambiguity, rapid hypothesis testing. Workers who already have those habits adapted to AI tools quickly. Workers who lack them face a steeper learning curve, and the data suggests that curve does not flatten out at the same level.
TechBuzz AI's analysis of the Anthropic findings notes that this creates a compounding feedback loop that is particularly hard to interrupt: workers who start with advantages use AI to become more productive, which gives them more professional success, which gives them more access to AI tools and training, which widens the advantage further. The researchers who spoke to TechBuzz described the dynamic as "a ratchet, not a ladder."
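The "ratchet" dynamic can be caricatured with a toy model: if access to tools, training, and opportunity scales with prior success, then skill grows in proportion to its current level, and an initial head start widens rather than washing out. The growth rate and starting values here are invented assumptions, not figures from the report.

```python
# Toy model of the "ratchet" feedback loop: productivity buys
# access, access buys skill, skill buys productivity. The 5%
# monthly growth and starting levels are illustrative assumptions.

def ratchet(initial_skill: float, months: int, monthly_gain: float = 0.05) -> float:
    """Skill grows in proportion to its current level, so an
    early lead compounds instead of converging."""
    skill = initial_skill
    for _ in range(months):
        skill *= 1 + monthly_gain   # growth proportional to current level
    return skill

early_adopter = ratchet(1.10, 12)   # started with a 10% edge
late_starter = ratchet(1.00, 12)

# The absolute gap grows: it starts at 0.10 and ends near 0.18.
print(round(early_adopter - late_starter, 2))  # -> 0.18
```

Under a ladder dynamic the late starter would close the gap with effort; under this ratchet dynamic both climb, but the distance between them keeps growing.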
The Job Displacement Question
The "little evidence of widespread job displacement" finding is the one that will generate the most debate, because it is simultaneously true and potentially misleading.
It is true in the sense that the labor market data Anthropic is working with does not show mass unemployment attributable to AI adoption. Occupational categories that were supposed to be early displacement candidates — entry-level coding, basic legal research, financial data processing — have not collapsed. Workers in those roles are still employed, still being hired in most markets.
But McCrory's qualifier — "so far" — is doing a lot of work. What the data does show, even without displacement, is substitution at the task level: AI is taking over specific sub-tasks within jobs faster than it is eliminating the jobs themselves. Entry-level coders are still employed, but fewer of them are needed for the same volume of work, because mid-level engineers with AI assistance are handling tasks that previously required junior staff. Legal researchers are still billing hours, but fewer of them are needed per matter. The headcount hasn't dropped yet — but the ratio of AI-assisted senior workers to entry-level workers is shifting, and that shift compounds over time.
Axios's reporting from the summit captures a tension that Anthropic's economists are clearly wrestling with: the tools they are building are showing up in their own data as engines of inequality, even in a scenario where broad displacement hasn't materialized. The optimistic read is that AI is making the economy more productive without destroying jobs, and that over time the productivity gains will translate into wage increases and new job categories that absorb displaced workers. The pessimistic read is that we are in the early period of a displacement cycle that moves slowly enough to avoid alarm but fast enough to foreclose the adaptation pathways that would otherwise allow the labor market to adjust.
Anthropic, to its credit, is not pretending this tension doesn't exist. McCrory's public statements at the summit were notably candid about the risk. "Situation could change rapidly," he said, describing the job displacement picture — a framing that reads less like reassurance and more like a warning about the pace of change ahead.
A Skills Gap Hardening Into a Class Gap
The phrase that is going to follow this report around — "skills gap hardening into class gap in real time" — was not Anthropic's official language, but it captures what the data describes.
Skills gaps in the labor market are generally considered temporary and addressable. Workers fall behind during technological transitions, training programs catch up, labor markets rebalance. This is the standard narrative about technological displacement, and it is true often enough that it shapes policy thinking. Community colleges build new certification programs, community organizations run digital literacy initiatives, employers invest in workforce development. The skills gap closes, sometimes slowly and unevenly, but it closes.
Class gaps are different. Class gaps involve not just skill differentials but differential access to the conditions that allow skills to be developed: time, financial stability, professional networks, access to good tools, and the risk tolerance to experiment with new ways of working. Class gaps tend to persist across technological transitions because they are self-reinforcing — economic precarity limits the ability to invest in new skills, which limits earning power, which maintains precarity.
What Anthropic's data is showing is that the AI skills gap is already exhibiting class-gap characteristics rather than ordinary skills-gap ones. The workers who got to the frontier fastest were workers who were already economically comfortable, professionally connected, and institutionally supported in ways that gave them both the access to AI tools and the slack to learn them. The workers who are still on the steeper part of the learning curve are disproportionately those for whom taking time to experiment with a new tool represents a real professional risk rather than an investment.
TechBuzz AI's workforce analysis puts it this way: the skills-biased technology literature from the computer revolution took decades to establish that computers widened inequality. With AI, the sorting is happening in real time and at an observable speed. The difference between a worker who has been using Claude for 18 months and one who started last month is not hypothetical or statistical — it is visible in output quality, task completion speed, and the kind of work they can take on.
The geographic concentration Anthropic documents sharpens this picture at the macro level. AI productivity gains are accruing primarily to workers in high-income countries, and within those countries, to workers in already-affluent metropolitan knowledge economies. The promise that AI would be a great equalizer — giving a freelancer in Lagos or a small-business owner in rural Kentucky access to the same cognitive tools as a McKinsey consultant in New York — is not showing up in Anthropic's data. What is showing up is AI as an amplifier of existing advantage.
What This Means for Employers
The Anthropic findings carry direct operational implications for how employers should think about talent development, hiring, and team composition in 2026 and beyond.
The 10% success-rate advantage for experienced AI users is not a ceiling but a floor, and it reflects only six months of use. If the trend McCrory described continues, the advantage of a two-year experienced AI user over a beginner will be substantially larger. Employers who are not actively investing in AI skill development across their workforce today are not maintaining the status quo; they are falling behind.
The task-substitution pattern is also reshaping hiring math in ways that are not yet showing up in job posting data but will. A mid-level analyst who is highly proficient with AI can now cover scope that previously required a team of two or three. Employers who recognize this are not posting fewer jobs yet — but they are getting more selective about which workers they fight to retain and develop. The workers who haven't developed AI fluency are increasingly competing with AI-augmented colleagues rather than with unaugmented peers, and the performance comparison does not favor them.
For employers trying to act constructively on this, the data points to a straightforward if expensive prescription: structured, sustained AI skills development programs that give employees real time to practice with AI tools in their actual work context. The 10% advantage is a usage-duration effect — it comes from putting in time. Programs that give employees time, not just access, are the ones most likely to close the internal skills gap before it becomes an internal class gap.
The alternative — relying on workers to develop AI proficiency on their own time and initiative — is what produced the current gap in the first place. Workers with more professional slack, more economic stability, and more institutional support used that slack to get ahead. Workers without it didn't. Leaving AI skill development to individual initiative will replicate that outcome at scale inside organizations.
The Policy Void
What makes Anthropic's findings particularly uncomfortable is the absence of policy infrastructure adequate to address what the data describes.
The standard toolkit for responding to skills-biased technological change — job training programs, community college curricula updates, employer tax incentives for workforce development — operates on timescales measured in years. The gap Anthropic is documenting is opening on a timescale measured in months. A retraining program that takes 18 months to design, fund, and deploy is arriving after the first full cohort of AI power users has already pulled away from the rest of the workforce.
The geographic concentration issue compounds the policy challenge. Federal and state workforce development programs are generally designed to address broad labor market shifts, not the kind of highly concentrated, sector-specific productivity gains that AI is currently producing. A program designed to help displaced manufacturing workers in Ohio is not well-calibrated to address the emerging AI skills divide among knowledge workers in New York and San Francisco — nor is it calibrated to help workers in lower-income countries who are being structurally excluded from the productivity gains AI is generating.
Axios's summit coverage noted that the Anthropic presentation generated significant discussion among the Washington policy community in attendance — but the discussion was characterized more by concern than by actionable consensus. The honest assessment from policy observers is that the tools available to governments for addressing AI-driven inequality are either too slow, too blunt, or too dependent on the same institutional capacity that is already being concentrated among the AI-advantaged.
This is not an argument for inaction — it is an argument for urgency and for approaches that don't rely on the traditional policy cycle. Employer-driven skills development, AI-native educational programs that integrate with existing work rather than requiring workers to step away from it, and direct subsidization of AI tool access for workers and small businesses in underserved markets are all levers that operate faster than traditional workforce development timelines. But they require coordinated effort from employers, AI developers, and government that is not currently organized at the required scale.
Anthropic's Position in This Story
There is an inherent tension in Anthropic releasing data that documents AI-driven inequality. The company builds the tool that is creating the skills gap and also employs the economists tracking the gap — a position that invites questions about whether the findings are calibrated to reassure or to alert.
The "Learning Curves" report reads as genuinely alert rather than reassuring. The framing is careful — McCrory and his team are not claiming the sky is falling, and they are explicit that the job displacement picture has not materialized in the way some predicted — but the underlying message is not comfortable. The findings describe a tool that is already sorting workers in ways that reinforce existing inequalities, and that is doing so faster than conventional policy responses can track.
TechBuzz AI observed that Anthropic has been more willing than most AI developers to publish internal data that complicates the triumphalist AI narrative. The Economic Index series, which Anthropic has been publishing since 2025, has consistently surfaced findings about labor market concentration, access inequality, and the geographic skew of AI adoption that other developers have been less eager to discuss publicly. The March 2026 report continues that pattern.
Whether that transparency translates into changed behavior — from Anthropic, from other AI developers, from employers, from policymakers — is the question the report leaves open. The data is clear about what is happening. The path to a different outcome is not.
What Comes Next
Anthropic has indicated it will continue publishing quarterly Economic Index data on AI adoption patterns, task substitution, and labor market outcomes. The next report, expected in June 2026, will capture whether the six-month advantage effect has grown, stabilized, or begun to narrow as AI tools become more widely adopted and training resources more broadly available.
The short-term direction of the gap will be one of the more consequential empirical questions in the labor market this year. If the advantage continues widening at its current pace, the window for policy and employer intervention to prevent the skills gap from calcifying into something more durable is closing. If the gap begins to narrow as adoption broadens and training resources scale, the optimistic scenario — AI as a widely distributed productivity gain that eventually reduces rather than increases inequality — remains viable.
TechCrunch's coverage of the report noted that several economists outside Anthropic are now citing the Economic Index data in their own research, which suggests the findings will shape academic and policy conversations well beyond the immediate news cycle.
The March 2026 report is not an alarm, and Anthropic has not framed it as one. But it is, in its careful and data-grounded way, a description of a trajectory — one that leads toward a labor market that looks more unequal, not less, and that gets to that outcome not through the dramatic job-destruction scenarios that have dominated AI discourse, but through the quieter, more durable mechanism of skills sorting: the slow hardening of a gap that was supposed to be temporary into one that looks a lot like the divisions that were already there.
FAQ
What is Anthropic's Economic Index?
Anthropic's Economic Index is an ongoing research program that tracks how Claude is being used across the labor market, which occupations and industries are adopting AI tools, at what pace, and with what measurable effect on worker productivity and labor market outcomes. Anthropic has published the index quarterly since 2025, making it one of the most detailed public datasets on real-world AI adoption patterns from any major AI developer. The March 2026 "Learning Curves" report is the latest installment.
What does the 10% success rate advantage actually mean in practice?
The 10% figure refers to task completion success rates — the rate at which experienced Claude users (six months or more) complete tasks successfully compared to newer users. In a professional context, this translates to faster and higher-quality output: fewer failed drafts, fewer iterations required to reach usable output, and the ability to take on more complex tasks that newer users struggle to complete with AI assistance. Across a workday and workweek, the cumulative effect is a meaningful and measurable productivity differential.
Is AI actually causing job losses?
Anthropic's data as of March 2026 shows little evidence of broad job displacement — meaning employment levels in AI-exposed occupations have not collapsed. However, the report documents significant task substitution: AI is taking over specific sub-tasks within jobs faster than it is eliminating jobs outright. This means fewer entry-level positions are needed to support the same volume of work, even if total headcount has not yet declined. Economists describe this as "hollowing out" rather than displacement, and it tends to precede more significant employment shifts as the substitution reaches a tipping point.
Why does AI adoption concentrate in high-income countries and knowledge worker hubs?
Several factors drive the concentration. Access to AI tools at meaningful scale requires reliable internet connectivity, the financial capacity to pay for subscriptions or API access, and professional workflows where AI can be integrated into existing tasks. These conditions are more common in high-income countries and in knowledge worker roles within those countries. Additionally, the productivity gains from AI are largest for tasks that involve language, reasoning, and information synthesis — tasks concentrated in professional and knowledge work rather than in manual, physical, or highly contextualized local service work.
What can workers do to avoid falling behind?
The Anthropic data suggests one clear prescription: get actual practice time with AI tools in your real work context, not just in structured training environments. The six-month usage effect that generates the 10% advantage is a doing effect, not a learning-about-it effect. Workers who have spent six months actually using Claude to complete real work tasks perform better than those who have attended training sessions but haven't built the habit of integrating AI into their daily workflows. For workers whose employers don't provide time or access, the calculus is harder — but the data is clear that the gap between those who have the time to practice and those who don't is where the inequality is being generated.