AI & the Labor Market
Notes from Massenkoff & McCrory — Anthropic Research, March 2026
Anthropic’s latest research shifts the frame from eventual displacement to current impact: what AI is doing to jobs right now. The answer sits in a 61-point gap between what AI could theoretically automate and what it actually automates in practice — a metric that reveals more about the next five years of labor markets than any prediction about artificial general intelligence.
Everyone Has Opinions, Almost No One Has Data
Everyone has opinions about AI taking jobs, but almost no one has data to back them up. Massenkoff and McCrory at Anthropic introduce "observed exposure," a metric that fuses theoretical capability with actual Claude usage data to create a clearer picture. Previous attempts relied solely on theoretical capability, such as the Eloundou et al. 2023 framework, which rates tasks as fully automatable, partially automatable, or infeasible. That approach tells you what is possible; it says little about what is actually happening inside organizations today.
Building the Metric
The innovation is to layer Claude usage data on top of those theoretical task scores, counting fully automated workflows at full weight and augmentative assistance at half weight. Three distinct data streams converge to form the metric: the O*NET database covering over 800 occupations, theoretical task-level exposure scores from the earlier Eloundou framework, and the Anthropic Economic Index tracking real-world Claude usage. Aggregating the task scores to the occupation level with time-allocation weights yields a tool that measures what AI is actually doing to jobs, not just what it could do.
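The weighting scheme can be sketched in a few lines. This is an illustrative reconstruction, not the paper's actual pipeline: the task data, field names, and numbers below are made up, while the 1.0/0.5 automation/augmentation weights and the time-allocation aggregation come from the description above.

```python
# Sketch of the observed-exposure construction. Illustrative data only;
# the paper's real inputs are O*NET time allocations and Anthropic
# Economic Index usage shares.
#
# Each occupation is a list of tasks with:
#   time_share   - share of work time spent on the task (O*NET-style)
#   automation   - share of usage where the model does the task autonomously
#   augmentation - share of usage where the model assists a human
tasks = {
    "computer_programmer": [
        {"time_share": 0.5, "automation": 0.6, "augmentation": 0.3},
        {"time_share": 0.3, "automation": 0.2, "augmentation": 0.5},
        {"time_share": 0.2, "automation": 0.0, "augmentation": 0.1},
    ],
}

def observed_exposure(task_list):
    """Automated usage counts fully, augmentative usage at half weight;
    task scores aggregate to the occupation with time-allocation weights."""
    total = 0.0
    for t in task_list:
        task_score = t["automation"] + 0.5 * t["augmentation"]
        total += t["time_share"] * task_score
    return total

for occupation, task_list in tasks.items():
    print(occupation, round(observed_exposure(task_list), 3))
```

The same structure supports the theoretical metric: replace the usage shares with the Eloundou-style feasibility ratings and the gap between the two numbers falls out directly.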
The Gap Between Theory and Practice
The gap between theory and practice is enormous, particularly in fields we assume are being revolutionized immediately. In Computer and Math occupations, theoretical feasibility sits at 94% while observed coverage lags at 33%, creating that massive 61-point chasm. Office and Admin roles show a similar divergence with 90% theoretical feasibility against roughly 25% actual adoption. The discrepancy reflects organizational inertia, cultural resistance, and integration costs — the messy reality of deploying tools within established institutions. Every change management playbook and every Davenport-style "quick win" strategy exists precisely because of this gap.
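The gap itself is simple arithmetic over the two metrics. Using the figures cited above (the Office and Admin observed number is the rough 25% from the text):

```python
# Theoretical feasibility vs. observed coverage, in percent, for the two
# occupation groups discussed above. Figures taken from the text.
exposure = {
    "computer_math": {"theoretical": 94, "observed": 33},
    "office_admin":  {"theoretical": 90, "observed": 25},
}

# The adoption gap: what AI could do minus what it is observed doing.
gaps = {occ: v["theoretical"] - v["observed"] for occ, v in exposure.items()}

for occ, gap in sorted(gaps.items(), key=lambda kv: -kv[1]):
    print(f"{occ}: {gap}-point gap")
```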
Who Gets Exposed First
When we look at the most exposed individual occupations by observed coverage, the pattern becomes specific and stark. Computer Programmers lead with 75% coverage as code generation, debugging, and documentation move from theory to daily practice. Customer service representatives follow closely, with high coverage driven by chatbots, automated responses, and ticket routing, the quick wins already deployed at scale. Data entry keyers hit 67% exposure because structured input and output are exactly where AI excels. Meanwhile, roughly 30% of workers have zero coverage, including cooks, mechanics, lifeguards, and bartenders, a reminder that work built on physical presence and nuanced interpersonal judgment remains, for now, out of reach.
The demographic profile of these highly exposed workers runs directly against popular assumptions about who will be displaced. Top-quartile exposed workers earn 47% more than unexposed workers and are 16 percentage points more likely to be female. They are 11 percentage points more likely to be white, have nearly double the Asian representation, and hold graduate degrees at 17.4% compared to just 4.5% for unexposed workers. They are also older and more educated. The stereotype says automation comes for low-wage routine work first; the data shows the opposite: the people earning the most from knowledge work face the highest exposure to AI doing that work.
The Young Worker Signal
The sharpest finding in the paper is the young worker signal, an early warning for the industry. Workers aged 22 to 25 in highly exposed fields saw approximately a 14% decline in job-finding rates during the post-ChatGPT period, while workers over 25 showed no change. Low-exposure fields also saw no change, making the effect specific to young people trying to enter AI-exposed occupations. Multiple explanations exist, from young workers staying in current positions to measurement error, but the pattern is clear and specific. If AI handles the tasks that junior employees used to do, such as data entry, first-draft writing, and basic code, the entry-level rung of the career ladder is being removed. This is not happening in some theoretical future. It is happening now.
No Unemployment — Yet
The paper's difference-in-differences analysis finds no systematic unemployment increase among highly exposed workers since late 2022, which supports two distinct interpretations. The optimistic read is that AI augments workers rather than replacing them, and that this is the offshoring panic of the 2000s all over again: massive predicted job loss that never materialized. The realistic read is that it has been barely three years since ChatGPT and organizational adoption remains slow, as the 94%-versus-33% gap shows. The offshoring comparison is also flawed: offshoring required building global supply chains and navigating complex regulations, whereas AI requires only a subscription and a browser. The friction to adoption is orders of magnitude lower, so when the gap between theoretical and observed exposure closes, the effects will not be gradual.
The Bureau of Labor Statistics independently projects weaker employment growth through 2034 for occupations with higher observed exposure, a second signal pointing the same way. The measurement baseline matters, too: the Eloundou framework was calibrated to early-2023 capabilities, and models have improved dramatically since then, so tasks that were infeasible in 2023 are becoming feasible now. The theoretical ceiling is rising even as the observed floor catches up, which accelerates the potential for disruption.
Connecting the Threads
These threads connect everything I have been reading about the state of the industry. The 94% versus 33% gap is Davenport's adoption gap measured at labor-market scale: the journey from Center of Excellence to quick win to scale explains exactly why theoretical exposure has not yet become actual displacement. The creator economy entry celebrates AI amplifying individual builders, but this paper adds the shadow side: if entry-level coding jobs dry up, no one learns the craft from the bottom up. Disruption theory needs updating, too. Christensen described disruption from below, whereas AI disrupts from everywhere simultaneously, automating low-end tasks like data entry and high-end tasks like programming at the same time.
The Gap Is the Opportunity
The same gap shows up everywhere: the distance between what technology can do and what organizations actually do with it is where value gets created. In the Davenport framework, the gap is a business opportunity for consultants and implementers. In labor markets, the gap is shrinking, creating a real-time race between institutional change and individual adaptation. Organizations that move through the adoption cycle faster will capture value, and workers in those organizations will face displacement sooner.
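Before turning to what this means for builders, the difference-in-differences logic behind the paper's null unemployment result is worth making concrete. The sketch below uses made-up job-finding rates, not the paper's data; the point is only the shape of the comparison.

```python
# Minimal difference-in-differences sketch with invented numbers.
# The estimate is the change over time in the exposed group minus the
# change over time in the unexposed group, which nets out shocks that
# hit both groups (recessions, seasonal hiring, and so on).
rates = {
    # (group, period): monthly job-finding rate (illustrative)
    ("exposed_22_25",   "pre_chatgpt"):  0.30,
    ("exposed_22_25",   "post_chatgpt"): 0.26,
    ("unexposed_22_25", "pre_chatgpt"):  0.31,
    ("unexposed_22_25", "post_chatgpt"): 0.31,
}

def did(rates, treated, control):
    """Difference-in-differences estimate of the treatment effect."""
    d_treated = rates[(treated, "post_chatgpt")] - rates[(treated, "pre_chatgpt")]
    d_control = rates[(control, "post_chatgpt")] - rates[(control, "pre_chatgpt")]
    return d_treated - d_control

effect = did(rates, "exposed_22_25", "unexposed_22_25")
print(f"DiD estimate: {effect:+.2f}")  # negative means a relative decline for the exposed group
```

Run across age bands, the same comparison produces the paper's pattern: a negative estimate for exposed 22-to-25-year-olds and roughly zero everywhere else.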
For builders of AI tools, this paper reframes the entire market landscape. We are in the first inning, and the fact that 33% actual adoption sits against 94% theoretical capability means most of the value creation is still ahead. The organizations that haven't adopted yet represent the biggest market opportunity. As AI handles more tasks, the question shifts from "can AI do this?" to "did AI do this correctly?" That is the Nyantrace question. Agent observability is the trust layer for an AI-powered workforce, ensuring that as automation scales, we can verify the output. The junior pipeline problem is itself a product opportunity because if AI removes entry-level tasks, someone needs to build the new training pipeline through AI-assisted mentorship, simulated work environments, and progressive complexity tools. Capability outruns adoption, and the gap creates opportunities and casualties. The winners are the ones who see the gap clearly and move through it deliberately.
Observability and governance for AI agent systems. If you're building with agents, I'd like to talk.
nyantrace.ai →