What observed exposure is and why it changes the game
Artificial Intelligence and the job market have sat at the center of public conversation for years now, but the truth is that most of what we hear about automation is still based on theoretical projections. These are models that try to predict what AI could do to certain professional tasks, without necessarily looking at what is already happening in practice. Anthropic, the company behind the Claude assistant, decided to tackle exactly this gap and developed a concept called observed exposure. Instead of estimating hypothetical scenarios, this metric cross-references real AI usage data with the official classification of professional tasks in the United States, offering a much more concrete picture of which professions are already being impacted by automation and how intensely that impact plays out in everyday work life.
The new measure combines information from three distinct sources: the O*NET database, which lists the tasks associated with roughly 800 occupations in the U.S.; proprietary Claude usage data compiled through the Anthropic Economic Index; and the theoretical exposure estimates developed by Eloundou and colleagues in 2023. That last source classifies each professional task on a simple scale — a score of 1 if a language model alone can double the speed of the task, 0.5 if that requires additional tools built on top of the model, and 0 if it is not feasible. Combining these layers of data is what makes it possible to go beyond a purely theoretical exercise and see the real impact of Artificial Intelligence on day-to-day professional life.
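To make the Eloundou-style scale concrete, here is a minimal sketch of how such per-task scores could be rolled up into a single theoretical exposure figure for an occupation. The task names and scores below are invented for illustration; the actual paper works over O*NET's real task statements.

```python
# Hypothetical task scores on the Eloundou et al. scale:
# 1.0 = an LLM alone could double the task's speed,
# 0.5 = feasible only with extra tooling built on the model,
# 0.0 = not feasible.
tasks = {
    "draft client correspondence": 1.0,
    "reconcile spreadsheet records": 0.5,
    "greet visitors in person": 0.0,
}

def theoretical_exposure(task_scores):
    """Average score across an occupation's tasks (illustrative aggregation)."""
    return sum(task_scores.values()) / len(task_scores)

print(round(theoretical_exposure(tasks), 2))  # 0.5
```

The simple average is an assumption here; the point is only that exposure is built up task by task, not assigned to whole occupations at once.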
The real brilliance of this approach is that it starts from reality rather than assumptions. When researchers analyze real conversations between users and Claude, they can identify usage patterns that reveal which professional activities are already being delegated, supplemented, or transformed by Artificial Intelligence. This makes it possible to build a map of automated occupations that does not rely on guesswork or exaggerated projections. The result is a much more grounded portrait that shows both the areas already feeling the effects of automation and those that remain practically untouched by the technology. It is exactly the kind of evidence that has been missing from the public debate and that helps separate unfounded alarmism from what genuinely deserves attention from workers, companies, and governments.
Another important point is that observed exposure does not just measure whether a profession is being affected — it also measures the degree of that exposure. The calculation takes into account several qualitative factors that the researchers consider predictive of real-world impact on jobs. An occupation’s exposure is higher when:
- Its tasks are theoretically feasible with AI
- Its tasks show significant usage in the Anthropic Economic Index data
- The interactions happen in work-related contexts
- There is a greater proportion of automated use or API-based implementation, rather than use that only supplements human work
- The AI-impacted tasks represent a meaningful share of the overall professional role
This level of granularity makes all the difference when trying to understand the real landscape of the job market, because it avoids dangerous generalizations like saying an entire professional category is under threat when, in practice, only a specific slice of its tasks is actually being automated. Fully automated implementations receive full weight in the metric, while augmentation use — where the professional uses AI as an assistant but stays in control — receives half the weight. It is a subtle distinction, but one that completely changes how the data reads.
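The automation-versus-augmentation weighting described above can be sketched in a few lines. The usage records below are made up, and real observed exposure also folds in work-context and frequency signals; this only shows how the full-weight/half-weight rule changes the number.

```python
# Hypothetical usage records: "automation" = AI completes the task,
# "augmentation" = AI assists while the professional stays in control.
usage = [
    {"task": "write unit tests", "mode": "automation"},
    {"task": "debug a function", "mode": "augmentation"},
    {"task": "refactor a module", "mode": "augmentation"},
]

# Full weight for automated use, half weight for assistive use.
WEIGHTS = {"automation": 1.0, "augmentation": 0.5}

def weighted_usage(records):
    """Average weight across observed interactions (illustrative only)."""
    return sum(WEIGHTS[r["mode"]] for r in records) / len(records)

print(round(weighted_usage(usage), 2))  # 0.67
```

Counting every interaction equally would score this sample at 1.0; the half-weight on augmentation is exactly what keeps assisted work from reading as full replacement.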
Who is most exposed — and the surprise in the data
One of the most counterintuitive findings from Anthropic’s study is the profile of the workers who show up most on the observed exposure radar. Contrary to what many people assume, it is not the least qualified or lowest-paid professionals leading the list of automated occupations. The data, drawn from the Current Population Survey in the three months before the launch of ChatGPT (August to October 2022), shows that the workers most exposed to Artificial Intelligence are actually the most educated, highest-earning, and predominantly female. The most exposed group is 16 percentage points more likely to be female, 11 percentage points more likely to be white, and nearly twice as likely to be Asian, compared to the group with no exposure. In terms of pay, they earn an average of 47% more. People with graduate degrees make up 17.4% of the most exposed group, versus just 4.5% of the unexposed group — a nearly fourfold difference.
This completely challenges the popular narrative that automation will first replace manual, repetitive work. What is actually happening is that generative AI has a natural affinity for cognitive tasks like writing, data analysis, programming, specialized customer support, and content production — activities that historically require higher education and are concentrated in higher salary brackets.
The study presents a ranking of the ten occupations with the highest observed exposure, and the results are telling. At the top of the list are Computer Programmers, with an impressive 75% task coverage — which makes perfect sense given the extensive use of Claude for coding. Right behind them are Customer Service Representatives, whose core tasks are increasingly showing up in API traffic. And in third place are Data Entry Keyers, whose primary task of reading source documents and entering information shows significant automation, with 67% coverage.
On the other end of the spectrum, 30% of American workers have zero exposure because their tasks appeared with insufficient frequency in the data to reach the minimum detection threshold. This group includes professions like cooks, motorcycle mechanics, lifeguards, bartenders, dishwashers, and locker room attendants — occupations that involve physical work and in-person interaction, territory where generative AI simply does not operate.
The gender dimension also stands out. The higher concentration of women among the most exposed professionals reflects the demographic makeup of fields like education, communications, human resources, and healthcare administration — sectors where cognitive and text-based tasks dominate everyday work. This finding matters because public policies around professional reskilling and adaptation to the new job market landscape need to account for these demographic nuances to actually be effective. Ignoring who the most impacted people are makes any action plan too generic to work in practice.
Theoretical automation versus real automation — the gap few people see
Perhaps the single most important finding from the entire study is the enormous distance between the theoretical potential for automation and what is actually happening. According to Anthropic’s analysis, current Artificial Intelligence already has the technical capability to impact a significant share of professional tasks, but real-world usage still represents a small fraction of that potential. To put it in concrete terms, 97% of the tasks observed across the four previous Economic Index reports fall into categories classified as theoretically feasible by Eloundou and colleagues. In other words, the correlation between what is possible and what is being used is high, but the actual volume of real usage still lags far behind what theory would allow.
The Computing and Mathematics category illustrates this point well. The theoretical measure indicates that 94% of tasks in this field could be accelerated by language models. But observed exposure shows that Claude currently covers only 33% of those tasks. The same logic applies to administrative occupations, where theoretical capability sits at 90% but practical coverage is considerably lower. This gap exists for a range of practical reasons: not all companies have adopted AI tools, many professionals are still learning how to use these technologies, there are regulatory and cultural barriers, and some tasks — even if technically automatable — depend on human context, ethical judgment, or social interaction that AI still cannot replicate satisfactorily.
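The size of that gap is easy to put in perspective with the figures quoted above for Computing and Mathematics:

```python
# Figures quoted in the text for Computing and Mathematics.
theoretical = 0.94  # share of tasks an LLM could accelerate (Eloundou et al.)
observed = 0.33     # share of tasks Claude is actually seen covering

realized = observed / theoretical  # fraction of the potential in actual use
print(f"{realized:.0%} of the theoretical potential is realized")
```

Roughly a third of what is technically possible is showing up in practice, even in the field where adoption is strongest.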
The original paper includes a very illustrative example of this distance. The theoretical classification by Eloundou and colleagues marks the task of authorizing medication refills and providing prescription information to pharmacies as fully exposed, with the maximum score. However, the Anthropic researchers simply did not observe Claude performing this task in practice. The theoretical assessment even seems correct — a language model probably could speed up that process — but regulatory barriers, human verification requirements, and practical implementation limitations keep that possibility squarely in the realm of theory for now.
This gap between theory and practice is good news for anyone fearing a scenario of mass unemployment in the short term. And it is worth noting that the study itself acknowledges a track record of exaggerated predictions in this space. The researchers point out that a prominent attempt to measure job vulnerability to international outsourcing identified about a quarter of American jobs as vulnerable, but a decade later most of those occupations had maintained healthy employment growth. Even official government projections of occupational growth, while directionally correct, added little predictive power beyond simple linear extrapolation of past trends. And the effects of industrial robots on employment generate opposing conclusions across different studies. This dose of humility runs through the entire paper and is one of its strongest qualities.
U.S. market projections confirm the trend
One interesting data point that reinforces the validity of the observed exposure metric is its correlation with the official employment projections from the Bureau of Labor Statistics (BLS) for the 2024 to 2034 period. The study found that for every 10 percentage point increase in observed coverage, the BLS employment growth projection drops by 0.6 percentage points. The relationship is modest but statistically meaningful, and it works as a kind of independent validation — labor market analysts who have no access to Claude usage data arrived at conclusions pointing in the same direction. Interestingly, this correlation does not appear when using only the theoretical measure from Eloundou and colleagues, which suggests that observed exposure captures something that theory alone cannot.
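To see what that slope means in practice, here is a back-of-the-envelope application of the reported relationship. The 30-point coverage gap below is a hypothetical comparison between two occupations, not a value from the study.

```python
# Reported relationship: every 10-percentage-point rise in observed
# coverage corresponds to a 0.6-point drop in projected employment growth.
SLOPE = -0.6 / 10  # points of projected growth per percentage point of coverage

def projected_growth_shift(coverage_pp_change):
    """Expected change in the BLS growth projection for a coverage change."""
    return SLOPE * coverage_pp_change

# A hypothetical 30-point coverage gap between two occupations implies
# roughly a 1.8-point difference in projected growth, all else equal.
print(round(projected_growth_shift(30), 1))  # -1.8
```

The effect is real but small, which matches the article's framing: coverage shifts projections at the margin rather than rewriting them.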
There is also a vast territory that remains completely out of AI’s reach. Many tasks, such as the physical agricultural work of pruning trees and operating machinery, or legal tasks like representing clients in court, continue to be exclusively human domain. This reminder is important for keeping expectations calibrated and avoiding the kind of alarmism that dominated previous technology debates.
What the data says about unemployment and the future of work
And now we come to the question everyone wants answered: is AI already causing unemployment? The short answer, at least so far, is not in any detectable way. The researchers chose to focus on unemployment as their primary indicator because it most directly captures the potential for economic harm — an unemployed worker is someone who wants to work and has not yet found a position. Comparing workers in the most exposed quartile with those who have no exposure at all, the study found no systematic increase in unemployment among the most exposed groups since late 2022.
The original paper details that the analysis used a difference-in-differences model, comparing unemployment trends before and after the launch of ChatGPT between the most and least exposed groups. The average change in the gap between the two groups since November 2022 was small and statistically insignificant. In other words, unemployment in the most exposed group may have risen slightly, but the effect is indistinguishable from zero in the data.
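A difference-in-differences comparison of this kind can be sketched in a few lines. All the unemployment rates below are invented for illustration; the point is the structure of the comparison, not the numbers.

```python
# Toy difference-in-differences: compare the before/after change in
# unemployment for a high-exposure group against a zero-exposure group.
# "Before" and "after" bracket ChatGPT's launch; rates are hypothetical.
rates = {
    "high_exposure": {"before": 3.1, "after": 3.3},
    "no_exposure":   {"before": 4.0, "after": 4.1},
}

def diff_in_diff(data, treated, control):
    """(treated after - before) minus (control after - before)."""
    treated_change = data[treated]["after"] - data[treated]["before"]
    control_change = data[control]["after"] - data[control]["before"]
    return treated_change - control_change

# A value close to zero is consistent with "indistinguishable from zero".
print(round(diff_in_diff(rates, "high_exposure", "no_exposure"), 2))  # 0.1
```

Subtracting the control group's change strips out economy-wide shifts that hit both groups, which is what lets the study attribute any remaining gap to exposure rather than to the broader business cycle.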
However, the study does find suggestive evidence that hiring of younger workers may have slowed in occupations with high exposure. This is a subtle but potentially significant signal. If companies are using AI to handle tasks that would previously have been assigned to early-career professionals, the impact may not show up as layoffs but rather as a reduction in the flow of new hires. It is the kind of effect that takes time to materialize in traditional unemployment statistics and deserves close monitoring in the years ahead.
Looking at the full picture revealed by the study, what emerges is a scene of transition — not disruption. The job market is indeed being reshaped by Artificial Intelligence, but in a way that still allows adaptation by professionals and organizations. The professions with the highest observed exposure are those whose workers are already, for the most part, incorporating AI into their routines as a productivity tool. This suggests that the ability to work with AI, rather than against it, could become a decisive competitive advantage in the coming years. Professionals who learn to use these tools to expand their capabilities tend to become more valuable in the market, while those who resist the change may face difficulties down the road — not because of the technology itself, but because of a lack of adaptation to an environment that is already in motion.
The importance of measuring before reacting
One methodological aspect that deserves a spotlight is the researchers’ deliberate decision to establish this measurement system now, before significant effects have materialized. The idea is that by building this baseline with concrete data, future analyses will be able to identify economic disruptions more reliably than studies conducted after the fact. It is a rare stance in a field dominated by grand predictions, and it shows real analytical maturity. The commitment to revisiting these analyses periodically transforms the study from a static snapshot into a tool for continuous monitoring.
The researchers also acknowledge that their approach does not capture every channel through which AI might reshape the job market. There are indirect effects, shifts in value chains, creation of new occupations, and organizational transformations that do not show up in individual task-level analysis. But the choice to focus on what is measurable and verifiable, rather than trying to encompass the entire phenomenon, is precisely what gives the results their credibility.
The most valuable contribution of this Anthropic research might be exactly the shift in perspective it proposes. Instead of fueling the debate with grand predictions about a distant future, observed exposure brings the conversation into the present and uses concrete evidence to show where we actually stand. And where we stand is at a point where Artificial Intelligence is already part of the daily routine of many professions, but still far from representing an existential threat to employment as we know it. The real challenge is not whether AI will transform the job market — that is already happening. The challenge is making sure this transformation is tracked closely, with real data and proportional responses, so that as many people as possible can navigate this transition without being left behind. 🚀
