Displaced

Investigation

They Spent Decades Building Expertise. Now They Use It to Train Their Replacements.

Published April 8, 2026  ·  10 min read

There is a job that did not exist ten years ago, that most people have never heard of, and that is quietly becoming one of the most common forms of work for highly educated professionals in their fifties and sixties who cannot find employment in their own fields.

It is called data annotation. And its defining feature — the thing that makes it both necessary and quietly devastating — is that it works best when the person doing it has spent decades becoming an expert in the exact profession the AI is being trained to replace.

A doctor reviews how an AI model responds to a medical question and flags what it got wrong. A lawyer evaluates whether an AI-generated contract clause is legally sound. An engineer reads through AI-produced code and identifies the subtle error three levels deep. They correct the model. They improve its responses. They teach it to do their job better.

And then they go home to an apartment they can afford on $20 an hour and wonder what comes next.

This is not a small phenomenon. Across the United States and increasingly in Europe, a quiet workforce has emerged — hundreds of thousands of skilled professionals, many of them older workers who found themselves locked out of their fields through a combination of AI disruption, age discrimination, and a job market that has grown steadily less hospitable to anyone over fifty — who are now doing the invisible labor of making AI smarter.

The Guardian reported this week on five of them in detail. Their stories are different in specifics but identical in structure. A man with a master’s degree in information management who spent decades designing software systems, living in motels and then a car after losing work, now earns $20 an hour reviewing Meta’s AI responses. An emergency medicine physician who earned between $300,000 and $500,000 a year, unable to return to clinical work after illness and a gap in practice, now takes gig-style AI training assignments that pay between $30 and $140 an hour but appear and disappear without warning, sometimes vanishing entirely for weeks. A PhD in public policy who spent eighteen years in higher education, earning six figures, now makes $25 an hour training Meta’s models — on a contract with no job security, and without any realistic prospect of returning to the work she spent her career building.

“It’s just devastating and demoralising to think of all the time I spent on my career and the sacrifices I made to earn my graduate degrees. Look where I’m at now.”

The term used in economics for what these workers are doing is a “bridge job” — lower-paying, less demanding work that helps people stay financially afloat as they approach retirement. Historically, bridge jobs meant temp work, retail, food delivery. The new bridge job, for skilled professionals, is teaching AI to do what they used to do.

The reason these workers are valuable to AI training companies is precisely their expertise. That expertise, built over decades and no longer employable at its full value in the market, is being used to improve the systems that made it unemployable in the first place.

A doctor can tell when a medical AI is generating a dangerous answer. A lawyer can spot when a contract clause creates unacceptable risk. That knowledge, irreplaceable in the training phase, is worth $20 to $40 an hour to the companies acquiring it — and nothing like what it was worth in the labor market where it was built.

According to AARP research published in early 2026, 64% of workers aged 50 and older report experiencing or witnessing age discrimination in the workplace. Workers over 60 take approximately 50% longer to find new jobs than people in their twenties and thirties — and only a fraction ever regain their previous earning levels. A 2025 analysis by Glassdoor found that complaints about ageism during the hiring process jumped 133% in the first quarter of that year compared to the year before. The job market had already been hostile to older workers before AI accelerated the displacement. Now it has become genuinely brutal.

What makes this moment different from previous waves of technological disruption is that two things are happening at once.

AI is disproportionately affecting the kinds of knowledge work — analysis, drafting, research, communication — that older, more experienced workers have spent their careers developing. Anthropic’s research published in March 2026 found that the most AI-exposed occupations are concentrated in exactly these areas: computer programming, financial analysis, customer service, document processing, market research. The workers most likely to be displaced are, as the paper notes, older, female, more educated and higher paid. Not the workers anyone assumed would be first.

At the same time, the AI systems doing the displacing require human expertise to improve. They need people who know what good looks like — who can identify when a medical response is subtly wrong, when a legal analysis misses a jurisdiction-specific nuance, when code that appears to work will fail under specific conditions. The people best placed to provide that expertise are, almost by definition, the same people whose careers are being disrupted.

The result is a workforce that is both the source of the knowledge powering AI and one of its primary casualties. They are not passive victims. They are doing real intellectual work, making real contributions to the systems that will shape the next decade. They are also, in most cases, doing it for a fraction of their previous earnings, without benefits, on contracts that can disappear overnight.

The instability of this work is worth dwelling on, because it tends to get obscured by the framing of AI training as opportunity rather than fallback.

Data annotation work is contract-based, gig-economy work. Pay starts at around $20 an hour for most roles and rises to $100 or more for highly specialized expertise — but the higher rates are not guaranteed, and the work is not stable. Assignments appear on platforms and are claimed by whoever logs in first. Some weeks there is consistent work. Other weeks there is nothing. There are no benefits, no paid leave, no pension contributions, no job security of any kind. Companies can terminate contracts at any time, and they do — the emergency physician the Guardian interviewed was laid off from her first AI training role during a mass cutback in early 2025, after a year of stable work, with no warning.

This is not a criticism of the companies offering the work. It is a description of the structure of a new labor market that is absorbing skilled professionals who have nowhere else to go and offering them terms that would have been unacceptable to them a decade ago. The fact that they accept those terms is a measure of how few options they have, not of how good the options are.

I want to be careful not to flatten these workers into victims, because most of them are not describing their situation that way — or not only that way.

The doctor the Guardian spoke to said the transition to AI training was a “phenomenal” shift that let her combine medical expertise with analytical work. She believes doctors should engage with AI rather than resist it, that physicians training these models are helping steer them toward more accurate and responsible responses. The IT professional making $20 an hour training Meta’s models is simultaneously building a coaching practice for neurodivergent clients and developing an online job search course. He says, with apparent conviction, that he is “betting on himself.”

The will to find meaning in a difficult situation is not the same as the situation being acceptable. Both things can be true. A person can be resourceful and resilient and creative in the face of a genuinely unfair set of circumstances, and the circumstances can still be genuinely unfair.

What connects every one of the workers in the Guardian’s report is not despair. It is the gap. The gap between what they built, what they know, what they are capable of — and what the market is currently willing to pay for it. A 60-year-old with a master’s degree reviewing AI responses for $20 an hour is not where the implicit social contract of education and expertise was supposed to lead.

The pattern here connects to something I have noticed across every story on this site.

The people most affected by AI disruption are not the ones who appear in the dramatic headlines — the mass layoffs, the announcements, the press releases. They are the ones accumulating quietly in a new kind of work that didn’t exist before, doing jobs that pay less than they used to make, that offer none of the stability they used to have, and that require the very expertise the market has decided it no longer needs at full price.

My sister lost her translation work gradually, not all at once. The lawyer I wrote about — myself — still has a full pipeline, but watches the floor approaching. The junior associates described in The Last Junior are not fired; they are simply not hired. And now, an entire generation of skilled professionals in their fifties and sixties is quietly becoming the invisible labor force behind the AI systems that displaced them, earning gig wages for the expertise it took them decades to build.

None of these people appear in the unemployment statistics. None of them make the front page. They exist in a grey zone between employment and precarity that the official numbers have no good way of measuring. They are working. They are just not working in the way their expertise was supposed to allow — and they are working, in many cases, to make that gap permanent for everyone who comes after them.
The Guardian piece ends with the IT professional in his car, then his motel, then his apartment, now earning $20 an hour reviewing AI responses and betting on himself. He is resourceful. He is adapting. He has not given up. All of that is true and worth saying.

And the system that produced that outcome — the system in which a man with decades of expertise ends up training the model that replaced him for wages that would have been insulting to him at thirty — that system is not something anyone designed deliberately. It emerged from thousands of individual decisions, each rational in isolation, adding up to something that nobody chose and that almost nobody is describing honestly.

That is what this site exists to do.

Sources

The Guardian — AI training work and older workers (April 7, 2026)
Massenkoff & McCrory — Labor market impacts of AI (Anthropic, March 2026)
AARP — Age discrimination in the workplace survey (2026)
Inc. / Glassdoor — AI may worsen ageism in hiring (2025)

Related

Oracle Just Fired 10,000 People for AI. One of Them Described the Email as “Thank You. Go F*** Yourself.” →
You Were Not Replaced by AI. You Were Replaced by a CEO Who Wanted a Headline. →
The Last Junior →
Even the People Building AI Are Being Replaced By It →
The Most At-Risk Jobs Right Now →