Essay

Even the People Building AI Are Being Replaced By It

Published March 30, 2026  ·  8 min read

A few days ago I came across a Reddit thread that I haven’t been able to stop thinking about.

Someone had posted a quote from Dario Amodei, the CEO of Anthropic — the company that makes Claude, the AI that now writes a significant portion of my own legal work. The quote went something like this:

“I have engineers within Anthropic who don’t write any code. They just let Claude write the code and they edit it and look it over. At Anthropic, writing code means designing the next version of Claude itself — so we essentially have Claude designing the next version of Claude.”

The thread had hundreds of replies. Some people said this was proof that coding was dead. Others said it was marketing spin from a CEO trying to make his product sound indispensable. A few engineers pushed back and said that reviewing code is writing code, and that Amodei was being deliberately misleading.

They were all a little bit right. And none of them were asking what I think is the more important question.

What’s actually happening at Anthropic

Let me start with what’s actually true.

The engineers at Anthropic are not sitting in a room watching a chatbot generate software while they eat lunch. What Amodei is describing is something more specific and more interesting than that. The people building Claude are using Claude to build Claude. They define what needs to be built. They review what comes back. They catch what’s wrong and redirect. They make the judgment calls about what matters.

One senior developer in the thread described it well. He said it felt like being an engineering manager, except instead of managing a team of humans he was managing a team of code-generating agents. The architecture decisions, the systems thinking, the knowing-what-to-reject — that was still entirely human. The typing was not.

And that distinction matters enormously. Because it means the job didn’t disappear. It transformed.

But here’s the part that the optimists in that thread kept glossing over.

That developer also said something else. He said there are no junior engineer spots anymore. The entry-level work — the implementation, the boilerplate, the first drafts — that’s gone. Everyone is operating like a senior engineer or a manager now. And senior engineers and managers are a much smaller group than the full engineering workforce used to be.

Another commenter put it plainly: where you needed 100 programmers, you now need one.

The same story, different field

I’ve been thinking about this in relation to my own field.

I wrote recently about being a corporate lawyer who is slowly replacing himself. The parallel is almost uncomfortable. What Amodei is describing in engineering is exactly what I experience every day in legal work. I don’t draft from scratch anymore. I direct, I review, I catch what’s wrong, I make the calls that require genuine judgment. The AI does the rest.

And for now, that feels like an upgrade. I have my evenings back. I’m doing better work in less time. The unbillable hours that used to eat my weekends have largely disappeared.

But the Reddit thread surfaced something that I think deserves more honest attention than it usually gets.

One commenter — a developer with decades of experience — asked a question that nobody answered satisfactorily. He said: the engineers reviewing AI-generated code today learned to do that by writing code for years. How do you train the next generation of reviewers when there’s nobody writing code to learn from?

It’s the same question in law. The junior associates who used to spend years doing document review and first drafts — that work is evaporating. Those years were how people learned. How do you become a good senior lawyer if you skip the part where you do the slow, tedious, foundational work that teaches you what to look for?

This is not a small problem. It’s a structural one. And almost nobody is talking about it seriously.

What the data actually shows

The Anthropic research paper that this site is built around — Labor Market Impacts of AI, published in March 2026 by researchers Maxim Massenkoff and Peter McCrory — found exactly this pattern in the data. Hiring of workers aged 22 to 25 into AI-exposed occupations has dropped by around 14% since ChatGPT launched. The jobs at the top of the ladder are holding. The rungs at the bottom are being quietly removed.

A Harvard Law School study found that in high-volume litigation, AI cut the time spent on a standard complaint response from 16 hours to three or four minutes. The senior lawyers doing that work are still employed. The junior associates who would have spent those 16 hours learning are not being hired to replace them.

And a report from Challenger, Gray & Christmas found that over 55,000 job cuts in the United States in 2025 were directly attributed to AI. That number only counts the companies that publicly said AI was the reason. The actual number is almost certainly larger.

The honest view from inside the transition

What makes the Reddit thread so interesting is the honesty of the people actually living inside this transition.

One engineer described his current day as solo work that would previously have required a full team. Refactors that four years ago nobody would have even considered attempting. Features shipped in days that used to take months. He said some days he throws away 80 or 90 percent of what the AI generates because it’s stuck on an outdated pattern. Other days he flies.

That variability matters. Because the public conversation about AI and jobs tends to flatten everything into a binary. Either AI is replacing everyone or it isn’t. Either you’re safe or you’re not. The reality that engineers are actually describing is messier and more interesting than that. The work is faster, harder to evaluate, more uneven in quality, and more dependent on having a deep enough foundation to know when something is wrong.

Which brings me back to Amodei’s quote.

The engineers at Anthropic reviewing Claude’s code are, without exception, among the most technically sophisticated people in the world. Anthropic’s hiring process is notoriously brutal. These are not average engineers who learned to code in a bootcamp and got lucky. They are people with decades of systems thinking who can look at a thousand lines of generated code and immediately see the subtle flaw three layers deep that will cause a problem six months from now.

The question isn’t whether those people can work effectively with AI. Obviously they can. The question is what happens to the pipeline that produces people like that — if the formative years of that training disappear because the entry-level work has been automated away.

The scaffolding problem

I don’t think the answer is to resist the tools. That ship has sailed.

But I do think the honest answer to “is your job safe” is more complicated than most people want to hear. For experienced people with deep domain knowledge, the tools are mostly a gift. More output, less grind, better work-life balance, higher quality at the top end.

For people just starting out, or for people in the middle of their career without a strong foundation, the picture is much harder. The scaffolding that used to exist — the entry-level work, the junior roles, the slow accumulation of judgment through doing — that scaffolding is being quietly dismantled. And nobody has built anything to replace it yet.

The engineers at Anthropic who don’t write code anymore spent years writing code before they got there. That’s not incidental. That’s the whole story.

I built this site because my sister lost her translation career without seeing it coming. I wrote about my own legal work because I think honesty matters more than reassurance. And I’m writing about this Reddit thread because I think the most important conversations about AI and work are not happening in think tanks or on earnings calls.

They’re happening in comment sections, at 11pm, by people trying to figure out whether the career they’re building still has a foundation under it.

It might. But you deserve an honest answer, not a comfortable one.

Check your own risk

Don’t wait to find out the hard way.

Enter your occupation and see your AI automation risk score, based on data from the Oxford Martin School and the Anthropic Economic Index, covering 758 occupations.

Check my job risk →

Sources

Massenkoff & McCrory — Labor Market Impacts of AI (Anthropic, 2026)
Harvard Law School CLP — The Impact of Artificial Intelligence on Law & Law Firms
Challenger, Gray & Christmas — Job Cut Reports 2025

Related

I Am a Lawyer. And I Am Slowly Replacing Myself. →
They Didn’t Fire Anyone. They Just Stopped Hiring. →
My Sister Was a Translator for 15 Years. AI Took Her Work in Months. →
The Most At-Risk Jobs Right Now →