For all the talk about artificial intelligence replacing human workers, there’s an awkward truth hiding behind the hype:
A lot of AI still depends on humans doing the hard part.
Not just engineers. Not just researchers. I’m talking about laid-off writers, marketers, teachers, lawyers, and other skilled professionals who are now being hired as contractors to help train the very systems that are making their old jobs less secure. Recent reporting from New York Magazine and The Verge describes a fast-growing shadow workforce feeding AI models with prompts, rubrics, evaluations, and “gold standard” answers so the machines can learn how to perform professional tasks. Mercor alone says around 30,000 professionals work on its platform each week, and the company was valued at $10 billion in late 2025. OpenAI and Anthropic have been identified as clients.
That should make all of us stop for a minute.
Because the story we keep hearing is that AI is here to automate work. Cleanly. Efficiently. At scale. But what’s actually happening is messier and far more human. Behind the polished demos and confident predictions is a growing gig economy of smart people being paid to teach machines how to imitate judgment, taste, reasoning, and communication.
One worker profiled by New York Magazine, a former content marketer named Katya, put it in terms that are hard to improve on: her job was gone because of ChatGPT, and then she was invited to help train the model. Others described the experience as being asked to dig their own grave. That may sound dramatic, but you can see why they feel that way. Many of these workers have degrees, experience, and real expertise. What they often do not have is stable employment, clear visibility into how their work will be used, or much leverage once they are inside the system.
And this is where the conversation gets more interesting.
I’m not anti-AI. Far from it. I use AI constantly. I see the upside. I believe these tools are going to keep making life and business easier in all kinds of ways. But I also think we do ourselves no favors when we pretend the current AI boom is some clean break from human labor. It isn’t. It’s a transfer. In many cases, expertise is being broken into pieces, priced by the task, and fed back into systems designed to make that expertise cheaper and more scalable.
That’s not magic. That’s labor.
Highly educated labor, often invisible labor, and increasingly precarious labor.
This is part of a broader pattern sometimes called “ghost work,” where humans sit behind supposedly automated systems doing the tagging, sorting, reviewing, correcting, and refining that make the technology appear smarter than it really is. What’s changing now is that the same pattern is moving up the professional ladder. It’s no longer just content moderation and data labeling. It’s white-collar knowledge work. Lawyers. Consultants. Teachers. Journalists. Voice actors. Scientists. People whose judgment used to be considered the valuable part.
That matters because it changes how we should think about “automation.”
When a machine replaces a repetitive task, that’s one thing. When a machine is trained through thousands of hours of fragmented human expertise, then presented to the public as autonomous intelligence, that’s something else. It doesn’t mean the technology is fake. It means the marketing is often misleading. The machine may be fast, scalable, and increasingly useful, but there is still a large human supply chain underneath it.
And the economic timing makes this worse. Handshake reported that as of August 2025, job postings on its platform had declined more than 16% year over year, while applications per job were up 26%. That’s exactly the kind of environment where highly capable people become vulnerable to unstable contract work dressed up as participation in the future.
To be fair, there is another side to this.
Some people will look at these jobs and say: so what? Work changes. Industries evolve. New opportunities appear. If AI companies need experts, and experts need income, then maybe this is simply the market doing what markets do. Mercor itself frames this as a new category of work in which professionals teach machines judgment and nuance, then move on to higher-value work that AI cannot reliably do.
Maybe. But that optimistic version depends on two things being true.
First, the workers would need to share meaningfully in the upside, not just rent out their expertise by the hour until the demand drops.
Second, “higher-value work” would need to appear fast enough and broadly enough to absorb the people whose old roles have already been hollowed out.
That outcome is possible. It just isn’t guaranteed.
What I think we’re really seeing is the commoditization of professional knowledge. Not all of it. Not the best of it. But enough of it to matter. A profession that once offered identity, stability, and long-term value can now be atomized into contract tasks: write the perfect answer, score the output, build the rubric, label the edge case, repeat. The knowledge still matters. The person behind it, less and less.
That’s the part we should be talking about.
Not whether AI is good or bad. That’s too simplistic.
The better question is: who benefits when human expertise becomes training data?
If the answer is mostly venture-backed platforms and model companies, while the people supplying the judgment become disposable, then the AI economy has a real moral problem, not just a technical one.
And if we keep calling that “automation,” we’re missing the story entirely.
The future may absolutely include remarkable AI tools that save time, reduce busywork, and expand what small teams can do. I believe that. But let’s stop pretending the machine rose up on its own. Right now, a lot of the so-called automated economy is still being held together by people who are underpaid, unseen, and teaching the system how to live without them.
