In 1811, a group of English textile workers began smashing the machines that were replacing them. They called themselves Luddites, and history has not been kind to their reputation. We use the word now as shorthand for irrational resistance to progress.
But here’s what most people forget: the Luddites weren’t wrong about what was happening to them. The power looms did eliminate their jobs. Their wages did collapse. Their children did end up in factories. The Luddites were wrong about the timeline — not the pain. The economy eventually absorbed the disruption and generated new kinds of work. But “eventually” meant decades of genuine suffering, and the people who bore the cost were not the people who reaped the reward.

Every prior technology revolution automated human energy. The steam engine replaced your arms. The assembly line replaced your legs. The tractor replaced your back. You could watch those machines work and think: that’s not me. That’s just my labor.
AI is different. For the first time in history, we’re automating human thought — and leveraging human emotion in the process. It doesn’t feel like it’s replacing what you do. It feels like it’s replacing what you are.
No wonder the resistance runs deeper this time.
∗ ∗ ∗
The dominant narrative goes like this: AI will eliminate most jobs. A handful of companies will own the means of production. Everyone else will scramble for scraps. This is presented not as a possibility but as an inevitability — a physics problem with a predetermined outcome.
It’s a compelling story. It’s also wrong. Not because AI won’t displace work — it will, massively. But because the dystopian endgame requires you to ignore a constraint so fundamental that it should embarrass anyone who claims to understand economics.
Consumer spending drives roughly 70% of U.S. GDP. Another word for consumers is workers. The economy doesn’t function if the people who buy things can’t afford to buy things. You cannot automate away 40% of jobs and maintain the consumer base that funds the companies doing the automating. The math doesn’t work. Not because of compassion — because of arithmetic.
This isn’t a moral argument. It’s a market structure argument. The doom narrative treats the labor market as if it exists in isolation from the consumer economy. It doesn’t. They’re the same people.
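The arithmetic behind that constraint can be sketched in a few lines. Every figure below is an illustrative assumption for the sake of the exercise (a round GDP number, spending falling strictly in proportion to lost jobs), not sourced data:

```python
# Back-of-envelope sketch of the consumer-spending constraint.
# All inputs are illustrative assumptions, not sourced figures.

gdp = 28.0                 # assumed U.S. GDP, in trillions of dollars
consumer_share = 0.70      # consumer spending share of GDP (the essay's ~70%)
jobs_automated = 0.40      # the hypothetical 40% of jobs automated away

consumer_spending = gdp * consumer_share            # dollars consumers inject
# Naive assumption: spending falls one-for-one with lost jobs.
lost_spending = consumer_spending * jobs_automated
gdp_hit = lost_spending / gdp                       # contraction as share of GDP

print(f"Consumer spending: ${consumer_spending:.1f}T")
print(f"Lost spending:     ${lost_spending:.1f}T")
print(f"GDP contraction:   {gdp_hit:.0%}")
```

Under those crude assumptions, the hit is roughly 28% of GDP, a contraction on the scale of the Great Depression. Even generous allowances for savings, transfers, or new spending don't make a hole that size disappear.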
∗ ∗ ∗
Here’s what the doomers get right: the transition will be painful. People will lose specific jobs. Entire categories of work will shrink or vanish. The displacement is real, and pretending otherwise is dishonest.
Here’s what they get wrong: the rate of that displacement.
Every technology adoption curve in history has been throttled by the same bottleneck: human behavior. Not capability. Not compute. Not capital. Humans have to be willing to change how they work, and that willingness has a speed limit.
We watched this play out in real time with AI in 2025 and 2026. NVIDIA’s latest survey of the financial services industry found that 65% of organizations are actively using AI, up from 45% just two years earlier. Sounds like rapid adoption, right? But look at the operational reality: 40% cite data issues as their top barrier. A third say they can’t find the talent to execute. Another 28% report persistent implementation difficulties. The money is pouring in, but the humans are the bottleneck.
The InvestOps 2026 survey tells the same story from a different angle. When compressed settlement timelines hit the financial industry, 65% of firms responded by adding headcount — not by deploying automation. Two-thirds of the industry threw people at the problem because the organizational infrastructure to absorb change simply wasn’t there yet.
That’s not a technology limitation. That’s a human one. And it’s the single most important variable the doom narrative ignores.
∗ ∗ ∗
Employees are not passive recipients of transformation. They are the transformation. Every deployment, every workflow, every process change requires human participation. The companies that understand this — that adoption is a change management problem, not a technology problem — will navigate the transition. The companies that don’t will spend millions on tools nobody uses.
Which brings us to the question worth asking about the doom narrative itself: cui bono? Who benefits?
Trace the loudest dystopian voices back to their source. Many sit on the boards of the very companies building frontier models. The same people selling you the future are warning you it might destroy civilization.
That’s not concern. That’s positioning.
When you can’t yet show where the real revenue is coming from, fear is a remarkably effective product. It keeps you in the conversation. It keeps regulators engaged. It keeps competitors scared. It keeps the funding flowing.
Meanwhile, the actual work of enterprise AI adoption is boring, human, and operational. It’s change management. It’s process redesign. It’s getting a mid-level manager in financial services to trust a new workflow. None of that makes headlines. All of it makes money.
∗ ∗ ∗
We are the throttle. Not the algorithm. Not the board of directors. Not the venture capitalists. We collectively determine how fast this moves and what it looks like on the other side.
That’s not optimism. That’s economics. And it changes the question entirely — from “will AI destroy us?” to “how do we proceed deliberately?”
The doom narrative wants you frozen. The reality is that you have more agency than anyone is telling you — collectively, strategically, and operationally. The firms that understand this are already through the J-Curve and building what comes next. The firms that don’t are still arguing about whether the sky is falling.
The sky isn’t falling. But the ground is shifting. And the difference between those two things is everything.
∗ ∗ ∗
This is Signal #03 from The Signal, the thought leadership platform for Flatten the AI J-Curve: Your Unfair Advantage in the Race to Enterprise Adoption, available May 5, 2026.
Subscribe to The Signal | flattenthej.com | Available May 5, 2026