In 1975, a young engineer at Kodak named Steve Sasson built the first digital camera. It was the size of a toaster, took 23 seconds to capture a single image, and stored it on a cassette tape. His bosses looked at it and understood immediately what it meant. Not what it could do—what it would destroy.
They buried it.
Not because they were stupid. Because they were rational. Kodak’s entire business depended on a world where photographs were physical objects. Sasson had just demonstrated that photographs didn’t need to be physical at all.
Here’s the part of the Kodak story nobody dwells on: those engineers kept working. For years. They knew what they’d built. They understood the trajectory. And they showed up every morning to a company whose leadership had decided that the future those engineers had invented was too dangerous to pursue. They were asked, in effect, to un-know what they knew—to keep contributing to a business model they understood was already dead.
That’s not a technology story. That’s a loyalty story. And it’s the same story playing out right now in every enterprise deploying AI.

The Double Bind
There is a cognitive dissonance at the center of enterprise AI adoption that almost no one is naming.
The corporation sends one signal: adopt these tools or we will not survive. The urgency is real. The pace of rollouts is staggering. Every all-hands meeting carries the same subtext—this is existential for the business, and we need everyone on board.
The culture sends a different signal entirely: AI is coming for your job. The doom narrative is everywhere—in the headlines, in the podcasts, in the whispered conversations after the town hall ends. The message is clear: this technology replaces people like you.
So the employee sits between two messages that cannot be reconciled. Help us build the thing that might eliminate your role. Learn the tool that could make you redundant. Train the system, feed it your expertise, teach it how you think, and do it with enthusiasm, because the company’s survival depends on it.
That is not a change management problem. That is a double bind. And it is the most honest description of what millions of knowledge workers are feeling right now but cannot say out loud.
Why the Frozen Middle Exists
Many of these people love their companies. They’ve spent years building something they believe in. They will do almost anything to help the organization survive.
But it is irrational to sacrifice your livelihood for an institution that may not need you on the other side of the transformation. The emotional architecture of what’s being asked is closer to self-sacrifice than professional development. And the people being asked to do it know this. They just can’t say it, because saying it out loud means being labeled a resister, someone who “doesn’t get it.”
This is why the Frozen Middle exists. Not because middle managers are afraid of technology. Because they’ve done the math. They understand—in their bodies, if not in their spreadsheets—that they are being asked to participate in their own displacement. And the rational response to that is not enthusiasm. It’s self-preservation. It’s quiet resistance. It’s doing just enough to avoid being flagged while protecting the one asset AI cannot replicate: institutional knowledge, held close, shared slowly, never fully documented.
If you’re a senior leader wondering why your AI initiative is stalling around Month 4—welcome to the Kill Zone. This is what it looks like from the inside.
Where Hope Enters
Real hope starts with honesty.
The pace of this transfer of expertise from people to machines, the largest movement of cognitive energy in human history, is not set by the technology. It is set by us. By the choices leaders make about how fast to move, how transparently to communicate, and whether they have the courage to name what everyone already knows.
The dystopian narrative misses something fundamental. Consumer spending drives roughly 70% of the U.S. economy. Another word for consumer is worker. The person you’re asking to adopt AI is also the person who buys your product, pays their mortgage, and makes economic decisions that sustain the ecosystem your company operates within. The idea that the technology simply rolls over everyone, that adoption is a freight train with no brake, isn’t just wrong. It’s a failure of imagination.
The rate of AI adoption is throttled by human behavior. By organizational readiness. By the willingness of real people to engage with a transition that requires them to change how they work, what they value, and how they see their own contribution. That willingness is not a technology variable. It’s a leadership variable.
Which means the pace is yours.
The Questions That Matter
The organizations that navigate this well will not be the ones with the best tools. They will be the ones that asked the hardest questions first. What do we actually value? What work should exist at all? What are we building, and for whom? And—the one nobody asks—what is the honest contract between this organization and the people we’re asking to walk through the fire with us?
Those aren’t AI questions. They have human answers. And the leaders who ask them—before they automate, before they optimize, before they chase the next capability—are exercising the one thing AI cannot replicate: judgment about what matters.
The Kodak engineers had the invention. What they didn’t have was leadership willing to face the transition honestly and bring them along.
The dread you felt in that meeting—watching AI do in nine seconds what took your team nine days—was honest. It was human. And it was useful, if you let it be. Because the leader who feels the weight of this moment and still chooses to engage, who names the double bind instead of pretending it doesn’t exist, who tells their people the truth about what’s changing and what isn’t—that leader is exactly who the room needs right now.
You don’t have to build the future by asking people to erase themselves from it.
You just have to choose.
∗ ∗ ∗
David Luria is the author of Flatten the AI J-Curve: Your Unfair Advantage in the Race to Enterprise Adoption (May 2026) and the founder of Corso & Alexander.
Read the full piece: Subscribe to The Signal on Substack
Free tools: flattenthej.com
Flatten the AI J-Curve — Available May 5, 2026