In the second century BC, a Roman judge named Lucius Cassius Longinus Ravilla had a habit that made him unpopular with litigants and indispensable to jurisprudence. Every time a case came before him, before hearing arguments, before reviewing evidence, he asked one question: cui bono? Who benefits?
Not who claims to benefit. Not who says they’re acting in the public interest. Who actually, materially, financially benefits from this outcome? Cassius understood that the stated reason for an action and the actual reason are rarely the same thing.
More than twenty centuries later, we could use him in the AI conversation.

The Fear Business
The dominant AI narrative oscillates between two poles: utopian hype and dystopian doom. Both are presented with conviction. Both claim to be grounded in evidence. And both are, at their core, business models.
Trace the loudest dystopian voices back to their source. Many sit on the boards of the very companies building frontier AI models. The same people selling you the future are warning you it might destroy civilization. That’s not concern. That’s positioning. When you can’t yet show where the real revenue will come from, fear is a remarkably effective product. It keeps you in the conversation. It keeps regulators engaged. It keeps competitors scared. It keeps the funding flowing.
The hype side runs the same playbook in reverse. Consulting firms publish reports predicting trillions in AI-driven productivity gains. The same consulting firms sell AI transformation engagements. Vendors publish benchmarks showing dramatic improvements. The same vendors sell the tools. None of this is necessarily wrong. But the incentive structure should make you pause before treating any of it as disinterested analysis.
The Imagination Gap
This is where the Imagination Gap lives. Not in a deficit of information—executives have never had more information about AI available to them. The gap lives in the inability to separate signal from noise when every signal carrier has a financial interest in the direction you move.
The executive who reads five analyst reports, attends three vendor demos, and hires a consulting firm to build an AI roadmap has not solved the Imagination Gap. They’ve outsourced it. And the organizations they’ve outsourced it to are not in the business of telling you to slow down. They’re in the business of selling you the next phase.
Three Questions Worth Asking
Cassius gave us a gift that costs nothing to use. Before acting on the next AI prediction, the next transformation roadmap, the next vendor pitch, ask three questions:
Who profits from this? Not who published it—who benefits from you believing it? If the answer is the same entity making the claim, adjust your confidence accordingly.
Does this require me to act before I understand? Urgency is the signature move of someone who benefits from your haste. The firms that navigate AI well are the ones that refuse to be rushed.
What would be true if the opposite were correct? If the doom narrative is wrong, what does that imply? If the hype is overstated, what changes? If you can’t construct a plausible counter-narrative, you’re not thinking. You’re reacting.
Cui bono. Ask it about everything. Ask it about vendor pitches. Ask it about analyst reports. Ask it about consulting proposals. Ask it about doom predictions and hype cycles alike.
And ask it about this post, too. The author has a book coming out.
∗ ∗ ∗
David Luria is the author of Flatten the AI J-Curve: Your Unfair Advantage in the Race to Enterprise Adoption (May 2026) and the founder of Corso & Alexander.
Read the full piece: Subscribe to The Signal on Substack
Free tools: flattenthej.com
Flatten the AI J-Curve — Available May 5, 2026