Follow the Money, Part 2: AI Makes It Worse


A few weeks ago, I wrote about the Infinite Capacity Fiction, the organizational belief that resources are endlessly malleable, that trade-offs are optional if we just try hard enough. I used stories from T-Mobile, HERE, Qualcomm, and LEGO to show how companies avoid the discomfort of real prioritization.


Since writing that piece, I've been struck by how often the pattern surfaces. It comes up on calls, in workshops, in casual conversations with product leaders. It's everywhere.


Here's what I didn't say then: AI is making it worse.



I was on a call this week with a product leader whose team had done something genuinely impressive. They'd compressed their feature definition process (scoping the feature, writing the PRD, detailing the use cases) from five days down to one. Real acceleration. The kind of efficiency gain that AI advocates promise.


Engineering timelines didn't change. QA timelines didn't change. The actual building and testing of software takes exactly as long as it did before. So while product could define features in a day instead of a week, the overall delivery timeline barely moved.


And the executive response? Frustration. "We gave everyone AI tools and nothing's going faster."


Think about what's happening here. The team banked a legitimate win: four days saved on every feature. But instead of recognizing that as breathing room, or as an opportunity to invest those four days in better research, deeper customer understanding, or more rigorous prioritization, the organization interpreted the unchanged delivery timeline as a failure. The savings were invisible because they were immediately absorbed into higher expectations.


The Infinite Capacity Fiction found a new enabler.



I keep seeing this pattern. AI compresses execution time; organizations respond by expanding scope.


"How long does it take to write another PRD?" becomes a real question now. The answer used to be days. Now it's an afternoon. So the roadmap grows. Not because market conditions changed or customer needs shifted, but because the cost of adding one more thing dropped, and the organizational instinct to avoid trade-offs found new ammunition.


Remember "both" from Part 1? My boss at Qualcomm, refusing to choose between quality and schedule? AI makes "both" more defensible. "We have AI now, surely we can handle both." The Infinite Capacity Fiction didn't go away when AI arrived. It got an upgrade.

"AI compresses execution time. Organizations respond by expanding scope. The Infinite Capacity Fiction didn't go away when AI arrived. It got an upgrade."

And it doesn't just happen with roadmaps. I was at a product management meetup recently where several people described the same phenomenon: teams creating deliverables just because AI makes them easy to produce. Competitive analyses no one reads. Strategy documents that duplicate existing work. Presentations with more slides than the audience has attention span.


Years ago, a colleague of mine called this the "thud factor": how loud a thud did the report make when you dropped it on the desk? More pages, more weight, more credibility. AI didn't invent this instinct. It just made thud free.


More output is not more value. But when the cost of producing output drops to near zero, the discipline of asking "does this need to exist?" becomes harder to maintain. The marginal effort is so low that questioning it feels petty. Of course we should have a detailed competitive landscape; AI can generate it in ten minutes. The fact that no one will act on it doesn't enter the conversation.



This points to something even more corrosive.


I had another conversation this week with a product leader who put it bluntly: we talk about outcomes and OKRs all day long, but when executives look at a product team, they're measuring volume. If the team isn't producing visible output at the rate leadership expects, the assumption is that the team isn't working hard enough.


But some of the most valuable product work is invisible. Killing a bad idea before it consumes engineering cycles. Saying "not yet" until the customer problem is properly understood. Vetting an approach thoroughly before committing resources. None of this produces artifacts. None of it shows up in a sprint review or a status update.


AI makes this worse because it raises the baseline for what "productive" looks like. When a tool can generate a PRD in an afternoon, a team that spent the week deciding the PRD shouldn't be written at all looks like it accomplished nothing. The judgment work, the subtraction, becomes even harder to defend when addition is nearly free.


"When addition is nearly free, subtraction looks like laziness."


I catch myself in this trap too.


I run a consulting practice. Every hour I spend is a trade-off between marketing, content creation, client work, course development, business development, and the hundred other things that keep a small business alive. Before AI, some of those trade-offs were enforced by friction. I simply didn't have time to write another article, build another framework, develop another lead magnet. The constraints were uncomfortable, but they were also clarifying.


Now? AI compresses the effort on almost everything I do. And the temptation isn't to work less; the temptation is to do more. I could always write one more newsletter, build one more resource, pursue one more partnership. The incremental effort is small. The cumulative overextension, however, is not.


The Infinite Capacity Fiction is harder to resist when you're the one telling it to yourself.



I realize the glaring irony here. I teach AI adoption for a living. I help product teams build AI capabilities. I believe AI is fundamentally changing how we should work.


And I'm telling you the tool makes an existing organizational pathology worse.


These aren't contradictory positions. AI is genuinely powerful. It does compress execution; it does create real efficiency. But efficiency without discipline just means doing more underfunded things faster. The technology doesn't solve the underlying problem: the human discomfort of choosing what not to do. If anything, it provides sophisticated cover for continuing to avoid that discomfort.



So what do you do?


I don't have a tidy framework for this either. But I think the discipline starts with a question that most organizations skip: "Should we?" before "Can we?"


AI has collapsed the distance between those two questions. When producing something required significant effort, "can we?" served as a natural filter. Lots of things weren't worth the investment. Now that AI makes almost everything possible, the only remaining filter is judgment. Should we build this feature? Should we create this deliverable? Should we pursue this initiative?


That judgment, the willingness to subtract even when addition is cheap, is the discipline that separates organizations using AI strategically from organizations using AI to accelerate their existing dysfunction.


If you recognized yourself in Part 1, absorbing the cost of the Infinite Capacity Fiction, watch for this version too. It's sneakier. It comes dressed as capability rather than negligence, and it sounds like "we can do more now" instead of "we refuse to choose." But the weight lands in the same place: on the people doing the work.


The next time someone justifies expanding scope by pointing to AI, ask the question they're avoiding: "We can do this. But should we?"


Break a Pencil,

Michael


P.S. If Part 1 was about naming the Infinite Capacity Fiction, Part 2 is about recognizing its newest disguise. Know a leader who's using AI to justify doing more instead of doing better? Send this their way.
