
Building Cognitive Muscle in an AI World

  • mbhirsch

Why the teams winning with AI are deliberately choosing friction over efficiency (Part 3 of 3)


Hey there,


Last week, I left you with an unresolved tension: the compressive work we're delegating to AI might be the same cognitive work that builds our capacity for creative expansion. Our assertion was that rote compression can be safely delegated—but developmental compression, the cognitive struggle that builds expansive capacity, cannot.


But knowing what to protect doesn't solve the practical problem. You still have work to do, deadlines to hit, stakeholders to manage. How do you build AI workflows that leverage compression strengths while preserving—and even enhancing—your capacity for creative expansion?


Think about it like physical training. If you let machines do all the heavy lifting, you get weaker. But if you use machines strategically—say, a treadmill for cardio efficiency so you have more energy for resistance training—you get stronger.


The same principle applies to cognitive work. Delegating all compression to AI is like never lifting anything heavy. You're efficient in the moment, but your cognitive muscles atrophy. The strategic move: use AI to eliminate low-resistance cognitive work so you have more capacity for high-resistance cognitive work.

"Delegating all compression to AI is like never lifting anything heavy. You're efficient in the moment, but your cognitive muscles atrophy."

The teams winning with AI aren't the ones using it most enthusiastically. They're the ones who've mapped exactly where AI's compression architecture creates value and where it creates cognitive atrophy.


Stop Asking AI to Do True Creation

First principle: Stop trying to get AI to think outside the box. Instead, identify which boxes AI can navigate brilliantly—and deploy it there ruthlessly.


The product leaders I work with who've built successful AI workflows all made the same shift. They stopped asking "How can AI be creative?" and started asking "Where can AI's pattern recognition and recombination create disproportionate value?"


Here's the difference in practice:


I tested this with two prompts. The first asked AI to "brainstorm innovative product features for a B2B SaaS project management tool." The second defined a specific possibility space: user pain points from research, technical constraints, strategic bets—and asked AI to explore exhaustively within those boundaries. See both prompts and full responses here.


Same AI. Radically different value.


In the first, AI did what it does: sophisticated recombination of PM tool patterns from training data. Useful questions, broad directions, but nothing implementable.


In the second, it generated 20 specific features, each mapped to exact constraints and pain points. It challenged assumptions within the possibility space ("Is 2 sprints realistic for bidirectional integration?"). It identified tensions I'd face ("Enterprise culture may demand immediate responsiveness").


The difference: I did the expansion work (defining the possibility space). AI did the compression work (exhaustive exploration within it).
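

To make that concrete, here's a minimal sketch of the shape of the second prompt. The pain points, constraints, and strategic bets below are placeholders rather than the actual inputs from my test, and send_to_model() stands in for whichever model client you use.

    # Open-ended: AI has to invent the possibility space (and falls back on training-data patterns).
    open_ended_prompt = (
        "Brainstorm innovative product features for a B2B SaaS "
        "project management tool."
    )

    # Constrained: the human defines the possibility space; AI explores it exhaustively.
    pain_points = [
        "PMs lose context switching between the sprint board and docs",  # placeholder research
        "Stakeholders ask for status because dashboards lag reality",
    ]
    constraints = ["2 engineers for 2 sprints", "Must reuse the existing notification service"]
    strategic_bets = ["Win mid-market teams migrating off spreadsheets"]

    constrained_prompt = (
        "Explore this possibility space exhaustively; do not invent a new one.\n"
        f"User pain points (from research): {pain_points}\n"
        f"Technical constraints: {constraints}\n"
        f"Strategic bets: {strategic_bets}\n"
        "Generate 20 specific features, map each to the exact pain point and "
        "constraint it addresses, and flag any assumption that looks unrealistic."
    )

    # response = send_to_model(constrained_prompt)  # hypothetical client call

The expansion work lives in those lists. The prompt just tells AI to compress within them.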


This collaboration model works:

  • Humans define possibility spaces (expansion beyond training distribution)

  • AI explores them exhaustively (compression within defined constraints)


Not the other way around.


Build Workflows That Constrain Hallucination

Smart product leaders don't try to eliminate hallucination—you can't, it's architectural. They build systems that limit where hallucination can cause damage.


Think about managing a brilliant employee who occasionally fabricates details. You don't fire them. You give them tasks where strengths shine and build verification where weaknesses could cause problems.


Strategies that work:


Human-in-the-loop verification. Let AI draft stakeholder communication. You verify accuracy and political nuance before sending. AI handles compression (organizing talking points, maintaining consistency). You handle expansion (strategic narrative connecting insights to decisions).


Constrained generation spaces. "Summarize THIS document" works better than "tell me about THIS topic" because the former has bounded input. AI compresses what's actually there rather than hallucinating from training data.


Structured outputs. When AI generates JSON or fills predefined templates, hallucination becomes detectable. The structure itself is verification.


Multiple-pass workflows. AI generates → human verifies → AI refines. Each pass compresses toward accuracy within your defined constraints.
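

Here's a rough sketch of what the structured-output check can look like, assuming you've asked the model to return JSON with these (illustrative) fields for each claim it extracts:

    import json

    def verify_structure(raw_response: str, source_document: str) -> list[dict]:
        """Reject anything that doesn't fit the template -- the structure itself is the verification."""
        required_fields = {"claim", "source_quote", "confidence"}
        items = json.loads(raw_response)                 # fails loudly on malformed output
        for item in items:
            missing = required_fields - item.keys()
            if missing:
                raise ValueError(f"Template drift, missing fields: {missing}")
            if item["source_quote"] not in source_document:
                raise ValueError("Quote not found in the bounded input -- possible hallucination")
        return items

    # Multiple-pass workflow: on a ValueError, feed the error back to the model
    # and ask it to regenerate (generate -> verify -> refine).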


A Concrete Example: Information Synthesis

Last week in my Saturday Morning Coffee video, I demonstrated this principle with scattered strategic information—meeting notes, Slack threads, competitive research, customer feedback, VP constraints.


The old approach: Read each source individually, hold everything in your head, manually connect dots. 30+ minutes of cognitive grinding.


The AI approach: Feed all sources to AI with strategic prompts. Get synthesis in 30 seconds.
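

If you want to reproduce the shape of that step, here's a minimal sketch. The file names and prompt wording are illustrative, not the exact ones from the video.

    from pathlib import Path

    # Bounded inputs: the model compresses what's actually in these sources, nothing else.
    source_names = ["meeting_notes", "slack_threads", "competitive_research",
                    "customer_feedback", "vp_constraints"]
    sources = {name: Path(f"{name}.txt").read_text() for name in source_names}

    synthesis_prompt = (
        "Using ONLY the sources below: (1) list claims that contradict each other, "
        "(2) cite the data behind each claim, (3) list the open questions a human must decide.\n\n"
        + "\n\n".join(f"--- {name} ---\n{text}" for name, text in sources.items())
    )
    # response = send_to_model(synthesis_prompt)  # hypothetical client call, as before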


AI handled the dot-connecting (compression). I handled the strategic evaluation (expansion).


AI flagged that Sales says "we'll lose deals without notifications" while data shows 34% of churned customers mention performance but only 12% mention notifications. Loud problem versus real problem.


AI surfaced that Enterprise customers (67% of revenue, 18% of base) want notifications while SMB customers (33% of revenue, 82% of base) want performance.


AI found the contradictions. I needed to resolve them. That's the division of labor.


AI gave me cognitive capacity for strategic questions: Do we optimize for revenue or customer count? Solve the loud problem or the real problem? Can we actually hold scope discipline when Sales pushes back?


These require institutional knowledge, relationship intuition, political awareness. AI can't do that. I can.


The Four-Question Delegation Framework

Before delegating any work to AI, I run it through the "AI or Human?" decision framework—four questions that map directly to the rote versus developmental compression distinction:


1. Does human effort create leverage on the outcome?

If improving this by 20% won't change what happens next, delegate to AI.

  • High leverage: Stakeholder communication during crisis, user research synthesis for major decisions, product messaging for launches

  • Low leverage: Weekly status reports, meeting notes from routine standups, internal documentation updates


2. What's the upside/downside of getting it wrong?

High stakes require human judgment. Low stakes should be automated.

  • High stakes: Board presentations, competitive analysis for strategy, customer-facing crisis communications

  • Low stakes: Internal process documentation, routine feedback categorization, standard meeting summaries


3. Can humans actually improve the outcome by 20%+?

Even if improvement would matter, can human effort actually deliver it?

  • Human advantage: Interpreting emotional nuance, crafting vision narratives that inspire, reading political dynamics

  • AI advantage: Pattern analysis across large datasets, generating multiple variations, consistent formatting


4. How much cognitive energy does human improvement require?

Is this the best use of finite cognitive capacity?

  • Protect cognitive energy for: Setting product strategy, resolving team conflicts, stakeholder negotiation, creative problem-solving

  • Spend AI energy on: Routine communications, data formatting, documentation, initial drafts


This framework maps directly to rote versus developmental compression from last week: Questions 1-2 identify whether to delegate. Questions 3-4 identify what you're protecting when you don't.
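

For the systematically minded, here's one way to encode that triage in code. The field names and pass/fail logic are my illustrative simplification, not the framework itself.

    from dataclasses import dataclass

    @dataclass
    class Task:
        name: str
        human_effort_creates_leverage: bool   # Q1: would a 20% improvement change what happens next?
        high_stakes_if_wrong: bool            # Q2: what's the cost of getting it wrong?
        human_can_improve_20_percent: bool    # Q3: can human effort actually deliver the improvement?
        cognitive_cost_is_high: bool          # Q4: how much finite capacity does it consume?

    def delegate_to_ai(task: Task) -> bool:
        # Q1-Q2: low leverage and low stakes is rote compression -- delegate ruthlessly.
        if not task.human_effort_creates_leverage and not task.high_stakes_if_wrong:
            return True
        # Q3: if human effort can't move the needle anyway, delegate.
        if not task.human_can_improve_20_percent:
            return True
        # Otherwise this is developmental compression. Q4 tells you how much
        # capacity you're choosing to invest, not whether to protect the work.
        return False

    # delegate_to_ai(Task("weekly status report", False, False, False, False))  # -> True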


Application Across the Six Personas

Let me make this concrete using the Six Personas of Product Leadership—a framework I developed to capture the distinct, often conflicting roles product leaders must navigate simultaneously to be effective:


Strategic Orchestra Conductor:

  • Delegate: Market analysis, competitive research, pattern identification across customer data

  • Protect: Crafting strategic narrative connecting insights to decisions, holding tension between competing priorities


Talent Gardener:

  • Delegate: Capability assessment frameworks, skill gap analysis, performance data synthesis

  • Protect: Coaching conversations that build judgment, developmental feedback reading unstated signals


Vision Translator:

  • Delegate: Converting technical specs to user stories, organizing feature dependencies, tracking implementation status

  • Protect: Translating abstract vision into concrete roadmaps, synthesizing user needs with business constraints


Political Navigator:

  • Delegate: Drafting stakeholder communications, organizing talking points, maintaining message consistency

  • Protect: Editing politically sensitive content, relationship maintenance conversations, navigating unstated organizational dynamics


Innovation Architect:

  • Delegate: Research synthesis, technology landscape analysis, trend identification

  • Protect: Intuitive leaps across distant domains, pattern recognition that violates expectations, constraint navigation creating breakthrough thinking


Business & Portfolio Choreographer:

  • Delegate: Resource allocation models, portfolio performance analysis, dependency mapping

  • Protect: Making trade-off decisions with incomplete information, balancing short-term delivery with long-term value creation

"AI handles bounded compression. You protect unbounded expansion."

What This Actually Looks Like

One of my cohort teams recently redesigned their workflow for developing product strategy in a new market. Here's how they mapped the division of labor:


AI handled the foundational compression work: market size analysis, competitor feature matrices, customer pain point synthesis from research transcripts, and technology landscape mapping. All the exhaustive pattern recognition across datasets that would take humans days to compile.


The humans handled the strategic expansion work: identifying which market insights violated conventional wisdom, synthesizing patterns across wildly different analogous markets, articulating a strategic narrative that didn't exist in any competitor's positioning, and holding the tension between conflicting user needs and business constraints. All the creative leaps that require judgment beyond what exists in training data.


The workflow itself was straightforward. AI compressed existing information exhaustively, giving the team clean, organized raw material. The humans then used that compression as the foundation for expansive thinking—the kind of strategic synthesis AI fundamentally cannot do.


The result wasn't just time savings. Their cognitive capacity actually increased because they were doing more high-quality cognitive struggle, not less. They eliminated the rote compression grind and invested that capacity in developmental compression work that built their strategic thinking muscles.

"In an AI-saturated world where compression is commoditized, expansion is the only sustainable competitive advantage."

The Strategic Choice

You know from Part 2 that there are two paths: delegate all compression and watch your capacity atrophy, or delegate rote compression ruthlessly, protect developmental compression fiercely, and build enhanced capacity for creative expansion.


But here's what I didn't tell you: the divergence accelerates.


Every time you delegate developmental compression, your capacity for it atrophies. Every time you protect it while delegating rote work, your capacity grows.


Six months from now, you're not just more or less capable. You're on fundamentally different trajectories.


One path: You've become dependent on AI for sophisticated recombination but incapable of the creative leaps AI fundamentally cannot make. You're productive. You're efficient. You're increasingly obsolete.


The other path: You've systematically eliminated cognitive friction from rote work and invested that capacity in developmental struggle. You've built tolerance for ambiguity. You make creative leaps AI can't. You're irreplaceable.


The difference isn't using AI or not using AI. It's strategic thinking about delegation instead of blind enthusiasm about AI.


In an AI-saturated world where compression is commoditized, expansion is the only sustainable competitive advantage. But you can't buy expansion from AI—you can only build it through the cognitive struggles you're tempted to optimize away.


Your Assignment

Pick one workflow this week. Run it through the four-question framework. Map what's rote compression versus developmental compression.


Then redesign it: Delegate the former ruthlessly. Protect the latter fiercely.


Pay attention to what happens over the next month. Not your efficiency. Your capacity. Your cognitive stamina. Your ability to make creative leaps.


That's what matters in an AI-saturated world.


Break a Pencil,

Michael


P.S. This is Part 3 of a three-part series. If you missed them: Part 1 explained AI's compression architecture, and Part 2 explored what we risk losing. This part gives you the strategic implementation framework.


P.P.S. Ready to build systematic AI capabilities while protecting cognitive development? Book a call to discuss a private cohort for your team. Or join my next "Build an AI-Confident Product Team" cohort on Maven where we work through exactly these frameworks.


P.P.P.S. The "AI or Human?" decision framework is available as a downloadable one-pager. Watch the full Saturday Morning Coffee demo here. These are the tools that make strategic delegation systematic rather than random.


P.P.P.P.S. Know a product leader optimizing for efficiency without thinking about cognitive capacity? Forward this. The difference between trajectories is strategic thinking about delegation, not efficiency optimization.


P.P.P.P.P.S. I chose Josef Albers' "Homage to the Square" series for this piece because his entire artistic practice was built on the principle this article advocates: exploring infinite possibilities within rigid constraints. Each composition is Albers systematically investigating color relationships within an unchanging square format—compression with constraint. Sixty years before we started talking about AI architecture, Albers proved that creative expansion requires strategic constraint. Sometimes the most profound insights about our AI moment come from artists who never touched a computer.

 
 
 
