
The Legacy Game Paradox

  • mbhirsch
  • Sep 1

The board game lesson that's changing how I think about AI strategy


Hey there,


Last week, I wrote about the Elul Principle—why AI transformation requires deliberately unbecoming who you were, not just accumulating new capabilities. Several of you replied with thoughtful reflections on mental models you're examining.


The natural follow-up question: "If we're going to shed old patterns, what do we build in their place?"


The answer came to me in an unexpected place: around my kitchen table, playing a legacy board game with my family.


Legacy games are fascinating creatures in the board game world. Unlike traditional games where you reset to identical starting conditions each time, legacy games evolve permanently. Each session leaves scars and upgrades that change future games forever. Stickers get placed. Cards get destroyed. Rules get rewritten. The game you play in month six bears little resemblance to the game you started with.


What I realized watching my family adapt to these constant rule changes shifted my perspective on AI strategy: We weren't trying to master the current game. We were building the capability to rapidly adapt to whatever game emerged next.


The Meta-Game Most Companies Are Missing

Most product teams approach AI adoption like traditional board games—they're trying to optimize for the current ruleset. Master ChatGPT prompting. Build workflows around Claude's capabilities. Create processes that leverage today's AI tools effectively.


But they're missing the deeper game entirely.


The companies that will win in 2027 won't be those who mastered 2025's AI tools. They'll be the ones who built organizational capabilities for navigating whatever technological shift comes next.


This is what I call the Legacy Game Paradox: In environments of constant change, permanent advantages come from developing adaptation strategies, not optimization strategies.


Think about it through the lens of my family's game nights. I typically start strong because I'm naturally analytical and quickly identify optimal moves within the initial ruleset. But as the game evolves—new mechanics introduced, old strategies invalidated, familiar patterns disrupted—my early advantage disappears.


Meanwhile, my daughter, who starts more cautiously but develops systematic approaches for learning new rules and rapidly testing hypotheses about what works, consistently outperforms everyone as the game evolves. She's not better at any specific version of the game. She's better at the game of learning games.


The same dynamic is playing out in AI transformation.


What Adaptive Capacity Actually Looks Like

After watching product teams navigate AI adoption over the past two years, I've identified three distinct approaches that separate the winners from the rest:


The Optimizers focus on mastering current AI tools. They build sophisticated ChatGPT workflows, create detailed prompt libraries, and achieve impressive efficiency gains. But when GPT-5 arrives with different capabilities, or when new AI paradigms emerge, they need to re-evaluate their entire approach and often rebuild from scratch. Every technological shift becomes a reset rather than an evolution.


The Experimenters constantly try new AI tools but never develop systematic approaches to evaluation or implementation. They're always chasing the latest release but never building institutional knowledge about what actually drives AI success in their specific context. They mistake motion for progress.


The Adapters do something completely different. They build organizational capabilities around the process of AI evaluation, implementation, and iteration itself. They develop frameworks for rapidly assessing new tools, systematic approaches for testing hypotheses, and institutional memory for what worked under what conditions.


I saw a perfect example of this recently in a LinkedIn post from Dave Glick, SVP of Enterprise Business Services at Walmart. His team didn't just implement "vibe coding" (having AI write code based on natural language descriptions). They systematically built capabilities around the pattern itself.


When they moved to testing, instead of manually creating test cases, they asked: "How can we apply this AI-enhanced approach to testing?" Within days, they had AI analyzing their PRDs to generate test plans, then writing the actual tests, then piping them into automated testing frameworks.
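

Glick's post describes the pattern rather than the code, but it's worth sketching what such a pipeline could look like. This is a minimal illustration, not Walmart's implementation; the function names, prompts, and the generic `ask` client are all assumptions:

```python
# A hypothetical sketch of a PRD -> test plan -> tests pipeline.
# NOT Walmart's implementation: names and prompts are illustrative.
from pathlib import Path
from typing import Callable

# `ask` stands in for whichever LLM client your team already uses:
# prompt string in, model's text response out.
Ask = Callable[[str], str]


def generate_test_plan(prd_text: str, ask: Ask) -> str:
    """Step 1: the model reads the PRD and proposes test scenarios."""
    return ask(
        "Read this PRD and list the behaviors that need test coverage, "
        "one scenario per line:\n\n" + prd_text
    )


def generate_tests(test_plan: str, ask: Ask) -> str:
    """Step 2: each scenario becomes an executable pytest test."""
    return ask(
        "Write one pytest test per scenario below. "
        "Return only runnable Python code:\n\n" + test_plan
    )


def prd_to_tests(prd_path: Path, out_path: Path, ask: Ask) -> None:
    """Step 3: write the generated tests where CI already picks them up."""
    plan = generate_test_plan(prd_path.read_text(), ask)
    out_path.write_text(generate_tests(plan, ask))
    # The existing automated test framework runs these like any
    # hand-written file -- and a human still reviews the diff.
```

The specifics matter less than the shape: each step is a swappable prompt, so the same skeleton transfers to whatever workflow the team tackles next.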


And they haven't stopped there. They're already planning "vibe PRDs, vibe design, vibe security reviews." They're not optimizing individual AI implementations—they're building organizational reflexes for identifying where AI can enhance any workflow.


This is legacy game thinking in action. Each AI implementation reveals patterns they can apply to the next implementation. They're not getting better at using specific AI tools; they're developing systematic approaches for integrating AI into their entire development lifecycle.


The Adapters are playing the legacy game. They know the rules will keep changing, so they optimize for rule-change resilience instead of rule mastery.


The Robosub Connection

This connects to what I observed at my daughter's robotics competition. The teams that succeeded weren't necessarily those with the most sophisticated initial designs. They were the ones with the most systematic approaches to learning from each pool test and rapidly implementing improvements.


The competition format itself reinforces this: four days of pool testing before the real competition begins. Why? Because data collection, testing, and fine-tuning are all required to get autonomous robots to perform specific tasks in a course that changes slightly every day.

Every pool test generated documented insights not just about what worked, but about their process for identifying what worked. They were building meta-capabilities: the ability to rapidly diagnose problems, design solutions, and implement changes when conditions shift unexpectedly.


Most product organizations facing AI transformation could learn from this approach. Instead of asking "How do we implement AI effectively?" start asking "How do we build organizational capacity to continuously adapt our AI implementation as capabilities evolve?"


The Strategic Framework: Three Levels of Adaptation

In my work helping teams build adaptive AI capabilities, I've seen adaptive capacity create sustainable competitive advantage at three distinct levels:


Level 1: Tool Adaptation: Building systematic approaches for evaluating and implementing new AI tools as they emerge. This includes frameworks for rapid prototyping, hypothesis testing, and integration planning. Teams operating at this level don't panic when ChatGPT gets updated or when new AI tools launch—they have repeatable processes for assessment and adoption.


Level 2: Process Adaptation: Developing organizational reflexes for modifying workflows and decision-making processes as AI capabilities evolve. This means building processes that are designed to be modified, not optimized for current conditions. Teams at this level continuously experiment with different ways of integrating AI into their work, documenting what works under different conditions.


Level 3: Strategic Adaptation: Creating institutional capability for recognizing and responding to fundamental shifts in how AI changes competitive advantage in your industry. This involves building sensing mechanisms for technological change, scenario planning for different AI futures, and organizational agility for strategic pivots when necessary.


Most teams get stuck at Level 1, treating each new AI tool as a separate learning project instead of building systematic adaptation capabilities that compound over time.
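

To make Level 1 concrete: a repeatable assessment process can be as lightweight as a shared rubric that every new tool passes through. Here's a minimal sketch, with criteria and weights that are purely illustrative; your team would define its own:

```python
# An illustrative Level 1 evaluation rubric, not a prescribed standard:
# the criteria, weights, and tool name below are assumptions.
from dataclasses import dataclass, field


@dataclass
class ToolEvaluation:
    tool: str
    scores: dict[str, int] = field(default_factory=dict)  # each criterion 1-5


# Weighted criteria the team agrees on once, then applies to every tool.
CRITERIA = {
    "task_fit": 0.35,          # does it address a workflow we actually have?
    "integration_cost": 0.25,  # effort to wire into existing tooling
    "switching_risk": 0.20,    # how stuck are we if the tool changes?
    "data_safety": 0.20,       # usable within our data policies?
}


def weighted_score(ev: ToolEvaluation) -> float:
    """Collapse the rubric into one number comparable across tools and time."""
    return sum(w * ev.scores.get(name, 0) for name, w in CRITERIA.items())


# Every candidate -- a new model release, a startup demo -- gets the same
# short evaluation and lands in the same comparison table.
candidate = ToolEvaluation(
    "hypothetical-new-model",
    {"task_fit": 4, "integration_cost": 3, "switching_risk": 2, "data_safety": 5},
)
print(f"{candidate.tool}: {weighted_score(candidate):.2f}")
```

The individual scores matter less than the fact that every evaluation produces a comparable record, which is exactly the institutional memory the higher levels build on.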


What would it take for your team to push beyond Level 1 adaptation?


The Practical Paradox of Permanent Impermanence

Here's where the legacy game metaphor gets really interesting. In these games, the most valuable permanent changes aren't specific rule modifications—they're upgrades that increase your adaptation speed for future changes.


You don't add a rule that makes you better at combat; you add a mechanism that lets you learn new combat strategies faster. You don't optimize for the current map configuration; you build capabilities that help you navigate whatever map configurations emerge.


The same principle applies to AI transformation. The most valuable "permanent" changes to your product development process aren't specific AI implementations—they're systematic approaches that increase your speed of AI experimentation and implementation.


This is why most AI transformation efforts feel frustrating. Teams are trying to create stable, optimized processes in an environment designed for continuous change. They're approaching a legacy game with traditional game strategy.


Your Legacy Game Assignment

This week, audit your team's approach to AI adoption through the legacy game lens:


Current State Assessment: Are you building capabilities around specific AI tools (traditional game thinking) or around the process of AI evaluation and adaptation (legacy game thinking)?


Adaptation Speed Test: When the next major AI release happens (GPT-5, Claude 4, whatever comes next), will your team need to start from scratch, or do you have systematic approaches for rapid evaluation and integration?


Meta-Capability Inventory: What organizational knowledge are you building about your team's AI learning process itself? Do you know what conditions enable successful AI adoption for your specific context? (A sketch of what capturing this could look like follows this list.)


Future-Proofing Question: If AI capabilities evolve in unexpected directions over the next 18 months, does your current approach prepare you to adapt quickly, or does it lock you into today's assumptions about how AI should work?
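

For the Meta-Capability Inventory, institutional memory doesn't require heavy tooling. A minimal sketch of an experiment log, with field names that are illustrative assumptions, might look like this:

```python
# A hypothetical experiment log -- one record per AI trial -- so what you
# learn outlives any individual tool. Field names are illustrative.
from dataclasses import dataclass
from datetime import date


@dataclass
class AIExperiment:
    when: date
    workflow: str     # e.g. "test generation from PRDs"
    tool: str         # which model or tool was tried
    conditions: str   # the context that mattered: team, data, constraints
    outcome: str      # what actually happened, in a sentence or two
    kept: bool        # did it earn a permanent place in the workflow?


LOG: list[AIExperiment] = []


def lessons_for(workflow: str) -> list[AIExperiment]:
    """Answer 'what have we already learned here?' before the next
    evaluation starts from scratch."""
    return [e for e in LOG if e.workflow == workflow]
```

Six months of entries like these is what knowing "what conditions enable successful AI adoption in your context" actually looks like in practice.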


The goal isn't to predict the future of AI. It's to build organizational reflexes that perform well regardless of which AI future actually emerges.


Because here's the sophisticated insight that separates adaptive organizations from optimized ones: In environments of accelerating change, the meta-skill of learning becomes more valuable than any specific knowledge you can accumulate.


That's the real competitive advantage hiding in plain sight. While your competitors optimize for today's AI landscape, you need to be building the organizational capability to dominate whatever AI landscape emerges next.


If this resonates, forward it to that product leader who's still treating AI adoption like tool accumulation instead of capability building. They need this framework.


But understanding adaptation is only half the equation. The other half is timing—recognizing when natural learning cycles create optimal conditions for transformation. That's exactly what I've been observing as we enter September, and it's more sophisticated than most leaders realize. More on that next week.


Break a Pencil,

Michael


P.S. Ready to build systematic adaptation capabilities instead of just experimenting with tools? My private cohort approach focuses on exactly this kind of meta-skill development—building organizational reflexes that compound over time rather than one-off AI implementations. [Learn more here.]


P.P.S. If you're just joining this three-part exploration of transformation, last week's piece on "The Elul Principle" explains why most AI efforts fail because they focus on addition instead of strategic subtraction. The full piece is available here. This week we're exploring what to build after you've identified what to let go of. Next week: when and how to time these transformations for maximum impact.

New to Broken Pencils? Get weekly insights on product leadership and AI transformation delivered to your inbox. [Subscribe here.]

