
What My Daughter's Robotics Team Taught Me About AI Transformation

mbhirsch · Aug 19

Why constraint-driven teams outlearn resource-rich organizations


Hey there,


I just spent a week at the Robosub 2025 international competition watching my daughter's high school robotics team compete against 46 university programs. What I witnessed wasn't just impressive teenage engineering—it was a masterclass in systematic learning extraction that most Fortune 500 companies have completely failed to understand.



These teams spend nine months preparing for literally three minutes of autonomous underwater performance. No debugging in real-time. No "let's iterate next sprint." The robot either completes the mission or it doesn't, and there's nowhere to hide behind vanity metrics or rationalized partial success.


But what crystallized for me watching their semifinal run was this: these resource-constrained teenagers have developed a learning discipline that billion-dollar product organizations still haven't.


The Learning Extraction Machine Most Companies Never Build

Every pool test generates documented insights. Every sensor calibration gets systematically analyzed. Every competition run—successful or failed—produces transferable knowledge that compounds year over year. Nothing is wasted because nothing can be wasted.


Meanwhile, most product teams I work with treat AI experiments like casual trials. They try ChatGPT for writing requirements, note that it "worked pretty well," then move on to testing Claude for customer research without systematically documenting what made the first experiment successful, what conditions enabled that success, or how those insights apply to future implementations.


The result? Each new AI initiative feels like starting from scratch because teams are generating experience without building institutional knowledge.


The Constraint Paradox

The reality is that product teams face robotics-team-level resource constraints—limited time, budget pressure, demanding timelines—but they've never developed the systematic learning discipline that scarcity should force.


These high school teams prove something that most executives miss entirely: Constraints should drive learning excellence, not learning chaos.


When you can only afford a few pool tests before competition, you extract maximum insight from every single run. When you only have bandwidth for three AI experiments this quarter, you should be documenting everything: what specifically worked, what conditions were necessary, what this reveals about your team's readiness, how these lessons transfer to other use cases.


Instead, most teams operate like they have unlimited experimental budget. They approach AI adoption with the casual experimentation habits of well-funded research labs while facing the resource reality of cash-strapped startups.


The Institutional Memory That Never Materializes

What struck me most was watching this year's team methodically implement insights from last year's competition. Not just "fix what broke" but "systematically incorporate every successful pattern we observed from any team."


They study their own performance data, but they also reverse-engineer what made other teams successful. Every technique that worked for a competitor gets evaluated for their own system. Every failure mode they witness gets documented as a pattern to avoid.


This creates what I call compounding competitive intelligence—each year's learning builds on previous years, creating institutional knowledge that transcends individual team member turnover.


Most product organizations don't even attempt this. Teams experiment with AI tools individually, maybe share success stories in Slack, but there's no systematic process for capturing, analyzing, and institutionalizing what actually drives successful AI adoption across different contexts.


The Three-Minute Test Every AI Initiative Needs

The most powerful insight from these robotics competitions is how clarity about success metrics drives everything else. These teams know their moment of truth is coming. Every design decision, every test protocol, every team process gets optimized around surviving three minutes of autonomous performance under pressure.


Product teams desperately need their own version of the three-minute test. Not "let's experiment with AI and see what happens" but "here's exactly what success looks like, here's when we'll measure it, and here's how we'll systematically extract learning regardless of outcome."


Most AI initiatives fail because they're designed like research projects instead of competition events. Teams optimize for interesting discoveries rather than measurable performance improvements. They celebrate learning without building systems to capture and transfer that learning.


The Framework That Actually Works

After watching these teenagers systematically outlearn organizations with a thousand times their resources, I've identified the three disciplines that separate sustainable AI transformation from expensive experimentation:


1. Systematic Learning Extraction. Every AI experiment—successful or failed—must generate documented insights about what worked, what conditions were necessary, and how lessons transfer to other contexts. Not "this tool is good" but "this tool works when X conditions exist, fails when Y conditions exist, and teaches us Z about our team's readiness." (A sketch of what that documentation can look like follows this list.)


2. Institutional Knowledge Building. Create processes for capturing insights across experiments and teams. Build shared understanding of what drives AI success in your specific context rather than hoping individual experiences somehow aggregate into organizational capability.


3. Constraint-Driven Discipline. Treat limited experimental bandwidth as a feature, not a bug. When you can only test a few AI implementations per quarter, optimize for maximum learning extraction rather than maximum experimentation volume.
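To make the first discipline concrete, here's a minimal sketch of what a structured experiment log could look like. I'm using Python purely for illustration; every field name and the example entry are hypothetical, not a schema from the robotics teams or from any particular tool. A shared spreadsheet works just as well; the point is that "documented insights" means structured, comparable records rather than scattered Slack anecdotes.

```python
# Minimal sketch of a structured AI experiment log (discipline #1).
# All field names and the example entry are hypothetical illustrations.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AIExperimentLog:
    """One documented AI experiment: what we tried, under what conditions,
    and what transfers to the next implementation."""
    name: str                  # what was tested
    tool: str                  # which tool or model was used
    success_criteria: str      # measurable outcome, decided before the experiment runs
    measured_on: date          # when the result was actually measured
    outcome: str               # what the measurement showed, pass or fail
    enabling_conditions: list[str] = field(default_factory=list)   # what had to be true for it to work
    failure_conditions: list[str] = field(default_factory=list)    # conditions under which it broke down
    transferable_lessons: list[str] = field(default_factory=list)  # insights that carry to future experiments

# Hypothetical example entry:
log = AIExperimentLog(
    name="Draft requirements with an LLM assistant",
    tool="general-purpose chat model",
    success_criteria="First drafts need under 30 minutes of PM editing",
    measured_on=date(2025, 8, 1),
    outcome="Met for well-scoped features; missed for ambiguous ones",
    enabling_conditions=["A clear problem statement existed before prompting"],
    failure_conditions=["Discovery work not yet done"],
    transferable_lessons=["Output quality tracked input clarity, not model choice"],
)
```

Note that the success criteria and measurement date get filled in before the experiment runs, not after. That's the three-minute test applied to an AI initiative.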


Your Three-Minute Test

Here's your assignment: Pick your most important AI initiative and ask the questions these robotics teams would ask automatically:


  • What does unambiguous success look like?

  • When will we measure it?

  • What specific insights will we extract regardless of outcome?

  • How will we transfer these learnings to future implementations?

  • What conditions were necessary for this experiment, and how do we replicate or avoid them?


The magic isn't in the sophistication of your AI tools—it's in the discipline of your learning systems.


These teenagers just proved that systematic learning extraction under constraints beats random experimentation with abundant resources. The question isn't whether you have enough time to implement AI properly. The question is whether you have enough discipline to learn systematically from the time you have.


Most companies are treating AI transformation like a series of independent experiments. The winners will be those who build institutional learning machines that compound competitive advantage over time.


The robots don't lie about their performance. Neither should your AI transformation metrics.


Break a Pencil,

Michael


P.S. Ready to build systematic AI capabilities that compound over time instead of starting fresh each quarter? My next "Build an AI-Confident Product Team" cohort starts September 2. This learning extraction approach is exactly what separates sustainable transformation from expensive theater. [Learn more here.]


P.P.S. Proud coach and dad moment: they made it to the third chance (one step before finals) and finished 12th overall—the highest placement of the four high schools in the competition and ahead of MANY of the top engineering universities around the world. Sometimes systematic preparation under constraints beats abundant resources and casual effort. Always, actually.


 
 
 
