The Elul Principle

  • mbhirsch
  • Aug 25
  • 7 min read

The hardest part of AI transformation isn't learning—it's unlearning


Hey there,


This Saturday, I turn 53. Today, the Jewish month of Elul begins—thirty days of introspection leading up to the High Holidays. And my eldest daughter just started her senior year at CU Boulder, while my younger daughter prepares for her freshman year in engineering school.


Three different forms of transition, all converging in the same week. None of them about adding more—all of them about becoming something fundamentally different from what came before.


Which brings me to a realization that crystallized while watching these different types of renewal unfold: The hardest part of AI transformation isn't learning new skills. It's unlearning the mental models that made you successful in the pre-AI era.


Most product leaders haven't recognized this yet, which explains why so many AI initiatives feel like expensive organizational theater.


The Growth Mindset Trap

We've been sold a seductive lie about transformation: that growth means accumulation. Learn more tools. Develop new capabilities. Expand your skill set. Add AI to everything. We revere growth but neglect the essential discipline of pruning. The corporate world has turned "growth mindset" into a religion of relentless accumulation.


But watch my daughters navigate their transitions, and you see something different. My senior isn't just adding more knowledge—she's systematically shedding activities and subjects that won't help launch her career. My incoming freshman isn't building on high school study habits—she's deliberately releasing attachment to patterns that won't serve her in engineering school.


Real transformation requires what I call the Elul Principle: the courage to deliberately unbecome who you were.


Elul demands honest self-examination—not just celebrating what you've gained, but confronting what you need to release. It's uncomfortable work because it forces you to acknowledge that some of your most cherished competencies might be holding you back.


Most AI transformation efforts fail precisely because teams approach them like inventory management instead of identity evolution.


What You're Really Fighting

When I joined Qualcomm to work on consumer mobile television, I thought I had the perfect background. Years of consumer entertainment and technology experience at Sony and Universal Electronics. I understood what consumers wanted from their devices, how to position products in competitive markets, how to navigate the intersection of technology and entertainment.


But I kept hitting walls. Conversations about mobile chipset technology felt irrelevant to the consumer experience I was trying to optimize. I dismissed discussions about RF performance and power consumption as engineering minutiae that didn't affect user adoption.


It took me months to realize that my consumer tech expertise—the very background that got me hired—was actually limiting my ability to succeed at Qualcomm.


The insight I was missing was that mobile television wasn't just a consumer entertainment play. It was fundamentally a chipset innovation story, where consumer experience emerged from deep wireless technology advantage.


Until I learned to de-emphasize my consumer tech identity and develop genuine curiosity about the mobile chipset technology that was core to Qualcomm's DNA, I was solving the wrong problems with the wrong framework.


That identity shift—from consumer experience expert to wireless technology strategist—unlocked everything. Same core competencies, fundamentally different perspective on what drove competitive advantage.


The same dynamic is happening with AI transformation, but most product leaders don't recognize it yet.


The Strategic Parallel You're Missing

There's a foundational principle in business strategy that most leaders intellectually understand but rarely practice: What you say "no" to is often more important than what you say "yes" to.


Amazon's success isn't just about what they decided to build—it's about what they decided not to be. Apple's competitive advantage isn't just their product innovation—it's their systematic refusal to pursue opportunities that would dilute their focus.


The same principle applies to AI transformation, but at the level of mental models and decision-making patterns.


Your team's success won't come from adding AI tools to existing workflows. It will come from releasing attachment to workflows that assume human intelligence is the bottleneck for every cognitive task.


Most product managers are still operating from the mental model that their job is to personally synthesize all market intelligence, stakeholder feedback, and competitive analysis into strategic insight. But what if your job became designing systems where AI handles the synthesis while you focus on the strategic judgment that emerges from that synthesis?


That shift requires unbecoming the heroic individual contributor and becoming the strategic system designer. Same core competencies, fundamentally different identity.


The RoboSub Revelation

Last week, I watched my daughter's robotics team at the international RoboSub competition. What struck me wasn't just their technical capability—it was their systematic approach to discarding what didn't work.


Every pool test generated documented insights about what to eliminate. Every sensor calibration identified assumptions to release. Every failed run produced clarity about approaches to abandon.


These teenagers succeeded by systematically unbecoming the team they were at the beginning of the year.


They didn't just accumulate knowledge about underwater robotics. They developed institutional discipline around releasing attachment to ideas, approaches, and even technical solutions that seemed promising but proved suboptimal under real conditions.


Most product teams facing AI transformation could learn from this approach. Instead of asking "What AI capabilities should we add?" start asking "What assumptions about human cognitive advantage should we systematically examine and potentially release?"


The Four Levels of Unbecoming

Based on my work with product leaders navigating AI transformation, there are four distinct levels where deliberate unbecoming creates competitive advantage:


Level 1: Task-Level Unbecoming
Release attachment to manually performing cognitive work that AI can handle more effectively. Stop personally writing first drafts of everything. Stop manually analyzing patterns in data sets. Stop crafting every stakeholder communication from scratch.


Level 2: Process-Level Unbecoming
Release attachment to workflows designed around human cognitive bottlenecks. Stop designing processes that assume one person must synthesize all inputs. Stop creating decision frameworks that require manual pattern recognition across complex data sets.


Level 3: Role-Level Unbecoming
Release attachment to being the person who knows everything and makes every decision. Start becoming the person who designs systems for better decision-making. Stop being the individual contributor who produces insights; become the strategic architect who orchestrates insight generation.


Level 4: Identity-Level Unbecoming
Release attachment to being irreplaceable through individual cognitive performance. Start deriving value through strategic judgment, creative problem-framing, and the orchestration of human and artificial intelligence toward better outcomes.


Most teams get stuck at Level 1, treating AI transformation like an efficiency upgrade instead of an identity evolution.


The Practical Paradox

As I've helped teams navigate this transition, I've learned that the people most capable of leveraging AI transformation are often the most resistant to it, because they have the most to unbecome.


Your best individual contributors—the ones who built their careers on being the smartest person in the room, the ones who pride themselves on synthesizing complex information faster than anyone else—these are precisely the people who need to undergo the most profound identity evolution.


It's not about becoming less capable. It's about becoming capable in fundamentally different ways.


The senior PM who spent twenty years mastering stakeholder synthesis doesn't need to stop understanding people. He needs to stop believing that understanding people requires manually processing every piece of stakeholder data. AI can handle the pattern recognition; his job becomes the strategic interpretation of those patterns.


But that shift requires unbecoming the person who derives value from superior information processing speed and becoming the person who derives value from superior strategic judgment applied to AI-enhanced information quality.


Your Elul Assignment

This week, as Elul begins, practice the discipline of deliberate examination—not just of what you want to add, but of what you might need to release.


Ask yourself these questions:

  • What mental model about your value as a product leader assumes that AI can't or shouldn't handle certain types of cognitive work?

  • What workflow or process are you protecting because it makes you feel irreplaceable, even though it might be limiting your strategic impact?

  • What aspect of your professional identity feels threatened by AI capabilities, and how might releasing attachment to that identity actually expand your influence?

  • Where are you spending cognitive energy on work that could be systematically handled by AI, freeing you to focus on judgment, creativity, and strategic orchestration that genuinely requires human intelligence?


The goal isn't to diminish your capabilities. It's to examine whether your current identity as a product leader is limiting your potential effectiveness in an AI-enhanced world.


Because here's a truth most transformation efforts never reach: The future belongs not to people who accumulate the most AI tools, but to those with the courage to systematically unbecome the versions of themselves that can't leverage those tools strategically.


That's the real work of transformation. And it starts with the honesty to examine what you might need to let go.


But if we're going to release old patterns, what do we build in their place? The answer lies in understanding adaptation itself as a competitive capability—which is exactly what I've been learning from an unexpected source. More on that next week.


Break a Pencil,

Michael


P.S. If you're ready to build systematic AI capabilities that start with an honest assessment of which mental models need to evolve, my next private cohort creates exactly the safe space teams need for this kind of identity-level transformation. This isn't about learning tools—it's about evolving the strategic thinking that makes tools effective. [Learn more here.]


P.P.S. The month of Elul teaches that honest self-examination isn't punishment—it's preparation for becoming who you're meant to be. The same applies to AI transformation. The discomfort you feel about releasing old patterns isn't resistance to growth; it's the signal that real transformation is finally possible.
