
What Kind of Year Has It Been? (Part 2 of 2)

  • mbhirsch
  • Dec 15
  • 6 min read

The Path Forward

Last week I laid out three hidden costs of AI adoption that emerged in 2025: architectural limitations we can't engineer away, cognitive capabilities we're quietly losing, and organizational judgment we're dismantling just when we need it most.


The question I left you hanging with was: what do we do about them?


The answer starts with understanding why this is harder than everyone expected.



Why Transformation Is Harder Than Expected

AI transformation isn't a technology problem. It's a psychology problem.


I saw this most clearly through Mike, a product leader who changed companies mid-year. His previous organization had been skeptical of AI. Before some leadership changes, AI was seen as a competitor, something that undermined their value as custom software developers. They've since come around, but the cultural shift took deliberate work.


His first day at the new company, he got pulled into a leadership meeting about AI strategy. Their big bet through 2030. Agentic platforms. AI pods replacing traditional engineering teams. A completely different starting point.


Same person. Same month. Radically different contexts.


This isn't unusual. I've seen it repeatedly this year—organizations at completely different points on the AI adoption spectrum, often in the same industry, sometimes competing for the same customers. The technology is identical. The organizational responses are wildly divergent.


The variance isn't explained by resources, technical sophistication, or even leadership vision. It's explained by psychology. Fear disguised as skepticism. Resistance disguised as "healthy caution." Identity threats masquerading as strategic concerns.


Most organizations treat this as a training problem. Run more workshops. Share more tips. Hope adoption spreads.


But it's not a training problem. It's a configuration problem.


On a call with product managers last week, Jen described a situation that's becoming common: her company bought AI tool licenses, mandated their use, and now the PMO demands artifacts from every project—not because anyone will read them, but because AI makes them fast to produce. Engineers refuse to look at the outputs. Leadership never consumes them. But the dashboard shows adoption metrics, so the initiative is "succeeding."


I asked her a question: when your company implemented Salesforce, did they just hand out licenses and tell salespeople to figure it out? Of course not. There was configuration, training, professional services. The tool was adapted to their specific sales process before anyone was expected to use it.


AI gets treated differently. Organizations that would never deploy Salesforce without configuration are handing out ChatGPT licenses and wondering why adoption stalls at 15% or produces garbage outputs that waste everyone's time.


The companies struggling with AI adoption haven't failed to teach people how to use chatbots. They've failed to build the organizational infrastructure that makes AI actually useful in their specific context. People aren't resistant because they don't understand the tools. They're resistant because using the tools within their current environment doesn't produce results worth the effort.

"People aren't resistant because they don't understand the tools. They're resistant because using the tools within their current environment doesn't produce results worth the effort."

What Organizations Should Actually Build

Most AI committees spend their time on governance. Which tools to approve. What policies to enforce. How to measure adoption rates.


These aren't wrong. They're insufficient.


The real work is building what I call the Capability Core—the configuration layer of context, workflows, and learning systems that makes AI useful for your specific business.


Three components:


Context Engineering. What information would make AI 10x more useful for your people? Not generic training data—your product strategy, your customer segments, your competitive dynamics, your decision history. The institutional knowledge that experienced employees carry in their heads. Until you build this layer explicitly, every AI interaction starts from zero; there's a rough sketch of what this layer might look like after the list.


Workflow Discovery and Validation. Your organization already has people who've figured out systematic AI workflows. Maybe 10-15% of users. They're not magical—they've accidentally built what works in your environment. Your job is finding them, extracting what they built, validating it works for others, then scaling deliberately. This is fundamentally different from lunch-and-learns where people share tips and nothing changes.


Learning System Architecture. Not "everyone should share AI wins in Slack." Rather: what's your systematic process for continuously identifying working workflows, extracting what makes them work, validating transferability, packaging for deployment, and feeding learning back into refinement? Without this, you're dependent on accidental discovery.
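
To make the Context Engineering piece concrete, here's a rough sketch of what a first version of that layer can look like, assuming the institutional knowledge lives in a handful of plain-text documents someone actually maintains. The file names and the assemble_context function are mine, for illustration; the point is that the foundation gets written down once and reused in every interaction.

```python
# A minimal sketch of a context layer, assuming institutional knowledge lives in
# a few plain-text documents that get maintained and versioned. The file names
# and the assemble_context function are illustrative, not a prescribed format.
from pathlib import Path

# Hypothetical documents an organization (or one product leader) might keep current.
CONTEXT_FILES = [
    "context/product_strategy.md",
    "context/customer_segments.md",
    "context/competitive_landscape.md",
    "context/decision_history.md",
]

def assemble_context(base_dir: str = ".") -> str:
    """Concatenate the context documents into one block that gets prepended to
    every AI conversation, so interactions no longer start from zero."""
    sections = []
    for rel_path in CONTEXT_FILES:
        path = Path(base_dir) / rel_path
        if path.exists():
            title = path.stem.replace("_", " ").title()
            sections.append(f"## {title}\n{path.read_text()}")
    return "\n\n".join(sections)

if __name__ == "__main__":
    print(assemble_context())
```

The mechanics matter far less than the decision to write the context down and keep it current.
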


Notice what's missing: tool selection, access policies, adoption metrics. Those are necessary governance tasks, but they are not where the value comes from.


AI committees think their job is governance. It's not. Their job is configuration—building the organizational infrastructure that makes AI capable of producing useful results in your specific context.

"AI committees think their job is governance. It's not. Their job is configuration."

The alternative is productivity theater: measuring whether people used the tool rather than whether the tool produced value. Dashboards full of adoption metrics. Artifacts created because AI made them fast, not because anyone needed them. The appearance of transformation without the substance.


The companies that figure this out will spend 2026 compounding capability. The ones that don't will keep celebrating adoption rates while their people quietly resent being asked to create deliverables no one reads.


What Individuals Should Do

You don't have to wait for your organization to build the Capability Core. You can build a personal version.


But that requires making better decisions about what to delegate and what to protect. The product leaders I've seen succeed this year share a common discipline: systematic thinking about when AI helps and when it hurts.


I've distilled this into four questions, with a rough sketch of how they fit together after the list:


Does human effort create leverage on the outcome? If improving this task by 20% won't change what happens next, let AI handle it. Weekly status reports? Delegate. Stakeholder communication during a crisis? Keep it human.


What's the upside and downside of getting it wrong? High stakes require human judgment. Low stakes should be automated. Board presentations need your attention. Internal process documentation doesn't.


Can humans actually improve the outcome by 20% or more? Even if quality would matter, can you actually deliver that improvement? AI has the advantage in pattern analysis across large datasets. You have the advantage in interpreting emotional nuance and reading political dynamics.


How much cognitive energy does human improvement require? Is this the best use of finite cognitive capacity? Protect your cognitive energy for strategy, conflict resolution, negotiation, creative problem-solving. Spend AI energy on routine communications, data formatting, documentation.
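
If it helps to see the four questions as a single decision rule, here's a rough sketch in code. The ordering and the yes/no framing are my own reading of the framework, not a formula; it's a way to be honest with yourself about a task, not something to automate.

```python
from dataclasses import dataclass

@dataclass
class Task:
    """One piece of work you're deciding whether to hand to AI."""
    name: str
    effort_has_leverage: bool        # Q1: would a ~20% improvement change what happens next?
    high_stakes: bool                # Q2: is the downside of getting it wrong significant?
    human_edge_is_real: bool         # Q3: can a human actually deliver that 20%+ improvement?
    cognitive_cost_is_high: bool     # Q4: would doing it well drain scarce cognitive energy?

def keep_human(task: Task) -> bool:
    """Walk the four questions in order; the ordering is one reading of the
    framework, not a formula from it."""
    if not task.effort_has_leverage:
        return False                 # Q1: no leverage on the outcome -> delegate
    if task.high_stakes:
        return True                  # Q2: high stakes -> human judgment
    if not task.human_edge_is_real:
        return False                 # Q3: no real human edge -> delegate
    return not task.cognitive_cost_is_high  # Q4: protect finite cognitive energy

# The example tasks mirror the ones above.
status_report = Task("weekly status report", False, False, False, False)
crisis_comms = Task("crisis stakeholder communication", True, True, True, True)
print(keep_human(status_report))  # False -> delegate
print(keep_human(crisis_comms))   # True  -> keep it human
```

The example tasks mirror the ones above: the weekly status report gets delegated, crisis stakeholder communication stays human.
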


Strategic cognitive resistance means deliberately choosing cognitive friction over cognitive efficiency in select areas. Not because you're a Luddite—because you understand that some struggles build capability while others just consume time.


Knowing when to delegate is only half the challenge. The other half is using AI effectively—which most people don't. Three principles separate productive AI collaboration from frustrating one-off attempts, with a sketch of all three in practice below:


First, provide sufficient context. The same prompt produces wildly different results depending on what you've told AI about your situation, constraints, and goals. This is the individual version of Context Engineering—building a foundation that makes outputs useful.


Second, think in personas. AI can be a thought partner, a devil's advocate, an editor, a researcher, or a first-draft generator. Choosing the right role for the task shapes what you get back.


Third, work in conversation. One prompt, one output, done—that's how most people use AI, and it's why they get mediocre results. The value emerges through iteration: refining, challenging, redirecting. The people capturing disproportionate value treat AI as a collaboration, not a vending machine.

"The people capturing disproportionate value treat AI as a collaboration, not a vending machine."

That's the discipline separating product leaders who are becoming more valuable from those becoming more replaceable. Not how much AI they use. What they use it for, and what they deliberately don't.


Looking Ahead

So what kind of year has it been? A year where the real work finally came into focus—not the technology, but the psychology, the configuration, the discipline. The year we stopped asking "should we adopt AI?" and started asking the harder question: how do we do this without destroying what makes us valuable?


2025 was the year the hidden costs became visible. 2026 will be the year we find out who learned from them.


The organizations that succeed won't be the ones with the most AI features or the highest adoption metrics. They'll be the ones that built the judgment infrastructure to use AI strategically—the Capability Core that turns tool access into actual capability.


The individuals who succeed won't be the ones using AI for everything. They'll be the ones who developed systematic frameworks for delegation decisions, protected the cognitive work that builds their strategic capacity, and positioned themselves as the people who understand both what AI can do and what it shouldn't.


I'm still wrestling with where the lines are. How much cognitive struggle is necessary for development versus how much is just inefficient habit. When organizational resistance is genuinely strategic versus when it's fear wearing a reasonable mask. How to help teams build capability infrastructure when their leadership is still focused on adoption metrics.


These questions will drive the work in 2026. More frameworks to test. More organizations to observe. More attempts to separate what's actually working from what just generates impressive demos.


If you've been reading along this year, thanks for thinking through this with me. The conversation is getting sharper because you're part of it.




The Four-Question Framework for delegation decisions is available as a downloadable reference—send me an email (michael@breakapencil.com) if you'd like a copy.


If this resonated, share this with a product leader who's navigating the same questions. The year-end reflection is more useful when you're not thinking through it alone.


Break a Pencil,

Michael