
Why 85% of Your Team Isn't Using the AI You Bought Them

  • mbhirsch
  • Nov 17
  • 6 min read

(And Why More Training Won't Fix It)


My daughter's robotics team has a constraint most billion-dollar product organizations don't recognize as a competitive advantage: they can't afford to waste a single experiment.


With a budget that wouldn't cover one engineer's monthly AWS bill, they've mastered something that resource abundance obscures. When you can't give every team member unlimited parts and hope something emerges, you're forced to build systematic capability in a small core that learns deliberately, extracts transferable patterns, and expands only after proving what actually works.


Resource constraints don't limit capability development - they force the discipline that makes it possible.


Meanwhile, companies with seemingly unlimited budgets buy 500 AI seats, measure "success" by how many people have access, and wonder why usage flatlines at 15% six months later. Their abundance has blinded them to the most effective approach.


We've seen this movie before. The ending isn't pretty.



The ERP Disaster You've Forgotten (And Are Currently Repeating)

In the late 1990s and early 2000s, companies spent tens of millions on SAP and Oracle implementations. The playbook: buy enterprise licenses, deploy infrastructure, train everyone, wait for productivity gains.


What happened:

  • Adoption cratered within months

  • Power users couldn't transfer their knowledge

  • Everyone else built workarounds (hello, shadow spreadsheets)


The companies that succeeded built "centers of excellence" first - small teams developing deep capability, extracting repeatable patterns, then deliberately expanding through systematic transfer.


We're making the same mistake with AI, just with fancier technology. Whether companies build private "safe" AI instances or buy enterprise licenses to cutting-edge models, the failure mode is identical: treating organizational capability as something that emerges spontaneously from tool access.


The Capability Core Principle

Here's what my daughter's robotics team understands that most enterprises don't:


Resource constraints force systematic capability development. Abundant resources enable expensive chaos.


When you can't afford wasted experiments, you develop systematic learning extraction, deliberate skill transfer, shared organizational knowledge, and accountability for building capability.


Enterprise AI adoption follows the opposite logic. Companies think: "We have resources, so let's give everyone access and let experimentation flourish."


What actually happens: A thousand individual experiments producing zero transferable knowledge and a CFO asking why you're paying for 500 seats when 75 people actually use them.


"Resource constraints force systematic capability development. Abundant resources enable expensive chaos."

Why Access ≠ Capability

Companies are confusing two fundamentally different challenges:


Access provisioning: Give everyone tools, measure adoption rates

Capability development: Build systematic learning, extract transferable patterns, expand deliberately


Slack succeeded through access provisioning. Give people access, adoption spreads naturally, organizational value emerges from network effects.


AI doesn't work this way. It's a capability multiplier, not a communication tool. Capabilities don't diffuse organically - they're built systematically in high-functioning cores, then transferred through deliberate structure.


Companies are treating AI adoption like a Slack deployment when they should be treating it like an ERP transformation or a Six Sigma implementation - methodologies that required capability cores, systematic learning extraction, and deliberate transfer before they created organizational value.


What's Actually Happening in the Market

I had conversations last week with a company building AI infrastructure for mid-market firms. They're seeing a pattern that should concern every executive who just bought 500 AI licenses.


Some customers deploy successfully. Others struggle post-sale despite identical technology.


The difference isn't the tools. It's whether they built organizational structure first.


Successful customers arrive "super well organized, very tightly aligned as a leadership group. They've got an AI council. There's organization around AI." They built the capability development infrastructure - governance, learning extraction, transfer processes - before deploying tools.


Failed customers do it backwards. They buy tools first, then realize they have no structure to actually build capability. The infrastructure company often has to "wrangle" executives post-sale, essentially forcing them to create the AI committee and organizational processes they should have built upfront.


Some customers even pause after signing contracts: "We're almost a little too far ahead. We need to just get organized first."


Translation: We bought infrastructure hoping organizational capability would emerge spontaneously. It didn't.


The companies that succeed don't need post-deployment rescue missions. They built the structure for systematic capability development before anyone touched the tools.


"You don't have 500 AI users - you have 75 experimenters producing nothing transferable."

What To Do If You Already Bought 500 Seats

Fortunately, if you're already paying for 500 AI seats with 15% usage, there's a path forward. The answer isn't switching pricing models or running more "lunch and learn" sessions. It's acknowledging you don't have 500 AI users - you have 75 experimenters producing nothing transferable.


Here's how to fix it:


Step 1: Identify Your Actual Capability Core (Not Your Most Enthusiastic Users)

Look for people who demonstrate:

  • Consistent usage with improving outcomes - not just high volume, but measurable progress over time

  • Ability to articulate why their approaches work - they can explain their reasoning, not just show results

  • Willingness to share learning systematically - they document approaches, not hoard clever tricks

  • Genuine workflow integration - AI is solving real problems, not just exploration


This might be 10-20 people across your organization. Good. That's your starting point, not a failure metric.
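

Of those four signals, only the first one can be screened for in usage data before the human conversations start. As a minimal sketch, assuming you can assign each person a weekly outcome score (the hard, organization-specific part) - the function name, fields, and 12-week minimum here are illustrative assumptions, not any particular tool's API:

```python
# A first screening pass over usage logs: keep people whose outcomes
# are trending up, not just people with high volume. The scoring and
# the 12-week minimum are illustrative assumptions.
from statistics import linear_regression  # Python 3.10+

def capability_core_candidates(scores_by_user: dict[str, list[float]],
                               min_weeks: int = 12) -> list[str]:
    """Return users with enough history and an improving outcome trend."""
    candidates = []
    for user, weekly_scores in scores_by_user.items():
        if len(weekly_scores) < min_weeks:
            continue  # too little history to call the usage "consistent"
        weeks = list(range(len(weekly_scores)))
        slope, _ = linear_regression(weeks, weekly_scores)
        if slope > 0:  # improving over time, regardless of raw volume
            candidates.append(user)
    return candidates
```

The other three signals - reasoning, sharing, integration - only show up in conversation, which is why a pass like this produces a list of people to interview, not a final roster.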


Step 2: Formalize Learning Extraction

Most companies assume that "successful people" automatically add up to a "successful organization." They don't.


Create structure for your capability core to:

  • Document what they're learning, not just what they're building

  • Identify which patterns transfer across roles vs. which are context-specific

  • Articulate capability gaps preventing others from succeeding

  • Build a library of proven approaches, not a showcase of individual projects


Think like my daughter's robotics team: when something works, it becomes team knowledge with transferable principles, not "that cool thing Jamie built."
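

To make "library of proven approaches" concrete, here's a minimal sketch of what one entry might capture. The field names are assumptions, not a prescribed schema; the point is what gets recorded - reasoning and transferability, not just the finished artifact:

```python
# One possible shape for a pattern-library entry. The fields are
# illustrative; what matters is recording why an approach works
# and where it transfers, not just what someone built.
from dataclasses import dataclass, field

@dataclass
class PatternEntry:
    name: str
    problem_solved: str                 # the workflow problem, in plain language
    why_it_works: str                   # the reasoning others need to adapt it
    transfers_to: list[str] = field(default_factory=list)    # roles or contexts
    context_specific: str = ""          # what does NOT transfer, and why
    capability_gaps: list[str] = field(default_factory=list) # blockers for others
    contributed_by: str = ""            # credit the person; the knowledge is the team's
```

The why_it_works and context_specific fields are the difference between team knowledge and "that cool thing Jamie built."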


Step 3: Identify and Empower AI Champions

Your capability core includes people building genuine AI capability. But not all of them can transfer that capability to others effectively.


You need AI Champions - the subset who can:

  • Develop their own capability at a high level

  • Articulate the reasoning behind their approaches

  • Translate tacit knowledge into frameworks others can apply


That third skill is rarer than you think. Being brilliant at using AI is fundamentally different from being able to teach others how to use it well. The person extracting the most value from AI often can't explain their decision-making process in ways that help others replicate their success.


Find the people who can do both. Give them time and organizational authority to build capability in others, not just execute their own work more efficiently.


Step 4: Expand Through Capability Transfer, Not Access Diffusion

Don't open the floodgates or announce "AI for everyone, round two!"


Deliberately add teams who can apply extracted patterns to their workflows, contribute back to the learning system, and develop additional capability that compounds organizational knowledge.


The robotics team doesn't let every new kid just start building. They teach proven approaches first, then encourage innovation within that foundation.


Step 5: Let Usage Metrics Become Diagnostic, Not Aspirational

Now your usage-based pricing (if you switched) or seat utilization (if you didn't) becomes a health indicator:

  • Steady usage growth? You're building capability systematically

  • Flat usage despite "more training"? You're still confusing access with capability

  • High usage but no improvement in outcomes? You're measuring activity, not learning


The goal isn't 100% adoption. The goal is systematic capability development that creates measurable organizational value.
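

If it helps to see that diagnostic logic spelled out, here's a minimal sketch, assuming you track active users and some outcome measure per quarter - the field names and the 20% growth threshold are hypothetical, so substitute whatever your organization actually measures:

```python
# The three diagnostic patterns from the list above, as code.
from dataclasses import dataclass

@dataclass
class QuarterlySnapshot:
    active_users: int      # people with consistent weekly usage
    outcome_score: float   # your measure of delivered value, not activity

def diagnose(history: list[QuarterlySnapshot]) -> str:
    first, last = history[0], history[-1]
    usage_growing = last.active_users > first.active_users * 1.2
    outcomes_improving = last.outcome_score > first.outcome_score

    if usage_growing and outcomes_improving:
        return "Building capability systematically"
    if not usage_growing:
        return "Still confusing access with capability"
    return "Measuring activity, not learning"

# Example: usage nearly doubles, outcomes stay flat.
print(diagnose([QuarterlySnapshot(75, 1.0), QuarterlySnapshot(140, 1.0)]))
# -> "Measuring activity, not learning"
```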


Why This Matters Right Now

AI is moving from premium feature to baseline expectation. If AI capability is becoming the baseline, you can't meet that expectation while effective use stays confined to a small minority.


"If only 10-20% of your team can use AI effectively, you don't have an AI-capable organization. You have a few AI-capable individuals and an expensive support system."

The companies that recognize this aren't asking "should we switch to usage-based pricing?" or "should we run more training?"


They're asking: "How do we build the organizational structure for systematic capability development that justifies the investment we've already made?"


The Resource Paradox

My daughter's robotics team would fail instantly with unlimited resources. The abundance would destroy the discipline that makes them effective.


Your organization has the opposite problem: abundant resources but no discipline for systematic capability development.


The solution isn't constraining resources. It's building the organizational structure that resource-constrained teams develop by necessity: small capability cores with genuine learning discipline, systematic extraction of transferable patterns, deliberate expansion based on proven approaches, and accountability for building organizational knowledge.


You can't buy this with 500 AI seats. You build it deliberately, starting with far fewer people than your executive team wants to hear about.


But that's how capability actually scales - through systematic development in high-functioning cores, then deliberate transfer through people who can teach what they've learned.


The robotics team figured this out with a $3,000 budget and a bunch of teenagers.


How much longer will it take your organization?



If this resonated, forward it to a leader struggling with AI adoption. The most valuable conversation you can have isn't about which tools to buy - it's about the organizational structure needed before anyone touches them.


Ready to build systematic AI capability in your team? Most companies already own the infrastructure - they just need the organizational structure to actually use it. Schedule time to discuss your team's capability development approach.



Break a Pencil,

Michael

 
 
 
