Your AI Adoption Is Working. That's Not Enough.

A few weeks ago I was talking to Caroline, a leadership coach who works with senior executives on decision-making under uncertainty. She's spent nearly two decades helping leaders make high-stakes calls with incomplete information.


We were talking about AI, and she told me: "I don't know what to invest my time in because I don't know what's going to be around in a month."


This is someone who coaches CEOs through ambiguity for a living. And she's doing what a lot of smart leaders are doing right now: nothing. Not because she's lazy, but because she can't figure out where to place the bet.


She's not alone. Anthropic recently published findings from 81,000 AI users across 159 countries. The one I keep coming back to is that hope and alarm about AI don't divide people into opposing camps. They coexist within the same person, often in the same week. That's not confusion; it's a rational response to a landscape that keeps shifting underfoot.


And the conventional response is making it worse.



There's a widespread assumption driving most AI adoption strategies right now: if you get individuals using AI, organizational capability will follow. It sounds reasonable, but it's wrong.


Ethan Mollick, arguably the most thoughtful voice on AI adoption, tells individuals to pick a model, pay $20 a month, and spend 8-10 hours mapping what he calls the "jagged frontier" of their own work. Figure out where AI helps and where it doesn't. That's excellent advice for an individual. Some organizations have gone further: approving tools, setting up governance, creating training materials so people can learn at their own pace. That's all genuinely smart work.


But none of it solves the organizational problem. Five hundred people experiment independently. Some get excited, some get frustrated, some use it daily, and some forget about it after a week. None of them share what they learned with anyone else. Mollick himself has cited research suggesting roughly half of American workers are already using AI but hiding it from their employers. Why reveal efficiency when it might cost you your job or your easy workload?


So you end up with 500 AI licenses and maybe 75 people genuinely getting value from them. That's not a failure of adoption. That's actually a decent hit rate for any new tool. The problem is that those 75 people are each solving their own problems in their own way, and none of that learning is visible to anyone else. The company is measuring adoption by seat count while the real value is invisible, unstructured, and locked inside individual heads.


Most of these companies were already hesitant because they weren't sure AI would deliver a return. So they hedged, kept it low-commitment, and let people figure it out on their own. Then someone asks for the ROI on those 500 licenses. The answer is there isn't one. Not because individuals aren't benefiting, but because individual benefit was never going to produce an organizational result. The cautious approach created the very outcome it was trying to avoid, and that outcome becomes the justification for more caution.


Individual adoption isn't an organizational strategy. It's an expensive way to stand still.


The organizations I've seen make real progress did something counterintuitive. They stopped trying to move everyone at once.


I worked with a company that's a Google shop. Every employee already had access to Gemini and the platform decision was made, but adoption was still meager. Access wasn't the bottleneck. Direction was.


One team reached out looking for direction and support. We started with something specific: how their existing workflows could actually use AI, where the judgment calls were, what "good output" looked like for their particular work. Within weeks, they'd moved from sporadic individual experimentation to a shared understanding of where AI genuinely helped and where it didn't. Then another team saw what was happening and wanted the same.


That's the pattern. Not a company-wide rollout or a mandate. One team builds real capability, other teams see it, and momentum builds from proof rather than from a memo.


Once a team genuinely understands how to work with an AI platform, how to frame problems for it, how to evaluate its output, how to weave it into existing processes, that understanding becomes visible to the rest of the organization. Other teams can see what "good" looks like. They have someone to ask. That's how capability spreads. Not from the surface-level familiarity that comes from giving everyone access and hoping for the best.


AI adoption isn't a rollout. It's a capability. And capabilities are built through depth and concentration, not broad access.



There's a problem upstream of all of this. Many of the leaders making organizational AI decisions haven't used it themselves for anything meaningful. They're evaluating proposals and approving budgets for something they've never personally experienced.


A colleague of mine, Geraldine, did pick it up. She'd been struggling with meal planning, one of those mundane problems that eats cognitive energy without ever feeling important enough to actually solve. She started a conversation with Claude, not to get a meal plan, but to figure out what was actually making it hard. That conversation surfaced the real constraint, which wasn't what she assumed. Within weeks she had a system that worked. Six weeks in, she was eating healthier and actually sticking to it. Not because AI is a nutritionist, but because it helped her think through the problem clearly enough to act.


Plan a trip. Organize a renovation. Work through a decision you've been avoiding. The task doesn't matter. What matters is developing your own feel for when AI adds real value versus when it's just generating plausible filler.


That feel changes everything about how you lead the organizational conversation. You stop asking "should we adopt AI?" and start asking "where specifically would this make us better?" You can't get there from a vendor demo or a McKinsey deck. You get there from using it yourself.



Caroline's paralysis is rational. The landscape is genuinely uncertain, and nobody has a proven playbook. Not the consultants, not the analysts, and not the AI companies themselves.


But paralysis compounds. Every month you wait for clarity is a month your people aren't building capability. The dust isn't going to settle. You have to move anyway.


Start personally. Then deliberately. Then organizationally.


The companies that pull ahead won't be the ones who picked the right platform. They'll be the ones who picked a platform, built real depth with a focused team, and turned individual learning into organizational capability before their competitors stopped deliberating.


Pick one. Go deep. Build the muscle.


The cost of choosing wrong is real, but it's recoverable. The cost of choosing nothing is falling behind while telling yourself you're being responsible.


I'm curious: if you're leading a team or an organization, where are you in this progression? Still in personal exploration? Trying to get a team engaged? Somewhere else entirely? Hit reply and tell me. I'm collecting these stories because I think the patterns are more universal than most people assume.


Break a Pencil,

Michael


PS: If this resonated, forward it to someone who's wrestling with the same question. This isn't just a tech problem. And if your team is stuck somewhere in this cycle, I'm always happy to think through it together.
