
The AI Leadership Bargain

  • Jan 12
  • 5 min read

Hey there,


Marc Zao-Sanders has spent two years analyzing tens of thousands of Reddit posts to understand how people actually use AI. The top use cases? Therapy. Navigating divorce. Rewriting angry texts to in-laws. Meal planning.


His conclusion for enterprise leaders is "don't panic." Personal AI use builds fluency that eventually benefits the business.



I think he's half right. And the half he's missing is costing organizations more than they realize.


Here's what complicates his argument. A recent MIT Technology Review / Infosys study found that 22% of business leaders have hesitated to lead an AI project because they feared being blamed if it failed. Not individual contributors hiding ChatGPT usage. Leaders. The people supposedly driving AI transformation are afraid to stick their necks out.


The study found that 73% of respondents feel safe to provide honest feedback at work. That sounds encouraging until you see the next number. Only 39% rate their organization's psychological safety as "very high." Another 48% say it's "moderate." That's a lot of organizations pursuing AI adoption on cultural foundations that aren't fully stable.


We've been told the bottleneck to enterprise AI adoption is fluency. People just need to get comfortable with the tools.


But people are comfortable with the tools. At home. In private. With the screen angled away from the door.


The bottleneck isn't capability. It's exposure.



I informally polled some friends this week about their AI usage. The responses were revealing, not for what tools they use, but for how differently they've integrated AI into their thinking.


Tom uses AI 5-7 times per day. His favorite prompt is "What questions about this topic have I missed, and what are opposing viewpoints to my perspective?" That's not "better Google" behavior. That's someone who's figured out how to stay in the loop, using AI to challenge his thinking rather than replace it.


Mike uses ChatGPT once or twice a week for research. He added, almost wistfully: "Thinking of redoing my whole productivity system as AI native."


Same tools. Same general fluency. Completely different relationships with AI.


The difference isn't skill. It's something harder to name: a combination of permission, integration, and judgment that personal use alone doesn't build.



Here's what I think product leaders are getting wrong.


Marc's advice to enterprises is essentially to give people space to experiment. Don't be a "token pincher." Let personal wins become organic pipelines for professional discovery.


That's the give side of the equation. And it's necessary. But it's not sufficient.


What's missing is the expect side. And without both sides, you don't get true AI fluency.


The AI Leadership Bargain looks like this.


Give psychological safety. Make it safe to use AI visibly. To be seen in process. To think out loud with a machine in front of colleagues. To fail publicly while learning. Kill what I've started calling "alt-tab culture," the reflexive hiding of AI usage when someone walks by.


Expect judgment. Make it unacceptable to forward AI output you can't defend. If someone on your team sends a strategy document and gets asked "Why did you recommend this approach?" the answer cannot be "Claude said so." Not because AI is unreliable. Because the human left the loop.


The skill that matters isn't how to use AI. It's how to stay in the loop while AI does the work.



Most organizations get this bargain backwards.


Some give licenses without giving permission. They'll pay for enterprise ChatGPT seats while maintaining a culture where visible AI use signals incompetence. The tools are available; the safety isn't. Result: shadow AI. People experiment in private, never share learnings, and organizational capability never compounds.


Others give permission without expecting ownership. They celebrate AI adoption metrics (prompts per user, sessions per week) without asking whether the outputs are any good. Someone forwards an AI-drafted customer email that misses crucial context? Well, at least they're using the tools. Result: a workforce that's increasingly fluent at generating content they can't defend.


Both failure modes look like progress from a distance. Neither builds real capability.



The product leader's responsibility is to model this bargain, not just mandate it.


That means using AI visibly yourself. Thinking out loud with your team about what you're prompting and why. Sharing when AI got it wrong and how you caught it. Demonstrating what "staying in the loop" actually looks like in practice.


It also means holding the line on judgment. When someone presents AI-assisted analysis, ask follow-up questions. Not to catch them out, but to reinforce that the human's job is to own the output. The question "What did you change from what AI gave you?" should become as routine as "What's your confidence level on this estimate?"


This isn't micromanagement. It's modeling the relationship with AI that you want your team to develop.



One more thing for product leaders specifically:


This isn't just about team management. It's about product intuition.


If you can't figure out what healthy human-AI collaboration looks like on your own team, you can't design it into your products. Every product leader is now, whether they realize it or not, designing human-AI interactions. The judgment you develop internally—when AI adds value, when it misleads, when humans need to stay in the loop—directly informs the AI features you'll build for customers.


The leaders who get the bargain right won't just have more capable teams. They'll have better instincts for what AI-native products should actually feel like.


And the leaders who skip straight to "give everyone Copilot and hope for the best" will keep wondering why adoption metrics look great while actual value remains elusive.



The bargain, one more time.


Give: Safety to use AI visibly, to be seen learning, to fail in public.


Expect: Ownership of output. Judgment about when AI helps and when it misleads. Humans who stay in the loop.


Personal AI use is fine. It's just not the pipeline to professional capability that we've been promised. That pipeline requires something more deliberate: a trade between leaders and teams that most organizations haven't explicitly made.


Marc is right that we shouldn't panic about personal AI use. But we should stop assuming it solves the enterprise problem. It doesn't. The enterprise problem is a leadership problem.


And leadership problems require leadership solutions.


Break a Pencil,

Michael



P.S. Helping product teams navigate this bargain—building psychological safety while establishing judgment expectations—is exactly what I do. If your organization is stuck in "everyone has licenses but nothing's changing" mode, let's talk.


P.P.S. Know a product leader who's wrestling with AI adoption? Forward this to them. Sometimes naming the problem is the first step to solving it.



Sources:

  1. Marc Zao-Sanders, "Personal AI use cases are good for business," Section AI: https://www.sectionai.com/blog/personal-ai-use-cases-are-good-for-business

  2. "Creating psychological safety in the AI era," MIT Technology Review / Infosys: https://www.technologyreview.com/2025/12/16/1125899/creating-psychological-safety-in-the-ai-era


1 Comment


sjohnson717
Jan 12

Excellent post, my friend. Another technique I'm pleased to see: some of my clients are having weekly lunch-and-learns. Each week, people demonstrate what they've done with AI, including the prompts or code they used, so everyone can benefit from learning.


I've seen this approach extended to other aspects of product management. It's amazing how little product managers share with one another.
