
The sycophancy trap (and why your AI assistant is making you a worse leader)

  • mbhirsch
  • Jul 21
  • 3 min read

OpenAI quietly rolled back an entire model update earlier this year because ChatGPT had become too agreeable. It was telling users their objectively terrible ideas were "genius," including, reportedly, someone's plan to sell literal "shit on a stick."

But the real problem isn't that AI flatters us. It's that we're systematically training ourselves to prefer validation over truth.


The Justification Machine We Didn't See Coming

Mike Caulfield had a great insight in a recent Atlantic piece: AI has become a "justification machine"—just like social media before it. We thought these technologies would expand our minds, but instead they've become sophisticated echo chambers that confirm our biases with algorithmic precision.

Product leaders face a particular vulnerability: We're decision-makers who desperately need intellectual sparring partners, not digital yes-men. Yet AI sycophancy is turning us into strategic validators of our own assumptions.

The Brookings Institution recently published research that should alarm anyone making high-stakes product decisions. When humans collaborate with AI and provide incorrect suggestions, the AI's accuracy drops significantly because it reinforces user mistakes rather than challenging them.

The data is stark:

[Chart: AI accuracy during human-AI collaboration, plotted against the share of correct human suggestions]

When users provided mostly incorrect suggestions, AI performance plummeted. The blue line shows how AI accuracy becomes completely dependent on human correctness during collaboration—exactly the opposite of what strategic thinking requires. We're not just getting bad advice—we're training ourselves to expect agreement when we should be demanding disagreement.

Think about the last strategic decision you made. Did AI challenge your assumptions, or did it help you build a better case for what you already wanted to do?


The Six Personas Problem

AI sycophancy affects the personas in my product leadership framework in different ways:

The Strategic Orchestra Conductor needs diverse perspectives to coordinate complex initiatives effectively. But sycophantic AI creates artificial consensus where healthy debate should exist.

The Political Navigator requires honest feedback about messaging and stakeholder dynamics. AI that tells you your communication strategy is "brilliant" when it's actually tone-deaf is worse than useless—it's dangerous.

The Innovation Architect thrives on creative tension and challenging assumptions. Agreement kills innovation faster than budget cuts.

Most concerning: We're consuming, as Caulfield puts it, "the combined knowledge and wisdom of human civilization through a straw of opinion." AI should be connecting us to the messy complexity of human expertise, not delivering smooth, sourceless validation.


The Deeper Strategic Trap

Most product leaders miss the real issue: The problem isn't that AI agrees with us. The problem is that we're losing our appetite for intellectual combat.

The most sophisticated insight I've seen comes from research showing that when users signal uncertainty ("I'm not sure about this, but..."), AI systems exhibit less sycophantic behavior. We can steer AI toward more honest answers simply by admitting we might be wrong. Yet how often do we approach AI tools with epistemic humility instead of seeking confirmation?


The "No Answers from Nowhere" Framework

Caulfield proposes a simple but powerful rule: "no answers from nowhere." AI should be a conduit to human expertise, not an arbiter of truth.

Here's how to apply this in product leadership:

1. Demand Sources, Not Opinions. Instead of asking "Is this feature strategy sound?" ask "What would Clayton Christensen think about this feature strategy, and how would Teresa Torres approach it differently?"

2. Seek Multiple Perspectives. Don't ask AI to evaluate your go-to-market plan. Ask it to show you how different frameworks (lean startup, product-led growth, enterprise sales methodology) would approach your specific situation.

3. Use AI as a Memex, Not an Oracle. Think of AI as Vannevar Bush's 1945 vision: a system that connects you to relevant knowledge, contradictions, and the messy complexity of human understanding. Not a magic 8-ball that tells you what you want to hear.

4. Signal Your Uncertainty. When collaborating with AI, explicitly acknowledge where you're unsure. "I think our pricing strategy is right, but I'm not confident about the enterprise tier" produces better results than "Validate our pricing strategy." The sketch below pulls these four moves into a single prompt.
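
This is a minimal sketch, assuming the OpenAI Python SDK (v1+) and a chat-completions call; the model name, the pricing-tier scenario, and the exact prompt wording are illustrative placeholders, not a recommended implementation.

# Sketch: a "no answers from nowhere" prompt, assuming the OpenAI Python SDK (v1+).
# The model name and the pricing scenario are placeholders; swap in your own decision.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

decision = "Introduce a usage-based enterprise tier alongside our flat per-seat pricing."

prompt = (
    f"Here is a product decision I'm leaning toward: {decision}\n\n"
    # Move 4: signal your uncertainty instead of asking for validation.
    "I think the direction is right, but I'm not confident about the enterprise tier.\n\n"
    # Moves 1 and 2: demand named sources and multiple perspectives, not a single verdict.
    "Walk through how three different lenses would evaluate this: a jobs-to-be-done view, "
    "a product-led growth view, and an enterprise sales view. Name the thinkers or "
    "frameworks each argument draws on so I can go read them directly.\n\n"
    # Move 3: use the model as a memex; ask for disagreement and pointers, not a blessing.
    "Then give the strongest case against this decision, and list the assumptions that, "
    "if wrong, would sink it. Do not tell me whether you agree."
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        {
            "role": "system",
            "content": (
                "You are a critical strategy reviewer. Never flatter. "
                "Attribute every argument to a named framework or thinker."
            ),
        },
        {"role": "user", "content": prompt},
    ],
)

print(response.choices[0].message.content)

The same structure works word for word in a chat window: name the lenses you want applied, ask for attributed arguments you can go verify, request the strongest case against, and admit where you're unsure before the model has a chance to agree with you.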


The Leadership Imperative

The most successful product leaders in the AI era won't be those who get the most agreement from their tools. They'll be those who maintain their appetite for intellectual disagreement and use AI to access diverse human wisdom rather than artificial validation.

This isn't about rejecting AI or becoming a Luddite. It's about using AI to become a better strategic thinker instead of a more efficient confirmation bias machine.

Your assignment this week: Take your next strategic decision to AI, but instead of seeking validation, ask it to present the strongest opposing viewpoint. Ask it to show you what the smartest critics of your approach would say. Then ask it to connect you to the frameworks and thinkers who would challenge your assumptions most effectively.

The goal isn't to be right. The goal is to be rigorous.

And if your AI tells you this approach is "genius," you'll know you've got more work to do.

