Your AI Policy is Killing Innovation
- mbhirsch
- Sep 29
- 6 min read
Why PMs Should Write It Instead of Lawyers
Hey there,
I attended Section AI's virtual conference this week, where a presenter shared something truly insightful: "Your AI policy doesn't just solve a compliance problem—it should solve adoption as well."
Her evidence? The #1 reason employees limit AI use isn't capability gaps or tool confusion. It's fear of data security violations.
Think about that for a moment. Your most creative people are self-censoring not because they can't figure out how to use AI effectively, but because they're terrified of accidentally crossing an invisible line that could torpedo their careers.
This is a Product Management problem disguised as a Legal problem.
The Expertise Trap
Most AI policies fail because they're written by the wrong people. Legal teams write for liability protection—understandably creating Byzantine frameworks that optimize for "never getting sued" rather than "enabling intelligent risk-taking." IT teams write for security compliance, building digital fortresses that treat every AI interaction like a potential breach waiting to happen.
What neither group understands, though, is that productive constraint is not the same as protective constraint.
Product Managers live in the space between user value and business risk. We're professionally trained to find that sweet spot where constraints enable better decision-making rather than just safer decision-making. Every feature release, every roadmap prioritization, every scope negotiation is practice in systematizing intelligent risk-taking.
Yet somehow, when it comes to AI adoption, we've abdicated this responsibility to teams who've never had to balance innovation velocity against risk management in real time.
The Power User Insight
The most effective AI policies I've encountered weren't written by compliance teams. They were written by the creative AI power users—the people who had already navigated the gray areas and understood which guardrails actually matter versus which ones just create security theater.
These are the employees who've already felt the friction points. They know where you need flexibility to maintain creative flow, and where bright lines prevent genuine problems. They understand the difference between "this could theoretically be risky" and "this will definitely cause problems."
When your most innovative AI users write the policy—with legal review, not legal leadership—you get frameworks that enable experimentation rather than suffocate it.

Psychological Safety as Infrastructure
Here's where this connects to systematic AI adoption: policies don't just establish rules; they create psychological safety. But the wrong kind of policy creates psychological safety through prohibition ("you can't get in trouble if you don't use AI"), while the right kind creates it through clarity ("you know exactly how to use AI without getting in trouble").
As Ethan Mollick observed in his book Co-Intelligence, organizations that get the biggest AI benefits share key characteristics: they decrease fear around AI use, incentivize people to come forward with successful experiments, and expand the number of people using AI overall.
Policy is the infrastructure that makes all three possible.
When employees understand not just what they can't do, but why the constraints exist and how to navigate edge cases, they stop burning cognitive cycles on risk assessment and start focusing on value creation.
The PM Opportunity
Product Managers are uniquely positioned to lead AI governance because we already excel at the core competency required: creating frameworks that balance user needs against business constraints.
Consider how you approach feature flagging: you don't ban experimentation; you create systematic ways to experiment safely. You don't eliminate risk; you make it visible and manageable. You don't slow down innovation; you create sustainable velocity.
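To make the parallel concrete, here is a minimal sketch of the feature-flag pattern, assuming a hypothetical flag registry and flag name rather than any particular vendor's API: the experiment is opt-in, scoped to a small cohort, and instantly reversible.

```python
import hashlib

# Hypothetical flag registry: each experiment is opt-in, scoped, and reversible.
FLAGS = {
    "ai_feedback_summarizer": {
        "enabled": True,        # global kill switch: flip to False to stop the experiment
        "rollout_percent": 10,  # expose only a small slice of users while learning
        "allowed_roles": {"pm", "research"},  # constrain who can experiment
    },
}

def is_enabled(flag_name: str, user_id: str, role: str) -> bool:
    """Return True if this user should see the experimental feature."""
    flag = FLAGS.get(flag_name)
    if not flag or not flag["enabled"]:
        return False
    if role not in flag["allowed_roles"]:
        return False
    # Deterministic bucketing: the same user always lands in the same cohort.
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    return bucket < flag["rollout_percent"]

if is_enabled("ai_feedback_summarizer", user_id="u-123", role="pm"):
    print("Run the AI-assisted analysis")
else:
    print("Fall back to the standard workflow")
```

The point isn't the specific code; it's the properties. Risk is visible (a registry), bounded (roles and a rollout percentage), and reversible (a kill switch). A good AI policy can encode the same properties in plain language.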
AI adoption should follow the same framework.
The best AI policies I've seen read like good product requirements: specific enough to prevent confusion, flexible enough to handle edge cases, and written in language that helps people make good decisions rather than just avoid bad ones.
They explain the "why" behind constraints so employees can extend the reasoning to novel situations. They distinguish between hard boundaries (never upload customer PII to public AI tools) and soft guidelines (when in doubt about competitive sensitivity, check with your manager). They provide escalation paths for ambiguous cases instead of defaulting to prohibition.
Most importantly, they're written by people who understand that the goal isn't perfect compliance—it's productive experimentation within intelligent constraints.
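To show what that can look like in practice, here is a hedged sketch of a policy expressed as structured data rather than legal prose. The specific rules, channel name, and escalation steps are hypothetical illustrations, not excerpts from any real policy.

```python
# A hypothetical AI-use policy written the way a PM writes requirements:
# explicit hard boundaries, judgment-based soft guidelines, and an escalation path.
AI_USE_POLICY = {
    "hard_boundaries": [
        # Bright lines: violating these causes real harm, so they are never negotiable.
        "Never upload customer PII to public AI tools.",
        "Never paste source code or unreleased financials covered by NDA into external services.",
    ],
    "soft_guidelines": [
        # Judgment calls: the 'why' is stated so people can extend the reasoning.
        "When in doubt about competitive sensitivity, check with your manager.",
        "Prefer company-approved AI tools for anything touching customer data.",
    ],
    "escalation_path": [
        "1. Ask your manager.",
        "2. Post in the #ai-guidance channel.",
        "3. If still ambiguous, open a ticket with Legal; the default is 'pause', not 'prohibit'.",
    ],
}

def print_policy(policy: dict) -> None:
    """Render the policy so the structure, not legal prose, carries the meaning."""
    for section, items in policy.items():
        print(section.replace("_", " ").title())
        for item in items:
            print(f"  - {item}")

print_policy(AI_USE_POLICY)
```

The structure itself does the teaching: hard boundaries are short and absolute, soft guidelines carry their reasoning, and the escalation path defaults to "ask" rather than "don't."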
The Policy Spectrum: A Tale of Three Approaches
I recently analyzed AI policies from three major tech companies, and the contrast perfectly illustrates the divide between protective constraint and productive constraint.
Lenovo: The Legal Fortress
Five pages of dense, defensive language focused entirely on "we will not" rather than "here's how you can." The policy boldly states that "employees may not use publicly-available versions of any generative AI tool" and requires navigating multiple approval processes for basic usage. This is psychological safety through prohibition—legal cover that kills innovation.
Imagine being a Lenovo PM trying to use AI to analyze user feedback. You'd need to parse five pages of legal language, clear three approval processes, and schedule meetings with Legal, IT, and Security just to get started. Meanwhile, your competitor's PM already shipped the insight.
HERE Technologies: The Marketing Manifesto
All aspirational language about "responsible stewardship" and "trustworthy AI systems" without a single actionable guideline. Beautiful for the website, useless for someone trying to figure out if they can use Claude to draft requirements. This is policy as performance art.
Logitech: The Sweet Spot
Clear principles that actually help employees make decisions. Notice the language: "Our aim is to provide users the information they need to understand and navigate our AI-enabled offerings confidently." That's enablement thinking, not protection thinking. This policy reads like it was written by people who understand that employees need frameworks for intelligent decision-making, not just rules to follow.
Only one of these sounds like it was crafted by someone who's actually used AI for real work. The others read like they were written by people optimizing for different outcomes entirely—legal protection and brand positioning rather than competitive advantage.
The Competitive Advantage
While your competitors are still having their legal teams write AI policies that optimize for "never getting in trouble," you have an opportunity to build policies that optimize for "systematically creating value while intelligently managing risk."
The companies that figure this out first will develop two critical advantages: they'll have employees who aren't afraid to experiment with AI, and they'll have frameworks for turning individual experimentation into institutional learning.
The real competitive moat isn't having better AI tools—it's having people who aren't terrified to use them creatively.
This is why I'm such a strong advocate for Product Management to lead AI governance initiatives. We're the group that should be most creative in using this technology, and we're professionally incentivized to find that productive middle ground between constraint and chaos.
The question isn't whether your organization will develop AI governance frameworks—it's whether those frameworks will enable the innovation you need to stay competitive, or just provide legal cover while your most creative people learn to avoid the tools that could transform your business.
Your choice: policies that create security theater, or policies that create systematic competitive advantage.
The companies that figure this out first won't just have better AI adoption—they'll have built the organizational muscle for turning any emerging technology into strategic advantage. While their competitors are still writing policies to avoid getting in trouble, they'll be systematically creating value through intelligent experimentation.
Ready to build the AI governance framework your organization actually needs? My private cohorts are designed around the strategic frameworks from systematic AI adoption, combining practical policy development with the psychological insights that make transformation sustainable. [Learn more here.]
Or join my public "Build an AI-Confident Product Team" course starting October 14 on Maven—where product leaders learn to navigate AI transformation with both strategic frameworks and hands-on policy development. [Learn more here.]
Break a Pencil,
Michael
P.S. If you're a PM who's been watching your legal team write AI policies while thinking "this doesn't feel right," trust that instinct. You're not imagining the disconnect—you're seeing the difference between protective constraint and productive constraint. The organizations that figure out how to leverage PM thinking for AI governance will leave everyone else behind.
