
Your AI Committee Thinks Its Job Is Governance. It's Not.

  • mbhirsch
  • Nov 24
  • 9 min read

Last week I wrote about why 85% of AI seats go unused despite all the training, tools, and executive enthusiasm. The core argument: organizations confuse access provisioning with capability development.


That argument assumed you haven't built the infrastructure yet. But what about the organizations that actually did it—the ones that formed committees, created governance structures, survived Legal's seventeen-page AI policy review? They're discovering that infrastructure doesn't automatically translate to capability. Most AI committees are solving exactly the wrong problems.


Here's what typically happens. You create the AI committee. Legal presents the risk framework. IT discusses security protocols. HR talks training. Finance wants ROI metrics.


Six months later: governance document, tool approval process, adoption dashboards showing 15% usage.


The committee did its job. It governed. It measured. It created structure.


And completely missed the actual work.


What Three Different Sources Revealed

I recently came across three different perspectives on AI adoption that, despite coming from completely different angles, converged on the same insight.


A CEO building AI products talks about "context engineering"—AI needs accumulated knowledge, not just one-off prompts. His analogy: regular chatbots make you order ingredients from Whole Foods every time you cook. Context engineering puts everything on the table, pre-prepared. His company hit $10M ARR in 60 days by solving this problem.


A product leader at a major tech company shared what he's building internally: a maturity framework for moving teams beyond basic chatbot usage. Organizations get stuck at basic task automation (small productivity gains) because they skip building what he calls the "configuration layer"—the workflow-specific knowledge bases, system integrations, and quality frameworks that unlock larger gains. His comparison: Salesforce works out of the box, but you don't get real value until you configure it for your specific business.


A company building AI infrastructure for mid-market firms sees the pattern with their customers. The successful ones arrive "super well organized, very tightly aligned as a leadership group. They've got an AI council. There's organization around AI." The failures buy infrastructure, then months later admit: "We're almost a little too far ahead. We need to just get organized first."


Three different sources. Same revelation.


The infrastructure everyone's missing isn't technical. It's not about better models, fancier integrations, or more powerful features. It's what I call the Capability Core—the configuration layer of context, workflows, and learning systems that makes AI actually useful for YOUR business, YOUR workflows, YOUR context.


"Salesforce works out of the box, but you don't get real value until you configure it for your specific business."

Understanding the Capability Core

The framework from that product leader I mentioned clarifies what most organizations miss. There are three maturity levels:


Level 1: Task-Based AI. Individual chatbot usage. No infrastructure investment. People use AI for one-off tasks—drafting emails, cleaning up documents, answering questions. This produces small, incremental productivity gains but nothing systematic.


Level 2: Workflow-Level AI. This is where meaningful productivity gains emerge. But it requires the Capability Core: workflow-specific knowledge bases, system integrations, quality frameworks, and governance structures. Most organizations are stuck at Level 1 because they don't know this is what they need to build.


Level 3: Orchestration & Autonomous Agents. AI coordinating across systems, operating with minimal human intervention. This requires mature data infrastructure, cross-system integration, and sophisticated governance. Very few organizations are here yet.


The insight: you can't skip Level 2. Organizations want Level 3 results with Level 1 investment. The gap is the Capability Core—the organizational infrastructure that makes AI actually useful in your specific context.


I'm adapting this framework more broadly because the principle applies beyond any single company or function. Every product organization faces the same challenge: how do you move from individuals using chatbots to teams with systematic AI capability?


The answer is always the same: build the Capability Core first.


The Work Your AI Committee Should Actually Be Doing

Most AI committees spend their time on governance:

  • Which tools should we approve?

  • What's our acceptable use policy?

  • How do we measure adoption?

  • What security protocols do we need?


These aren't wrong. They're insufficient. They're the equivalent of IT choosing which Salesforce edition to buy while completely skipping configuration.


Your AI committee has three actual jobs. None are primarily about governance.


Job #1: Context Engineering

Not this: "We need a knowledge base for AI to access."


This: "What specific 20 documents would make AI 10x more useful for our product managers? Who owns creating them? How do we keep them current?"


The test: Can someone ask your AI deployment "Why did we choose this pricing model for our enterprise tier?" and get an answer grounded in your actual decision history?


If not, you haven't built the context layer yet.


What this looks like:

Your best product manager uses AI to draft stakeholder updates. But every time, she has to re-explain:

  • The product strategy

  • Current customer segments

  • Recent feature decisions and rationale

  • Competitive positioning

  • This quarter's priorities


She's ordering ingredients from Whole Foods every single time.


Meanwhile, your AI committee is debating whether to approve Gemini in addition to ChatGPT.


That context she keeps re-explaining? It's not just in documents. It's in her brain. She's been at the company three years. She knows why you pivoted from enterprise to mid-market in Q2 2023. She remembers the customer research that killed the mobile app feature. She understands the unwritten competitive dynamics with your main rival. New product managers take six months to build that context. They sit in meetings, ask questions, read old Slack threads, absorb institutional knowledge through osmosis.


AI doesn't have six months. It doesn't absorb through osmosis. Every interaction starts from zero unless you build the context layer explicitly.


What the context layer requires:


Identify knowledge that compounds: What information, if available to AI, would eliminate repetitive re-explanation? For product teams:

  • Product strategy documents and their evolution

  • Customer research synthesis

  • Competitive intelligence reports

  • Past decision records with rationale (not just what was decided, but why)

  • Current roadmap with priorities explained


Assign ownership: Who maintains each knowledge domain? Product leaders maintain product context. Sales leaders maintain customer context. Strategy maintains competitive context.


Build update rhythms: Context goes stale. Quarterly strategy refresh. Monthly competitive intelligence update. Weekly roadmap sync.


Capture the informal context: What do your experienced people know that isn't written down anywhere? That's what separates useful AI from expensive chatbots.
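If it helps to see this concretely, here's a minimal sketch of what an explicit context layer could look like. It's a hypothetical illustration, not a tool recommendation: the domain names, owners, file paths, and refresh cadences are placeholder assumptions. The point is that ownership and update rhythms become visible and checkable instead of living in one person's head.

```python
from dataclasses import dataclass, field
from datetime import date, timedelta
from pathlib import Path

# Hypothetical sketch, not a prescribed schema: domain names, owners,
# file paths, and cadences below are placeholder assumptions.

@dataclass
class ContextDomain:
    name: str                  # e.g., "product strategy"
    owner: str                 # who maintains this knowledge
    refresh_every_days: int    # the update rhythm
    last_updated: date
    sources: list[Path] = field(default_factory=list)

    def is_stale(self, today: date) -> bool:
        return today - self.last_updated > timedelta(days=self.refresh_every_days)


def assemble_context(domains: list[ContextDomain], today: date) -> str:
    """Concatenate curated documents into one preamble that gets prepended
    to every AI request, so nobody re-explains the basics each time."""
    sections = []
    for domain in domains:
        if domain.is_stale(today):
            # Stale context quietly poisons answers; flag it for the owner.
            print(f"WARNING: '{domain.name}' is stale; ping {domain.owner}")
        for src in domain.sources:
            sections.append(f"## {domain.name}\n{src.read_text()}")
    return "\n\n".join(sections)


# Example wiring (paths and dates are hypothetical):
domains = [
    ContextDomain("product strategy", "VP Product", 90, date(2025, 10, 1),
                  [Path("context/strategy.md")]),
    ContextDomain("competitive intel", "Strategy lead", 30, date(2025, 11, 15),
                  [Path("context/competitors.md")]),
]
# preamble = assemble_context(domains, date.today())
# prompt = preamble + "\n\nDraft this week's stakeholder update for ..."
```

Whether this lives in code, a spreadsheet, or a wiki matters far less than the fact that anyone can see what context exists, who owns it, and when it was last refreshed.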


Job #2: Workflow Discovery & Validation

This is where most committees confuse activity with progress. They run lunch-and-learns where people share tips. Everyone nods. Nobody changes behavior. Usage stays at 15%.


Lunch-and-learns are fun. They're not effective.


Not this: "Let's have everyone share their AI wins and hope adoption spreads."


This: "Who's already operating at Level 2? What workflows did they build? Do those workflows actually work when other people try them—and are they worth scaling?"


Your committee's job is finding the 10-15% who've accidentally figured out systematic workflows, then validating whether what they built is transferable before you scale it.


What this looks like in practice:


Audit current usage—not what you hoped for, what's actually happening. You'll likely find:

  • 85% using AI like a better Google search (Level 1)

  • 10% who've figured out repeatable workflows (Level 2)

  • 5% experimenting with orchestration (Level 3)


Most committees see this data and think: "We need better training to move the 85% forward."


Wrong move.


Find the 10%. Interview them. Document what they've systematically built. They're not magical. They accidentally built the configuration your organization needed—something that works in your environment with your constraints.


Then validate before scaling.


AI amplifies what you give it. A flawed workflow executed 10x faster just produces flawed results faster. Your committee's job is ensuring the workflows you're extracting are actually worth scaling.


Test the extracted workflow with 2-3 people who weren't in the core group. Does it work for them? What modifications were needed? What failure modes appeared? What worked universally vs. what was person-specific?


"Find the 10%. They're not magical. They accidentally built the configuration your organization needed."

Only after validation do you scale. Extract proven patterns. Validate transferability. Scale deliberately.


This is fundamentally different from lunch-and-learns. You're not asking people to share tips. You're conducting structured extraction of working systems, then validating before deployment.


Job #3: Learning System Architecture

Job #2 finds and validates individual workflows. Job #3 builds the machinery that makes workflow discovery and validation repeatable.


Without this, you're dependent on accidental discovery. Someone figures out a useful workflow. Maybe you hear about it. Maybe you don't. Maybe it spreads. Probably it doesn't.


That's not a system. That's hope.


Not this: "Everyone should share their AI wins in Slack."


This: "What's our systematic process for continuously identifying working workflows, extracting what makes them work, validating transferability, packaging for deployment, and feeding learning back into refinement?"


The companies that succeed build the organizational machinery for systematic capability development before they deploy tools. The ones that fail create committees that govern access while capability development remains completely ad hoc.


What the learning system requires:


1. Discovery mechanism: How do you continuously identify who's operating at the next maturity level? Not through self-reporting or attendance records. Through actual usage patterns and measurable results.


2. Extraction protocol: Structured interviews with consistent questions:

  • What workflow did you build?

  • What context does it require?

  • What failure modes did you discover?

  • What constraints make it work?

  • What would break if someone else tried it?


3. Validation process: Standard approach for testing with new users before broader deployment. This is where you catch the flawed workflows before they get amplified.


4. Packaging standards: How do you document validated workflows so others can implement them? Required context, known failure modes, success criteria, when to use vs. when not to. (A minimal sketch of such a package follows this list.)


5. Feedback loops: As more people use validated workflows, what do they discover? How does that learning feed back into refinement? Your Capability Core should compound, not ossify.
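To make the packaging standard concrete, here's a minimal sketch of a validated workflow captured as a structured record. Everything in it is hypothetical: the field names map to the extraction questions, the validation step, and the feedback loop above, and the exact format matters far less than the discipline of filling it in before you scale.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a "packaged" workflow. Field names and example
# values are assumptions; the structure mirrors the extraction questions,
# validation step, and feedback loop described above.

@dataclass
class WorkflowPackage:
    name: str
    owner: str
    required_context: list[str]      # which context-layer domains it depends on
    steps: list[str]                 # the repeatable procedure, in order
    known_failure_modes: list[str]   # discovered during extraction and validation
    success_criteria: list[str]      # how you know it worked
    validated_by: list[str] = field(default_factory=list)  # testers outside the core group
    refinements: list[str] = field(default_factory=list)   # what later users learned


stakeholder_update = WorkflowPackage(
    name="Weekly stakeholder update draft",
    owner="PM guild",
    required_context=["product strategy", "current roadmap", "recent decision records"],
    steps=[
        "Pull this week's roadmap changes",
        "Draft with the shared context preamble",
        "Check the draft against the success criteria before sending",
    ],
    known_failure_modes=["Invented dates when roadmap context is stale"],
    success_criteria=["No re-explaining of strategy", "Draft needs under 10 minutes of edits"],
    validated_by=["PM outside the pilot group", "New hire in week three"],
)
```

A growing registry of records like this is what makes Job #3 repeatable: it's the output of discovery and extraction, the input to validation, and the thing your feedback loops refine.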


This is fundamentally product operations work. If you have a product operations function, this is their domain. They already own systematic process improvement, knowledge management, cross-functional coordination. The Capability Core is product ops applied to AI capability development. If you don't have product ops, your AI committee needs to build these capabilities from scratch.


The difference between Jobs #2 and #3: Job #2 is "find and validate this specific workflow." Job #3 is "build the system that finds and validates workflows continuously without depending on accidental discovery."


The Three-Month Configuration Roadmap

If your AI committee formed last quarter and you're wondering "what should we actually be doing":


Month 1: Context Engineering

  • Identify the 20 documents/data sources that would 10x AI usefulness

  • Assign ownership for creating/maintaining each

  • Build initial versions (they'll evolve)

  • Capture the informal institutional knowledge that lives in people's brains


Month 2: Workflow Discovery & Validation

  • Audit where you actually are vs. where you think you are

  • Identify the 10-15% operating at Level 2

  • Extract their workflows through structured interviews

  • Validate with new users before scaling


Month 3: Learning System Architecture

  • Design the systematic process for continuous workflow discovery

  • Build feedback mechanisms for refinement

  • Establish product ops ownership (or equivalent)

  • Scale first validated workflows to next cohort (20-30 people)


Notice that none of this is "buy more tools" or "write stricter policies."


It's configuration work. Building the Capability Core—the organization-specific layer that makes AI actually useful in YOUR context.


Why This Matters

A junior product manager at an ed tech company sent me this email after attending her company's AI lunch-and-learn:


"It was great hearing from others how they're using AI. It was similar as to what I was doing with email creation and a search engine. It was very basic though... I didn't learn anything but it gives me an idea of where the company is."


That's what happens without a Capability Core. People share basic tactics. Everyone stays at Level 1. Nobody builds the systematic infrastructure that would actually unlock value.


She then said something more interesting: "Things I want to explore more: data and market analysis are two big ones."


That's a product manager recognizing that AI could help with strategic work—if she had the Capability Core built. Customer data synthesized. Market intelligence structured. Decision history accessible.


But her company's lunch-and-learn focused on "email creation and search engine" usage. Because they don't have the Capability Core that would make strategic AI use possible.


Companies buy 500 AI seats, measure adoption rates, wonder why usage stalls at 15%, then conclude "people need more training."


Training isn't the gap. Configuration is.


You wouldn't buy 500 Salesforce licenses, skip all configuration, then blame salespeople for not using it effectively.


AI is the same. Access without configuration is just expensive chaos.


The companies getting this right aren't smarter or better resourced. They just recognized that infrastructure for AI isn't primarily technical. It's organizational.


Last week I wrote about why 85% of seats go unused. The diagnosis: organizations confuse access provisioning with capability development. This week is the prescription: build the Capability Core. Context engineering. Workflow discovery & validation. Learning system architecture.


Your AI committee's job isn't governance. It's configuration.


The question is whether they know it yet.



Need help building your Capability Core? I work with product teams and executives to design the systematic infrastructure that makes AI adoption actually work. Not tool training. Not policy writing. The organizational capability development that turns 15% usage into sustained transformation.


Schedule a conversation to discuss what configuration looks like for your team.



Break a Pencil,

Michael

 
 
 