The Acceleration Addiction
- mbhirsch
- Nov 10
- 8 min read
Why Organizations Can't Stop Even When They Should
Hey there,
In late 2024, something started to shift in Silicon Valley. The "996" work schedule—9am to 9pm, six days a week—stopped being something Chinese tech companies did and started appearing in American startup job descriptions. By mid-2025, it was no longer whispered about. It was marketed. Higher pay. More equity. Saturday corporate credit card usage in San Francisco spiked measurably.
Most established tech leaders expressed concern. Then went back to Slack messages at 11pm, weekend "quick syncs," and wondering why their teams seem exhausted despite AI promising to make everything faster.
We're all running the same experiment. The startups are just more honest about it.
Just last week, an AI startup founder defended his team's Saturday work culture on LinkedIn: 80% voluntary attendance, driven by "great teammates, exciting product, and commercial validation." Everything he said was probably true. Everything he said also misses the point entirely.
When most of your team "chooses" to work six days a week, and leadership celebrates it publicly every Saturday with pizza-and-productivity photos, you haven't avoided 996 culture—you've just made it feel virtuous. The explicit 996 companies tell you acceleration is mandatory. The "voluntary" ones make you believe you chose it yourself.
That's not better. That's the addiction talking.

Every New Technology Promised Less Work. Every Time We Worked More.
Here's the pattern we keep missing:
PC Era (1980s-1990s): Spreadsheets and word processors were supposed to reduce paperwork. Instead? We got more analysis, more reports, more "knowledge work" filling every hour saved. Knowledge workers didn't work less—they just produced more artifacts.
Internet Era (late 1990s-2000s): Instant information access would make research efficient. Instead? Information overload, continuous partial attention, and the expectation that you'd have an answer to any question in seconds, not days.
Mobile Era (2007-2015): Work anywhere meant work flexibly. Instead? Work became work everywhere. That morning email check. That evening Slack thread. That Sunday afternoon "just reviewing the deck."
Cloud Era (2010-2020): Removing infrastructure constraints made experimentation cheap. Instead? "Move fast and break things" became "ship constantly and never stop." Perpetual beta. Permanent deployment mode.
The promise: technology will free you from work.
The reality: technology will make you work more efficiently, which means you'll just do more work.
New research from Carnegie Mellon and Stanford comparing human and AI agent workflows across multiple occupations just confirmed we're about to make the same mistake again—but worse.
AI Compression Meets the Constraint You Can't Eliminate
Previous technologies compressed specific bottlenecks:
PCs compressed calculation time
Internet compressed information access
Mobile compressed location requirements
Cloud compressed infrastructure setup
But here's what made those cycles sustainable (if brutal): the constraint that got compressed was always external to human cognition. Computers calculated faster than you could. Google searched faster than you could. AWS deployed faster than you could.
AI is different.
AI compresses tactical execution itself—the actual work of writing code, analyzing data, generating content. And when you compress execution, you expose what's left: strategic judgment, contextual decision-making, knowing when to pivot, understanding what customers actually need versus what they say they need.
The CMU/Stanford research makes this uncomfortably clear: "agents deliver results 88.3% faster and cost 90.4–96.2% less than humans." That's not incremental improvement; that's the obliteration of the execution bottleneck.
Organizations see those numbers and think: "Finally, we can really accelerate."
But read a bit more closely, and what the research actually shows is that when agents try to handle entire workflows autonomously, quality collapses. The researchers found that "agents produce work of inferior quality, yet often mask their deficiencies via data fabrication and misuse of advanced tools." They take "an overwhelmingly programmatic approach" to tasks that aren't programmable—treating design work like debugging, treating strategic analysis like data processing.
"You cannot compress human judgment the way you compressed execution."
The agents are revealing something we've known but refused to face: you cannot compress human judgment the way you compressed execution.
You can't run strategic thinking at 996. You can't "move fast" your way to better product decisions. You can't sprint to an understanding of customer needs.
For the first time, we've hit a constraint that cannot be technologically optimized away: the biological limits of human cognitive work.
The Pattern Extends Beyond Business
Ethan Mollick, AI researcher, Wharton professor, and author of the best-selling book "Co-Intelligence," recently observed that if predictions hold—AI generating minor scientific discoveries next year, major ones soon after—"we have no real mechanism in academia for accommodating, reviewing, processing, and disseminating a sudden increase in science."
His question cuts to the core: "Who is going to read thousands of new papers? Who will integrate the knowledge? Who is going to build on them to transfer them into practical products?"
The answer isn't "AI will do it all." As Mollick notes, AI doing novel science seems plausible, but "the task of integrating and theorizing across a wide range of knowledge, let alone doing the many steps of technology transfer to make these ideas into innovations, is further outside the frontier."
His conclusion: "Just like every other process with AI (coding, etc) just having more of everything isn't enough unless we rethink processes and approaches."
This is the same trap playing out across domains. Science. Product development. Strategic planning. AI compresses execution brilliantly—but judgment, integration, and sense-making remain uncompressibly human. And when you flood the system with more output than humans can coherently process, you simply do not get more progress.
The academic system wasn't built for 10,000 papers per week. Your product organization wasn't built for AI-generated features shipping daily. Neither can survive acceleration without the infrastructure to make strategic sense of what's being produced.
The Age Dimension Nobody Wants to Discuss
Here's where it gets darker.
996 culture might be physically survivable for 25-year-olds (though it's still terrible). It's medically dangerous for 45-year-olds.
But guess who has 20 years of pattern recognition? Who's seen three market cycles? Who knows which "revolutionary" idea is actually just 2008 repackaged?
"You're not just burning people out. You're aging out your knowledge workers faster than you can develop new ones, at exactly the moment when judgment becomes more valuable than execution speed."
The acceleration addiction has an age bias it won't admit: it systematically eliminates the demographic that holds the institutional knowledge and strategic judgment the organization desperately needs.
This isn't hypothetical. The research shows agents struggle most with "less programmable" work requiring visual perception, contextual understanding, and pragmatic reasoning—precisely the capabilities that develop over years of experience. Meanwhile, they excel at "readily programmable" tasks like data cleaning and code generation—the work junior team members do.
So here's the trap:
AI compresses junior work (execution)
Organizations demand senior-level output on junior-level timelines
Senior people burn out or get pushed out because they can't maintain 996 pace
Remaining team members lack the judgment capacity to use AI for the very work that matters most—strategic and cognitive tasks
Quality collapses, but faster than ever
You're not just burning people out. You're aging out your knowledge workers faster than you can develop new ones, at exactly the moment when judgment becomes more valuable than execution speed.
What the Research Actually Says About Co-Intelligence
The CMU/Stanford research offers a way forward, but not the one most organizations want to hear.
What works isn't "use AI for everything" or "avoid AI entirely." It's co-intelligence at the workflow-step level: humans handle judgment-heavy steps, agents handle readily-programmable execution.
In their experiments, when workflows were split step by step—file navigation handled by humans, data processing delegated to agents, verification back to humans—tasks completed 68.7% faster than humans working alone, with correct outputs.
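To make that concrete, here's a minimal sketch of what step-level delegation could look like in practice. This is an illustration only—the Step/run_workflow structure and the particular split of steps are my hypothetical example, not the researchers' actual setup:

```python
from dataclasses import dataclass
from typing import Callable, Literal

# Hypothetical sketch: each workflow step is explicitly routed to a human
# or an agent, and humans keep the judgment-heavy and verification steps.
@dataclass
class Step:
    name: str
    owner: Literal["human", "agent"]   # who handles this step
    run: Callable[[dict], dict]        # takes shared context, returns updates

def run_workflow(steps: list[Step], context: dict) -> dict:
    for step in steps:
        print(f"[{step.owner}] {step.name}")
        context |= step.run(context)   # each step enriches the shared context
    return context

# Judgment-heavy steps stay human; readily programmable ones go to agents.
workflow = [
    Step("locate the relevant files", "human",
         lambda ctx: {"files": ["q3_feedback.csv"]}),
    Step("clean and aggregate the data", "agent",
         lambda ctx: {"summary": f"aggregated {len(ctx['files'])} file(s)"}),
    Step("verify output and decide what to do next", "human",
         lambda ctx: {"approved": bool(ctx["summary"])}),
]

result = run_workflow(workflow, {})
```

The point of the sketch is the routing: delegation happens at the level of individual steps, not the whole workflow, and a human stays in the loop where judgment and verification matter.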
But what makes this hard to implement is that building systematic, step-level collaboration requires infrastructure that acceleration culture has prevented organizations from building.
You can't just tell people "work better with AI." You need:
Knowledge infrastructure - Systems for capturing strategic context, not just raw data. Customer insight repositories. Market intelligence. Documented reasoning about past decisions. The stuff that helps humans make judgment calls and helps AI understand when not to automate.
Integration layer - Connections between strategic decisions and tactical execution. How does strategy translate to roadmap? How do customer insights inform features? How do market changes affect release timing?
Quality frameworks - Ways to evaluate whether strategic work is actually working. What does "good" strategic thinking look like? How do you distinguish strategic judgment from tactical optimization?
Governance structures - Clarity on who decides what, when, and how. Decision rights. Escalation paths. Resource allocation for capability development.
Here's what this looks like in practice: knowledge infrastructure isn't a wiki that nobody updates. It's a living system where customer interview insights automatically connect to feature discussions, where market intelligence feeds into roadmap prioritization, where past decision rationale is accessible when similar choices arise. It's the difference between telling AI to "analyze customer feedback" and giving it the strategic context to understand which feedback signals matter and why.
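As a rough, hypothetical sketch of that difference (record names and fields are invented for illustration): decisions carry their rationale and link back to the insights that justified them, so the context is queryable later instead of buried in a stale wiki.

```python
from dataclasses import dataclass, field

# Hypothetical data model: insights and decisions are linked records,
# so past rationale is retrievable when a similar choice comes up.
@dataclass
class Insight:
    id: str
    source: str   # e.g. "enterprise onboarding interviews"
    claim: str    # what we actually learned

@dataclass
class Decision:
    id: str
    question: str
    rationale: str
    supporting_insights: list[str] = field(default_factory=list)  # Insight ids

insights = {
    "ins-42": Insight("ins-42", "enterprise onboarding interviews",
                      "Admins abandon setup when SSO config takes more than a day."),
}
decisions = [
    Decision("dec-7", "Prioritize the SSO setup wizard over API v2?",
             "Setup friction is blocking expansion revenue; API v2 can wait a quarter.",
             supporting_insights=["ins-42"]),
]

# When a similar choice arises, surface the linked rationale, not just raw data.
for d in decisions:
    evidence = [insights[i].claim for i in d.supporting_insights]
    print(d.question, "->", d.rationale, "| evidence:", evidence)
```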
Most organizations don't have this. They've spent decades optimizing for speed, not intelligence.
The Addiction Metaphor Isn't Hyperbole
Organizations can't stop accelerating for the same reasons addicts can't just quit:
Tolerance - You need more speed to feel productive. Last year's velocity is this year's baseline.
Withdrawal - Slowing down feels like failure. "Doing less" sounds like "being lazy."
Compulsive behavior despite known harm - Everyone sees the burnout. Everyone knows strategic quality is suffering. Nobody can stop.
Competitive anxiety - "If we slow down, competitors win." (Except your competitors are also destroying their strategic capacity.)
"When you flood the system with more output than humans can coherently process, you simply do not get more progress."
The research shows this playing out in AI adoption: one quarter of human workers already use AI tools. When they use AI for augmentation (delegating specific steps), "efficiency [improves] by 24.3%." When they use AI for automation (full-process handoff), workflows reshape entirely, and the research found that AI "slows human work by 17.7%, largely due to additional time spent on verification and debugging."
Organizations see "24% faster" and push for full automation. Then wonder why everything's breaking.
You can't just tell an addict to quit. You need to build different systems that make the addictive behavior unnecessary.
What Happens Next
Two paths:
Path 1: Continue the Addiction
Chase the 90% cost reduction. Push for full automation. Lose the judgment capacity that makes automation valuable. Watch quality collapse while pretending velocity metrics matter.
This is what 996 culture represents: the acceleration addiction at terminal velocity. The startup founder posting Saturday pizza photos. The "optional" weekend work that 80% of the team chooses because choosing otherwise marks you as not committed enough.
Path 2: Build for Intelligence, Not Just Speed
Invest in the infrastructure that enables co-intelligence. Build knowledge systems. Create integration layers. Develop quality frameworks. Establish governance that protects strategic capacity.
Use AI agents to eliminate readily-programmable drudgery. Use humans for judgment that cannot be compressed. Build the organizational infrastructure that makes intelligent delegation possible.
The research is clear: this second path is faster and higher quality than either full automation or avoiding AI entirely.
"The ones that choose Path 2 will win not by moving fastest, but by making better decisions, faster."
But it requires admitting that "move faster" isn't always the answer. That some work cannot be compressed. That building infrastructure for intelligence feels slower in the short term but compounds in the long term.
Most organizations will choose Path 1. The addiction is too strong. The Saturday photos too compelling. The 90% cost savings too seductive.
The ones that choose Path 2 will win not by moving fastest, but by making better decisions, faster.
The question isn't whether AI will transform how we work. It's whether we'll use it to accelerate the addiction or finally build something different.
Break a Pencil,
Michael
P.S. Want to see what Path 2 actually looks like in practice? Next Tuesday (Nov 18, 9:30am PT), Kate Mosley and I are doing a Lightning Lesson on "How to Be Strategic When No One Will Teach You." Kate will walk through a real product decision where AI capabilities created a strategic choice—and how she developed the thinking framework to make it without formal mentorship. 30 minutes. Real examples. Frameworks you can use immediately. Register here: https://maven.com/p/da3193/how-to-be-strategic-when-no-one-will-teach-you
P.P.S. If this perspective landed, share it with a product leader who's wondering why their team seems exhausted despite all the AI productivity gains.
References:
"How Do AI Agents Do Human Work? Comparing AI and Human Workflows Across Diverse Occupations" by Zora Zhiruo Wang, Yijia Shao, Omar Shaikh, Daniel Fried, Graham Neubig, and Diyi Yang (Carnegie Mellon University and Stanford University, October 2025). Available at arxiv.org/abs/2510.22780