a16z
December 1, 2025

The $700 Billion AI Productivity Problem No One's Talking About

This conversation features Laridan founder Russ Fradin, who draws on his deep experience building measurement tools in ad-tech to tackle the biggest question in enterprise AI: we’re spending billions, but is any of it actually working?

The Measurement Black Hole

  • “Every board meeting I go in, for my other four metrics I have some report of how are we doing… and on AI, all I have is the amount of stuff we bought.”
  • “When a measure becomes a target, it is no longer accurate as a measure.”
  • Enterprises are pouring a projected $700 billion into AI, yet leaders confess they have no system to measure its impact beyond tracking software spend. A staggering 70% believe their AI projects are wasting money, a feeling amplified by the total lack of ROI visibility.
  • The traditional method of measuring productivity—surveying employees—is broken. It’s prone to bias and, critically, isn’t tied to behavioral data showing if employees actually use the tools they’re being asked about.
  • True productivity measurement requires a new stack: passively tracking tool usage, correlating it with departmental output (like inter-departmental responsiveness), and establishing a clear baseline to measure against. Without this, companies are flying blind.
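
A minimal sketch of what that usage-to-output loop might look like, using only the Python standard library and made-up weekly numbers; the adoption shares, output proxy, and baseline window are illustrative assumptions, not a description of any vendor's pipeline:

```python
"""Toy usage-vs-output measurement loop (requires Python 3.10+ for
statistics.correlation). All numbers below are placeholders."""
from statistics import correlation, mean

# Hypothetical weekly data for one department: share of employees who
# actively used an AI tool, and an output proxy (tasks closed per head).
weekly_usage = [0.05, 0.08, 0.15, 0.25, 0.40, 0.55, 0.60, 0.65]
weekly_output = [11.8, 12.1, 12.0, 12.9, 13.5, 14.2, 14.0, 14.6]

# Baseline: the pre-rollout weeks (here, the first three) before adoption ramped.
baseline_output = mean(weekly_output[:3])
lift = (weekly_output[-1] - baseline_output) / baseline_output

print(f"Baseline output: {baseline_output:.1f} tasks/head")
print(f"Latest lift vs baseline: {lift:+.1%}")

# Association between adoption and output. Correlation is not causation,
# but it is already more informative than tracking spend alone.
print(f"Usage-output correlation: {correlation(weekly_usage, weekly_output):.2f}")
```

In practice the usage series would come from passive telemetry and the output series from whatever metric the department already reports.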

The Adoption Paradox

  • “You have to make employees feel safe so they won't look dumb. And you have to make them understand that they can use this safely without getting fired.”
  • “A 28-year-old guy was using ChatGPT really well... they had him create a 30-slide deck and did a global call... that's an absurd way to hope people adopt world-changing technology.”
  • Despite the hype, employee adoption of AI tools is lower than expected. The primary blockers are psychological: fear of looking incompetent with new technology and fear of being fired for misuse, like uploading sensitive data.
  • Top-down, one-off training initiatives are failing. The key is to create a "safe space" where employees can experiment with AI tools that have built-in guardrails, preventing them from making career-ending mistakes.
  • The biggest unlock lies in identifying grassroots "power users"—employees who have discovered how to turn an 8-hour task into a 1-minute workflow—and systematically diffusing their knowledge across the entire organization.

The Future of Work is Augmentation

  • “Cursor has taken mediocre engineers and made them good, but it's taken amazing engineers and made them gods.”
  • “Your competitor across the street is not going to fire all those employees. He's just going to do more with those employees and he's going to kill your business.”
  • The widespread fear of AI-driven job loss ignores the realities of capitalism. Companies will reinvest productivity gains to innovate and out-compete rivals, not simply to shrink their workforce and bank the profits.
  • AI isn't a job replacement machine; it's an augmentation engine. It closes the gap for average performers while giving top performers an almost unfair advantage, creating a new class of hyper-productive workers.

Key Takeaways:

  • Measure Usage, Not Just Spend. The biggest failure in enterprise AI is tracking software purchases as a proxy for progress. The focus must shift to measuring actual tool usage correlated with output.
  • Solve for Fear, Not Features. Employee adoption hinges on psychological safety. The most powerful tools will fail if users are afraid of looking incompetent or getting fired for making a mistake.
  • Competition Drives Augmentation, Not Unemployment. The "AI will take our jobs" narrative is a red herring. Companies will reinvest AI-driven productivity gains to crush competitors, not just to cut headcount.

Link: https://www.youtube.com/watch?v=VMv00WR8EaA

This episode reveals the $700 billion productivity paradox in enterprise AI—why massive spending isn't translating into measurable gains and how solving this measurement problem is the key to unlocking the next wave of growth.

The AdTech Parallel: Measuring AI's True Impact

  • Russ Fradin, drawing from his experience building early AdTech infrastructure at companies like Flycast and Comscore, frames the current AI boom as a direct parallel to the internet's rise in the 1990s.
  • Just as the early internet required a new stack of tools from companies like Nielsen and DoubleClick to measure advertising effectiveness and justify budget shifts from TV, the AI industry now needs a similar infrastructure for measurement and governance.
  • The core challenge then, as now, is attribution. In AdTech, it was about proving which ad led to a sale. In AI, it's about proving that expensive new software tools actually yield a tangible productivity benefit.
  • Russ argues that this infrastructure isn't about restriction but acceleration. Large enterprises need reliable measurement to confidently scale their AI investments beyond initial experiments. As he puts it, the goal of building these tools is not to act as a gatekeeper but to empower more of this spending.

From Labor Budgets to Software Spend: The New Corporate Balance Sheet

  • The conversation highlights a fundamental economic shift where companies are beginning to substitute labor expenses with software budgets, creating an urgent need to measure the return on this new, massive spend.
  • Historically, labor budgets have dwarfed software spending. For example, JPMorgan Chase spends a couple of hundred billion on people versus roughly $18 billion on IT.
  • As AI tools begin augmenting or replacing human tasks, that software budget is set to explode. The bull case for AI leaders like NVIDIA and OpenAI is that global IT spend could grow from $1 trillion to $10 trillion.
  • This shift forces a new question for CFOs: Is this new, enormous software expenditure actually productive? Companies need to know if they are getting their money's worth before they can justify moving from an $18 billion IT budget to a $40 billion one.

Laridan's Three-Pronged Approach to AI Measurement

1. Discovery: What AI is Actually Being Used?

  • The first step is creating a baseline inventory of all AI tools active within an organization.
  • Most companies are surprised by the results, with over 80% of Laridan's customers discovering far more shadow AI usage than they had officially licensed or were aware of. This creates both security risks and opportunities to identify popular, effective tools.
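
As a toy illustration of that discovery step, assuming access logs can be exported as user/domain pairs; the domain catalog, license list, and log records below are placeholders, not Laridan's actual approach:

```python
"""Toy shadow-AI discovery pass over hypothetical access logs."""
from collections import Counter

# Hypothetical catalog of AI tool domains and the tools actually licensed.
# A real catalog would be far larger and continuously maintained.
KNOWN_AI_DOMAINS = {
    "chat.openai.com": "ChatGPT",
    "claude.ai": "Claude",
    "gemini.google.com": "Gemini",
    "cursor.sh": "Cursor",
}
LICENSED_TOOLS = {"ChatGPT"}

# Placeholder log records: (user, domain) pairs exported from a proxy or SSO.
access_log = [
    ("alice", "chat.openai.com"),
    ("bob", "claude.ai"),
    ("carol", "cursor.sh"),
    ("bob", "claude.ai"),
]

usage = Counter(
    KNOWN_AI_DOMAINS[domain]
    for _, domain in access_log
    if domain in KNOWN_AI_DOMAINS
)

for tool, hits in usage.most_common():
    status = "licensed" if tool in LICENSED_TOOLS else "SHADOW"
    print(f"{tool:10s} {hits:3d} hits  [{status}]")
```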

2. Engagement: Driving Safe and Effective Adoption

  • Simply buying licenses isn't enough; enterprise software rollouts notoriously suffer from low usage.
  • To drive adoption, employees need to feel safe—both from looking foolish and from accidentally violating company policy or regulations (like EU AI rules) and getting fired.
  • Laridan provides tools that act as a "safe space," guiding users and blocking them from uploading sensitive data or asking prohibited questions, thereby encouraging experimentation and increasing usage.
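
A bare-bones sketch of that kind of guardrail, checking a prompt against a few illustrative patterns before it leaves the company; a real data-loss-prevention policy would be far broader, and nothing here reflects Laridan's product:

```python
"""Minimal prompt guardrail: flag obvious sensitive data before a prompt
is sent to an external AI tool. The patterns below are illustrative
placeholders, not a complete policy."""
import re

SENSITIVE_PATTERNS = {
    "credit card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "US SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "API key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

def check_prompt(prompt: str) -> list[str]:
    """Return the names of any sensitive patterns found in the prompt."""
    return [name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(prompt)]

prompt = "Summarize this customer record: SSN 123-45-6789, card 4111 1111 1111 1111"
violations = check_prompt(prompt)
if violations:
    print(f"Blocked before upload: contains {', '.join(violations)}")
else:
    print("Prompt allowed")
```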

3. Productivity: Connecting Usage to Outcomes

  • The final and most complex step is measuring whether AI usage leads to increased productivity.
  • Laridan's current approach marries traditional productivity surveys with its proprietary, passive data on actual tool usage. This allows them to compare the self-reported productivity of heavy AI users versus non-users in the same department.
  • Russ states, "The worst way to measure productivity is I'm going to send a survey to my employees and say do you feel more productive... you have no idea if they're actually using the tools."
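
A toy version of that comparison, joining hypothetical survey answers to hypothetical usage telemetry; the threshold and rating scale are arbitrary illustrations:

```python
"""Sketch of marrying survey responses with passive usage data.
Both datasets are made up; the point is the join, not the numbers."""
from statistics import mean

# Passive telemetry: AI prompts per week by employee (placeholder values).
usage = {"alice": 120, "bob": 4, "carol": 85, "dave": 0, "erin": 40}

# Survey: self-reported productivity change on a -2..+2 scale.
survey = {"alice": 2, "bob": 1, "carol": 1, "dave": 0, "erin": 1}

HEAVY_THRESHOLD = 30  # prompts/week; an arbitrary cutoff for illustration

heavy = [survey[e] for e, n in usage.items() if n >= HEAVY_THRESHOLD]
light = [survey[e] for e, n in usage.items() if n < HEAVY_THRESHOLD]

print(f"Heavy users ({len(heavy)}): mean self-reported lift {mean(heavy):+.2f}")
print(f"Light/non-users ({len(light)}): mean self-reported lift {mean(light):+.2f}")
```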

The Measurement Conundrum: Goodhart's Law and Defining Baselines

  • Goodhart's Law is introduced as a key challenge: "When a measure becomes a target, it is no longer accurate as a measure." If you start rewarding engineers for lines of code, you'll get more code, but not necessarily better code.
  • The principal-agent problem is also at play. An individual employee (the agent) might use an AI tool to do their 8-hour job in 4 hours and spend the rest of the day on personal tasks. This is a productivity win for the employee but not for the company (the principal).
  • To counter this, Laridan focuses on aggregate, departmental-level metrics rather than individual performance. One effective proxy for productivity is measuring interdepartmental responsiveness—for example, does the legal team respond to requests from the product team faster after adopting an AI tool?
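
A small sketch of that responsiveness proxy, computed on aggregate request records; the timestamps, response times, and rollout date are hypothetical:

```python
"""Aggregate responsiveness proxy: how quickly does one department answer
another's requests before vs. after an AI rollout? All data is made up."""
from datetime import date
from statistics import median

ROLLOUT = date(2025, 6, 1)  # hypothetical date the legal team got the tool

# (request date, hours until legal responded to product) -- placeholder records
requests = [
    (date(2025, 4, 3), 52), (date(2025, 4, 20), 47), (date(2025, 5, 12), 60),
    (date(2025, 7, 2), 30), (date(2025, 7, 19), 22), (date(2025, 8, 5), 26),
]

before = [h for d, h in requests if d < ROLLOUT]
after = [h for d, h in requests if d >= ROLLOUT]

print(f"Median response before rollout: {median(before):.0f}h")
print(f"Median response after rollout:  {median(after):.0f}h")
print(f"Change: {(median(after) - median(before)) / median(before):+.0%}")
```

Because the metric is departmental and aggregate, it is harder to game than individual counts like lines of code or prompts sent.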

The State of Enterprise AI: Anxiety and Urgency

  • Russ shares findings from a Laridan survey of 350 heads of IT, revealing a climate of high stakes and deep uncertainty among corporate leaders.
  • Enterprises are projected to spend $700 billion on AI, yet 70% of leaders believe a significant portion of this money is being wasted.
  • This feeling of waste is driven by a lack of measurement systems. A customer told Russ, "Every board meeting I go in for my other four metrics, I have some report of how are we doing... and on AI all I have is the amount of stuff we bought."
  • Despite the uncertainty, there is immense pressure to act. A striking 85% of leaders believe they have only the next 18 months to become an AI leader or risk falling permanently behind.

From Top-Down Mandates to Bottom-Up Discovery

  • The conversation concludes that true AI diffusion won't come from top-down mandates but from identifying and amplifying the "heroes" within an organization who discover groundbreaking use cases.
  • The most significant productivity gains often come from an individual employee figuring out how to do an 8-hour task in one minute. The worst outcome is if that employee keeps their method a secret.
  • Companies need a system to identify these power users and their workflows, celebrate them, and systematically share that knowledge across the organization.
  • This is where AI's "product marketing problem" becomes clear. Selling a tool that "does anything" is ineffective. Success comes from identifying specific, high-value use cases, like the "tip calculator" on the Sharp Wizard, that provide an immediate and obvious benefit.

Conclusion

The episode argues that enterprise AI's growth is bottlenecked by a profound measurement crisis. Investors and researchers should focus on the emerging AI governance and analytics layer, as tools that can prove ROI are essential for unlocking the next trillion dollars in corporate AI spending and transforming potential into profit.
