a16z
December 12, 2025

AI Eats the World: Benedict Evans on the Next Platform Shift

Benedict Evans, a veteran tech analyst, cuts through the AI hype to ask the uncomfortable questions: Is this just another platform shift, or something more? And if it's a bubble, what kind? He argues that while AI is as big as the internet or smartphones, its ultimate trajectory is fundamentally unknowable, creating a "schizophrenia" between AGI dreams and the messy reality of productizing a powerful but often unhelpful technology.

The Unknowable Ceiling of AI

  • “We don't know the physical limits of this technology because we don't really have a good theoretical understanding of why it works so well. Nor indeed do we have a good theoretical understanding of what human intelligence is, and so we don't know how much better it can get.”
  • A Familiar Pattern: AI is following classic platform shift dynamics: new winners, old giants struggling, and an inevitable bubble. Think of it like the dot-com boom, but with a twist.
  • The "Elevator Test": Just as automatic elevators became "just a lift," AI will eventually be so embedded it's no longer called "AI." The term itself only applies to "new stuff."
  • Vibes-Based Forecasting: Unlike previous tech shifts where we could model physical limits (e.g., modem speeds), we lack a theoretical understanding of AI's capabilities. This makes long-term predictions speculative, driven more by "feelings" than data.
  • Compute's Black Box: Forecasting AI compute demand is like predicting internet bandwidth in the late '90s – a complex, multi-variable problem with a massive range of outcomes. Hyperscalers are overinvesting because the downside of not investing is perceived as greater.

AGI Dreams vs. Software Reality

  • “I watched this one of the OpenAI live streams... they spend the first 20 minutes talking about how they're going to have like human-level PhD-level AI researchers like next year and then the second half of the stream is, 'Oh, and here's our API stack that's going to enable hundreds and thousands of new software developers just like Windows.' And you think, well, those can't both be true.”
  • The Schizophrenic Narrative: There's a fundamental tension between the pursuit of human-level AGI and the practical goal of building API-driven software. If models truly scale to AGI, why would anyone need to write code or build software companies?
  • AGI: Always 5 Years Away: Debates about AGI often turn theological: either it's already here and is "just more software," or it's "five years away and will always be five years away."
  • The "God in a Box" Paradox: If AI becomes a "god in a box" that can do everything, the entire software industry's premise is challenged. This creates a strategic dilemma for anyone investing in or building traditional software.

The Productization Challenge: Beyond the Chatbot

  • “ChatGPT has got 8 or 900 million weekly active users... ask yourself why five times more people look at it, get it, know what it is, have an account, know how to use it, and can't think of anything to do with it this week or next week. Why is that?”
  • The Usage Gap: Despite high awareness, many users struggle to find daily, compelling use cases for raw chatbots. A powerful tool is not a product until it solves a specific problem.
  • Solutions, Not Technologies: People buy solutions, not technologies. A raw LLM is like a spreadsheet handed to a lawyer: impressive, but not part of their daily workflow the way it is for an accountant.
  • The GUI's Enduring Value: GUIs abstract complexity and embed institutional knowledge, guiding users through workflows. A raw chatbot, by contrast, demands users think from first principles, asking "literally everything."
  • "Infinite Interns" Need Supervision: AI offers "infinite interns," but they lack domain knowledge and validation. If human checking is still required for accuracy (e.g., data entry), the efficiency gains diminish.
  • New Behaviors, New Products: The true "iPhone-esque" products of AI will enable entirely new behaviors and solve problems that couldn't be addressed before, much like mobile enabled Uber and Airbnb.

Key Takeaways:

  • Strategic Implication: The AI bubble is inevitable. Focus on defensible positions: deep product integration, proprietary data, and distribution, rather than just raw model performance.
  • Builder/Investor Note: The opportunity lies in productizing AI for specific "jobs to be done" within niche industries, creating intuitive UIs, and building in validation, not just building another foundational model.
  • The "So What?": We're about to figure out the true "job to be done" for many industries. AI will unbundle existing businesses by exposing their hidden inefficiencies or non-obvious defensibilities.

Podcast Link: https://www.youtube.com/watch?v=RH9vJNxFKDA

This episode dissects AI's true impact, revealing why current models are not AGI and exposing the strategic fault lines emerging between hyperscalers and application layers.

AI as the Next Platform Shift: Patterns and Precedents

  • Benedict Evans, a renowned technology analyst, contextualizes AI within historical platform shifts, arguing that while it is a monumental change, its scale is comparable to the internet or smartphones rather than a foundational shift like electricity. He challenges the notion that AI is an entirely unprecedented phenomenon.
  • Evans asserts that platform shifts follow predictable patterns, including market bubbles, the rise and fall of dominant players, and the creation of new trillion-dollar companies.
  • He highlights the varied impact of past shifts: transformative for industries like newspapers, but merely "useful" for others like cement. AI's effect will similarly bifurcate.
  • The term "AI" itself is fluid; it typically refers to "new stuff." Once a technology becomes ubiquitous (like databases or machine learning), it ceases to be called AI in common parlance.
  • “I’m a centrist. I think this is as big a deal as the internet or smartphones, but only as big a deal as the internet or smartphones.” – Benedict Evans

The AGI Disconnect and Unknowable Limits

  • Evans identifies a significant "schizophrenia" in the AI discourse, contrasting the hype around near-term AGI (Artificial General Intelligence – hypothetical AI with human-like cognitive abilities across a wide range of tasks) with the practical reality of API-driven software development. The lack of a theoretical understanding of AI's capabilities makes forecasting its future trajectory inherently "vibes-based."
  • Sam Altman's claims of "PhD-level researchers" are directly contradicted by Demis Hassabis, underscoring a fundamental disagreement on current AI capabilities.
  • The paradox: if models scale to human-level intelligence, the need for traditional software development (and thus, software companies) diminishes. Yet, companies are simultaneously building extensive API stacks for developers.
  • Unlike previous platform shifts (e.g., internet bandwidth, smartphone battery life), AI lacks clear physical or theoretical limits, making deterministic predictions impossible.
  • “Either it’s already here and it’s just more software, or it’s five years away and will always be five years away.” – Benedict Evans

The Inevitable AI Bubble and Compute Overinvestment

  • Evans states flatly that "very new, very, very big, very, very exciting world-changing things tend to lead to bubbles." He warns of potential overinvestment in compute infrastructure, driven by a fear of missing out and an inability to accurately forecast future demand or efficiency gains.
  • Hyperscalers are currently operating under the premise that "the downside of not investing is bigger than the downside of overinvesting," leading to massive capital expenditure.
  • Forecasting AI compute requirements is akin to predicting internet bandwidth usage in the late 1990s – a complex, multi-variable problem with a vast range of possible outcomes.
  • Mark Zuckerberg's assertion that Meta could "resell capacity" if overinvested is challenged, as widespread overcapacity would devalue such assets across the industry.
  • “If we’re not in a bubble now, we will be.” – Benedict Evans

AI's Bifurcated Utility and the Productization Imperative

  • AI's impact is not uniform; it excels in "open-ended" tasks like software development and marketing but struggles with tasks requiring precise validation or complex workflows. The challenge lies in productizing raw AI capabilities into usable, integrated solutions.
  • Current generative AI deployment bifurcates: immediate, obvious utility for software development, marketing, and specific enterprise point solutions, versus a broader user base struggling to find daily applications.
  • Despite 800-900 million weekly active users for ChatGPT, a significant portion cannot identify a weekly use case, highlighting the gap between awareness and sustained utility.
  • Raw chatbots are compared to blank spreadsheets; they offer immense power but require users to "think from first principles" about how to apply them, unlike purpose-built software with curated workflows.
  • “If you’re the kind of person who is using this for hours every day, ask yourself why five times more people look at it, get it, know what it is, have an account, know how to use it, and can’t think of anything to do with it this week or next week.” – Benedict Evans

Competitive Dynamics: Fragile Leads and Strategic Divergence

  • OpenAI's consumer lead is deemed "very fragile" due to a lack of inherent network effects, feature lock-in, or control over its cost base (relying on Microsoft Azure). Hyperscalers (Google, Meta, Amazon, Apple) face distinct strategic questions as they integrate AI into their ecosystems.
  • Benchmark scores for frontier models are converging, commoditizing the underlying AI capability for casual users. Distribution and brand become critical differentiators.
  • Google integrates AI to optimize existing search and ad businesses, viewing it as an evolution rather than a complete disruption.
  • Meta sees AI as transformative for content, social, and recommendation, making proprietary models essential.
  • Amazon faces the question of whether LLMs (Large Language Models – AI models trained on vast text datasets to understand and generate human-like language) can finally enable superior, at-scale recommendations and discovery beyond its commodity retail model.
  • Apple's challenge is whether AI fundamentally changes the nature of computing, making its lack of a proprietary chatbot a problem, or if it remains a service that can be integrated into its device ecosystem.
  • “You’ve got these 800-900 million weekly active users, but that feels very fragile because all you’ve really got is the power of the default and the brand. You don’t have a network effect.” – Benedict Evans

Investor & Researcher Alpha

  • Capital Allocation Shift: Expect a continued surge in compute infrastructure investment, but with increasing scrutiny on ROI as efficiency gains accelerate. Investors should prioritize companies that can demonstrate clear product-market fit for AI, not just raw model performance.
  • Bottleneck Identification: The primary bottleneck is no longer just model capability or compute, but effective productization and integration of AI into specific workflows. Companies that can translate raw AI power into intuitive, validated solutions will capture significant value.
  • Research Direction Obsolescence: Pure "scaling law" research, while foundational, is becoming less actionable for investors. The focus must shift to applied AI, human-AI interaction, and validation mechanisms to address error rates and build trust.

Strategic Conclusion

AI is undeniably a platform shift, but its ultimate impact hinges on productization and integration into specific workflows, not just raw model power. The industry's next step is to move beyond general-purpose chatbots and build specialized, validated AI applications that solve real-world problems.
