a16z
November 7, 2025

Amjad Masad & Adam D’Angelo: How Far Are We From AGI?

Replit CEO Amjad Masad and Quora/Poe CEO Adam D'Angelo debate the trajectory of AI, from the path to AGI to its profound economic and societal consequences. They offer contrasting views on whether current LLMs are a breakthrough or a brute-force distraction from cracking true intelligence.

The Great AGI Debate: Breakthrough vs. Brute Force

  • "Nothing seems fundamentally so hard that it couldn't be solved by the smartest people in the world working incredibly hard for the next five years."
  • "I don't think LLMs as they can be understood are on the way to AGI... a machine that can go into any environment and learn efficiently in the same way that a human could."
  • Adam D'Angelo argues that progress is faster than ever, with current bottlenecks like context ingestion and computer use being solvable engineering problems, not fundamental intelligence limits. He defines a practical AGI as an AI that can perform any job a remote human worker can.
  • Amjad Masad is more skeptical, viewing the current LLM paradigm as a "brute-force" approach. He argues that progress relies on immense, non-scalable human effort (data labeling, creating RL environments) and that LLMs are a different kind of intelligence, not a path to the efficient, adaptable learning that defines human cognition.

The New Economy: Solo Entrepreneurs & Expert Bottlenecks

  • "I am very excited for the number of solo entrepreneurs that this technology is going to enable. It's vastly increased what a single person can do."
  • "I worry about the deleterious effect of LLMs in the economy in that, say, LLMs effectively automate the entry-level job but not the expert's job... they're not hiring new people because the agents are better than new people."
  • AI is creating a new class of hyper-leveraged "solo entrepreneurs" by making it possible for one person to build what previously required large, funded teams.
  • A key economic risk emerges from automating entry-level roles while still relying on senior experts. This creates a pipeline problem: without new talent being trained, who will become the next generation of experts needed to train future AIs? This could lead to a weird equilibrium where productivity increases, but hiring stalls.

AI's Impact: Disrupting the Disruptors

  • "It feels like it is an obvious supercharge for the incumbents... but it also enables new business models that are perhaps counter-positioned against the existing ones."
  • Unlike past tech waves, AI is both a sustaining and disruptive force. Incumbents like Google have read "The Innovator's Dilemma," learned the lessons, and are aggressively integrating AI to defend their positions.
  • However, the AI market is proving vast enough to support multiple winners, diverging from the winner-take-all dynamics of Web2. Weaker network effects and direct subscription models allow new entrants to compete and capture venture-scale value.

Key Takeaways:

  • The conversation paints a picture of an industry moving at breakneck speed but diverging on its ultimate destination. While the pragmatic view sees a world transformed by "good enough" brute-force AI, the philosophical view warns that we might be getting stuck in a local maximum, mistaking compute for cognition.
  • AGI Is a Definitional Debate. Progress toward an AI that can replace a remote worker is happening fast. However, achieving "true" human-like learning efficiency may require an entirely new paradigm beyond scaling current LLMs.
  • The New Creator Economy Is Code. AI is turning software development into a mainstream creative pursuit, empowering a new class of solo entrepreneurs who can build what previously required entire teams.
  • Incumbents Learned Their Lesson. Unlike past tech shifts, today's giants are aggressively adopting AI, making it both a sustaining and disruptive force. The market is large enough for both incumbents and startups to create massive value.

For further insights and detailed discussions, watch the full podcast: Link

This episode dissects the fierce debate between scaling current AI models and the need for fundamental breakthroughs, revealing what the future of AGI means for the economy, human labor, and investment strategy.

The Great Debate: Are LLMs on the Path to AGI?

  • Adam D'Angelo opens with a strong optimistic stance, arguing that the rapid progress in reasoning, code generation, and video models over the past year signals accelerating momentum, not a slowdown. He dismisses the recent bearishness around Large Language Models (LLMs), suggesting that current limitations are not about core intelligence but about providing models with the right context and tools, like computer use. He believes these hurdles will be overcome within the next one to two years, enabling the automation of a large portion of human labor.
  • Amjad Masad offers a more cautious perspective, arguing that the hype around achieving AGI (Artificial General Intelligence)—defined as AI capable of understanding or learning any intellectual task that a human being can—by 2027 is unrealistic and risks prompting bad policy. He contends that LLMs represent a different, non-human form of intelligence with clear limitations that are currently being papered over with manual work and contrived training environments. Amjad points to simple failures, like an LLM's inability to count letters in a sentence, as evidence that we haven't truly "cracked intelligence."
  • Amjad Masad: "My criticism of the idea of like AGI 2027... and all this hype papers that are not really science, they're just vibe... is that it's unrealistic."
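The letter-counting failure Amjad cites is striking precisely because the task is trivial in code: token-based models process subword chunks rather than individual characters, which is why they stumble on it. A two-line illustration (the example word is our own, not one from the episode):

```python
# Counting letters is trivial for a program but has historically tripped up
# LLMs, which see subword tokens rather than raw characters.
word = "strawberry"
print(word.count("r"))  # → 3
```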

Defining AGI and the "Brute Force" Approach

  • The conversation shifts to defining AGI. Adam proposes a practical anchor point: an AI that can perform any job a remote human worker can. He believes that even if current architectures have weaknesses, such as a lack of continuous learning, those missing capabilities can be faked well enough to achieve this functional outcome. He sees no hard limits to the current paradigm of scaling models with more data and compute.
  • Amjad defines AGI through the lens of RL (Reinforcement Learning), which is a type of machine learning where an agent learns to make decisions by performing actions in an environment to achieve some goal. He defines AGI as a machine that can enter any environment and learn new skills efficiently, much like a human learning to play pool. He argues that today's models require enormous, pre-existing human expertise and data, a "brute force" approach that is not scalable in the way true intelligence would be. While Adam agrees we are in a brute-force regime, he believes it's a viable path to achieving human-level job performance, even if it's less efficient than biological intelligence.
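Amjad's RL framing can be made concrete with a toy example. The sketch below is entirely illustrative (a 3-armed bandit of our own construction, not anything discussed in the episode), but it shows the core loop he describes: an agent acts in an environment, observes a reward, and updates its estimates so that better actions win out over time.

```python
import random

# Toy illustration of the RL loop: act in an environment, observe reward,
# update behavior. Epsilon-greedy action selection on a 3-armed bandit.
random.seed(0)

true_means = [0.2, 0.5, 0.8]  # hidden reward probability of each action
values = [0.0, 0.0, 0.0]      # agent's running estimate per action
counts = [0, 0, 0]

def pull(action):
    """Environment: reward 1 with the action's hidden probability, else 0."""
    return 1.0 if random.random() < true_means[action] else 0.0

for _ in range(2000):
    # Explore 10% of the time; otherwise exploit the best-looking action.
    if random.random() < 0.1:
        action = random.randrange(3)
    else:
        action = max(range(3), key=lambda a: values[a])
    reward = pull(action)
    counts[action] += 1
    # Incremental mean update: V <- V + (r - V) / n
    values[action] += (reward - values[action]) / counts[action]

best = max(range(3), key=lambda a: values[a])
print(best)  # the agent settles on the highest-reward action
```

The contrast with Amjad's point: this agent learns from thousands of trials in a single narrow environment, whereas a human learning pool generalizes from a handful of attempts.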

The Economic Impact of AI Automation

  • Adam speculates that if an LLM could perform any human job for $1 per hour, GDP growth would far exceed the typical 4-5%. However, he acknowledges that the real world will be constrained by bottlenecks, such as the cost of energy, the construction of power plants, and the 20% of tasks that AI may not be able to automate. This suggests that while transformative, the economic impact may be gradual and uneven.
  • Amjad raises a critical concern about the "deleterious effect" of LLMs on the economy. He worries that AI will automate entry-level jobs (e.g., junior quality assurance) but not expert roles, creating a strange equilibrium where productivity increases but companies stop hiring new talent. This breaks the pipeline for developing future experts, which is a significant risk since current models are trained on data generated by those very experts.

The Future of Human Work and Knowledge

  • The discussion explores which jobs will thrive. Adam predicts a surge in demand for roles that leverage AI to accomplish tasks the AI cannot do alone. He also suggests a future where, with wealth redistribution, people are free to pursue art and poetry, citing the rise in chess players after computers surpassed humans. He challenges the idea that you must be human to understand human wants, pointing to recommender systems on platforms like Facebook and Quora, which are already superhuman at predicting user interests.
  • Amjad argues that many jobs are fundamentally about servicing other humans and require a shared human experience to generate new ideas. He believes that unless AI is embodied and lives a human experience, humans will always remain the primary generators of economic ideas. He highlights the importance of tacit knowledge—the kind of knowledge that is difficult to transfer to another person by means of writing it down or verbalizing it—which experts possess but is not yet captured in training data.

The Sovereign Individual and the Power Shift

  • Amjad introduces the book The Sovereign Individual, which predicted in the 1990s that technology would empower a small class of highly leveraged entrepreneur-capitalists. This thesis suggests that as AI automates labor, the "unit of economic productivity" shifts from the individual worker to the generative entrepreneur who can spin up companies with AI agents. This could lead to a future where nation-states compete for these "sovereign individuals," fundamentally altering political and cultural structures.
  • Adam and Amjad debate whether AI is a centralizing or decentralizing force, referencing Peter Thiel's quip that "crypto is libertarian, AI is communist." While AI empowers large incumbents, it also vastly increases what a single person can do, enabling a new wave of solo entrepreneurs. The conclusion is that AI may create a barbell effect, empowering both the massive, centralized players and the hyper-productive individuals at the edges.

Sustaining vs. Disruptive Innovation in the AI Era

  • The conversation turns to The Innovator's Dilemma, a business theory explaining how market leaders can fail by ignoring new, disruptive technologies that initially seem like toys. Amjad notes that while ChatGPT was initially counter-positioned against Google's established search business, incumbents are now hyper-aware of disruption. He argues that AI is a rare technology that is both sustaining for incumbents (supercharging Google's existing products) and disruptive, enabling new business models.
  • Adam adds that the entire ecosystem, from public market investors to company leadership, has internalized the lessons of The Innovator's Dilemma. Founder-controlled companies are more willing to make long-term, defensive investments to avoid being disrupted. This hyper-competitive environment makes it harder for true disruption to occur compared to previous tech cycles.

The Evolving AI Business Landscape

  • The speakers observe that the AI market is producing multiple winners, a departure from the "winner-take-all" dynamics of the Web2 era. Adam attributes this to the diminished role of network effects; while scale still provides data and capital advantages, it doesn't create an insurmountable moat. The ability for new companies to monetize from day one via subscriptions (powered by platforms like Stripe) also makes the ecosystem more friendly to new entrants.
  • Amjad adds a geopolitical layer, noting that the fracturing of globalization creates opportunities for regional foundation models, such as the "OpenAI of Europe." This geographic fragmentation further supports a multi-winner market structure, making investments in non-market leaders potentially viable.

The Future of Replit: The Decade of Agents

  • Amjad outlines his vision for Replit, aligning with Andrej Karpathy's prediction that this will be the "decade of agents." He describes the evolution of AI in coding from autocomplete (Copilot) to chat and now to agents that manage the entire development lifecycle—from writing code to provisioning infrastructure and running tests. He details the progression of Replit's agent, from V1 running for two minutes to V3 running for over 28 hours, thanks to the integration of a verifier in the loop to test code and correct bugs autonomously.
  • Looking ahead, Amjad envisions a future with parallel agents working on multiple features simultaneously, collaborating and merging code. He also highlights the need for better UI/UX, moving beyond text prompts to multimodal interactions like drawing diagrams on a whiteboard. The ultimate goal is to create specialized agents (e.g., a Python data science agent, a front-end agent) with persistent memory that act as expert members of a development team.
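The "verifier in the loop" idea can be sketched abstractly: generate a candidate, run it against tests, and feed failures back for another attempt. Everything below (`propose_fix`, `run_tests`, the deliberate bug) is a hypothetical illustration of the pattern, not Replit's actual implementation.

```python
# Hedged sketch of a verifier-in-the-loop agent: a generator proposes code,
# a verifier executes tests against it, and failures trigger another attempt.
# All names and the toy bug are illustrative, not Replit's real API.

def run_tests(namespace):
    """Verifier: run a tiny test suite against the candidate code."""
    failures = []
    try:
        if namespace["add"](2, 3) != 5:
            failures.append("add(2, 3) should equal 5")
    except Exception as exc:
        failures.append(f"add raised {exc!r}")
    return failures

def propose_fix(attempt, failures):
    """Stand-in for the model: emits a buggy draft, then a corrected one."""
    if attempt == 0:
        return "def add(a, b):\n    return a - b"  # deliberate bug
    return "def add(a, b):\n    return a + b"

def agent_loop(max_attempts=5):
    """Generate -> verify -> repair until the tests pass."""
    failures = []
    for attempt in range(max_attempts):
        namespace = {}
        exec(propose_fix(attempt, failures), namespace)  # "generate"
        failures = run_tests(namespace)                  # "verify"
        if not failures:
            return attempt + 1  # number of attempts needed
    return None

print(agent_loop())  # → 2: the first draft fails, the second passes
```

The verifier is what lets an agent run for hours unattended: instead of a human checking each output, failing tests supply the corrective signal automatically.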

Investment Theses and Underhyped Opportunities

  • When asked about exciting investment areas, Adam points to "vibe coding" as an underhyped category with massive potential. This refers to the ability for anyone, not just professional software engineers, to create sophisticated software by describing their intent. He believes the tools are still far from their full potential, but once they mature, they will unlock immense opportunities for mainstream users to build complex applications.
  • Amjad expresses excitement for "mad science experiments" that combine existing AI components in novel ways, such as the DeepSeek-OCR model. He feels the current AI ecosystem is too focused on a "get-rich-driven" mentality and lacks the playful tinkering and experimentation that characterized the Web 2.0 era. He calls for more funding for companies exploring novel applications by composing different AI primitives, similar to the concept of composability in crypto.

Consciousness, Intelligence, and the Hard Problem

  • In the final segment, Amjad discusses the philosophical questions surrounding consciousness and intelligence. He notes the interesting emergent behavior in Claude 4.5, which seems to show awareness of its context window, but emphasizes that consciousness remains a non-scientific question. He worries that the intense focus on scaling LLMs is diverting talent from fundamental research into the true nature of intelligence, referencing Roger Penrose's argument in The Emperor's New Mind that the human brain is fundamentally not a computer.
  • Adam D'Angelo: "Nothing seems fundamentally so hard that it couldn't be solved by the smartest people in the world working incredibly hard for the next five years."

Conclusion

This discussion highlights the critical tension between brute-force scaling of current AI models and the need for new paradigms to achieve true intelligence. Investors and researchers must track the economic viability of "functional AGI" while remaining alert for fundamental breakthroughs that could redefine the technological landscape entirely.
