This episode dismantles the dominant software-centric view of AI, arguing that true intelligence (and consciousness) emerges from embodied, decentralized, and biologically inspired systems, not just scaled-up algorithms.
Redefining Intelligence Beyond Computationalism
- Michael Timothy Bennett, a computer scientist focused on the nature of intelligence, begins by challenging conventional definitions. He favors Pei Wang's definition of intelligence as "adaptation with insufficient knowledge and resources," preferring its succinctness and clarity to more elaborate academic alternatives.
- Bennett contrasts this with François Chollet's definition, which frames intelligence as the efficiency with which a system acquires new skills. He notes that while Chollet's work descends from the ideas of Marcus Hutter and uses Kolmogorov complexity (the length of the shortest program that produces a given object), Chollet himself argues that pure compression is insufficient for intelligence (a toy compression sketch follows this list).
- Chollet views Large Language Models (LLMs) as collections of skill programs, where the programs are the output of intelligence, not the intelligence itself. This sets the stage for a deeper critique of purely computational models.
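Kolmogorov complexity itself is uncomputable, but the intuition is easy to make concrete with an off-the-shelf compressor as a crude, computable upper bound. A minimal Python sketch (not from the episode; zlib stands in for the "shortest program"):

```python
import os
import zlib

def description_length(data: bytes) -> int:
    # A computable upper bound on Kolmogorov complexity: the size of the
    # zlib-compressed representation. True K(x) is uncomputable; any real
    # compressor can only bound it from above.
    return len(zlib.compress(data, 9))

regular = b"ab" * 4096            # low complexity: "repeat 'ab' 4096 times"
noise = os.urandom(len(regular))  # high complexity: barely shorter than itself

print(description_length(regular))  # small: the pattern compresses well
print(description_length(noise))    # ~len(noise): randomness doesn't compress
```

Chollet's point survives the demo: a system can compress superbly and still not be intelligent, because compression measures the artifact, not the adaptive process that produced it.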
The Limits of Theoretical AI and the Problem of Subjectivity
- The discussion moves to theoretical models like AIXI, a formalization of a general reinforcement learning agent that uses Solomonoff induction (a mathematical version of Occam's razor) to find the simplest model of its environment. Bennett, while finding the overall idea compelling, highlights a critical flaw.
- He argues that in an interactive setting, the agent's notion of "simplicity" is subjective and dependent on its "interpreter" or internal language. An external observer might assess complexity differently, making it possible to design scenarios where the agent performs arbitrarily poorly.
- Bennett states, "You can essentially make simplicity completely disconnected from performance if you like." This subjectivity undermines the claim that such models are truly optimal; the toy interpreters sketched after this list make the point concrete.
- This leads to the necessity of considering embodiment. Intelligence cannot be understood as abstract software because its performance is fundamentally tied to the hardware and environment it operates within. This aligns with the concept of enactive cognition, where cognition is seen as an interaction between an organism and its environment.
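A toy illustration of this interpreter-relativity (my construction, not Bennett's formalism): two invented description languages assign opposite "simplicity" rankings to the same pair of strings.

```python
# Two toy "interpreters," each measuring description length in its own
# invented language. The cost models are made up purely to show the effect.

def cost_run_language(s: str) -> int:
    # Language A abbreviates runs: "aaaa" -> ("a", 4), at 2 symbols per run.
    runs = 1 + sum(1 for a, b in zip(s, s[1:]) if a != b)
    return 2 * runs

def cost_literal_language(s: str) -> int:
    # Language B has no abbreviations: every symbol is spelled out.
    return len(s)

x = "a" * 10 + "b" * 10   # two long runs
y = "ab" * 10             # twenty runs of length one

print(cost_run_language(x), cost_literal_language(x))  # 4 vs 20: A finds x simple
print(cost_run_language(y), cost_literal_language(y))  # 40 vs 20: A finds y complex
# Which hypothesis counts as "simplest" depends on the machine doing the
# judging, so an adversary who picks the environment can make a
# simplicity-seeking agent perform arbitrarily badly.
```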
Computational Dualism: The Modern Pineal Gland
- Bennett introduces "computational dualism" as a critique of the modern tendency to separate AI software from its physical substrate. He draws a provocative analogy to Cartesian dualism, the 17th-century idea that the mind (mental substance) and body (physical substance) were separate entities interacting through the pineal gland.
- He argues that today's AI research often treats software as a disembodied "mental substance" that runs on hardware, with the Turing machine acting as the modern-day pineal gland.
- "It's interesting how much it has stuck around," Bennett observes. "We have just replaced the pineal gland with a touring machine."
- This perspective is flawed because the interpreter (hardware) dictates what the software can actually do, making any claims about a software-only intelligence incomplete. This reinforces his argument for mortal computation, where the physical "stuff" is inseparable from the computation itself.
Assessing the State of AGI and the Role of Benchmarks
- When asked how close we are to AGI, Bennett offers a nuanced view. While current models can automate many economic tasks, they lack the sample efficiency and adaptive qualities of human intelligence.
- He expresses skepticism about benchmark results, like those on Chollet's ARC-AGI test, especially when the code and methods are not transparently released. Even impressive scores don't necessarily translate to genuine understanding: his go-to sanity check is asking new models to add long numbers, a task they often fail (a sketch of such a probe follows this list).
- For investors and researchers, this is a crucial insight: benchmark saturation may be more a function of clever engineering or memorization than a true leap in general intelligence. The focus should be on systems that demonstrate genuine adaptability and efficiency, particularly in hardware.
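A probe along those lines is easy to run yourself. In the sketch below, `query_model` is a placeholder for whatever chat API you are testing; the prompt format and parameters are illustrative.

```python
import random

def make_addition_probe(digits: int = 30, n: int = 20):
    # Long-integer addition problems with exact ground truth. Python's
    # arbitrary-precision ints make grading trivial; the same task is a
    # known weak spot for LLMs.
    problems = []
    for _ in range(n):
        a = random.randrange(10 ** (digits - 1), 10 ** digits)
        b = random.randrange(10 ** (digits - 1), 10 ** digits)
        problems.append((f"What is {a} + {b}? Reply with digits only.", str(a + b)))
    return problems

def score(query_model, problems) -> float:
    # `query_model`: any callable mapping a prompt string to a reply string.
    correct = sum(query_model(q).strip() == answer for q, answer in problems)
    return correct / len(problems)
```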
Hybrid Architectures and the Path Forward
- Bennett sees promise in hybrid systems that combine the strengths of different computational tools. He categorizes these tools into two main types:
  - Approximation: What LLMs excel at, handling vast, noisy data inexactly.
  - Search: Precise, procedural methods used for tasks like navigation.
- He points to alternative AGI architectures like Pei Wang's NARS (Non-Axiomatic Reasoning System) and Ben Goertzel's Hyperon, which are designed as modular, decentralized frameworks capable of integrating various components, including LLMs, symbolic reasoners, and other modules. These systems aim for versatility rather than monolithic scale; the sketch below shows one way approximation and search can be combined.
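One way to picture the approximation/search split is a propose-and-verify loop: an inexact scorer (a stand-in for an LLM) ranks candidates cheaply, and an exact procedure verifies them. Everything below, the candidate pool, the prior scores, and the toy task of finding f with f(x) = 2x + 1, is invented for illustration.

```python
import heapq

CANDIDATES = [
    ("double", lambda x: 2 * x),
    ("inc", lambda x: x + 1),
    ("double_inc", lambda x: 2 * x + 1),
    ("square", lambda x: x * x),
]

def approximate_score(name: str) -> float:
    # Pretend prior from a learned model: cheap, noisy, possibly wrong.
    prior = {"double": 0.9, "double_inc": 0.6, "inc": 0.5, "square": 0.1}
    return prior.get(name, 0.0)

def verify(fn, examples) -> bool:
    # Exact, procedural check: no approximation allowed here.
    return all(fn(x) == y for x, y in examples)

def solve(examples):
    # Search candidates best-first by the approximate score,
    # but accept only what the exact verifier confirms.
    queue = [(-approximate_score(name), name, fn) for name, fn in CANDIDATES]
    heapq.heapify(queue)
    while queue:
        _, name, fn = heapq.heappop(queue)
        if verify(fn, examples):
            return name
    return None

print(solve([(1, 3), (2, 5), (10, 21)]))  # -> "double_inc"
```

The high-scoring but wrong candidate ("double") is proposed first and rejected by the verifier, which is exactly the division of labor a hybrid architecture is after.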
Consciousness as a "Tapestry of Valence"
- The conversation delves into consciousness, which Bennett frames not as an illusion or an epiphenomenon, but as a necessary feature of an adaptive, embodied agent.
- He proposes that subjective experience arises from a "tapestry of valence," where an organism's internal state is a complex network of attractions and repulsions at multiple scales, from the cellular level up. Classifiers for objects like "chair" or "television" are not value-neutral but are tied to their causal relevance to the agent's goals and well-being (a minimal sketch follows this list).
- He argues against the possibility of a philosophical zombie—a being identical to a human but lacking consciousness—by asserting that consciousness is a necessary adaptation for efficient, intelligent behavior in any conceivable world. Information processing and subjective experience are inextricably linked.
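A minimal sketch of what valence-laden classification might look like (my rendering of the idea, not Bennett's formalism; all labels and weights are invented):

```python
from dataclasses import dataclass

@dataclass
class Percept:
    label: str                  # e.g. "chair"
    valence: dict[str, float]   # signed pull per goal/scale (weights invented)

chair = Percept("chair", {"rest_when_tired": +0.8, "obstacle_when_fleeing": -0.4})
fire = Percept("fire", {"warmth_when_cold": +0.6, "tissue_damage": -0.9})

def appraise(p: Percept, active_goals: set[str]) -> float:
    # A percept's "meaning" here is the net attraction or repulsion it
    # exerts given what the agent currently cares about.
    return sum(v for goal, v in p.valence.items() if goal in active_goals)

print(appraise(chair, {"rest_when_tired"}))        # +0.8: same object, attractive
print(appraise(chair, {"obstacle_when_fleeing"}))  # -0.4: same object, repulsive
```

The point of the toy is that no percept is ever a bare label: the same "chair" changes sign depending on the agent's current goals.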
The Law of the Stack and "Libertarian Biology"
- Bennett introduces his "Law of the Stack," a principle derived from his thesis that explains the importance of decentralized adaptation. He was jokingly accused by a supervisor of writing "libertarian biology" for its implications.
- The law states that a system's ability to adapt at a high level of abstraction (e.g., software) depends on the adaptability of its lower levels (e.g., hardware).
- Biological systems excel because they delegate adaptation down the stack, allowing for modular, cellular-level learning and repair. In contrast, current computers are like "an inflexible bureaucracy that makes decisions only at the top." A toy simulation after this list contrasts the two regimes.
- Strategic Implication: Overly constraining an AI system with too much top-down control (e.g., rigid safety rules) can make it brittle and prone to failure, much like a totalitarian state stifles its population. Effective and safe AI design may require building systems with delegated control and decentralized adaptability.
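A toy simulation makes the claim concrete (my construction, not Bennett's). The environment drifts at every level of a four-layer stack; the only variable is how many layers are permitted to adapt themselves.

```python
import random

def run(adaptable_layers: int, layers: int = 4, steps: int = 200) -> float:
    rng = random.Random(0)       # identical environmental drift for both runs
    target = [0.0] * layers      # the environment's demand on each layer
    state = [0.0] * layers       # the system's configuration per layer
    total_error = 0.0
    for _ in range(steps):
        target = [t + rng.gauss(0, 0.1) for t in target]   # drift everywhere
        # Only the top `adaptable_layers` layers may adjust themselves.
        for i in range(layers - adaptable_layers, layers):
            state[i] += 0.5 * (target[i] - state[i])
        total_error += sum(abs(t - s) for t, s in zip(target, state))
    return total_error / steps

print("top layer only:", run(adaptable_layers=1))  # rigid lower stack: error accrues
print("whole stack:   ", run(adaptable_layers=4))  # delegated adaptation tracks drift
```

The rigid stack's error grows without bound because the layers it cannot adjust random-walk away from their targets, which is the bureaucracy analogy in miniature.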
Defining Life: The Difference Between "Simping" and "Waxing"
- The episode concludes with a provocative theory on the nature of life itself, distinguishing it from non-living matter through two opposing strategies for persistence.
  - Simping (Simp-maxing): Persistence through simplicity. A rock persists because its simple structure is stable and less likely to be disrupted by environmental changes.
  - Waxing (W-maxing): Persistence through increasing complexity. Living organisms self-repair and adapt, becoming more complex over time to maintain homeostasis and navigate their environment.
- Bennett argues that life is fundamentally that which "waxes at the expense of simp." This provides a clear, actionable distinction for researchers trying to create artificial life: the goal is not just persistence, but persistence through adaptive, self-organizing complexity (a toy simulation follows below).
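The distinction can be caricatured in a few lines of simulation (an illustration, not the episode's model). Each step, every intact part breaks with some probability, and the system disintegrates once half its parts are broken; the break probabilities and repair budget are invented.

```python
import random

def lifetime(parts: int, p_break: float, repairs: int,
             max_steps: int = 10_000, seed: int = 0) -> int:
    rng = random.Random(seed)
    broken = 0
    for t in range(max_steps):
        broken += sum(rng.random() < p_break for _ in range(parts - broken))
        broken = max(0, broken - repairs)   # w-maxing: restore structure
        if 2 * broken >= parts:
            return t                        # disintegrated
    return max_steps                        # still persisting at the horizon

print("rock (simple, stable):      ", lifetime(parts=1, p_break=0.0005, repairs=0))
print("corpse (complex, passive):  ", lifetime(parts=100, p_break=0.02, repairs=0))
print("organism (complex, repairs):", lifetime(parts=100, p_break=0.02, repairs=3))
```

Complexity without self-repair fails fastest; the stable, simple rock persists passively; and the self-repairing organism outlasts both by continually rebuilding what the environment degrades.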
Conclusion
This discussion argues that the path to AGI lies not in scaling disembodied models but in creating integrated, decentralized systems inspired by biology. For investors and researchers, the key takeaway is to prioritize projects focused on hardware-software co-design, decentralized control, and emergent self-organization, as these principles define the next frontier of intelligent systems.