Machine Learning Street Talk
August 28, 2025

Intelligence Isn't What You Think

Computer scientist Michael Timothy Bennett deconstructs modern AI, arguing that our pursuit of disembodied software intelligence is a repeat of a 400-year-old philosophical error. He proposes a new framework rooted in biology, embodiment, and the messy reality of the physical world.

The Ghost in the Turing Machine

  • Bennett argues that mainstream AI suffers from “computational dualism,” treating intelligence as pure software separate from its physical hardware. This mirrors Cartesian dualism, which separated mind and body, with the Turing machine simply replacing the pineal gland as the magical bridge between the two realms.
  • “It's interesting how much [Cartesian dualism] has stuck around because we've kind of done the same thing with AI. We have just replaced the pineal gland with a Turing machine.”
  • “If we just have software by itself and we don't say anything about the hardware, then it can't really be intelligence... because whatever that software does has to pass through an interpreter and the interpreter decides what it does.”
  • Treating intelligence as a program is flawed: a program's performance depends entirely on the physical interpreter (hardware) running it, so the same program can be made arbitrarily smart or stupid by swapping the hardware.
  • Intelligence requires “enactive cognition”—it isn't just in the head but is an interactive process between an agent, its body, and its environment.

Biology's Unfair Advantage

  • Biological systems achieve incredible feats with a tiny fraction of the energy and data used by today’s AI models. Their advantage comes from a deeply integrated, multi-layered architecture where adaptation happens at every level, from cells to organs to the organism.
  • “Biological systems with a tiny fraction of the energy and learning data could do so much more.”
  • All systems—biological, organizational, and computational—are a “stack of abstraction layers.” AI is like an inflexible bureaucracy where decisions are only made at the top (software), while biology delegates control and adaptation down the entire stack.
  • This decentralized delegation is key to robustness and efficiency. Overly centralized, top-down control leads to fragility, a phenomenon Bennett calls the “law of the stack.”

Consciousness is a Feature, Not a Bug

  • Bennett tackles the “hard problem of consciousness” head-on, arguing that consciousness is not an illusion or a mysterious byproduct but a necessary adaptation for any sufficiently complex agent.
  • “I propose to solve [the hard problem] by showing that a philosophical zombie is impossible in every conceivable world... Consciousness is a necessary adaptation.”
  • Subjective experience is a “tapestry of valence,” a complex web of attractions and repulsions at every level of an organism that impels it to act and form representations of itself and the world.
  • Information processing without this valenced, subjective component is implausible for any agent that needs to efficiently navigate a complex environment.

Key Takeaways

  • The Bigger Picture: True AGI will require a paradigm shift away from simply scaling disembodied software and toward integrated, biologically inspired systems where hardware and software co-evolve.
  • AI's Cartesian Error: Modern AI treats intelligence as software, ignoring the critical role of hardware and environment. This “computational dualism” is a fundamental mistake; true intelligence is embodied and enactive.
  • Biology's Stack is Smarter: Biological systems are hyper-efficient because they delegate adaptation across a full “stack” of abstraction layers (cells, organs, organism). Today’s AI systems are rigid bureaucracies that only learn at the top.
  • Intelligence Requires Consciousness: Consciousness is a necessary adaptation for navigating the world, not a mystical add-on. Truly intelligent and adaptive agents will, by necessity, be conscious.

For further insights, watch the full discussion: Link

This episode dismantles the dominant software-centric view of AI, arguing that true intelligence—and consciousness—emerges from embodied, decentralized, and biologically-inspired systems, not just scaled-up algorithms.

Redefining Intelligence Beyond Computationalism

  • Michael Timothy Bennett, a computer scientist focused on the nature of intelligence, begins by challenging conventional definitions. He favors Pei Wang's concept of "adaptation with limited resources" for its succinctness and clarity over more complex academic definitions.
  • Bennett contrasts this with François Chollet's definition, which frames intelligence as the "ability to acquire skills." He notes that while Chollet's work descends from the ideas of Marcus Hutter and uses Kolmogorov complexity (the length of the shortest program that produces a given object; see the definition after this list), Chollet himself argues that pure compression is insufficient for intelligence.
  • Chollet views Large Language Models (LLMs) as collections of skill programs, where the programs are the output of intelligence, not the intelligence itself. This sets the stage for a deeper critique of purely computational models.
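
For reference, the standard definition of Kolmogorov complexity (a textbook formulation, not a quote from the episode): the complexity of an object x is the length of the shortest program p that makes a fixed universal machine U output x.

```latex
K_U(x) = \min \{\, |p| \;:\; U(p) = x \,\}
```

The subscript U matters: by the invariance theorem, switching machines changes K only by an additive constant, but that constant can be arbitrarily large for particular objects, which is precisely the interpreter-dependence Bennett presses on below.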

The Limits of Theoretical AI and the Problem of Subjectivity

  • The discussion moves to theoretical models like AIXI, a formalization of a general reinforcement learning agent that uses Solomonoff induction (a mathematical version of Occam's razor) to find the simplest model of its environment. Bennett, while finding the overall idea compelling, highlights a critical flaw.
  • He argues that in an interactive setting, the agent's notion of "simplicity" is subjective and dependent on its "interpreter" or internal language. An external observer might assess complexity differently, making it possible to design scenarios where the agent performs arbitrarily poorly.
  • Bennett states, "You can essentially make simplicity completely disconnected from performance if you like." This subjectivity undermines the claim that such models are truly optimal (see the note on the Solomonoff prior after this list).
  • This leads to the necessity of considering embodiment. Intelligence cannot be understood as abstract software because its performance is fundamentally tied to the hardware and environment it operates within. This aligns with the concept of enactive cognition, where cognition is seen as an interaction between an organism and its environment.
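
For background (again a standard formulation rather than anything claimed in the episode), Solomonoff's prior weights every observation sequence x by all programs that reproduce it on a fixed universal machine U, where U(p) = x* means the program's output begins with x:

```latex
M_U(x) = \sum_{p \,:\, U(p) = x*} 2^{-|p|}
```

AIXI plans against this prior. Bennett's objection lives in the subscript: priors induced by different machines agree only up to a multiplicative constant, and in an interactive setting an environment chosen adversarially relative to the agent's U can make what the agent counts as simple perform arbitrarily badly.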

Computational Dualism: The Modern Pineal Gland

  • Bennett introduces "computational dualism" as a critique of the modern tendency to separate AI software from its physical substrate. He draws a provocative analogy to Cartesian dualism, the 17th-century idea that the mind (mental substance) and body (physical substance) were separate entities interacting through the pineal gland.
  • He argues that today's AI research often treats software as a disembodied "mental substance" that runs on hardware, with the Turing machine acting as the modern-day pineal gland.
  • "It's interesting how much it has stuck around," Bennett observes. "We have just replaced the pineal gland with a touring machine."
  • This perspective is flawed because the interpreter (hardware) dictates what the software can actually do, making any claims about a software-only intelligence incomplete (see the sketch after this list). This reinforces his argument for mortal computation, where the physical "stuff" is inseparable from the computation itself.
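
To make the interpreter-dependence concrete, here is a minimal toy of my own (not Bennett's formalism): the same program text produces opposite behavior under two interpreters, so "smart" and "stupid" are properties of the program-interpreter pair, never of the text alone.

```python
# Toy illustration: one program string, two interpreters, opposite behavior.
# Behavior belongs to the (program, interpreter) pair, not the program alone.

program = "MOVE TOWARD GOAL"

def interpreter_a(instruction: str, position: int, goal: int) -> int:
    """Reads the instruction literally: step toward the goal."""
    if instruction == "MOVE TOWARD GOAL":
        return position + (1 if goal > position else -1)
    return position

def interpreter_b(instruction: str, position: int, goal: int) -> int:
    """Same syntax, inverted semantics: step away from the goal."""
    if instruction == "MOVE TOWARD GOAL":
        return position - (1 if goal > position else -1)
    return position

pos_a = pos_b = 0
goal = 5
for _ in range(5):
    pos_a = interpreter_a(program, pos_a, goal)
    pos_b = interpreter_b(program, pos_b, goal)

print(pos_a, pos_b)  # 5 -5: "smart" under A, "stupid" under B
```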

Assessing the State of AGI and the Role of Benchmarks

  • When asked how close we are to AGI, Bennett offers a nuanced view. While current models can automate many economic tasks, they lack the sample efficiency and adaptive qualities of human intelligence.
  • He expresses skepticism about benchmark results, like those on Chollet's ARC-AGI test, especially when the code and methods are not transparently released. He notes that even impressive scores don't necessarily translate to genuine understanding, routinely testing new models to see whether they can add long numbers, a task they often fail (a scriptable version of this probe follows this list).
  • For investors and researchers, this is a crucial insight: benchmark saturation may be more a function of clever engineering or memorization than a true leap in general intelligence. The focus should be on systems that demonstrate genuine adaptability and efficiency, particularly in hardware.
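
A probe of the kind Bennett describes is easy to script. The sketch below is hypothetical: `ask_model` is a stand-in for whatever model API is under test, and the exact-match check means memorized answers can't help.

```python
import random

def make_addition_probe(digits: int = 30) -> tuple[str, int]:
    """Generate a long-number addition question and its exact answer."""
    a = random.randrange(10 ** (digits - 1), 10 ** digits)
    b = random.randrange(10 ** (digits - 1), 10 ** digits)
    return f"What is {a} + {b}? Reply with digits only.", a + b

def accuracy(ask_model, trials: int = 20, digits: int = 30) -> float:
    """Fraction of exact-match answers over freshly generated probes."""
    correct = 0
    for _ in range(trials):
        question, answer = make_addition_probe(digits)
        reply = ask_model(question)  # hypothetical model call
        correct += reply.strip() == str(answer)
    return correct / trials
```

Against a probe like this, saturation on a public benchmark offers no protection, since every question is freshly generated.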

Hybrid Architectures and the Path Forward

  • Bennett sees promise in hybrid systems that combine the strengths of different computational tools. He categorizes these tools into two main types:
    • Approximation: What LLMs excel at—handling vast, noisy data inexactly.
    • Search: Precise, procedural methods used for tasks like navigation.
  • He points to alternative AGI architectures like Pei Wang's NARS (Non-Axiomatic Reasoning System) and Ben Goertzel's Hyperon, which are designed as modular, decentralized frameworks capable of integrating various components, including LLMs, symbolic reasoners, and other modules. These systems aim for versatility rather than monolithic scale (a minimal sketch of the approximation-plus-search pattern follows this list).
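
A minimal sketch of the approximation-plus-search pattern (my illustration, not how NARS or Hyperon actually work): a cheap, noisy scorer ranks candidates, standing in for the role LLMs play, while exact search expands and verifies them.

```python
import heapq
import random

# Hybrid pattern: an inexact scorer prioritizes options (approximation),
# while best-first search expands and verifies them exactly (search).

GRAPH = {  # toy state space
    "start": ["a", "b"],
    "a": ["c"],
    "b": ["c", "goal"],
    "c": ["goal"],
    "goal": [],
}

def approx_score(state: str) -> float:
    """Cheap, noisy guess at distance-to-goal (stand-in for a learned model)."""
    return 0.0 if state == "goal" else random.random()

def hybrid_search(start: str = "start") -> list[str]:
    frontier = [(approx_score(start), start, [start])]
    seen = set()
    while frontier:
        _, state, path = heapq.heappop(frontier)
        if state == "goal":  # exact verification, not a guess
            return path
        if state in seen:
            continue
        seen.add(state)
        for nxt in GRAPH[state]:
            heapq.heappush(frontier, (approx_score(nxt), nxt, path + [nxt]))
    return []

print(hybrid_search())  # e.g. ['start', 'b', 'goal']
```

Swapping the random scorer for an LLM-derived heuristic changes the proposals, not the guarantee: only the exact goal check ever accepts an answer.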

Consciousness as a "Tapestry of Valence"

  • The conversation delves into consciousness, which Bennett frames not as an illusion or an epiphenomenon, but as a necessary feature of an adaptive, embodied agent.
  • He proposes that subjective experience arises from a "tapestry of valence," where an organism's internal state is a complex network of attractions and repulsions at multiple scales, from the cellular level up. Classifiers for objects like "chair" or "television" are not value-neutral but are tied to their causal relevance to the agent's goals and well-being (a toy illustration follows this list).
  • He argues against the possibility of a philosophical zombie—a being identical to a human but lacking consciousness—by asserting that consciousness is a necessary adaptation for efficient, intelligent behavior in any conceivable world. Information processing and subjective experience are inextricably linked.
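
One way to picture valence-laden representation (a toy of my own, not Bennett's model): every detected feature carries a signed weight that biases action directly, so classification is never value-neutral.

```python
# Toy: percepts carry signed valences that sum into an approach/avoid impulse.

VALENCE = {            # illustrative values only
    "food": +1.0,      # attraction
    "shelter": +0.6,
    "predator": -1.5,  # repulsion
    "cliff": -1.0,
}

def appraise(detected: list[str]) -> float:
    """Net valence of a scene: positive pulls, negative pushes."""
    return sum(VALENCE.get(feature, 0.0) for feature in detected)

def act(detected: list[str]) -> str:
    net = appraise(detected)
    return "approach" if net > 0 else "avoid" if net < 0 else "ignore"

print(act(["food", "predator"]))  # 'avoid': repulsion outweighs attraction
```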

The Law of the Stack and "Libertarian Biology"

  • Bennett introduces his "Law of the Stack," a principle derived from his thesis that explains the importance of decentralized adaptation. He was jokingly accused by a supervisor of writing "libertarian biology" for its implications.
  • The law states that a system's ability to adapt at a high level of abstraction (e.g., software) depends on the adaptability of its lower levels (e.g., hardware).
  • Biological systems excel because they delegate adaptation down the stack, allowing for modular, cellular-level learning and repair. In contrast, current computers are like "an inflexible bureaucracy that makes decisions only at the top" (a toy simulation of this contrast follows this list).
  • Strategic Implication: Overly constraining an AI system with too much top-down control (e.g., rigid safety rules) can make it brittle and prone to failure, much like a totalitarian state stifles its population. Effective and safe AI design may require building systems with delegated control and decentralized adaptability.
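
A toy reading of the law, on the assumption that "adaptation" means correcting drift locally (my construction, not Bennett's formalism): a stack that corrects only at the top accumulates error from every layer below it, while a stack that delegates correction stays stable.

```python
import random

# Toy "law of the stack": every layer drifts randomly each step; a layer
# permitted to adapt corrects its own drift, one that is not accumulates it.

def run_stack(adaptive_layers: set[int], layers: int = 4, steps: int = 100) -> float:
    """Total residual error when only `adaptive_layers` may self-correct."""
    error = [0.0] * layers
    for _ in range(steps):
        for i in range(layers):
            error[i] += random.uniform(-1, 1)  # drift at every layer
            if i in adaptive_layers:
                error[i] *= 0.5                # local correction
    return sum(abs(e) for e in error)

random.seed(0)
top_only = run_stack({3})             # bureaucracy: only the top adapts
full_stack = run_stack({0, 1, 2, 3})  # biology: every layer adapts
print(top_only, full_stack)  # top-only error is almost always far larger
```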

Defining Life: The Difference Between "Simping" and "Waxing"

  • The episode concludes with a provocative theory on the nature of life itself, distinguishing it from non-living matter through two opposing strategies for persistence.
  • Simping (Simp-maxing): Persistence through simplicity. A rock persists because its simple structure is stable and less likely to be disrupted by environmental changes.
  • Waxing (W-maxing): Persistence through increasing complexity. Living organisms self-repair and adapt, becoming more complex over time to maintain homeostasis and navigate their environment.
  • Bennett argues that life is fundamentally that which "waxes at the expense of simp." This provides a clear, actionable distinction for researchers trying to create artificial life: the goal is not just persistence, but persistence through adaptive, self-organizing complexity (a toy contrast of the two strategies follows this list).
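
A toy persistence model of the two strategies (my construction, with arbitrary numbers): the simp-maxer survives because little can break; the w-maxer accumulates wear but repairs itself; a complex system without repair decays quickly.

```python
import random

# Toy contrast: persistence through simplicity vs. persistence through repair.

def survives(complexity: float, repair_rate: float, steps: int = 1000) -> bool:
    """More complexity means more exposure to damage; repair offsets wear."""
    integrity = 1.0
    for _ in range(steps):
        integrity -= random.random() * 0.01 * complexity  # wear scales with complexity
        integrity = min(1.0, integrity + repair_rate)     # self-repair, if any
        if integrity <= 0:
            return False
    return True

random.seed(1)
rock = survives(complexity=0.1, repair_rate=0.0)     # simp-maxing: low exposure
waxer = survives(complexity=5.0, repair_rate=0.03)   # w-maxing: repair outruns wear
fragile = survives(complexity=5.0, repair_rate=0.0)  # complex, no repair
print(rock, waxer, fragile)  # True True False (with this seed)
```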

Conclusion

This discussion argues that the path to AGI lies not in scaling disembodied models but in creating integrated, decentralized systems inspired by biology. For investors and researchers, the key takeaway is to prioritize projects focused on hardware-software co-design, decentralized control, and emergent self-organization, as these principles define the next frontier of intelligent systems.
