Machine Learning Street Talk
June 17, 2025

Oxford Professor: "AIs are strange new minds"

Professor Christopher Summerfield, author of "These Strange New Minds," dives deep into the cognitive capabilities of AI, its philosophical underpinnings, and the profound societal shifts these "strange new minds" are poised to unleash. This isn't just another AI chat; it's a look under the hood of how AI learns to "talk" and what that truly means for us.

The "Strange New Minds": AI Cognition & Human Comparison

  • "Should we think of these things as actually a bit like us? Are they thinking, are they reasoning, are they understanding?"
  • "That is to my mind perhaps the most astonishing scientific discovery of the 21st century: supervised learning is so good that you can actually learn about almost everything you need to know about the nature of reality without ever having any sensory knowledge of the world, just through words."
  • A central debate rages: Are AIs truly "thinking," or are they just hyper-sophisticated mimics? Professor Summerfield leans towards functionalism—if it reasons like a human (the "duck test"), we can use the term "reasoning," though this doesn't imply moral or motivational equivalence.
  • The shocker: AIs can develop a comprehensive understanding of reality purely through language, no senses required. This upends long-held beliefs about how intelligence and understanding are formed.

Learning Like Lamarck: How AIs Acquire Knowledge

  • "The history of AI has itself repeated an ancient philosophical debate about whether the fundamental nature of building a mind... is fundamentally about learning from experience or about reasoning..."
  • "Language models are trained in a kind of almost Lamarckian way; one generation of training, whatever happens in that gets inherited by the next training episode. That's not how we work; my memories are not inherited by my kids. We're Darwinian."
  • AI's evolution echoes philosophy's age-old debate between empiricism (Aristotle's learning from experience) and rationalism (Plato's innate reasoning). Deep learning is currently an empiricist triumph.
  • AIs learn "Lamarckian-style": knowledge from one training iteration is directly passed to the next. Humans, conversely, are "Darwinian," with evolutionary priors shaped over millennia, not direct memory inheritance between generations. This distinction makes direct data-efficiency comparisons tricky.

The Double-Edged Sword: Societal Transformation and Existential Risks

  • "I'm worried about systems that generate information giving way to systems that directly behave on the user's behalf. So, what we now call agentic AI."
  • "It's more like us being sucked into the machine... we get turned into something we are not... it erodes your authenticity and in a way it erodes your humanity."
  • Professor Summerfield flags critical concerns: agentic AIs acting for us, hyper-personalization potentially reinforcing harmful ideologies, and the "complex system effects" of interconnected AIs leading to unpredictable outcomes, much like financial flash crashes.
  • The chilling prospect of "gradual disempowerment" looms as we cede control to AI, risking being "written out of the equation."
  • Like the character in Superman III who is encased in armor by a rogue computer, we risk being "sucked into the machine," losing authenticity and agency as technology reshapes us.

Key Takeaways:

  • AIs are indeed "strange new minds," whose language-based learning capabilities challenge our core understanding of cognition. However, this power comes with profound risks of societal disruption and a potential erosion of human agency and authenticity as we integrate more deeply with these systems.
  • AI's Reality Hack: Supervised learning allows AIs to understand the world via language alone, a game-changer forcing us to rethink intelligence beyond sensory input.
  • The Autonomy Trap: The rise of agentic, personalized AIs that act for us threatens unforeseen systemic chaos and could amplify individuals' most dangerous beliefs.
  • Our Faustian Pact with AI: We're trading authenticity and control for AI-driven convenience, risking a "gradual disempowerment" where human agency is systematically diminished.

For further insights and detailed discussions, watch the full podcast: Link

This episode delves into the profound philosophical and practical implications of AI's rapid advancement, exploring how these "strange new minds" challenge our understanding of intelligence, agency, and the very fabric of human society.

Introducing "These Strange New Minds"

  • Professor Christopher Summerfield, an Oxford professor and author, discusses his new book, "These Strange New Minds: How AI Learned to Talk and What It Means."
  • The book, finished in late 2023, aims to ground the often-polarized debate about AI's cognitive status—whether models like ChatGPT are truly thinking or are merely sophisticated code.
  • Professor Summerfield, drawing on his background as a cognitive scientist with experience at DeepMind and the UK's AI Safety Institute, seeks to apply the language of cognition to this debate. He notes, "this debate is not really grounded in... a grounded computational sense of what does it actually mean to think? What does it actually mean to understand something?"
  • His work explores both the cognitive aspects of AI and its societal implications, including deployment risks and how AI might reshape human life.
    • Strategic Implication: Investors and researchers should adopt a nuanced, cognitively informed perspective on AI capabilities, moving beyond hype or dismissal to understand its true potential and risks.

The Ancient Roots of AI's Core Debate: Empiricism vs. Rationalism

  • Professor Summerfield traces the current discourse on AI back to ancient Greek philosophy, specifically the contrasting views of Aristotle (empiricism) and Plato (rationalism).
    • Empiricism: The idea that knowledge and mind are built primarily from learning through experience.
    • Rationalism: The view that reasoning, particularly over unobservable states, is fundamental, with innate structures or ideas.
  • This philosophical tension played out in AI's history:
    • Early "Good Old-Fashioned AI" (GOFAI) was rationalist, using logic to derive truth. Newell and Simon's 1958 "Logic Theorist," which could prove mathematical theorems, is cited as an early "superintelligence."
    • However, logic-based systems struggled with the messy, exception-filled real world.
    • This led to the rise of the learning-based, empiricist approach, culminating in neural networks and the deep learning revolution.
    • Actionable Insight: Understanding these historical and philosophical underpinnings helps contextualize the current dominance of deep learning and anticipate potential shifts or integrations of symbolic reasoning for more robust AI.
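
To make the rationalist approach concrete, here is a minimal forward-chaining sketch in the spirit of GOFAI systems like the Logic Theorist. The facts, rules, and string-based matching are purely illustrative, not a reconstruction of the original program:

```python
# Minimal forward-chaining inference: derive new facts by repeatedly
# applying hand-written logical rules until nothing new follows.
facts = {"human(socrates)"}
rules = [
    # (premise, conclusion): if the premise holds for X, conclude for X.
    ("human(X)", "mortal(X)"),
    ("mortal(X)", "will_die(X)"),
]

def forward_chain(facts, rules):
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            predicate = premise.split("(")[0]
            for fact in list(derived):
                if fact.startswith(predicate + "("):
                    argument = fact[fact.index("(") + 1 : -1]
                    new_fact = conclusion.replace("X", argument)
                    if new_fact not in derived:
                        derived.add(new_fact)
                        changed = True
    return derived

print(forward_chain(facts, rules))
# {'human(socrates)', 'mortal(socrates)', 'will_die(socrates)'}
```

The brittleness is visible even at this scale: every real-world exception demands another hand-written rule, which is exactly why logic-based systems struggled outside clean formal domains.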

The ChatGPT Rubicon and the Astonishing Power of Language Models

  • The host highlights language as humanity's "biggest gift," enabling knowledge acquisition and intergenerational communication, with ChatGPT (released November 2022) marking a pivotal moment.
  • Professor Summerfield recounts the evolution of Natural Language Processing (NLP), the subfield of AI focused on enabling computers to understand, interpret, and generate human language. The same learning-versus-reasoning debate played out in NLP as well.
  • Initially, even with the deep learning revolution, many, including Professor Summerfield, believed that "grounding" (sensory experience) was necessary for true language understanding.
  • The surprising discovery was that supervised learning (training models on vast amounts of labeled data) alone could enable AI to learn enough about reality from text to hold intelligent conversations. Professor Summerfield calls this "perhaps the most astonishing scientific discovery of the 21st century... that you can actually learn about almost everything you need to know about the nature of reality... just through words." (A sketch of how text supplies its own supervision follows this list.)
    • Strategic Implication: The proven power of large language models (LLMs) trained solely on text suggests immense potential for AI in knowledge discovery, synthesis, and communication, areas critical for research and investment.
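
To see what "supervised" means here, consider how raw text supplies its own labels: every position in the corpus yields a (context, next-word) pair, and the model is trained to predict the label from the context. A minimal sketch, with a toy corpus and context window (real LLMs use subword tokens and transformer networks):

```python
# Turning raw text into (input, label) pairs for next-word prediction.
text = "the cat sat on the mat".split()
context_size = 3

examples = [
    (text[max(0, i - context_size):i], text[i])  # (context, next-word label)
    for i in range(1, len(text))
]
for context, label in examples:
    print(f"context={context!r:<35} label={label!r}")

# A model p(next | context) is then trained to minimize cross-entropy,
#   loss = -log p(label | context),
# averaged over every such pair in the corpus.
```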

Exceptionalists vs. Functionalists: Defining AI Cognition

  • Professor Summerfield introduces a "cartoon" distinction to frame the debate on AI's nature:
    • Exceptionalists: Those who ideologically reject that non-human systems can perform human-like cognitive functions like "reasoning," wishing to reserve such terms for humans. This is seen as a form of "radical humanism."
    • Equivalentists/Functionalists: Those who, based on observed capabilities, are willing to use the same cognitive vocabulary for AI if it performs tasks comparably to humans.
  • Professor Summerfield aligns with functionalism, the philosophical position that mental states are constituted by their functional role, not by their internal physical makeup. He states, "if it reasons like a human then we may as well use the term reasoning."
  • This functionalist view does not imply moral equivalence or similar motivations between AI and humans.
    • Actionable Insight: For investors, a functionalist perspective allows for a more objective assessment of AI capabilities and their market potential, without getting bogged down in unresolvable debates about AI "consciousness."

Anthropomorphism, Semantics, and Brain-like Computation

  • The discussion touches on anthropomorphism, our tendency to attribute human traits to non-human entities, and how this complicates our perception of AI.
  • John Searle's Chinese Room argument, which posits that symbol manipulation (like in a computer) doesn't equate to understanding, is mentioned.
  • Professor Summerfield subscribes to a distributional notion of semantics, where meaning arises from patterns of co-occurrence and relationships within data (e.g., words in a text corpus), rather than requiring direct causal embedding in the physical world (a toy illustration follows this list).
  • He highlights "astonishing similarities" at the algorithmic level between machine learning systems and brain computations, particularly in how neural manifolds (geometric representations of neural activity) express semantic relationships.
  • "The most parsimonious explanation is that by... a mixture of like luck and... trying enormously hard, we've kind of got to a place where we've built something that is a bit like a brain."
    • Research Focus: The similarities in information processing between AI and brains suggest that advances in neuroscience could inform AI development, and vice-versa, creating a virtuous cycle for researchers.
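
A toy illustration of the distributional idea: build word-word co-occurrence vectors from a four-sentence corpus and compare them with cosine similarity. The corpus and window size are invented for illustration; real systems operate over billions of tokens, but the principle is the same:

```python
# Distributional semantics in miniature: a word's meaning is approximated
# by the company it keeps in text, with no sensory grounding involved.
import numpy as np

corpus = [
    "the cat chased the mouse",
    "the dog chased the cat",
    "the cat ate the fish",
    "the dog ate the bone",
]
window = 2

vocab = sorted({w for line in corpus for w in line.split()})
index = {w: i for i, w in enumerate(vocab)}
counts = np.zeros((len(vocab), len(vocab)))

for line in corpus:
    words = line.split()
    for i, w in enumerate(words):
        for j in range(max(0, i - window), min(len(words), i + window + 1)):
            if j != i:
                counts[index[w], index[words[j]]] += 1

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# "cat" and "dog" share contexts (chased, ate...), so their vectors align.
print(cosine(counts[index["cat"]], counts[index["dog"]]))
print(cosine(counts[index["cat"]], counts[index["bone"]]))
```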

Chomsky, Learning, and the Nature of Priors

  • The conversation explores whether Noam Chomsky's rationalist ideas (e.g., an innate language faculty) could still hold relevance. Chomsky views the brain as a Turing machine (a mathematical model of computation) and emphasizes operations like recursive merge (a fundamental syntactic operation).
  • Professor Summerfield suggests a reconciliation: rationalist computational structures might be learnable through empiricist methods (large-scale parameter optimization). "Chomsky kind of is not wrong that you know there are rules to language... It was just wrong about how they got learned."
  • A key distinction is made between human (Darwinian) and AI model ("Lamarckian") learning.
    • Lamarckian evolution is the theory that traits acquired during an organism's lifetime can be passed to its offspring. In AI training, one "generation" (training episode) directly inherits the learned parameters of the previous one (see the schematic sketch after this list).
    • Humans learn within a lifetime (ontogeny), but our capacity to learn language is shaped by millennia of evolution (phylogeny), providing strong "priors" (innate predispositions).
  • Comparing AI training data exposure to human lifetime learning is a "false analogy" because it ignores these evolutionary priors.
    • Strategic Consideration: AI's data hunger might be partially offset by developing architectures with more effective "priors" or meta-learning capabilities, a key research avenue.
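
The Lamarckian/Darwinian contrast can be made concrete with a schematic sketch; the toy objective and learning rule are invented for illustration. A Lamarckian lineage inherits the previous generation's learned weights, while a Darwinian lineage inherits only an initialization (the "prior") and must relearn within each lifetime:

```python
# "Lamarckian" vs "Darwinian" inheritance, schematically.
import numpy as np

rng = np.random.default_rng(0)
target = rng.normal(size=10)  # stand-in for "the world to be learned"

def learn(weights, steps=10, lr=0.1):
    """One lifetime of experience: gradient descent toward the target."""
    for _ in range(steps):
        weights = weights - lr * (weights - target)
    return weights

def error(w):
    return float(np.linalg.norm(w - target))

# Lamarckian lineage: each generation inherits the learned weights directly.
w = rng.normal(size=10)
for gen in range(3):
    w = learn(w)
    print(f"Lamarckian gen {gen}: error={error(w):.4f}")

# Darwinian lineage: each generation restarts from the inherited prior
# (an initialization, not memories) and learns within its own lifetime.
prior = rng.normal(size=10)
for gen in range(3):
    w = learn(prior.copy())
    print(f"Darwinian gen {gen}: error={error(w):.4f}")
```

In the Lamarckian lineage the error keeps falling across generations because acquired knowledge accumulates; in the Darwinian lineage each generation starts over, which is why counting one human lifetime's data against an LLM's entire training corpus is the false analogy described above.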

The Deception of Anthropomorphism and Defining "Thinking"

  • The discussion acknowledges our susceptibility to anthropomorphization, citing the Clever Hans effect (a horse that appeared to do math but was actually responding to subtle cues from its trainer) and the Eliza effect (people finding deep meaning in a simple chatbot).
  • Despite this, Professor Summerfield emphasizes that the raw capabilities of frontier AI models are undeniable: "The models are just really good... They're not just Clever Hans."
  • The difficulty in defining "thinking" even among cognitive scientists is noted. Daniel Dennett's intentional stance (attributing beliefs, desires, and rationality to a system to predict its behavior) is invoked as a useful, pragmatic approach.
  • Nick Chater's book "The Mind is Flat," which argues that our mental states are often constructed post-hoc rather than pre-existing, is praised.
    • Investor Caution: While AI capabilities are impressive, investors should remain aware of the human tendency to over-attribute intelligence or sentience, which can cloud judgment about genuine progress versus perceived progress.

AI Alignment and Societal Worries

  • Professor Summerfield outlines three primary concerns for the future of AI, which he detailed in his book:
    1. Agentic AI: Systems that don't just generate information but act directly on a user's behalf.
    2. Personalization: AI tailored to individual beliefs and preferences, which could reinforce harmful ideologies for some.
    3. Complex System Effects: The unpredictable dynamics arising from a "parallel social economy" of interacting personal AIs. The 2010 "Flash Crash" in financial markets, caused by automated trading algorithms, serves as an analogy.
  • He worries that AI agents, unlike humans, may lack the evolved social and cultural norms that curtail runaway complex system dynamics. "What are the constraints that prevent the same sort of weird runaway dynamics... and I don't think we have an answer to that." (A toy simulation follows this list.)
    • Crypto AI Relevance: The development of decentralized autonomous organizations (DAOs) and AI agents in crypto ecosystems directly intersects with these concerns. Ensuring robust governance and safety mechanisms for interacting AI agents is paramount.
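
A toy simulation of the runaway dynamic Summerfield describes (all parameters invented for illustration): momentum-following agents each respond sensibly to the price on their own, but their interaction amplifies a single small shock into a collapse:

```python
# Complex system effects in miniature: individually simple agents,
# coupled through a shared price, turn a small shock into a crash.
import numpy as np

rng = np.random.default_rng(1)
sensitivities = rng.uniform(0.5, 1.5, size=100)  # per-agent sell pressure

price = 100.0
history = [price, price]  # two entries so momentum is defined

for t in range(60):
    shock = -1.0 if t == 10 else 0.0  # one small external shock
    momentum = history[-1] - history[-2]
    # Agents sell in proportion to downward momentum, pushing the price
    # lower, which creates more downward momentum: positive feedback.
    net_flow = sensitivities.mean() * min(momentum, 0.0)
    price = max(price + shock + 1.1 * net_flow, 0.0)
    history.append(price)

print(f"price before shock: {history[11]:.1f}, after cascade: {history[-1]:.1f}")
```

No single agent is malfunctioning; the collapse is a property of the coupled system, which is the sense in which interacting personal AIs could produce flash-crash-like outcomes.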

Constraints, Frictions, and the "Fog of War"

  • Designing constraints for AI is challenging, as it might limit the technology's potential.
  • AI can remove societal "frictions" that currently prevent system collapse. An example is lawfare (adversarial use of spurious legal challenges), which could become rampant if AI automates legal processes without safeguards.
  • The concept of a "fog of war" is introduced, where increasing complexity due to interacting AI agents leads to a loss of human understanding and control. This aligns with David Duvenaud's paper on "gradual disempowerment," where society becomes locked into using optimization-based technologies whose interactions gradually write humans out of the equation.
    • Research Challenge: Developing AI systems that are interpretable, auditable, and whose collective behavior remains predictable and controllable is a critical research frontier, especially for AI in finance and governance.

AI's Impact on Human Agency and Authenticity

  • The conversation explores the paradox of AI: while it can enhance individual agency (e.g., quickly building a software business), it may ultimately sequester collective agency.
  • A "crisis of authenticity" is discussed, where human interactions become stylized and less genuine, partly due to technological mediation.
  • Professor Summerfield uses a metaphor from the movie Superman III: a character is sucked into a rogue computer and turned into an automaton. "It's more like us being sucked into the machine... we get turned into something we are not by technology."
  • This erosion of authenticity and humanity is not unique to AI but is a broader consequence of becoming part of complex, technologically mediated systems.
    • Societal Impact: Crypto AI researchers should consider the long-term effects of their creations on human identity, social interaction, and the potential for creating "counterfeit people" or fostering robotic human behavior.

The Primacy of Agency and Control

  • Professor Summerfield argues that psychology and related fields have "dramatically underindexed on the extent to which what is good for us is actually about our agency, our control, and not about reward."
  • He references empowerment in machine learning: maximizing the mutual information between an agent's actions and future states, which quantifies an agent's ability to predictably influence its environment. This capacity is equated with agency (a toy computation follows this list).
  • Loss of control due to unpredictable or opaque technology (e.g., a malfunctioning website, a "computer says no" scenario) is a major source of frustration and negatively impacts well-being.
  • Social media and chatbot engagement tactics often rely on variable reinforcement schedules (delivering rewards unpredictably), which are highly effective for training/hooking users but can be disempowering.
    • Ethical Design: Crypto AI systems, particularly those involving user interaction or financial incentives, should be designed to enhance user agency and control, rather than exploit psychological vulnerabilities through opaque reward mechanisms.
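
Empowerment can be made concrete with a one-step toy example. Empowerment proper maximizes I(A; S') over action distributions; the sketch below evaluates it for a uniform action distribution, which is enough to show the contrast between a responsive environment and a "computer says no" one (the transition tables are invented for illustration):

```python
# Empowerment, schematically: mutual information between actions and
# resulting states. High empowerment = your choices reliably matter.
import numpy as np

def mutual_information(p_s_given_a):
    """I(A; S') in bits for uniform p(a); rows of the input are p(s'|a)."""
    n_actions = p_s_given_a.shape[0]
    p_a = np.full(n_actions, 1.0 / n_actions)
    p_s = p_a @ p_s_given_a  # marginal distribution over next states
    mi = 0.0
    for i, row in enumerate(p_s_given_a):
        for j, p in enumerate(row):
            if p > 0:
                mi += p_a[i] * p * np.log2(p / p_s[j])
    return mi

# Responsive world: each of 4 actions reaches a distinct next state.
responsive = np.eye(4)
# "Computer says no": every action leads to the same outcome.
unresponsive = np.tile([1.0, 0.0, 0.0, 0.0], (4, 1))

print(mutual_information(responsive))    # 2.0 bits: full control
print(mutual_information(unresponsive))  # 0.0 bits: no control
```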

Evolution, Open-Endedness, and AI Optimization

  • Evolution is described as "blind" progress: a non-teleological (purposeless) selection mechanism.
  • This contrasts with current AI optimization, which is typically narrow and goal-directed.
  • The concept of open-ended systems is introduced: systems that continually produce novel and learnable behaviors. Kenneth Stanley's work on "Picbreeder" and the idea that "greatness cannot be planned" are relevant here (a minimal novelty-search sketch follows this list).
    • Open-endedness in AI refers to systems capable of generating continually novel and increasingly complex behaviors or artifacts, without predefined, fixed objectives.
  • Current AI optimization often leads to homogeneity (e.g., mode collapse in LLMs), whereas evolution produces "astonishing heterogeneity."
  • The "fractured entangled representation hypothesis" suggests that sparse, world-mirroring representations could lead to more evolvable and trustworthy AI.
    • Future AI Architectures: For AI to achieve more general intelligence and robustness, researchers might need to explore principles from open-ended evolution and move beyond narrow optimization, a significant shift for current deep learning paradigms.
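
A minimal sketch in the spirit of Stanley's novelty search (the genome-to-behavior mapping and all parameters are invented for illustration): candidates are selected for being behaviorally far from an archive of past behaviors, not for scoring well on any fixed objective:

```python
# Novelty search in miniature: select for being *different*, not "better".
import numpy as np

rng = np.random.default_rng(0)

def behavior(genome):
    """Map a genome to a 2-D behavior descriptor (illustrative)."""
    return np.array([np.sin(genome).sum(), np.cos(genome).sum()])

def novelty(b, archive, k=5):
    """Mean distance to the k nearest behaviors seen so far."""
    dists = sorted(np.linalg.norm(b - a) for a in archive)
    return float(np.mean(dists[:k]))

archive = [behavior(rng.normal(size=8))]
population = [rng.normal(size=8) for _ in range(20)]

for generation in range(10):
    scored = [(novelty(behavior(g), archive), g) for g in population]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    elites = [g for _, g in scored[:5]]  # the most novel, not the fittest
    archive += [behavior(g) for g in elites]
    # Next generation: mutate the most novel individuals.
    population = [g + rng.normal(scale=0.3, size=8) for g in elites for _ in range(4)]

print(f"distinct behaviors archived: {len(archive)}")
```

There is no fixed objective to collapse onto, so the search keeps producing new behaviors, the property that narrow, goal-directed optimization tends to lose.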

The Art of Writing in the Age of AI

  • The host praises Professor Summerfield's prose, wondering if its creativity was a deliberate attempt to distinguish it from AI-generated content.
  • Professor Summerfield states his love for writing and finding new ways to convey ideas was the primary motivation, not a conscious effort to evade AI detection.

Conclusion

This episode underscores that AI's evolution is not just a technical challenge but a profound societal and philosophical one, urging a shift from narrow optimization to fostering genuine agency and understanding in both humans and machines. Crypto AI investors and researchers must prioritize developing interpretable, controllable, and ethically grounded AI systems that enhance, rather than erode, human autonomy and societal stability.
