Machine Learning Street Talk
January 23, 2026

Abstraction & Idealization: AI's Plato Problem

by Mazviita Chirimuuta


Quick Insight: For builders chasing AGI, this summary dissects why treating the brain as "pure code" is a dangerous oversimplification. It argues that intelligence is a biological process of interaction rather than a mathematical mapping of inputs to outputs.

  • 💡 Why does the "brain as a computer" metaphor lead to scientific tunnel vision?
  • 💡 What is "Haptic Realism" and why does it suggest LLMs lack true understanding?
  • 💡 How does the "Platonic" view of the universe mislead AI researchers?

Mazviita Chirimuuta, a philosopher of science, challenges the dogma that cognition is merely a mechanistic computation. She argues that our obsession with "clean" mathematical models blinds us to the messy biological interactivity required for true intelligence.

The Platonic Trap

"Abstraction gets you like the higher level of reality... we do abstraction because we're finite knowers."
  • Platonic Overreach: Researchers often assume the universe is written in code and our messy reality is just a "kaleidoscope" of these rules. This leads to the false belief that AGI is inevitable once we find the right "source code."
  • Cognitive Shortcuts: Abstraction is a tool for limited human minds rather than a mirror of nature. Treating a model as the thing itself creates a map-territory confusion that ignores vital biological data.
  • Signal Selection: Scientists decide what is "noise" based on current goals. What we discard as irrelevant today might be the secret sauce of biological efficiency tomorrow.

Haptic Realism

"Knowledge is more touch-like... we have to pick things up, engage with them, ultimately change them."
  • Interactive Intelligence: True knowledge requires "haptic" engagement rather than passive observation. Intelligence is the result of a system meddling with its environment to survive.
  • Embodied Constraints: Biological brains run on a tiny energy budget compared to GPUs. This efficiency comes from being deeply embedded in a physical body rather than being a detached logic gate.

The Computational Fallacy

"Because computational neuroscience is this successful field... we know now that the brain is a computer. I think that is not an inference we should make."
  • Functional Equivalence: Just because we can model a brain as a computer does not mean it is one. A rock can be modeled as a calculator but that does not make the rock a cognizing agent.
  • Biological Signaling: Neuronal activity is an extension of metabolic processes rather than a separate digital layer. Stripping away the "wetware" might remove the very thing that makes understanding possible.

Actionable Takeaways

  • 🌐 The Macro Shift: Transition from "Spectator Knowledge" (passive data absorption) to "Interactive Knowledge" (agentic engagement).
  • ⚡ The Tactical Edge: Prioritize "embodied" AI architectures that integrate sensory feedback loops.
  • 🎯 The Bottom Line: AGI will not be solved by better math alone. It requires accounting for the physical and biological constraints that define intelligence.


What should we say as philosophers about the relationship between neuroscience and philosophy of mind? So, how much of our ideas about how the mind works can we read off from the results that neuroscience is telling us?

The results you get in the lab can be well-established and fine. There's nothing wrong with those data, but there's a problem in generalizing from what you learn in the lab to cognition in the real world outside of the lab. It's precisely all of that complexity and all of that interactivity that is really important to how, for example, animals are able to negotiate their environment.

It's not an argument that AI is impossible so much as: why does it seem so possible, so inevitable, to people? If you look at the history of the development of the life sciences and of psychology, there are certain shifts towards a much more mechanistic understanding of both what life is and what the mind is, which are very congenial to thinking that whatever is going on in animals like us, the processes which lead to cognition, they're just mechanisms anyway. So why couldn't you put them into an actual machine and have that actual machine do what we do?

Welcome to MLST. It's amazing to have you here.

Thanks so much for having me along.

So, you wrote this book, The Brain Abstracted. It's an amazing book. Folks at home should definitely buy this book. It's really, really good. Tell me about this book.

It was quite a few years in the making. I think officially I started writing it maybe in 2018 and it came out in 2024, but it was really based on ideas that I've been working on maybe since 2014. I started publishing some philosophy of science papers about computational explanation in neuroscience, and then, going back before that, it drew on some of my own experiences from when I was training in neuroscience, working on the visual system and using computational models of the era before there was deep learning or anything that fancy.

It was about thinking through what understanding the brain through this lens of computation really amounts to. We have models which not only simulate the brain, the way we have biological simulations or weather simulations and so forth, but which allegedly duplicate the function of cells in the brain. That's the additional claim made about computational modeling when it's applied to the brain: that the brain is this unique structure which is not only a biological organ but also a kind of computer itself.

The arc of your book is we have this problem with simplification because as scientists we want to build legible theories about how the world works. A lot of philosophy of science in recent years has picked up this topic of abstraction and idealization.

So abstraction is quite a general word which can just mean ignoring details which are there in concrete real-life situations. It would be familiar to you from doing Newtonian problems in physics, where your teacher tells you, well, there's always friction in real life, but we'll pretend that the friction isn't there. So you're leaving out a detail which is known to be there in the concrete system.

Idealization means attributing properties to the system that you're modeling in science which are known to be false. So for example in genetics modeling, the assumption is made of infinite populations. These kinds of idealizations often make the calculations more tractable. But of course there's no such thing as an infinite population in real life.
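To make that concrete, here is a minimal sketch, in plain NumPy with illustrative parameters, of what the infinite-population idealization buys and what it leaves out: in the idealized model an allele frequency stays put absent selection, while a finite Wright-Fisher population drifts randomly.

```python
import numpy as np

def wright_fisher(p0=0.5, N=100, generations=200, seed=0):
    """Allele frequency in a finite population of N diploid individuals.
    The infinite-population idealization predicts p stays at p0 forever;
    with finite N, each generation's 2N gene copies are a binomial draw
    from the current frequency, so the frequency drifts."""
    rng = np.random.default_rng(seed)
    p, freqs = p0, [p0]
    for _ in range(generations):
        p = rng.binomial(2 * N, p) / (2 * N)
        freqs.append(p)
    return freqs

print(wright_fisher()[-1])  # typically far from 0.5; the ideal model says exactly 0.5
```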

In some way, an abstraction is also always a false representation, always an idealization. So sometimes the difference between the two can be subtle. How I put this in the book is that an idealization kind of points us to the thought that when we have a scientific representation, we're kind of presenting something which is kind of cleaner and better than the thing in real life.

When we talk about someone being idealistic, it's like they have a view of how things should be and unfortunately reality does not live up to that. So idealization in science is often to do with representing things mathematically in a way which is cleaner and neater than could be possible in real life.

And on abstraction, you said in your book that there's the lofty philosophical version of abstraction, which is, you know, upstairs in the heavens of Plato, I think you said, or even Galileo: this idea that natural forms exist which are disconnected entirely from the spatial and temporal realms. And then there's the more deflationary view of abstraction, which is simply that we ignore details.

Now, I'm speaking with my good friend François again tomorrow. He's releasing the new version of the ARC challenge, and I think he does have this, and many AI researchers do: this Platonistic idea he calls the kaleidoscope hypothesis, which is that the universe basically is written in code, and what we see is like a kaleidoscope where all of the rules of the universe just get composed together in different ways, and all we need to do as AI researchers is decompose the appearances back into the rules. What could possibly go wrong?

So I watched some of the videos with François. I found it really fascinating, precisely this kaleidoscope hypothesis, because seeing that as a philosopher I thought: that's Plato. Because François precisely says we have the world of appearance, it's complicated, it looks intractable, it's messy, but underlying that, real reality is neat, mathematical, decomposable.

This is precisely the contrast between the world of being, the world of forms, eternal stable truth, and the world of becoming: appearance, messy, flowing, complicated reality. And so it goes back thousands of years in philosophy. It's really interesting that this is an assumption not only that AI researchers often make, but one that runs through science as a kind of justification for the pursuit of mathematical representations, even when they depart from known facts about the concrete physical systems in reality.

That's the idea that the mathematical representation is getting you closer to the underlying truth of how things are. As opposed to that, what I call the down-to-earth view of abstraction and mathematical representation is that it's something we do because of our cognitive limitations. So instead of thinking that the abstraction gets you like the higher level of reality, just saying that we do abstraction because we're finite knowers. There's a limit to how much complexity any individual person or group of people can actually encompass in their modeling strategies or representations.

And actually it's only by pretending things are simpler than they actually are that we get some traction. So that's the down-to-earth, mundane explanation of why abstraction is so much used in science.

Yeah, it's so pervasive in the deep learning world. I also interviewed the folks who pioneered the geometric deep learning blueprint, and that's the same idea basically: the world is described with geometry, and all we need to do is imbue these geometric inductive priors into deep learning models, and then, essentially by reducing the degrees of freedom to ones which are aligned with how the universe works, we get where we want to go.
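As a toy illustration of that blueprint's central move (a sketch in plain NumPy, not any particular library's API): convolution commutes with translation, and baking symmetries like this into an architecture is exactly how geometric priors cut the degrees of freedom.

```python
import numpy as np

# Translation equivariance of 1-D convolution: shifting the input and
# then convolving gives the same output as convolving and then shifting.
x = np.zeros(16)
x[4] = 1.0                      # a spike somewhere in the signal
k = np.array([1.0, 2.0, 1.0])   # an arbitrary filter

conv_then_shift = np.roll(np.convolve(x, k, mode="same"), 3)
shift_then_conv = np.convolve(np.roll(x, 3), k, mode="same")
assert np.allclose(conv_then_shift, shift_then_conv)
```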

I think the notion of patterns, real patterns, to invoke Dennett's term, is a helpful one. One thing you could say is going on here is that yes, there's lots of complexity there in the natural world. It's apparent in the data, but if you just denoise the data a bit, underlying there, there's a real pattern. We don't have to be Platonist and weird about it; there's just regularity that is sometimes masked by noise.

That doesn't seem too metaphysically problematic. But one of the questions that I pose as a challenge to that very moderate view, and I say this frequently in the book, is that when you're saying that some of the apparent irregularity in the data is irrelevant, that's your decision as a scientist. It's not relevant to you at the moment, but it could be relevant to someone else. It could be really important to how that system works in the natural world, for reasons that you're not aware of.

So when we classify the signal versus the noise in our data sets, we shouldn't ignore the fact that those are decisions that we're bringing to bear on our investigation. We shouldn't assume that we're just reading off the signal, the real pattern that is there in reality, and that there aren't very many other significant real patterns there. And we're probably also, to some extent, creating pattern through the very denoising process that we bring about.

Interesting. I mean, physicists aren't under any illusion. They know that Newton is an idealization. And just to contrast: you cited reflex theory. Of course Pavlov and the dogs, folks at home will know about that. And Newton is still around, we still use that, but we don't use reflex theory anymore.

Yeah. So this is a chapter that I present in the book as a case study of how oversimplification can get scientists on the wrong track. With the history of science, hindsight is 20/20. We're looking at a theory about how the brain worked which was really dominant for a few decades at the end of the 19th century and the beginning of the 20th century.

It's familiar to us with Pavlov, with this idea that we can explain behavior in terms of reflexes which get conditioned, and there's obviously learning involved with that. The most ambitious version of the theory said that all of the functions in the brain are basically versions of reflex arcs, so sensory-motor loops.

A very prestigious and well-regarded physiologist like Charles Sherrington was heavily invested in the reflex theory. But he admitted in his book The Integrative Action of the Nervous System that this notion of a simple reflex is an idealization; it probably doesn't exist in real life. And yet this was supposed to be the key that's going to unlock neurophysiology, that's going to help us decompose and make sense of all of these different interactions that could be observed experimentally.

So what seemed to be going on there is that scientists were taking that age-old method, the good heuristic of seeking parsimonious explanations, of using Occam's razor, where the obvious move was to assume there's this thing, a simple reflex, and then running with it way too far, never actually being able to explain the amount of data that they had initially thought they would.

And it's not clear how long the reflex theory could have gone on for if it hadn't been for the computational theory coming in during the Second World War era and basically providing an alternative explanatory framework, which was also quite neat and, I would say, provides its own kind of idealization toolbox.

So a very popular thing in cognitive science is to say, well, if something behaves the same way as a cognizing human, for example, then we might draw inferences that it has consciousness and many other cognitive faculties. But there's always this kind of, you know, almost ignorance of the actual mechanism of the object of study.

I think behaviorism has a bad name, but it's not that discontinuous with a lot of thinking which is normal and still acceptable in science, which is to treat things as black boxes. This is precisely what the behaviorists said: the mind is opaque, it's hidden within the walls of someone else's individual subjectivity. As scientists, all we know are the inputs and the outputs, and we'll just track those.

And that's like a version of what you just said: if the inputs and the outputs, the behavior of this system, look like what we know to be a conscious system elsewhere, well, let's just treat them as all of the same class of objects, given that the only available information is the inputs and outputs.

I think that kind of reasoning can be fine in certain contexts, but it's a philosophical leap to say the access that we have to our own thoughts and the presence or absence of subjectivity that we're aware of with other people is irrelevant to making these decisions or judgments about what other kinds of systems can have consciousness.

So I think it's much too quick to just go behaviorist and say, well, there's no relevant difference between X and Y, even if one is a person and one is a machine, just because we can say that there are some similarities in inputs and outputs.

I think, if I remember correctly, at one point you drew an imaginary kind of graph where you said on one axis we have scientific realism, which is where our scientific theories actually represent things in the world, and then we have empiricism, which is the idea that the facts we receive tell us something about the world. And then there's this more interesting axis, which I think you're very inspired by, which is this kind of constructivist idea. Can you explain that?

Yeah. So the constructivist path, which is different from the scientific realist and empiricist ones, really runs with the idea that we are active makers of knowledge. It shouldn't be confused with the kind of constructivism that we have in some of the more extreme branches of the sociology of knowledge, which say that all scientific theories are social constructs and not constrained by phenomena that have been observed in nature.

So I'm not saying that scientific theories are merely constructed in the way that poems are, as works of imagination and so forth. The idea is that there's this interactivity between humans, groups of scientists with their plans as epistemic agents, going out into the world with an agenda to find stuff out about certain phenomena in order to achieve certain goals, often technological, applied science and its goals, and there's some pushback from the things in nature themselves that they're investigating.

The idea is that knowledge is always the product of this interactivity. So we cannot discount that there is a human framing side to this. We can't go along with this idea that a scientific theory is just reading off the source code of the universe, as if the human way of conceptualizing those phenomena had no bearing on the theory as it ultimately turns out. But we also can't discount that the theory that arises is constrained by how things happen to be, as worked out through that process of experimental interaction.

You said, I think, that you were inspired by Immanuel Kant. He had this transcendental idealism, and please bring that in, but it somewhat informed your own view, which is this haptic realism. Can you introduce that?

Yeah, so that's saying that knowledge comes about through this process of interaction. This notion of haptic realism is emphasizing that it's through engagement, haptics being the sense of touch. The contrast here is with an ideal of knowledge based on this idea that we can know things in a disengaged way.

If you think of vision as the archetype of knowledge, what happens when we look around at our surroundings and use sight as a source of knowledge? We can get into this mindset where it seems like we do not have to interact with things in order to know them; we can just absorb information passively, as if we're not bringing about our representations in any active way. I'm not saying this is how vision actually works, but it's a kind of conceit that often comes about if you use this very visual model for knowing. John Dewey called it the spectator theory of knowledge, and that's a clear predecessor of what I'm saying here: the picture on which we just look around, we absorb how things are, and our knowledge is entirely objective.

It's almost like a God's eye view on reality. But if you think that scientific knowledge in particular is more touch-like, you can't ignore the fact that we run into things. We have to pick things up, engage with them, ultimately change them in order for us to acquire knowledge of them. So you cannot discount the fact that we're meddling with things in the process of bringing about our knowledge.

And another dimension of this haptic metaphor is that our hands are not only a sensory organ, but also the means by which we manipulate things. Manipulation means precisely working with the hands. And I think that really captures, if you like, the double face of scientific models. They're means of acquiring knowledge, in the way that hands are also sensory organs, we find things out about the world through the sense of touch, but they're also means for changing things, for doing things.

We speak about this in evolution: what would happen if you could just rerun evolution? What would happen if we could have a parallel universe and the entire enterprise of science just ran again? And what you're alluding to is that it wouldn't be completely different. Maybe there are some guardrails, but it is actually quite divergent.

Yeah. There's certainly contingency in the history of science, you know, where people start out, cultural factors which prompt people to ask certain kinds of questions and not others. A view quite similar to what I say about haptic realism in the book is held by Hasok Chang, who's a professor of philosophy of science at Cambridge. He has a view which he calls realism for realistic people. That's the title of his new book.

And he is an out-and-out pluralist about science. He says that because there is contingency in the history of science, there are paths not taken, and we could maximize the acquisition of knowledge if we explored as many of those different paths as possible. That isn't something I say in the book myself, because I think there are also reasons why it makes sense to narrow views and paths of inquiry, and we don't have unlimited resources. But sure, there are opportunity costs that come along with taking a certain path while others go unpursued, and in the enterprise of science there might be a trope, or an idealization, that we're getting closer to the truth.

Yeah. And do you think that's the case? Do you think, as the enterprise of science progresses, that we're getting closer to the truth, or could we be in cul-de-sac basins of attraction and so on?

That's very much associated with scientific realism, this view that there is one way nature is, and science succeeds insofar as scientific representations conform to that one way nature is. My view takes very seriously the idea that nature could just be inexhaustibly complex. If you ever pin it down in one representation, there are also inexhaustibly many other ways it could be represented, many different varieties of ways you can investigate it, and also ways in which any one representation is lacking.

So there's a kind of inherent lack of convergence that that picture brings about. One of the ways of expressing this is to say that nature is protean. There's this mythological character called Proteus who was a shape-shifter, a being that lived in the sea and would keep changing his shape. But if you could pin him down, he would answer you a question and tell you the truth. The thing was, you had to pin him down.

And I think this is a really nice illustration of what's going on with our interactions with nature as scientists. Nature is inexhaustibly complex; there are all kinds of patterns and things going on there. It can be pinned down and we can get true answers. But when we release our grip, it will carry on shape-shifting, and there are lots of other ways that it could be. So yeah, one final theory? I'm not so convinced by that.

This is very much a view that I think makes sense if the basis for your theory as a philosopher of science is really the biological sciences, which is where I'm coming from. If you're a physicist, it seems much more natural to think that there is one fundamental set of laws of the universe which is going to be nailed down once and for all and could explain everything. Biology just throws up lots and lots of examples, particularities. It tends to be considered less intellectually satisfying in comparison with physics.

Oh, you can just spend all your time in biology doing stamp collecting, because there's this thing and there's this thing and there's this thing. How do you tie it all together theoretically? But on the other hand, I think that if you take that particularity and that shifting quality of biological phenomena seriously, then it just forces you to think about knowledge differently.

In your book you spoke about a trajectory, I suppose, of possible failures of simplification. We just spoke about reflex theory, but one of the big ones is this metaphor of cognition, or the brain, as a kind of computer, and you spoke about the early roots of this, from reflex theory to cybernetics and computationalism. Can you sketch that out?

The connecting thread is really this idea that cognition is something machine-like. Going back to the 17th century, this is a view associated with the philosopher, physicist, and physiologist René Descartes, who said we need to go along with this idea that everything that happens in the body is explicable in terms of quite simple mechanistic forces. And this idea that biological systems are machine-like has obviously been hugely influential in the different branches of science.

The reflex theory was one instance of that. People often spoke of machine-like reflexes and made comparisons with Newtonian decomposition. With the computational framework, you had an actual machine, a digital or analog computer, which could be compared with brain processes. Cybernetics is an interesting stage along the way, because they were building little devices which had some degree of autonomy, supposed to be emulating versions of the negative and positive feedback hypothesized to occur in the body.

But at the core of this research idea is that if what's going on in the body is ultimately a mechanistic process, then by doing engineering with a non-living system which captures some of the core operating principles that we find in biology, we can use that device as a map, as a resource, to reinterpret what's going on in the biological system. You saw that, for example, with McCulloch and Pitts in their landmark 1943 paper: interpreting neuronal cells as logic gates and then saying you could build a computer out of neural nets. This is the origin of neural nets as we know them today, the birth of the idea, but that notion that neurons are logic gates was then used to interpret what's going on in physiology.
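For readers who want the 1943 observation made concrete, here is a minimal sketch of a McCulloch-Pitts unit, with weights and thresholds chosen purely for illustration: a hard-threshold neuron over binary inputs suffices to realize basic logic gates.

```python
def mp_neuron(inputs, weights, threshold):
    """McCulloch-Pitts unit: fire (1) iff the weighted sum of the
    binary inputs meets the threshold, else stay silent (0)."""
    return int(sum(w * x for w, x in zip(weights, inputs)) >= threshold)

# Suitable weights/thresholds turn the same unit into different gates.
AND = lambda a, b: mp_neuron([a, b], [1, 1], threshold=2)
OR  = lambda a, b: mp_neuron([a, b], [1, 1], threshold=1)
NOT = lambda a:    mp_neuron([a],    [-1],   threshold=0)

assert [AND(a, b) for a, b in [(0,0),(0,1),(1,0),(1,1)]] == [0, 0, 0, 1]
assert [OR(a, b)  for a, b in [(0,0),(0,1),(1,0),(1,1)]] == [0, 1, 1, 1]
assert [NOT(a) for a in (0, 1)] == [1, 0]
```

Since any Boolean circuit can be assembled from such gates, networks of these idealized neurons can compute whatever a logic circuit can, which is what made the brain-as-computer reading so tempting.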

What I describe in chapter four of the book is a back-and-forth of making devices which are somewhat inspired by biology and then using those as the lens through which to view biology again. And I say that the advantage and appeal of this process is that it gives you a kind of license to ignore so many things happening in the brain and nervous system which are just not shared with non-living machines: all of the biochemistry, all of the ways that neural tissue is shaped by vasculature and interacts with the immune system, all of that background stuff. If you're a theoretical computational neuroscientist, you can say, I'm only interested in the computational properties of the brain.

I don't need to care about all of that messy biological detail. So it gives you a kind of tunnel vision, which, as a scientist, can be fine to have. You can't take in everything at once all of the time. But what I take issue with is the ontologization of that: saying that because computational neuroscience is this successful field of inquiry, we know now that the brain is a computer. I think that is not an inference we should make.

Yeah. I mean, I don't think connectionists typically argue that. I mean, they would say it's a different mechanism.

Yeah, but they think that there's some kind of functional equivalence. And that's the thing, because so many folks in AI at the moment are interested in biologically plausible architectures. What if, like the cyberneticists did, we have more autonomy, diversity, agency and so on? They fundamentally make the assumption that the world is a machine, and that if we replicate it with sufficient fidelity then we can reproduce the behavior.

Yeah. To what extent are the mechanisms of the brain inherently bound up with the fact that the implementation is in living tissue? I think there's tantalizing evidence about the extent to which brain processes and signaling between neurons, not just the specialized electrical signaling that neurons do, but the biochemical signaling, are outgrowths of signaling that's happening elsewhere in the body all of the time.

So we shouldn't think of neuronal cells as distinctively cognitive as opposed to the other cells in the body; they're extensions of the ways that cells signal anyway. And if neuronal function is so much a manifestation of what's happening with metabolizing cells anyway, that makes it more of a stretch to say that a machine that's not living could have the same functionality.

Yeah. I mean, no one's trying to sort of build artificial neural networks with living cells.

No. But there's an analogy in neural networks. There's this thing called the lottery ticket hypothesis, which is about pruning. What the researchers found is that you train this big dense neural network, because you need the density for stochastic gradient descent, for training tractability, and after it's trained you can prune away 90% of the connections and it still works the same way. And maybe evolution and our biological instantiation are the same thing: we've been through this billion-plus-year training process, and all of these things that we think are important, the instantiation, the autopoiesis, the agency and so on, maybe those are vestigial, and we can now just snip, snip, snip and create this abstract version. It seems reasonable. What do you think? Is it just vestigial?
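(For readers: a minimal NumPy sketch of the magnitude-pruning step being described, with illustrative names and sizes; the full lottery-ticket recipe additionally rewinds the surviving weights to their initial values and retrains, often pruning iteratively.)

```python
import numpy as np

def magnitude_prune(W, sparsity=0.9):
    """Zero out the smallest-magnitude weights, keeping the largest
    (1 - sparsity) fraction as the candidate 'winning ticket'."""
    cutoff = np.quantile(np.abs(W), sparsity)  # magnitude threshold
    mask = np.abs(W) >= cutoff                 # True for survivors
    return W * mask, mask

rng = np.random.default_rng(0)
W = rng.normal(size=(256, 256))                # stand-in for a trained layer
W_sparse, mask = magnitude_prune(W)
print(f"connections kept: {mask.mean():.0%}")  # roughly 10%
```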

Huh. I think we really need to take seriously the economy of biological information processing. We do a lot more with a very limited energy budget running our brains every day, whereas artificial neural networks are really, really expensive to run. It doesn't strike me that biological cognition could get away with being that wasteful. Surely, to keep the energy consumed for information processing from blowing up, biologically there must have been a fair amount of pruning along the way.

I kind of think of agency as a bit of a spectrum. You can think of it in a deflationary sense, as this autonomous thing that's the cause of its own actions, and then there's the deep philosophical sense, where there's intentionality and you can control the future and whatnot. And it rather speaks to the physicist's view: you have this light cone and all these micro-interactions, and of course that's beyond our cognitive horizons. So we develop ideas of representations, where we can have these distal relationships between things that are in our mind and things that are far away in time and space.

And I suppose you think of that as being another form of idealization. But the fascinating thing, when I think about agency, is that I think about it in terms of apparent causal disconnectedness. We are agents because we have consistent beliefs and ideas; you're not just an impulse-response machine whose actions are determined entirely by the situation. You're a person, and I perceive that as a kind of causal disconnectedness.

Yeah, I agree. In that chapter of the book I set out to be very metaphysically neutral about what representation is, what intentionality is. But at the same time, and this is not what I directly wrote in the book, I think I agree with you that there is something very important about connecting the notion of agency and intelligence with being responsive to what is actually very distal. It could be distal in time and space; it could be distal because it happened a long time ago, but this is what biological memory is. Things that happened to you when you were a baby affect how you are now.

Non-living physical systems are much more constrained in what they do, in what happens to them, by what's proximal to them. The distal is always screened off by the proximal, if that makes sense. Whereas for you, all of these things that happened in the past could be as relevant as anything that happens in the room right now, or your ideas about the future could be relevant to what you're saying right now. So this notion of being sensitive to what's not immediately driving you in your surroundings, I think that's a really important thing to latch on to in delineating the class of systems that we want to call cognitive from ones that we would say are merely physical, not intelligent in any important sense of the word.

Very cool. So we're trying, I suppose, to partition the world into logical units that we can understand, and agents are a great version of that. Daniel Dennett, of course, had the three stances: the physical stance, the design stance, and the intentional stance, as a way of building useful explanations of how things behave. But you said that you didn't quite agree with that, because for Dennett it's a hierarchy, which means the intentional stance perhaps has precedence over the other ones.

Oh, actually it's kind of the reverse. For him, it's as if the physical stance has an ontological priority, like that's what's really there, but it's useful to use the design and the intentional stances.

Very interesting. But you said that for you, you don't really have that prioritization. You're open-minded.

Yeah. That's part of the metaphysical neutrality I set out with in the chapter: to say, okay, let's not go in with the assumption that low-level physical causes are the primary causes of everything. It's a way of, if you like, taking intentional phenomena at face value, intentional in the sense of bearing representations. And one of my criticisms in that chapter is of this agenda there is in philosophy of mind to say that if representation is real, we need to be able to tell a physical story about how it comes about. I'm just asking: why go along with that project? And this is actually echoing a Dennettian view: if talking about representations and intentionality is useful within the sciences, why not just take that at face value, and not insist that it needs to be established by making it coherent with some causal story about what's going on in terms of non-intentional physical interactions?

So that was the position with Putnam's rock, right? He said that you can take any open physical system and interpret it in such a way as to have the same types of information processing, and then why wouldn't that have all of the cognitive properties? When you're making a claim that the brain is a computer, and that that explains cognition, what grounds have you got for saying that any arbitrary physical system actually implements a computation, just from looking at its physical dynamics?

If it's purely a question of mapping the physical dynamics to a computational formalism, then any physical system can afford a mapping of that sort, whether it's a rock, whether it's the sofa, whether it's my stomach as opposed to my brain. And so that's a challenge to the computational theory of mind: it's assuming that brains implement computations just because we can model them computationally, but we can model all kinds of things computationally. What makes brains special?
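A toy rendering of that triviality worry, with purely illustrative names and states: if implementing a computation only requires the existence of some mapping from physical states to computational states, such a mapping can always be stipulated after the fact.

```python
def stipulated_interpretation(physical_states, computational_steps):
    """Pair the i-th observed physical state with the i-th step of the
    target computation; nothing about the physics constrains the pairing."""
    assert len(physical_states) == len(computational_steps)
    return dict(zip(physical_states, computational_steps))

# Four successive (distinct) states of a rock over an afternoon...
rock_states = ["rock@t0", "rock@t1", "rock@t2", "rock@t3"]
# ...mapped onto the steps of computing AND(1, 1):
and_steps = ["read a=1", "read b=1", "apply AND", "output 1"]

print(stipulated_interpretation(rock_states, and_steps))
# Under this stipulation the rock "computes" AND(1,1) = 1, which is why a
# bare dynamics-to-formalism mapping can't be what makes brains special.
```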

So what about this idea of whether computation itself has causal powers? I don't think it does. Computation itself is mathematical formalism; it is a mathematical structure. Things that have causal powers are concrete physical systems. So I just think they're different kinds of things.

So Searle famously argued that the reason why we can't build strong AI is that computation doesn't have causal powers; it's implemented in silicon, and what does have causal powers are the machines that actually implement the computation. But couldn't you say, well, there is still a causal graph? Perhaps you would argue that computation isn't a node in that causal graph; it's just some kind of aspect of it.

Yeah. I think it just goes back to this issue that computation in and of itself is not the kind of thing that could have causal powers. I think Searle's point, and this is in The Rediscovery of the Mind, was an interesting one. It was maybe kind of subtle, and it gets lost in the wash of the AI back-and-forth and the Searle-bashing which happens a lot, but it was about the ways we form explanations in the sciences. His point was that cognition, if it's anything, is part of the physical realm, the realm of causation.

The assumption of the computational theory of mind, and he argues that this is very dominant within cognitive science, is that you can explain this phenomenon, which is a phenomenon of the concrete physical world, through this non-causal thing which is computation, and suddenly there's no gap that needs to be closed. I think that's a fair point: something here needs further justification, namely why, of all of the things that happen in the concrete physical world that demand explanation, we reach outside the concrete realm of physical causation into computation in order to explain this one thing, cognition.

Another argument Searle was making was about how machines couldn't understand. And of course he was talking about things like semantics. Do you feel that they could understand?

What do I think? I think certainly there's more to human understanding than that. My view about human cognition, and animal cognition in general, is that it's not a set of discrete modules that work separately from one another. Language is bound up with sensory-motor engagement, and likewise how we perceive the world is shaped by linguistic concept formation and everything like that. So the idea that you could detach off a language faculty, have it replicated in an LLM that doesn't have the other bits of our cognition, doesn't have embodiment, doesn't have the capacity to engage with the world, and that it could have understanding in the same way that we do, I find that implausible.

Interesting. But again, if we do the galaxy-brain thing and say we can embed robots in the physical world, give them sensory-motor affordances and all the rest of it, well, there are many replies to the Chinese room argument along these lines, like the robot reply. Would they have a little bit more understanding?

Yeah, I mean
