
by Machine Learning Street Talk
Quick Insight: This summary is for builders who need to separate the hype of "feeling the AGI" from the reality of mechanistic automation. It explains why our current computational models of the mind are likely temporary placeholders rather than absolute truths.
We are currently living through the ultimate scientific simplification: the idea that the mind is a computer. Professor Mazviita Chirimuuta and a roster of experts challenge this zeitgeist by tracing how our metaphors always mirror our most advanced toys.
"[It] will always be the case that our explanation for how the brain works will be by analogy to the most sophisticated technology that we have."
"Software is spirit. We put the spirit back into nature using the concept of software."
"GPT3 has done nothing. Why are things this way? Why are things not that way? If you don't get the second question, you've done nothing."
Podcast Link: Click here to listen

Let me tell you a little story. In the 1960s, during the summer, a little kid named Karl was playing around in the back of his garden, and he noticed all of these wood lice crawling around. You know, the little insects that can curl up into a ball.
What he noticed was that depending on whether they were in the sun or in the shade, they would move faster or slower. They behaved differently. And that's it.
Karl grew up to be Professor Karl Friston, one of the most cited neuroscientists alive. He's been on this channel before, more times than I can count. And that childhood observation about wood lice never left him.
He spent decades developing what he calls the free energy principle, which tries to explain all of behavior with one equation: perception, action, learning, why you scratch your nose, all of it, Friston claims, comes down to minimizing a single mathematical quantity.
There's an old physics joke: assume a spherical cow in a vacuum. The joke is about how scientists grotesquely simplify messy reality to tame it. The free energy principle might be the ultimate spherical cow.
It promises to explain self-organization, this bewilderingly complicated phenomenon, with something so emaciated we might as well call it tautological. Even Friston himself agrees with this, by the way.
This is what he said to us last time we spoke with him. The free energy principle is not meant to be complicated or difficult to understand. It's actually almost logically simple.
So the whole free energy principle is just basically a principle of least action pertaining to density dynamics. The dynamics or the evolution of not densities but conditional densities. That's just it.
This is before thermodynamics. It's before quantum mechanics. It's just about conditional probability distributions.
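For readers who want to see the single quantity in question, here is a minimal sketch of variational free energy as it is usually written in the active inference literature (our notation, not a formula from this conversation): o stands for observations, s for hidden states, q(s) for the conditional density the system encodes, and p(o, s) for its generative model.

```latex
% Variational free energy: the one functional Friston's principle says is minimized.
F[q] \;=\; \mathbb{E}_{q(s)}\big[\ln q(s) - \ln p(o,s)\big]
     \;=\; \underbrace{D_{\mathrm{KL}}\big[q(s)\,\|\,p(s \mid o)\big]}_{\text{approximation error}}
     \;-\; \underbrace{\ln p(o)}_{\text{log evidence}}
```

Because the KL term is never negative, pushing F down both pulls q(s) toward the true posterior and bounds surprise, which is the sense in which perception, action, and learning can all be cast as flows on one quantity.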
So what do we do with this? Has Friston actually found some deep truth about how minds work? Or is he doing what many scientists do, which is mistaking the simplification for the actual thing?
Well, it turns out there's a philosopher who has spent an incredible amount of time thinking about this exact problem. Professor Mazviita Chirimuuta teaches at the University of Edinburgh. Her book, The Brain Abstracted, is basically about what happens when neuroscientists simplify brains to study them.
What gets captured? What gets lost? One of the answers that might seem obvious to people is that we pursue science because we're curious. We just want to know how the world works. We want to discover the underlying principles of the universe, the ones which apply in all cases.
Switching off the idea that you're just interested in nature for its own sake out of curiosity and saying, Okay, how can we engineer these systems to actually do things that we want? Getting them to behave in artificial ways.
If those simplifications sort of allow you to achieve your technological goals, there's no in principle problem with oversimplification. If you're going to say, I'm not just interested in nature for its own sake. I just want applied science.
I should say, by the way, that The Brain Abstracted probably influenced my thinking more in 2025 than anything else. She's an inspirational lady. I look up to her very much, and certainly thinking back on many of the episodes we've done in 2025, I can see her influence in the questions I ask and how I think about things.
So, here's her starting point. Scientists have to simplify. We're limited creatures trying to wrap our heads around systems way more complex than we can actually comprehend.
Our working memory holds maybe seven items. Our attention is more scattered than a group of toddlers with iPads. We die after 80 years if we're lucky.
So, we build models, right? We leave stuff out on purpose. We tell ourselves stories about how the world works. But the question is, why does any of this even work at all?
Science is a humanistic endeavor, right? The purpose of science is to make the universe intelligible to us, not to control it, not to predict it, and not to exploit it.
Now, you can do all those wonderful things if you like, but in the end, as far as I'm concerned, science is no different from poetry, in that we're trying to make sense of the world, trying to give it meaning in relation to our own existence.
If you'll allow the indulgence, I want to tell a little story. It's a boxing match. In the red corner: Simplicius. He thinks science works because the universe is actually simple underneath.
Find an elegant equation and you've hit the real thing. Simplicity tells you that you're on the right track. And in the blue corner, Ignorantio. He thinks we simplify because we're too dumb to do otherwise.
Our models work well enough for our purposes, but they're approximations, just useful fictions, if you like. The map, not the territory. Now both of them agree that scientists need to simplify but where they disagree is what that means about reality.
Simplicius had history on his side or at least a certain type of history. Galileo, Newton, Einstein, they all believed pretty explicitly that nature was fundamentally orderly and that finding simple laws meant you'd found something true.
Einstein famously said, God doesn't play dice. And no, he didn't actually think God had anything to do with it, but he was expressing faith that the universe is at the very bottom legible.
Now, Chirimuuta has gone all-in on Ignorantio's position. She thinks successful science tells us we've become good at building useful simplifications, and that doesn't prove that nature is simple.
The philosopher Nicholas of Cusa had a phrase for this attitude: docta ignorantia. Basically, learned ignorance. You study hard, you learn a lot, and what you learn includes what you don't know.
Now, when we interviewed Chirimuuta, she had been following François Chollet's videos. And for those of you who don't know, François is a friend of the channel. He's our mascot. He's one of my heroes.
And he's got this idea called the kaleidoscope hypothesis, which is basically that the universe is made out of code. And underneath all of the apparent gnarly mess that we see, there is intrinsic underlying structure.
Everyone knows what a kaleidoscope is, right? It's this cardboard tube with a few bits of colored glass in it. These few bits of original information get mirrored and repeated and transformed, and they create this tremendous richness of complex patterns. You know, it's beautiful.
The kaleidoscope hypothesis is this idea that the world in general, and any domain in particular, follows the same structure: it appears on the surface to be extremely rich and complex and infinitely novel with every passing moment.
But in reality it is made from the repetition and composition of just a few atoms of meaning. A big part of intelligence is the process of mining your experience of the world to identify bits that are repeated and to extract these unique atoms of meaning.
When we extract them, we call them abstractions. Now she's not saying that Chollet is wrong. She's saying that he's making a philosophical bet. Might be right, might be wrong. It's the same bet that Plato made.
Seeing that, as a philosopher, I thought: that's Plato. Because François precisely says we have the world of appearance. It's complicated. It looks intractable. It's messy. But underlying that, real reality is neat, mathematical, decomposable.
Now I feel like I should defend Chollet a little bit here, you know, because obviously we love Chollet. He's not making any weird metaphysical claims. At least I don't think he is.
If scientific theories actually explained reality the way it is, you would expect fewer U-turns. Now, the biggest simplification in the 21st century, the final boss of simplifications, is this idea that the mind is a computer, or that the mind is running a software program.
So, we have inputs, we have processing, we have an output. This metaphor has become so established in the collective zeitgeist that no one even questions it anymore. It barely even registers in our brains as a metaphor.
So isn't it a little bit weird that computation is this abstract formalism, like an automaton that makes state transitions, something completely non-physical, and we're describing the mind as if it is that abstract thing? That sounds a little bit weird.
There are many movies made about people uploading their minds into the matrix. Neuralink talks about interfacing with your brain's software. Joscha Bach thinks that consciousness is a software program running on your brain.
The claim is that this is universal: that you have these invariances in nature, that you can have patterns that have causal power, that have the ability to reproduce themselves, that have the ability to shape reality; invariances that you cannot explain more simply by looking at what atoms are doing in space.
But you have to look at these abstract patterns to make sense of them. Every other explanation is going to be more complicated in the same way as money is going to be impossibly complicated if you try to reduce it to atoms.
So you have to look at these causal invariances, and spirits are actually such causal invariances. They are actually disembodied, right? They're not bodies. They're not stuff in space. They're not mechanisms in the same way, but they are causal mechanisms, abstract mechanisms.
And so we put the spirit back into nature using the concept of software. A lot of people think that's metaphorical, but I don't think it's metaphorical at all. It's the literal truth. Software is spirit.
We're all just talking about this stuff without even batting an eyelid. Like, where's the skepticism, man? It just sounds so plausible to us. So, we assume that it just kind of has to be the case.
There is something super interesting about computers. What a computer ultimately is, is a causal insulator. The computer is a layer on which you can produce an arbitrary reality, for instance the world of Minecraft.
You can walk around in the world of Minecraft and it's running very well on a Mac and it's running very well on a PC. And if you are inside of the world, you don't know what you're running on, right?
It's not going to have any information about the nature of the CPU that it's on, the color of the casing of the computer, the voltage that the computer is running on, the place that the computer is standing in in the parent universe, right? Our universe.
So the computer is insulating this world of Minecraft from our world. It makes it possible that an arbitrary world is happening inside of this box. And our brain is also such a causal insulator.
It's possible for us to have thoughts that are independent of what happens around us. Right? We can envision a future that is not much tainted by the present. We can remember a past that is independent from the present in which we are.
And that's necessary for us. Our brain has evolved as such a causal insulator as well to allow us to give rise to universes that are different from this one. For instance, future worlds so we can plan for being in them.
Bach says that money is an example of a causal pattern. It's not the ink on a banknote. It's not the electrons in your bank's server. It persists across various physical instantiations.
So paper, coins, gold, digital ledgers. And yet, he says, money causally affects the world. It gets you fed. It starts wars. It builds cities.
He says that software is the same. A program is an abstract pattern that can run on many types of chips, maybe even neurons. And that pattern has causal power because it controls whatever substrate it's running on.
The same algorithm produces the same effects regardless of what physical stuff implements it. So the invariance, that sameness across substrates, is the causal mechanism: the pattern itself, at least according to Joscha.
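As a toy illustration of that invariance (our example, not Bach's), here is a procedure defined purely as an abstract pattern; its input-output behaviour is identical on whatever machine happens to execute it, and that sameness is what the software-as-pattern view is pointing at.

```python
# A pure function: its behaviour depends only on the abstract algorithm,
# not on the CPU, the voltage, or the colour of the casing of the machine running it.

def insertion_sort(xs):
    out = list(xs)
    for i in range(1, len(out)):
        key, j = out[i], i - 1
        while j >= 0 and out[j] > key:
            out[j + 1] = out[j]
            j -= 1
        out[j + 1] = key
    return out

# Run this on a Mac, a PC, or anything that can host a Python interpreter:
# the mapping from inputs to outputs is the same everywhere.
print(insertion_sort([3, 1, 2]))  # [1, 2, 3]
```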
He even accepts that physics is causally closed. He says that the abstract description and the physical description are two ways of looking at the same causal structure. Neither is reducible to the other. Both are real.
But I'm pretty sure Chirimuuta would ask: who identifies that invariance? When we say the same algorithm runs on different chips, completely different things are actually physically happening, right? Different voltages, different electrons doing different things.
The sameness is something that we impose. It exists in our description, not in nature. And as for the money example, money only works because of human interpretive practices. Right? If you take away the humans and their agreements, it's just paper, right?
Money is just paper, and the causal power is actually in the social substrate that participates in it. Now, I think Joscha has taken a useful way of talking about complex systems and promoted it to metaphysics.
And that's Simplicius all over again, right? Mistaking the elegance of our descriptions for the structure of reality itself. I mean, maybe information really is more fundamental than matter, but that's another philosophical wager. And we've made these bets many, many times before.
Just look at the history of all of this. Descartes thought that the nervous system worked like the hydraulic automata in the French royal gardens. Fluids pumping through tubes, pushing levers. That was the high-tech metaphor of his day.
Later, when scientists figured out that nerves carry electrical signals, the brain became a telegraph network. Then it was a telephone switchboard, signals traveling down wires, operators routing calls. And now, in our era, the brain is a computer.
To be precise about what we mean by physical: everything has to be physical, because even GitHub, you know, has to store its data in some sort of hard drive or magnetic field or whatever technology. It's not storing it in nothingness, you know.
So knowledge, information, always has this form of physical embodiment. I think we tend to think about it as non-physical because it is a thing that is not a thing, which is the same as temperature.
You wake up, you look at your phone and you see the temperature and you decide how you're going to dress and nobody has any doubt that temperature is something that can be measured. But it took about like 2,000 years for us, you know, as a species to figure out, you know, what temperature was and the fact that it could be measured.
And there were two fundamental difficulties that I would say made it hard for us to understand temperature. The first one is that people thought that hot and cold were two separate things, so that temperature was like a mixture of the two. It's like when you make green out of blue and yellow.
And it took a while for people to understand that cold is the absence of heat, not that cold and heat were two different quantities that were tempered together, mixed. So "temperature" actually means mixture, not what we now mean by temperature.
The other thing that was very difficult to understand is that people thought that temperature was a thing, some sort of fluid that grabbed onto things. So let's say you had a steel rod that is hot: that steel rod kind of has this sort of invisible fluid that is heat. And they had good reasons to believe that it was an invisible fluid, because it could flow.
Let's say you could connect that rod to something that was cold and that cold thing was going to warm up because that fluid was going to be flowing in that direction and so forth. So they thought that it had a physicality as a thing.
A brilliant Englishman, Joule, basically figures out that that is not the case, that temperature is not a thing. And the way they do it is through this observation. I don't know if you know how cannons used to be built.
So if you just grab a piece of sheet metal and you make it into a cylinder and you try to make a cannon out of that, the exact moment that you shoot the cannon, it's going to open up like a flower in a cartoon, like a Looney Tunes type of situation.
So what they would do is make these solid cylinders of metal and bore a hole in them to create the cannons, and boring those holes released an enormous amount of heat.
So Joule thought: well, how come all of that heat is there? It's like an infinite amount of heat. If I can continue to bore a hole in a piece of metal for an infinite amount of time and keep getting heat out, then it cannot be a thing.
And that leads him to realize that temperature is actually something that has to live in things, but it's not a thing itself. It's related to the kinetic energy of the particles in the thing, but it's not a thing itself. It doesn't have its own particle. There isn't a temperature particle.
Temperature is kind of like a property that matter has, that holds on to things. Knowledge is similar, in that it holds on to you and to me and to the collective in order to exist, but it doesn't have a physicality in itself; it always exists in some sort of physical medium or substrate.
So, in that sense, it's always going to be physical. No matter how virtual it gets, it has maybe a different type of physicality. But even electromagnetic waves that are transmitting, you know, data from your Wi-Fi router to your laptop are technically a physical embodiment.
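For reference, the modern statement of the realization described above comes from kinetic theory (standard textbook physics, not something said in the episode): for a monatomic ideal gas, temperature just measures the average kinetic energy of the particles, with k_B Boltzmann's constant.

```latex
% Temperature as mean kinetic energy (monatomic ideal gas):
\tfrac{3}{2}\,k_B\,T \;=\; \left\langle \tfrac{1}{2} m v^{2} \right\rangle
```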
Now, I spoke with Professor Luciano Floridi a few years ago, and it was actually one of my favorite ever episodes of MLST. I think very highly of him, which is why we're going to show some clips of him in this show, because it's very apropos. But this is what he had to say about it.
Ontology, on the other hand, is how we structure the world, in the sense that we think that that's the way it is. With the kind of eyes we have and the kind of light around, those are the colors we perceive. And certainly a world full of colors is what I take the world to be. That's my ontology.
Re-ontologizing means changing some of that particular nature. Allow me a distinction; I hope it's not too confusing. Reality in itself, call it the system. The description of reality as we perceive it, enjoy it, conceptualize it, live through: the model of the system.
Ontology, to me, is the ontology of the model; it is not the metaphysics of the system. I hope I haven't made a complete mess here. Okay. So: metaphysics, the noumenon, the system, whatever the source of the data that we get. Fantastic. The data don't speak about the source; the music on the radio is not about the radio, but there is a radio, of course. The music is what we perceive; the music has its own ontology, structure, etc. The model is at that point what we enjoy. The digital revolution has changed the nature of the world around us, not metaphysically but ontologically, so re-ontologizing, because some of the things that we have inherited from modernity, a sense of the world and a certain understanding of the world, are now being restructured. So a re-epistemologizing as well of that world.
We go back to this temptation of talking about reality as if it were something that we need to grasp, catch, portray, hook with spears. When in fact the way I prefer to understand it is as malleable, understandable in a variety of ways, something that provides constraints.
It doesn't mean that you can interpret it in any possible way, but it leaves room for different kinds of interpretations. So the flow of data that comes from whatever is out there (and again, I'd rather be agnostic about it) can be modeled in a variety of ways.
One way, especially in the 21st century, given the technology we have, is to interpret that as an enormous computational kind of environment. It's perfectly fine, as long as we don't think that this is the right metaphysics; it is an ontology for the 21st century.
Now this is not relativism because on the other hand different models of the same system are comparable depending on why you're developing that particular model. And let me give you a completely trivial example.
Suppose you ask me whether that building is the same building. That question has no real answer, because it depends on why you're asking. If your question is asked because you want directions, I'm going to say: oh yeah, that's the same building. Is it the same building? Yeah, absolutely: go there, turn left, no traffic lights.
But if your question is about, say, the same function, well, as far as I know it's a completely different building. It was a school; now it's a hospital. Next question. So is it or is it not the same? That question is the mistake: an absolute question that provides no interface, what computer scientists call a level of abstraction, chosen for one particular purpose so that I can compare whether an answer is better than another. Let me crack a joke for the philosophers who might be listening. The ship: is it the same or is it not the same? Who is asking, and why? Because if it is the tax man, you're doomed, man. There is no way you can play any "oh, I changed every plank"; you're going to pay that tax. It's the same ship, I don't care. But if it is a collector, that ship is worth zero. You changed all the planks? You must be joking. It's worthless.
So, is it or is it not the same? It depends on why you're asking that particular question. Tell me why, and I can give you the answer. No why? In other words, no frame within which we have chosen the interface that provides the model of the system? No potential answer. So the question "is the universe a gigantic computer, yes or no?" is meaningless. Is it worth modeling the universe as a gigantic computer for the purpose of making sense of our digital life? Oh yes, definitely, because we are informational organisms. Aha, so that's the metaphysics? No, I mean that in the 21st century the best way of understanding human beings today is as informational organisms. Last century we thought of ourselves biologically, and that made much more sense then: a lot of water and a sprinkle of a little bit extra, and so on. Mechanism in its time, etc. Not absolute answers, not relativistic answers, but relational answers: the relation between the question, the purpose, and the actual answer. It takes three, not two.
So the computational model isn't literally true, but it's useful. The mistake is forgetting that it's a model. The early cybernetics guys, like McCulloch and Pitts, knew that they were working with analogies.
McCulloch and Pitts wrote their famous paper showing that neurons could theoretically work like logic gates. They weren't claiming neurons actually were logic gates; they were using it as a kind of functional description.
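To make that concrete, here is a minimal sketch of the kind of unit McCulloch and Pitts described, in the spirit of their 1943 paper rather than quoting it: a binary threshold neuron whose fixed weights and threshold reproduce AND and OR.

```python
# McCulloch-Pitts style unit: binary inputs, fixed weights, a hard threshold.
# The unit "fires" (returns 1) only when the weighted sum reaches the threshold.

def mp_neuron(inputs, weights, threshold):
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= threshold else 0

# AND gate: both inputs must be active to reach a threshold of 2.
AND = lambda a, b: mp_neuron([a, b], weights=[1, 1], threshold=2)
# OR gate: a single active input is enough to reach a threshold of 1.
OR = lambda a, b: mp_neuron([a, b], weights=[1, 1], threshold=1)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "AND:", AND(a, b), "OR:", OR(a, b))
```

The point is exactly the one in the passage above: this is a functional description of what a neuron could compute, not a claim about what neurons physically are.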
But somewhere along the way, the metaphor hardened. A lot of neuroscientists today don't say that the brain is like a computer. They say it is one and the metaphor became the thing itself.
Now Chirimuuta, borrowing from Whitehead by the way, said that this is the fallacy of misplaced concreteness. This is another one of those leaky abstractions I was talking about. By the way, there's a great book called The Brain Abstracted by Mazviita Chirimuuta.
I interviewed her recently, and she said that one of the most pervasive myths in neuroscience is that we use these leaky abstractions and idealizations to talk about cognition, and usually it's using the most recent technology of the time. So, you know, a few hundred years ago we were describing the brain in terms of pulleys and levers.
Yes, that's right. And then it was, you know, a prediction machine, a computer, and all this kind of stuff. At the end of the day, these are grounded things that we understand. They're really good models because we can both talk about computers. We both know what computers are, but the brain doesn't work like that in any sense.
Jeff Beck put it even more bluntly when we spoke. It will always be the case that our explanation for how the brain works will be by analogy to the most sophisticated technology that we have. How's that for a non-answer? Right.
So, you know, a couple thousand years ago, right? How did the brain work? It was like levers and pulleys, man. I mean, duh, don't be ridiculous. Why? That was the technology. At some point in the Middle Ages, it became humors, right?
Because fluid dynamics, the kind of technology that took advantage of water power, was the most advanced technology that we had. Now, the most advanced technology is computers. So, duh, that's exactly how the brain works.
Now, here's something that kind of bugs me, right? You go into any AI conference or you drink from the well of San Francisco by spending too much time on Twitter and you develop this mindset that AGI is inevitable.
You start feeling the AGI. And you'd be forgiven for thinking this, because I've been using Claude Code, and my god, I feel like there's been more interesting stuff happening in the world of software development in the last six months than there has been in the previous 20 years.
This technology is genuinely amazing, but it is automation technology. It's not really intelligence, which means it's only as good as your ability to specify, supervise, and delegate to the system. But it is absolutely amazing.
But why do we have this view? It's not an argument that AI is impossible so much as: why does it seem so possible, so inevitable, to people? What I'm arguing is that if you look at the history of the development of the life sciences and of psychology, there are certain shifts towards a much more mechanistic understanding of both what life is and what the mind is, which are very congenial to thinking that whatever is going on in animals like us, in terms of the processes which lead to cognition, they're just mechanisms anyway.
So why couldn't you put them into an actual machine and have that actual machine do what we do? With all of that mechanistic history in the background, AI could seem very inevitable. But if that mechanistic hypothesis is actually wrong, then these claims for the inevitability of a biological-like AI would not actually be well-founded.
But we could be subject to a kind of cultural historical illusion that this is just going to happen. Cultural historical illusion. I've been thinking about that phrase. Maybe our confidence says more about what we've inherited intellectually than about how minds actually work.
Now another thing that Mazviita has inspired me to think about a lot is the difference between prediction and understanding. Indeed, when I interviewed the Nobel Prize winner John Jumper at Google DeepMind a couple of months ago, this was the question I asked, and he had quite an interesting way of distinguishing those two things.
It's almost like, at any point, it's learning how to refine and optimize the structure. Okay. So I think we should distinguish three things: predict, control, understand. So predict means that you say, I'm going to do a thing; what will be this value on my machine? What will appear on my computer screen in the future? That is predict.
Control is I want to measure this thing in the future and I want it to come out 17. Right? That's control. Understand is a lot like predict except there's a human in the loop.
Understand means that I have such a small collection of facts from which you can predict, and you can do it with facts that I can communicate to another human in this compact form that fits on an index card. That's almost understanding. And so I think these machines let us predict, they let us control; we have to derive our own understanding at this moment. We can experiment now on the artifact. We can look at the 200 million predicted structures, not just the 200,000 experimental structures, in order to help us understand, but it doesn't do the act of understanding for us. It does the act of predicting and maybe controlling.
The problem is these two goals actually pull against each other. I think we're at this moment in science now because we have these tools, like LLMs for language, and convnets in visual neuroscience are being used as predictive models of neuronal responses, which don't have that mathematical legibility that people aspired to have when I was trained in the field. And so you have this possible conflict: you can either pursue the goal of understanding or you can pursue the goal of prediction. But it seems like you can't have both at the same time.
Now, on the one hand, people go into neuroscience because they want to understand the mind. They want that feeling where something clicks and you suddenly get how it works. That's what drew Chirimuuta to the field in the first place. That's what keeps people up late at night reading papers.
But on the other hand, there's just prediction, building tools that work. If your model forecasts data accurately, maybe you don't care whether it's true in some deeper sense. So, LLMs are getting unreasonably good. They are winning math Olympiads. As of last week, actually, GPT-5.2 apparently discovered a new theorem; well, it solved one of these problems that Terence Tao had on his website. This is insane, but does it actually understand anything? And does it matter whether it does or doesn't, as long as it works?
Chomsky had an amazing commentary on this a few years ago when we spoke, and I think it's still as relevant today as it was then. Suppose that I submitted an article to a physics journal saying: I've got a fantastic new theory, and it accommodates all the laws of nature, the ones that are known, the ones that have yet to be discovered. And it's such an elegant theory that I can say it in two words: anything goes. Okay, that includes all the laws of nature. The ones we know, the ones we do not know yet, everything. What's the problem?
The problem is they're not going to accept the paper. Because when you have a theory, there are two kinds of questions you have to ask. Why are things this way? Why are things not that way? If you don't get the second question, you've done nothing. GPT-3 has done nothing. Classic Chomsky.
So maybe theories are overrated; maybe prediction is enough. But Chirimuuta worries about that trade-off, right? When you give up on understanding, you don't know when your tools will break. You're stuck with black boxes. They work until they don't, and you won't see it coming when they don't.
I spoke with philosopher Anna Ciaunica about this recently, and she had a beautiful way of describing it. Suppose you want to climb a mountain, and you arrive at the top of the mountain. What's the argument to say that it's only when you're at the top of the mountain that you know what climbing the mountain is?
I mean, you cannot really arrive at the top of the mountain if you don't take the first step. Every single step matters. The first step is as important as the last one. Actually, we are more conscious when we take the first steps in climbing the mountain than when we are at the top and have all these full-blown capacities, and sometimes we shoot ourselves in the leg.
And of course, I brought this up when I debated Mike Israetel. The biggest misconception in all of AI, what all of the folks in San Francisco believe in, is this philosophical idea called functionalism: that we're walking up the mountain, and when we get to the top of the mountain, we have all of these abstract capabilities, like being able to reason and play chess. But that disregards that the path you took walking up the mountain is very important.
And not only the path, but the physical instantiation, the stuff the mountain is made out of. So Mike's view is that if something produces intelligent outputs, why does the substrate matter? Silicon, neurons, it doesn't make any difference. It's all information processing.
Needless to say, he pushed back hard. You can climb mountains. You can touch stuff. But you never truly, in an embodied way, experience anything if you push on that philosophical button hard enough, because you can always abstract out to: these are just neural network pings from groups of neurons.
And so you don't truly, deeply know anything in some kind of weird philosophical way, because it's just neural network calculus all the way down. You know, you climb the mountain, that's cool. A helicopter can climb the mountain much better than you. It does not have the ability to reason abstractly and plan and predict things at all.
So, it's possible that what you can do or how you can function isn't the whole story. Or maybe if that's wrong, we should just start using helicopters. So, individual minds are limited. But what about collective minds? What about humanity as a whole?
We've built this incredible thing over centuries, right? Libraries, universities, Wikipedia, an expanding store of knowledge that no single person could ever hold. Doesn't that escape our individual limitations?
So there's this dream of universal knowledge, accessible anywhere, perspective-free. There is a tacit and implicit idea there that knowledge is something that something can have, while my view is that knowledge is a much more collective phenomenon.
And it's also not something that you can put into something like a book. In my opinion, the book doesn't have knowledge. The book is an archival record of some ideas that I was able to put together in a nice structure. But you cannot have a conversation with the book. Knowledge can only go to work when it's embodied. You cannot throw a bunch of engineering manuals and cement into a gorge and expect to get a bridge, because the books don't have knowledge. Teams have knowledge. Organizations have knowledge.
Yes, knowledge is social. Communities accomplish what individuals can't. But collective knowledge is still knowledge from somewhere. This matters, right? It's shaped by particular questions, particular tools, and particular blind spots.
I think one of the interesting things about this phenomenon, not only of LLMs but of the internet as this supposed repository of all human knowledge, is that it goes along with this idea that knowledge doesn't have to be perspectival. It doesn't have to be of a place, of a community. It kind of can float free of the