
by Machine Learning Street Talk
This summary breaks down why Dr. Mike Israetel believes machine intelligence will surpass human capability through sheer scale rather than biological mimicry. It is essential for builders who need to understand the transition from "stochastic parrots" to agentic problem solvers.
Dr. Mike Israetel joins Tim Scarfe to argue that the path to superintelligence lies in data density rather than biological embodiment. While Scarfe defends the necessity of physical grounding, Israetel posits that a brain is merely a lossy data center. The conversation shifts from the mechanics of neural networks to the inevitable arrival of a post-suffering paradise.
"Artificial super intelligence is linguistically, mathematically, and scientifically 100 times the power of a human."
"I think we can get to a place where humans have joy 24/7."
Podcast Link: Click here to listen

So if you had a simulation of fire in a computer, or of a stomach, it wouldn't digest. Fire wouldn't get hot. Water wouldn't get wet. It would digest. We can't even... I mean, this is just... we can't even be debating this. I can, and I will. Watch this. When you saw The Matrix, right? You know that the Architect said the first version of the Matrix was too perfect. There weren't any problems.
[Speaker Name]: Yeah. So like I'll say that when I saw that, my inner philosopher was like, you don't actually want to live in that world, do you?
[Speaker Name]: Desperately everybody does. It would be horrible. It would be the best thing ever.
[Speaker Name]: Well, sorry. What? By definition, by the way, what would be horrible about it?
[Speaker Name]: Exactly. Don't... Yeah, but don't you think that life is about suffering?
[Speaker Name]: No, no. Doesn't that give you perspective on joy?
[Speaker Name]: No, no.
[Speaker Name]: Going to go and pick up Dr. Mike.
[Speaker Name]: Great to meet you, Mike.
[Speaker Name]: Likewise, how are you?
[Speaker Name]: Very good. Very good. Yeah, it was good. Nice meeting you, Mike.
[Speaker Name]: Nice to meet you.
[Speaker Name]: Hey guys, Mike Israetel here. I'm, by training, a sport scientist, and I run RP Strength, a fitness company, and I lift weights and my head looks strange. People make fun of me in the street. They laugh and they throw, usually, rotting fruit. It's unfortunate. Sometimes a vegetable. And I have a vision of the world in which more intelligence is almost always better, in which cooperation is a good thing, in which we build a future for every single human that's orders of magnitude better than it is now. And I think that would be a great thing in almost every regard. And the three reasons that you should be watching this podcast are that if you want to see me completely out of my depth, embarrass the hell out of myself with terms I don't understand as they're rendered out of my mouth, get pushed hard and be corrected on a variety of things, and also have some lively debate, and maybe make some future predictions that get us both in trouble, I think those are some great reasons to tune in.
[Speaker Name]: Uh, so Mike is going to demonstrate the first exercise, which one of you will do. And while you're doing this one, the other guy will do the other. So Mike is going to bend forward, hinge at the hips, torso down as deep as you can to the floor, almost even touching if you can, and row up. And then again, and that's how they should look. No swinging into the upper body, nothing like that. And then once you're done with that, you'll move over to the next exercise, which your partner will be doing. So you'll switch. Just push-ups. So, we like to do it kind of dive-bomb style, where you keep your belly button really high and you push your chest to the ground and come up. All right, keep it cranking. Nice and easy. Oh, yeah. Absolutely. You guys want to work out until you can't do any more reps without cheating? No cheating. Tim, you're doing a great job.
Love the control. These are dang good push-ups. These are excellent reps, Marcus. Keep that back flat. Keep your chest high, but your arms stretched. There you go. Oh, just like that. Good. Now you're getting a little bit of hamstring involvement, too. Now switch. Switch. No rest. Nobody said rest. Now you get to do the pulling musculature, Tim. Yes, rows. So, bend over. Tummy up, chest up. Go deep. Yes. Deeper. Lean forward even more. Remember, the more reps you do, Tim, the more Marcus suffers, which is what he wanted. There you go. Good job. Machines here. Let's go. Don't you quit. Are you out of your mind? All right, one more round. Switch. Switch. Last round. Marcus, get back down and start doing push-ups. God damn it, what is wrong with you? Three more. Let's go. Three more each. One. Super deep. Two. Last one. Three. Yes. Beautiful. We did our workout earlier.
[Speaker Name]: Am I the least technically qualified guest you guys have ever had?
[Speaker Name]: I must be top five. Like, you ask a very distinguished exercise scientist, "Hey, how many reps should I do on curls?" And they're like, "I just study one pathway of muscle hypertrophy and I know it real well. I just go to my job, do experiments, and I don't lift weights or know much about that." And you're like, "Oh, the generalization did not go anywhere." I'll be here defending San Francisco, which is an odd statement to make. I have a very unique take. I think ASI is coming in '26, '27, and AGI is coming in '29, '30, maybe '31.
[Speaker Name]: Yeah, you spoke about this with Lon. Why is AGI after ASI?
[Speaker Name]: Because artificial general intelligence has, depending on who you speak to, either no good definition at all or multiple definitions along a spectrum, somewhere between shitty, like the various tests of whether, if you can text something, it is truly intelligent, all the way through to, I think, very good. But artificial general intelligence, in some of the better definitions, typically encompasses all of the kinds of intelligence and abilities that human beings are able to put out. And honestly, from a vibe perspective, if you say, "We've really cracked AGI, but your machine can't do some kind of cognitive work that a human can," you haven't cracked AGI in any meaningful respect.
But humans have some very interesting abilities that we just don't have the reach for yet, from kind of an integration perspective. For example, smells and tastes. I mean, nanotech has a bit of a way to go to get a machine to smell and taste in a meaningful way. And so something like being a chef, being able to cognitively rank tastes and smells and spectra and functionally employ them in real life to make food, that's probably not going to happen in '26, '27, right? But then we look back. And so AGI, because it is human inclusive, and that means we have to replicate every kind of human intelligence before we can say confidently we have AGI, from my perspective, all the following are dilettante takes, by the way, then, you know, AGI is actually pretty tough. It will happen, I think, but tough.
Now, artificial super intelligence can be thought of in many ways, but I think once you have something that on many domains of human ability, not all, is radically more intelligent, it functionally, vibe-check-wise, is super intelligence. For example, something that, linguistically, mathematically, scientifically, in three-dimensional object rotation, world model depth, recursion ability, is 100 times the power of a human. You can tell me it's not super intelligent because it doesn't know how to smell or taste, it's never seen the world with its own eyes, even though it's scanned all of YouTube and integrated all of YouTube's visual data into an integrated 3D world model. Can I swear a little bit, or is that not a good idea? It's a damn artificial super intelligence then, whatever. Right.
And so I think, because in many respects, like, so I work with a few AIs, but I love OpenAI's product suite. I pay the infinity amount per month to get the pro subscription for GPT. And GPT-5 Pro is in some respects not as smart as me yet, mostly because it can't cross-link distant concepts together and integrate them as a whole as well, because that's not really what it was designed to do, because that would burn a shitload of tokens for an ROI that for most people is meaningless. But boy, does it do black hole physics better than me, and almost everyone, and recently better than every scientist, because it's got novel discoveries now. And so you take the abilities of just GPT-5, right? And in the back rooms at OpenAI, they have something that just beats the hell out of GPT-5, and it's currently getting fine-tuned or, you know, post-trained, whatever. That thing's probably 10 times as smart. That's super intelligence. Also, factual knowledge about the world.
I mean, what percent do you think you know of total data, total real things about the world? How tall are people? What do they look like? What's the capital of France? That stuff, compared to just GPT-5. I mean, it's some infinitesimally small fraction. On knowledge base alone, that is already super intelligence. I mean, put GPT-5 against any Jeopardy winner and you get the world's biggest creaming all of a sudden. That sounds like one of my older films. Am I right? So in many respects I think in 2026 we are going to see AI systems that 10x or 100x human abilities, or 2x, or anything in between. And so once you get to maybe two-thirds of all cognitive abilities, maybe 75%, maybe 80%, where demonstrably machines are just categorically superior by orders of magnitude, that is super intelligence. So, super intelligence two ways: on vibes and heuristics, by what I just described, and also in effect.
I mean, if you have a smart enough AI and it's crapping out novel hypotheses once an hour, and it takes scientists weeks to grind through them, and it starts getting a 60%, 80%, 90% hit rate, I'm like, it understands the cell and it's giving us novel disease cures every week. That is super intelligence. And so super intelligence has to be measured by two things. One is under-the-hood cognitive science abilities, but the other is real-world effect. Because here's another really big thing, super quick: you don't have super intelligence, really, in evidence truthful enough to convince most skeptics, until you have the fruits of its labor. It's like someone tells you they're really rich. You're like, "Hey, can you buy me a flight to Dubai?" They're like, "Ah, money's tight right now." You're like, "I believe you, but I don't." But if you know Elon Musk, you're like, "Hey, can you buy me a flight to Dubai?" He's like, "Yes, I just bought you Dubai Airlines." You're like, "The airline?" He's like, "Yeah, I just acquired it." You're like, "Right. Okay. Wow. That's a demonstration."
And so I think, about 2026, I'm very confident, though never certain from a scientific perspective, on the CIA probability scale "extremely likely," 97 to 100%, that sometime in late 2026 we are going to start opening the cornucopia of machine intelligence, such that people understand that at least parts of machine intelligence are absolutely superintelligent. And because of exponentials, by '27, '28, '29, things are going to get completely insane.
[Speaker Name]: There is so much to unpack there, and I really like you, Mike, so I don't want to disagree with you on my podcast, but I have... No, please, I love disagreement. I have to, and I think I disagree with almost everything you've just said, so we can take it one at a time. So first of all, the definition of intelligence. You were talking about knowledge as well. I think that knowledge is non-fungible. In fact, one of the most pervasive critiques of artificial intelligence, going all the way back to the 70s, there was Dreyfus and there was, you know, John Searle. It's this grounding problem. Stevan Harnad spoke about this grounding problem, and essentially the enterprise of intelligence is the accumulation of information which is relevant for adaptivity around an environment. So the product of our knowledge is all of this: it's Wikipedia, it's the hard drive in the sky, it's our culture. It's all of the things that we have acquired.
But the problem is there is a gap between syntax and semantics. So if an alien read Wikipedia, they might read the article about Trump or about America or something like that. And that is not the same as the embodied experience of being there. These are just pointers. So semantics is about the connected, enactive, embodied graph of actually experiencing things through time. And the biggest misconception in all of AI, what all of the folks in San Francisco believe in, is this philosophical idea called functionalism. And the best way to describe it is this analogy of walking up a mountain. I spoke to a philosopher the other day, a wonderful lady called Anna Ciaunica, and she said that we're walking up the mountain, and when we get to the top of the mountain, we have all of these abstract capabilities like being able to reason and play chess, but that disregards that the path that you took walking up the mountain is very important. And not only the path, the physical instantiation, the stuff that the mountain is made out of, because it's a reasonable argument, in my opinion, that intelligence is a property of adaptive matter. It's an extensive property, much like temperature. Temperature is a coarse graining. It's an effective theory to describe the details of the molecules moving around. We screen that detail off and we call it temperature. And I think intelligence is like that. And I think that knowledge is quite similar. I think knowledge is actually a physical causal graph that is enacted over time. And it cannot be abstracted in the way that you're describing. The reason the abstractions work is because they are pointers to our embodied experience. So they make sense to us, but on their own they don't make any sense.
[Speaker Name]: I like that take. I'll push back. Um, intelligence. My favorite definition is the most basic definition of intelligence: the ability to solve problems.
[Speaker Name]: Yeah, I hate that. And you find out if you're truly intelligent if you can solve problems of any degree of complexity. I would actually say responding to one stimulus in a cogent manner, at least knowing how to respond to one stimulus, input, output, is the beginning of intelligence. All intelligence above that is just stacked layers of complexity. So when you say embodied, there's a lot there, for sure. But also, the only reason that we call humans intelligent is because we have a representational, abstracted neural network called the brain. Your brain doesn't actually have anything in it that's magical. It's just networks pinging off one another recursively. And so your brain is, in very many deeply important respects, exactly as abstracted and unrelated to reality as a data center. And so you can climb mountains, you can touch stuff, but you never truly, embodied, experience anything, if you push on that philosophical button hard enough, because you can always abstract out to: these are just neural network pings from groups of neurons. And so you don't truly, deeply know anything in some kind of weird philosophical way, because it's just neural network calculus all the way down. And so whether it's you or a machine brain, like what they have in Optimus, for example, it's representational.
And so intelligence is always, seemingly, going to be something that is lossy to some extent. It is a compression function. That's a real rude way to say it, because it's a very impressive one. But, you know, you climb the mountain. That's cool. A helicopter can climb a mountain much better than you. It does not have the ability to reason and abstract and plan and predict things at all. And you can get really amazing high-def video and samples of rocks and the mountain and everything. But if you don't have an analyzer, a system processor, an integrator, and a recursive function, you don't actually model that data in any meaningful way, and it just sits there as bits on a computer. And so when you say that embodiment is a prerequisite of intelligence, I would push back and ask you to answer this: if we have somebody who has read every single physics book, especially, let's say, particle physics, and they're so good at it that they can tell the computer system that aligns the particle beams at CERN to do a job good enough to produce real-world data, are they truly intelligent about particle physics?
By your definition, and I'm being a little bit facetious for comedic effect, the answer is no. Because if I put you in the particle accelerator, you get torn to shreds, and you're like, "Oh, JK." It's actually impossible for humans to perceive particles directly with any sensory organ, because they're at Planck-length-type scales. You can't see it. Vision is not a cogent concept at those lengths. So that goes straight to hell. And so if that scientist is so adept that he can actually run CERN, but it's all neural network modeling in his head, he's never actually had any real experience with a particle collider. No human has. You can't open the latch when the thing is on. It'll shoot neutrinos at you. So if that's true, and we say, "Okay, okay, he actually knows things, he actually has intelligence," why doesn't GPT-5 and its network clusters actually have an understanding of the real world? It's never seen, but the guy's never seen a particle in his life. Zero scientists have ever seen a neutrino, ever. They have no perceptual experience of a neutrino. It's purely hypothetical. But they make real-world predictions, because the neural network and all the attendant structures that create what we call intelligence and understanding, their vector arrangements represent, decently well, always an approximation but a damn good, predictively valid one, real things that are happening in the real world.
And if GPT-5 understands how humans act to, like, 98%, but then it was embodied in a robot body and had vision, it would be like, "Oh, there's the rest of that 2%." I think 98% of the way to intelligence is the same thing as back when I was a professor: if one of your students gets a 98% on an exam, is he pretty good at understanding the material? Yeah. Now, just off camera over there is my best student of all time, Jared Feather. He never missed a single point on any of my exams. 100%. Well, that just means the exams weren't hard enough to really push his understanding. Is Jared the best student I ever had, the best trainer? Yes, for sure. But does that mean when kids score 95 or 98% on my exam, I'm like, never let this kid train you, he doesn't really know the real world, it's just abstraction? No, no. They know real things in the real world and can really help you get fit, even though they might themselves have never been able to get very fit, because they know the actual structure out there. Purely representational neural network, but it passes the test: when they see the real thing, they go, I know what a dumbbell is, I know what a human is, I know what vectors are, I know what forces are, and I'm going to have them do this, and it works.
[Speaker Name]: Yeah, a human would have a shared understanding, because if you think about it, the representations in our brain are the product of evolution. So we've been evolving for billions of years, and I know you read that Ray Kurzweil book, and it's a beautiful recount of how we had these kind of exponential increases. So, you know, we had genetic evolution, which is very slow, and then we had this ontogenetic, phylogenetic hacking, where we developed nervous systems and brains, and we developed culture, and we started, you know, evolving at light speed, and so on and so forth. At the end of the day, the product of that knowledge acquisition, that process of intelligence, is all of these representations, and everything in language is a representation. For example, algospeak: that is a term for the linguistic evolution of our language. There's this term "unalive," so to get around the social media filters we now say "unalive someone," and that is an example of linguistic evolution that ChatGPT is not capable of doing. Because there's a big difference between the process... Absolutely not. You wouldn't understand what unaliving is. We'll get into this, right? But understanding is not about being at the top of the mountain. Understanding is about the path to the mountain.
So, we understand things because we are physically embodied, right? You were talking about, oh, you know, we can't observe the particles. It doesn't matter. We are in the causal graph of the particles. You know, they affect us even if they're inside their collider. You know, radiation. You're not made out of particles. That is... Yeah. You're not made out of neutrinos. They're passing through you. The neutrinos are completely abstracted as far as we're concerned. They actually pass through the earth the entire time. So there's all kinds of theoretical physics that has nothing to do with you, really, in a causal graph. It doesn't affect your behavior. It hasn't really meaningfully affected any of your decisions. We'd have to abstract like 18 layers and do quantum mechanics that none of us... I don't know. You're probably smart enough to get it. I'm not. And so that shit's really in the real world, and we're purely abstracting to it. But because knowledge is actually representing the real world, it works. I'm not so sure, if you could explain what it is exactly you're getting at, what we get out of embodied experience, and how libraries and education curricula and schools and reading a bunch of books can possibly give you knowledge if none of that is any kind of experience that is embodied. Like, how does ChatGPT know the vibe of Shinjuku in Tokyo and accurately render to me what the vibe feels like if it's never been to Tokyo? Same way a travel agent does. But it's real, actual things they know, so that when you go there, you're like, "Oh, it was pretty much correct." How does that work?
[Speaker Name]: Well, first of all, we should distinguish, you know, knowledge and understanding. Many people have different definitions of this. In the cognitive sciences, people talk about, you know, factual knowledge, which is just states of affairs. There's procedural knowledge, knowing what to do. There's conceptual knowledge, knowing how to think. And roughly speaking, I make the designation that there is knowing states of affairs, and understanding is the ability to generate new knowledge. So if you have an understanding of the world, which means you know it at an abstract level, you can do this Lego building in your mind and you can kind of create new knowledge. But the simple grounding problem is something a little bit different. Even with the neutrinos, we use this thing David Krakauer calls the principle of materiality, which is that we use the world to think, and many of the abstractions that we've converged on over, you know, millions of years of evolution exist because we have this shared physical world. So you and I both understand similar abstractions, which are derived from our sensorimotor circuits and from language and so on. And it means that we can talk at a very abstract level, and because we're both players in the same game, we understand each other.
And the parlor trick, and we won't go over Searle's Chinese room argument, I'm sure you're familiar with it, he was saying exactly the same thing: that basically a computer can tell you all of these things as if it understood, and you understand because you actually have the feeling. You actually know. Even though you've never been to Japan, you've experienced many things that are like what people who go to Japan experience. So you have some kind of a shared understanding. There is this intersection in your understanding tree that something like ChatGPT doesn't have. Now I want to make a distinction as well. I'm really impressed with language models. You know, Opus 4.5 came out yesterday. GPT is not a language model. It has been multimodal for two years now. It's still known as a language model. It's a self-attention...
[Speaker Name]: Right. But we have to concede the fact that it's an omni-model. It's been multimodal for a long time.
[Speaker Name]: Yeah. It's multimodal, but it's still a self-attention transformer. It has a couple of traits. The transformer part is true, but just calling it a large language model, I think, sells it short of a huge fraction of its capability. I mean, it does visual reasoning. It doesn't make any difference. Okay. Yeah. So it's a self-attention transformer. I mean, on the most recent show we just published, I interviewed the guy who invented transformers, actually, but yeah, we just tokenize different domains. We learn the statistical distribution. We do this next-token prediction, you know, within a fixed context window. So it's distributional matching, but then there's some other stuff on top: there's RLHF, and there's also this reinforcement learning with verifiable rewards for verifiable domains, and this is why they're doing really, really well on reasoning tasks when you have a verifier. So the ARC challenge is a great example of this. I'm a big fan of that. You know, François Chollet is a friend of mine, and he designed this challenge to show abstract generalization. So you only have a few examples to learn from, and he predicted LLMs would be terrible at this, and unfortunately for him the LLMs are now getting incredibly good at it. And that's because, in my opinion, the LLMs don't really understand in this grounded way that we're talking about. But you can fully describe an abstract domain, like a 2D grid problem. You don't need to have any grounded knowledge. You can completely understand it.
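To make the "learn the statistical distribution, then do next-token prediction" description concrete, here is a minimal sketch: a toy bigram model over a made-up corpus, not the self-attention transformer being discussed. The corpus, the single-token context window, and the sampling scheme are all simplifying assumptions for illustration only.

```python
# Toy sketch of "learn the statistical distribution, then predict the next token".
# A bigram model with a one-token context, purely illustrative; real LLMs like GPT
# use self-attention over thousands of tokens, plus RLHF / verifier-based RL on top.
from collections import Counter, defaultdict
import random

corpus = "the cat sat on the mat and the dog sat on the rug and".split()  # made-up data

# "Training": count how often each token follows each preceding token.
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_token(prev: str) -> str:
    """Sample the next token in proportion to its observed frequency."""
    dist = counts.get(prev)
    if not dist:  # unseen context: fall back to a uniform draw over the corpus
        return random.choice(corpus)
    tokens, weights = zip(*dist.items())
    return random.choices(tokens, weights=weights)[0]

# "Generation": repeatedly feed the model its own output.
token, out = "the", ["the"]
for _ in range(8):
    token = next_token(token)
    out.append(token)
print(" ".join(out))
```

The RLHF and verifiable-reward training mentioned above are additional stages layered on top of this base objective; the sketch only illustrates the distributional-matching core.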
[Speaker Name]: What do you mean by grounded knowledge? What is this?
[Speaker Name]: This is grounded knowledge, right? You know, you're touching a chair.
[Speaker Name]: Touching a chair. So, that's not knowledge. That's neuronal perception arcing up into your brain and back down. And so, what are you learning about the chair when you're touching it like that? This is the other issue that I have with your perspective: you have what I would call a corticocentric view of cognition. Absolutely. Yeah. Exactly. So, this is another one of those leaky abstractions I was talking about. By the way, there's a great book called The Brain Abstracted by Mazviita Chirimuuta. I interviewed her recently, and she said that one of the most pervasive myths in neuroscience comes from using these leaky abstractions and idealizations to talk about cognition. And usually it's using the most recent technology of the time. So, a few hundred years ago we were describing the brain in terms of pulleys and levers. Yes, that's right. And then it was a prediction machine, a computer, and all this kind of stuff. These are grounded things that we understand. They're really good models because we can both talk about computers. We both know what computers are. But the brain doesn't work like that in any sense. And a great example of this is knowledge. You as a personal trainer know this, right? I'm sure in the past you've trained folks in the gym, and what you've probably realized is that in your mind you've got a distilled, abstract, beautiful understanding. You've thought about it. You've distilled it. You could write it down in a book, and you probably think all I need to do is just tell people: I've done all of the thinking, I've figured all of this out, I just tell them. And what you learn, to your chagrin, is that there is no substitute for experience, because part of knowing is actually doing. It's the pure experience of lifting the weights, feeling the sensations in the body, going through that process. That's why knowledge is non-fungible.
[Speaker Name]: I've been fairly impressed, incrementally, with modern AI's ability to render pretty good recommendations on exercise and sport science. And I'll push back on one of those things. I know a lot of people, great people, who have a lot of what you would call embodied knowledge about training. And because they haven't been interested in or capable of doing the abstraction to distill principles, they're not really that great at training other people. They know what works for them, and they know scarcely even that, and they're just not good at generalizing because they've never abstracted out the concepts. I've also had many discussions with some of my very smart friends. One of my friends graduated from Harvard, master's in data science, worked for Apple and a bunch of other AI companies, and his ability to pick up how to do an exercise incredibly well just from discussions of principles and then trying it like twice was remarkable. He's instantly one of the best trainers I know in the area. One shot. Why? Because Jared and I can distill for you like five principles of how to do an exercise well, which are geometrically not complex, heuristically very cogent, and you will instantly be good at teaching people how to exercise without ever doing the exercise. Is some nuance missing? Absolutely. There's some stuff, and we can abstract how it works, that you have to do to actually go, oh, I see, moving the hips back like this. But someone would be able to describe that to a computer in vectors of how hips should move back in a squat versus just down. So I'll tell you this: if I were to train a model, and I'm not currently doing this and have no plans to do so, by giving it a thousand samples of a squat done right, a thousand samples of a squat done wrong, and about 10 rules of how to squat, it would instantly be 99th percentile at teaching people how to squat with zero grounded experience.
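As a purely hypothetical sketch of the thought experiment just described (a thousand good squats, a thousand bad squats, plus a handful of rules), here is one way the recipe could look in code. The pose features, thresholds, and synthetic data generator are invented for illustration; none of this reflects Mike's actual methods or real biomechanics data.

```python
# Hypothetical sketch only: classify squat form from labelled examples, then layer
# a few plain-language rules on top. Feature names, thresholds, and the synthetic
# data generator are assumptions made up for this illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def synth_squats(n: int, good: bool) -> np.ndarray:
    # Hypothetical pose features per rep: [hip_depth_deg, knee_cave_deg, back_round_deg]
    centre = [110, 2, 5] if good else [70, 15, 25]
    return rng.normal(centre, [10, 3, 5], size=(n, 3))

# "A thousand samples of a squat done right, a thousand done wrong."
X = np.vstack([synth_squats(1000, True), synth_squats(1000, False)])
y = np.array([1] * 1000 + [0] * 1000)  # 1 = good form, 0 = bad form

model = LogisticRegression(max_iter=1000).fit(X, y)

# A few of the "rules of how to squat", expressed as simple heuristics.
RULES = [
    ("squat deeper: get the hips below parallel", lambda f: f[0] >= 100),
    ("keep the knees tracking over the toes", lambda f: f[1] <= 8),
    ("keep the back flat", lambda f: f[2] <= 12),
]

def coach(features):
    """Return a verdict from the learned model plus rule-based coaching cues."""
    verdict = "good rep" if model.predict([features])[0] == 1 else "needs work"
    cues = [msg for msg, ok in RULES if not ok(features)]
    return verdict, cues

print(coach([72.0, 14.0, 20.0]))  # e.g. ('needs work', ['squat deeper: ...', ...])
```

Whether such a model would really reach the 99th percentile of squat coaching is Mike's claim, not something this sketch demonstrates; it only shows that the recipe he describes, labelled examples plus explicit rules, is straightforward to implement.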
And I would also say what we consider grounded experience is mostly a visual data stream. Like, yeah, touching stuff is cool and it gives you something, but, like, I'm not letting a computer have sex with me until it's had sex with test animals in the lab. That came out all wrong. But a lot of our data is visual. A huge, huge fraction of it. I suspect that if you let a neural network train on all of YouTube, which I understand isn't happening because there's, like, copyright, which drives me insane, if you just let an algo eat all of YouTube, which is a preposterous amount of data, it would know more about how the world looks in real life than any extant human, period. Because there's more visual data on YouTube than I have collected with my own eyes by, I don't know, 10, 12, 50 orders of magnitude, something like that. And so, zero embodiment. We did the embodying for it by putting camcorders up to people's faces while they throw up from drinking challenges, and then downhill skiing, and then pictures of microbes. If you really train a visual model, and even better, a 3D relational model, on YouTube alone, you will get a more grounded understanding than your eyeballs, which by the way are also just cameras, can ever reference to your brain, which is also a computer. Period. The brain is a computer. Is it a different type of computer than a CPU? Yes. Is it a different type than a GPU? Yes, but closer. Are we going to replicate exactly how the brain works computationally? Yeah, I'm sure at some point. But I suspect we won't need to, because there's no good reason to believe that brains are, like, super super good at thinking. They're just evolution's best crack at it. You know, like a tank isn't a walker. Are you a Star Wars fan?
[Speaker Name]: No.
[Speaker Name]: Do you know Star Wars? Do you know what an AT-AT or AT-ST is?
[Speaker Name]: Okay.
[Speaker Name]: You know those dog looking big walker things that shoot lasers out of their faces? Like what's better that or a modern battle tank?
[Speaker Name]: Well, modern battle tank by a long shot.
[Speaker Name]: Why? A battle tank sits like 8 feet off the ground. It can hide in trees and it can shoot like 2 miles out. An AT-AT walker stands above the tree line, and you just shoot one leg and it falls. Why the hell would you do that? Well, it's a pretty decent attempt to replicate animal locomotion. But the assumption that animal locomotion is really the best way to do it is wrong. So just the same way as a rocket or a supersonic aircraft is categorically better at flying than a bird is in almost every respect, AI and machine intelligence is going to surpass human intelligence and real, deep understanding of the real world just by bypassing our architecture, and then later being so smart that it can come around, scan the human brain, and be like, oh, that's how they were doing it. Oh, that's kind of cool. I don't think there's a huge ROI in what it's going to get out of that, because it's going to be way ahead. But basically, suffice it to say, I think that what we assume we're getting from sensory experience is just a few, I don't know, gigs or some number of bytes of data. Like, I've seen a lot in my time, but it's also, by the way, wildly abstracted. If you ever think about memories you've had from childhood, and you try to really parse, how did this dresser look, what was my mom wearing, you realize two things. One, you have precious little actual visual data; it's like a 10-second-long video, maybe, at any kind of fidelity, and the fidelity sucks. And two, you're actually hallucinating a lot of that. Like, you're just pretending in details. If you let an AI train and eat all of YouTube at 4K and really develop the model from that, it would see and understand the visual three-dimensional space around it, and the world, at a level of embodiment that shits on all of us instantly. That is my conjecture.
[Speaker Name]: I understand your perspective. I mean, first of all, you were talking about, you know, doing personal training. There are folks like yourself, and I count myself like this as well: even when I'm doing editing and post-production, I really like to build theories about things. I think very deeply about things. I come up with abstractions, and I think they're very fruitful for me, and they're not for other people. And this is the beauty of our collective intelligence, because some people just feel in the moment. I mean, for example, when I see my personal trainer, she's called LRA. Shout out, she's probably not watching this, but shout out anyway. I swear to God, I've been training and eating well. You just haven't heard from me in