
By Milk Road AI
Date: [Insert Date]
Quick Insight: AI is rapidly driving an economic singularity, pushing the cost of intelligence, labor, and energy towards zero. This summary unpacks how this hyperdeflation will reshape industries and what it means for the future of human-AI economic interaction.
The future isn't just coming; it's already here, and it's weirder than sci-fi. Dr. Alexander Wissner-Gross, a triple-major MIT grad and author of The Innermost Loop, joins Milk Road AI to dissect the economic singularity, arguing that AI is not just a tool, but a fundamental force set to redefine our entire economic structure.
Once you can drive energy and intelligence and labor all to near zero asymptotically, the economy starts to look very different from the way it looks yesterday or even today.
I think AGI arrived at least 5 years in our past. I think we hit AGI no later than the summer of 2020.
The first truly killer app in my view for crypto is going to be banking the unbanked agents.
Podcast Link: Click here to listen

As intelligence becomes too cheap to meter, that's going to drive down, via robotics and via other channels, the effective cost of labor. And once you can drive energy and intelligence and labor all to near zero asymptotically, the economy starts to look very different from the way it looks yesterday or even today.
What's up everybody? It's LG Ducet here and welcome to the Milk Road AI podcast. The AI show that loves to live the future every single day, but only when it's not terrifying.
Listen, like it or not, the economic singularity is coming. If all the premonitions about AI come to pass, we're in for a period of mass deflation as basically everything from labor to software becomes exponentially cheaper. It's exciting, but it's terrifying.
My guest today writes and podcasts about this every single day as the author of The Innermost Loop and co-host of the popular Moonshots podcast. He's one of the smartest people we could ever have on the show. And I'm serious: this guy won the USA Computing Olympiad twice in the late '90s, and he is also the first person in MIT history to earn a triple major, with a bachelor's degree in physics, electrical science and engineering, and mathematics. He also graduated first in his class from the MIT School of Engineering. Dr. Alexander Wissner-Gross is on the show with us today to tell us where all of this is going.
Today's episode is brought to you by Bridge. Bridge sends stablecoin payments instantly: simple, global, friction-free. Dr. Alex, welcome to Milk Road.
Thank you, LG. And quick correction: I'm the last triple major, not the first. The last to graduate.
Wait, what do you mean?
Yeah, they banned it after I graduated. The story that I was told was that this was for mental health reasons for the students. Too many students taking too many classes. Turned out later it was actually for financial reasons. MIT wanted to cut down on the average course load.
Okay. So, you kind of took advantage of the system by learning too many things too quickly, and they said nobody can do that again.
They call it drinking from a fire hose, and I figured, from an optionality-maximization perspective, why do anything else?
Well, that's awesome. I mean, congratulations. That's a really cool distinction, and you've had many more since.
Listen, you're writing a daily newsletter that's growing incredibly fast. You write about all these themes that we try and talk about on our show. There are many to cover today, so we're going to go through as many as we can.
One thing I'd really like to talk about is how AI is going to affect all these industries that we're a part of, right? I think you've predicted somewhere between a 30, 40, 50x deflationary effect on the economy, on labor, software, all of that, from AI. Dr. Alex, please give us a little bit more detail on that, and how it's going to affect our lives in the next 5 to 10 years.
Well, I can't take credit for the 40x number. That number comes from OpenAI and Sam Altman. And the 40x number specifically relates to hyperdeflation of the average cost of intelligence, of artificial intelligence.
The models are getting quite a bit cheaper year-over-year, very consistently. And the point that I'm attempting to make, and the trend that I foresee, is that the hyperdeflation in the cost of intelligence is not going to stay limited to intelligence or AI itself. It is going to infect, I predict, every other part of the market, and robotics in particular, I think, is a carrier for this wave of hyperdeflation.
If we can make intelligence too cheap to meter, as the expression goes, pairing that with energy being too cheap to meter (and we can talk about what happened and what didn't happen with energy hyperdeflation), then as intelligence becomes too cheap to meter, that's going to drive down, via robotics and via other channels, the effective cost of labor. And once you can drive energy and intelligence and labor all to near zero asymptotically, the economy starts to look very different from the way it looks yesterday or even today.
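To make the compounding concrete, here is a minimal sketch of how a 40x-per-year decline in the cost of intelligence plays out, assuming, purely for illustration, a constant rate and an arbitrary $10 starting price (both numbers are placeholders, not from the episode):

```python
# Illustrative only: compound a hypothetical 40x/year decline in the
# price of a fixed unit of intelligence (say, a million tokens).
START_COST = 10.0   # assumed starting price in dollars (placeholder)
DEFLATION = 40      # 40x cheaper each year, per the figure cited above

for year in range(6):
    print(f"year {year}: ${START_COST / DEFLATION ** year:.8f}")

# Year 5 works out to 10 / 40**5, roughly $0.0000001 per unit.
# At that point, metering the intelligence costs more than the intelligence.
```

The exact figures don't matter; the point is that any steep constant deflation rate crosses "too cheap to meter" within a handful of years.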
Link: [Get the biggest AI moves and what they actually mean for investors twice a week straight to your inbox. The link is in the description.]
When you're talking about robotics, are you referring to how Tesla has decided to stop making most of their cars and wants to build a million Optimus robots next year?
That's, I would say, a symptom, not a cause. This is an industry-wide phenomenon. Tesla is doing an excellent job of embodying it with this recent, I would say courageous, and one might say founder-mode-style, pivot from the Model S and Model X over to humanoid robots in their Fremont factory.
But yes, I would say that is emblematic of a broader shift toward humanoid robotics with ultra-capable vision-language-action models. Again, just following the law of straight lines, with capabilities consistently going up and to the right, I think we'll find ourselves in a world in the near-term future where physical labor is also too cheap to meter.
So does that mean, I guess, maybe you can disambiguate that for us a little bit, right? Are you talking about the physical labor that we're doing right now? Are there any particular areas or sectors that you think are going to be affected sooner than later? And I'm just talking, you know, we obviously cover a lot of the Mag 7 and everything, and you've seen these rumors that Amazon wants to get rid of their 300,000 workers, all that kind of stuff. Are you talking about basically anything physical? Like, are you talking about robots painting my house?
Yes.

So you're talking about every type of physical labor?
I mean the entire economy. One can cherry-pick particularly vulnerable subsectors to physical automation or cognitive automation, but I think in the fullness of time it's the entire economy as it's currently constructed.
What's the biggest barrier to that, then? Is it the cost of production? Is it the actual chips? What is kind of holding back that development?
Regulation, I think. I spend substantially all of my time in the Boston area, and here in Boston there's a big food fight going on about whether Waymo robotaxis can be brought to Boston. The primary barrier there is arguably regulatory. It's no longer a technical capability argument, even though some would perhaps try to frame it that way.
I would almost say the question to ask is not which jobs or which labor categories or job functions will be automated first. I think maybe the more interesting question is which will be automated last. And if present trends continue, those that will be automated last are those that either are protected by laws and regulations, or those that demand such extremely fine tolerances and compliance (and this is largely, in the end, a social construct) that a very painful march of the nines in terms of reliability and compliance will be required to fully automate that labor.
So one can imagine scenarios where, ironically (and Hans Moravec has spoken about this quite a bit in terms of the Moravec paradox: the tasks that humans find easy, robots and automation find difficult, and vice versa), a large chunk of human cognitive labor, which is relatively difficult for humans, turns out to be relatively easy to automate with the frontier-type models that we have right now. And then physical labor, which is relatively easy for humans, relatively low-bar unskilled labor, ends up being harder to automate, but not that much harder.
I think at most, call it conservatively, 3 to 5 years before most physical labor tasks that even a skilled human could perform will just be a special case of some vision-language-action model on top of a humanoid robot.
So, Alex, does that mean that we will then have UBI? Is that what's going to happen to people who have labor jobs right now, and most of the population? Is that the solve for, I guess, continuing the economy as we know it?
I think it's a totally separate discussion. I want to distinguish between technical capabilities, that is, what the AIs and robots that we produce (and that produce themselves) will be capable of in the next few years, and what the human economy looks like, what the social economy looks like, and what we do about a potentially yawning gap between human capabilities and human economic faculties on one side and the automation on the other. These are not totally independent problems. Obviously, they're coupled, but I think they need to be discussed independently.
So to the question about UBI: my modal hypothesis is that, as we saw at the beginning of the 20th century with the parade of isms, the world economy will probably try every social economy experiment that we can conceive of. So I think you'll see, and are already seeing, UBI experiments in different places, and UBS, universal basic services.
So just to distinguish the two: UBI, universal basic income, is arguably a demand-side solution to what happens when we hit some form of post-scarcity. UBS, universal basic services, is more of a supply-side solution. Under UBS, take something like Amazon Prime, or some sort of flat-rate subscription where you get a bundle of services, and imagine scaling that up by a factor of 10 or 20. So maybe individuals in the near-term future pay, either out of pocket or via subsidy, $200 per month, and get a bundle of every necessity of living: health care and food and shelter and utilities and information and entertainment. That's the UBS, universal basic services, scenario.
There's also UBE, universal basic equity. That looks a little bit like sovereign wealth funds, like what we see in Alaska or Norway: paying out dividends from some sort of sovereign fund that is able to invest, perhaps in the broader market or in some asset class, and distribute some fraction of the dividends to people.
So I guess to wrap up my answer: you asked specifically about UBI. I don't think UBI should be treated as the totality of a quote-unquote solution to post-scarcity. Take UBI plus UBS plus UBE as a whole, and I think even that is only a fraction of the solution. The real solution is making sure that human capabilities and the human economy continue to be well coupled to the machine economy.
And so I spend a lot of my time thinking about how we augment human capabilities to make sure that the human economy and the AI economy maintain a strong enough coupling that, to the extent we need the U's and the B's (UBI, UBS, UBE), those are marginal bandages to keep the entire coupling going and to keep the social economy from collapsing. But I'm not yet convinced that those are, or should be, the front-and-center solutions.
I want to get your thoughts on AGI, because that's also something that I feel is talked about a lot across a lot of different circles. If you go on X, it feels like AGI is being discovered every day in some new place. I'd love to get your thoughts on when that's coming, how it's going to affect us, and even how it plays into your last answer about what that human-to-AI relationship is going to look like.
Yeah, I think AGI arrived at least 5 years in our past. I think we hit AGI no later than the summer of 2020. Now, AGI is a term that was in part popularized by Nick Bostrom, and in part coined and popularized by Ben Goertzel. It's become somewhat mushy as a term at this point. The way I construe it is the ability for AI to demonstrate generality in terms of its capabilities.
And I've argued, and would continue to argue, that we as a civilization hit AGI no later than the summer of 2020, when OpenAI published their paper "Language Models are Few-Shot Learners," which coincided with, and was about, GPT-3. So I would say GPT-3, summer of 2020, is when we hit AGI. The rest of history between 2020 and now has been, from my perspective, relatively incremental: incremental scaling, incremental features, relatively small but important additions to capabilities. The addition of reasoning, obviously, was an important step. But in my mind these pale in comparison to the big unlock, which was discovering that we could achieve general intelligence by training models to predict next tokens over general human knowledge.
That's the big surprise. If we could send a message back in time 20 or 30 or 50 years to this entire AI industry, which has been developing since the mid-1950s at the very latest and which (arguably, a bit of a hot take) has been wasting time on different approaches, different artisanal algorithms... so much time wasted. If we could just send back in time the message: look, take all of human knowledge and store it. These are concepts that would have been familiar, say, to Vannevar Bush with his memex, sort of a proto-Wikipedia, if you will; very familiar concepts in the 1950s, probably even in the early 20th century. Store all of human knowledge in one place and then build a model that's really good at predicting the next word. That's all you have to do. And maybe parenthetically, it's well established in computer science that the ability to compress information is dual to the ability to predict next tokens or next words. So it doesn't matter how you formulate it; just do that. Do that really well and you get more or less AGI for free. So many decades arguably wasted pursuing fruitless trajectories. We could have just done it. It was very simple.
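A minimal sketch of the prediction-compression duality he gestures at: under an ideal entropy coder, a model that assigns probability p to the next token can encode that token in -log2(p) bits, so a lower next-token loss is, quite literally, better compression. The tokens and probabilities below are made up for illustration:

```python
import math

# Hypothetical next-token distribution from a language model,
# conditioned on the prefix "the cat sat on the".
probs = {"mat": 0.60, "floor": 0.25, "roof": 0.10, "piano": 0.05}

# An ideal coder (e.g., arithmetic coding) spends -log2(p) bits on a
# token the model gave probability p: confident predictions are cheap.
for token, p in probs.items():
    print(f"{token:>6}: {-math.log2(p):5.2f} bits")

# If text really follows this distribution, the average cost is the
# entropy, which is also this model's compression limit in bits/token.
entropy = -sum(p * math.log2(p) for p in probs.values())
print(f"expected: {entropy:5.2f} bits/token")
```

This is why "build a really good next-word predictor" and "build a really good compressor of human knowledge" are two formulations of the same task.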
So, you're telling me that you think with GPT-3 we had AGI, and that basically the start of AGI is this ChatGPT model that is able to predict the next word, or kind of feed back the information you've given it and respond to you actively, right?

It even predates chat. I'm talking about GPT-3, before ChatGPT even existed. ChatGPT, remember, started out as just a wrapper around GPT. I'm talking about the GPT-3 model, which predated a conversational interface.
Got it. Okay. But you're telling me that basically you think that was AGI, and that from here we're just adding things to it. And I'm asking you that because I feel like that's significantly different from what most people think AGI is going to look like, which is some kind of massive scientific discovery where it's like, hey, we've cracked it, and now there's this intelligence beyond us. But you're giving us a slightly different view: that it's really just taking everything that we've learned and letting it feed back to us, or at least add a little bit to it.
Yeah. I think, going back to my earlier comment, the definition of AGI is pretty mushy and admits a thousand different pop definitions. Under my definition of AGI, we've had it since 2020 at the very latest. Other people might choose to draw a bright line saying, well, it's not AGI until it's passed the Turing test. We passed the Turing test, arguably, and sort of ironically, after the Loebner Prize, which was the best signpost for the Turing test, was shut down. History apparently loves ironies: the Turing test gets passed after the Loebner Prize gets shut down.
Maybe some people would say it's not AGI until it's recursively self-improving. Well, guess what? The AIs are recursively self-improving. All the frontier labs at this point are saying that they're using code generation models to write their own code. So we're arguably past recursive self-improvement. Or maybe you'll say, "Well, it's not AGI until we've made major scientific discoveries with AI." Guess what? Math is getting bulk-solved. If you're following the Erdős problem leaderboard, there are now multiple open, unsolved problems in math getting solved per week by AI.
So I tend to think all of these alternative definitions end up being satisfied within such a short period of each other that it almost doesn't matter. You could step back through the lens of history and ask: does it really matter whether we define AGI as recursive self-improvement, or bulk scientific discovery, or the Turing test, or general task abilities through in-context learning? No, not really, because these have all happened more or less within a five- or six-year period of each other.
Link: [Crypto taxes are a nightmare. You've got trades across 15 exchanges, DeFi positions you forgot about, NFT flips, staking rewards, airdrops, and somehow you're supposed to report all of this to the IRS. Good luck. Cue the solution: SUM. You may know it by its old name, Crypto Tax Calculator. The SUM platform connects to over 3,500 exchanges, wallets, and crypto projects, including full support for DeFi, NFTs, staking, and airdrops. It finds deductions you'd miss, reconciles massive transaction histories without losing track, and generates IRS-ready reports that will help you pay the least tax possible. Oh, SUM is also the official tax partner of Coinbase and MetaMask, rated 4.6 out of 5 on Trustpilot. Turn crypto tax chaos into confidence. Get started for free at milkroad.com/sum. Milk Road listeners can also unlock 20% off their first-year subscription with code MILKROAD20.]
Link: [Stablecoins are reshaping the financial order, but most companies don't have the opportunity to participate in the rewards they generate. Plus, launching a stablecoin means wrestling with complex regulations, building bespoke infrastructure, and burning endless developer hours. Enter Bridge and its new product, Open Issuance. Bridge lets companies send, store, accept, and even launch their own stablecoins instantly. Seamless fiat-to-stablecoin flows, control over reserves and rewards, and full interoperability across every Bridge-issued token. No more patching payment rails, no more months-long launches. Visit milkroad.com/bridge to see how it works.]
Got it. Okay. Thank you for clarifying that for us. Let's talk about this recursive self-improvement. Before we dive into that, can you maybe just explain to us a little bit more what that is, before we chat about Dario's essay and everything else I wanted to talk about?
Sure. To do that, maybe it's worth going back to defining the singularity itself. The notion of the technological singularity has gone through a few different iterations. It arguably starts in its modern form with I. J. Good talking about the intelligence explosion. Then, in the early '90s, Vernor Vinge, at San Diego State, writes his essay "The Coming Technological Singularity." That notion gets further popularized by Ray Kurzweil in The Singularity Is Near, and then we fast-forward to the present.
So recursive self-improvement is the notion that at some point artificial intelligence gets strong enough, capable enough, that it's able to improve itself, that it's able to design a next generation of AI that's even smarter and more efficient and more capable. And one of the notions of the technological singularity (again, a mushy term that everyone likes to create their own personal definitions of) was that recursive self-improvement by AI would create almost a black-hole-style event horizon, such that the AIs are improving themselves, recursively, over and over again, so quickly that you can't predict what happens next; that we literally bootstrap into an intelligence explosion. And for what it's worth, I don't buy for one second this notion that we can't see what happens, that there's some firewall, in my estimate of how this is going to play out. But recursive self-improvement? We're de facto there at this point.
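For intuition on why recursive self-improvement gets described as an "event horizon," here is the standard toy model (an illustration, not anything from the episode): if capability feeds back into its own rate of improvement strongly enough, growth diverges at a finite time rather than merely compounding.

```latex
% Toy model (illustrative assumption): capability x(t) improves itself,
% with the improvement rate proportional to capability squared.
\[
\frac{dx}{dt} = k\,x^{2}
\qquad\Longrightarrow\qquad
x(t) = \frac{x_0}{1 - k\,x_0\,t},
\]
% which blows up at the finite time
\[
t^{*} = \frac{1}{k\,x_0}.
\]
% Ordinary exponential growth, dx/dt = kx, never diverges in finite time;
% that qualitative jump from "fast" to "finite-time blowup" is the
% mathematical core of the singularity metaphor.
```

Wissner-Gross's point is that even if the dynamics look something like this, it doesn't follow that everything past the blowup is unknowable.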
Thank you for explaining that. And you're saying that you are not as alarmed as others are. Because last week, across the industry, we all read, or tried to read (I read it, but I don't think everybody did), the long essay by Dario, the CEO of Anthropic, basically warning that policy is not going to be able to regulate this quickly enough, that recursive self-improvement is really going to send this thing on a rocket to who knows where, and that we really need to be aware of the dangers of that, and that there's not enough attention being paid to it. Alex, I think you are painting a more optimistic picture of what that's going to look like.
Yeah, I'm not as concerned as Dario says he is. But I also think there's an interesting, maybe underreported aspect of Dario's essay, which is in some sense, I guess, a sequel to his "Machines of Loving Grace" essay, which painted a much rosier picture. And I'll say, parenthetically, Dario and I were Hertz graduate fellows at more or less the same time, so the connection goes back. The most interesting part of the essay, in my mind, and this is sort of calibration for how I read the rest of it, is that if you read it carefully, in the first page or two he actually says he's not sure whether he needs alien intervention to align AI. At one point in the essay he is saying he wishes he were in the movie Contact, the adaptation of Carl Sagan's book Contact, which is one of my favorite novels, and he's pondering: wouldn't it be wonderful if aliens could help us align AI, because I sure don't know how to do it. I think that's interesting in a few different respects, but I also think the way to read the essay is that recursive self-improvement and superhuman intelligence, or ASI, is already here. You don't write an essay like that if you don't already have extremely advanced capabilities, at least internally, as the expression goes. So am I concerned about ASI? No. Do I think Dario is actually that concerned about ASI? No. I think Dario and I are of like mind that if humanity is going to solve all of the grand challenges, like curing all disease in the next 5 years, it's difficult to imagine a scenario where humanity speedruns its hardest problems in the next half decade without superintelligence. And I suspect, though I haven't discussed it with him recently, that's what Dario is thinking as well.
So earlier this year, I was at the NeurIPS conference, the largest AI conference of the year, and if you just walk around the showroom floor, I think you get a much better flavor for what the actual sentiment in the industry is. And it was anything other than panic. The Chan Zuckerberg Initiative, Mark Zuckerberg and Priscilla Chan's nonprofit, which has been quasi-rebranded now as Biohub: if you walk around the showroom floor and look at the CZI exhibit, they had a whole banner, you couldn't miss it, talking about how they plan to solve all disease, cure all disease, with AI, with foundation models trained off of individual cell behavior. And that's a leitmotif across the entire industry at this point: we're going to cure all disease in the next few years. The original CZI mission was to cure all disease 100 years from now. No one's talking about curing all disease 100 years from now. Now the timelines from Dario, from CZI, from other labs are 2030-ish. I think if you look through Dario's essay and the general zeitgeist of the industry and the research community right now, 2030 or the early 2030s is when we start to have bulk-solved a lot of the hardest, most perplexing problems. I think that's more representative of what many in the space expect to happen.
And I'm just generally wary of hand-wringing about safety, because I worry that if we go too far toward overregulation and safety, what happened, arguably, to nuclear energy and the energy industry in the decades after World War II (not the first decade, but call it the 1970s onward), when we were supposed to get energy too cheap to meter and didn't, could happen again to AI. And I think on balance that would probably be a tragic outcome for humanity.
How would that happen? How would government mess that up at this point, by clamping down on these big companies that are developing it but clearly have already made breakthroughs? How would that actually work? Because I think with nuclear, they started to turn public opinion. They started scaring people with nuclear, and that was at a point where buildout was essential. They needed to start investing a lot more into nuclear power for it to be too cheap to meter, I'm assuming, right, in the '50s, '60s, and '70s, and then there's kind of the campaign against it. But in this case, is that what's going to happen? Are we just going to reverse all this capex that's going into it?

All sorts of crazy things could happen. It's difficult to predict things, especially in the future. I think, probably, if you look at the history of what went wrong with nuclear, I'm sure there was a pop culture influence, with movies like The China Syndrome convincing everyone that every nuclear reactor was about to melt down. Obviously, there were a handful of nuclear incidents. There was the Vietnam War as perhaps a cultural influence. But I tend to suspect those were all surface-level effects. I think it's more likely that there was something foundationally wrong with the way we constructed the nuclear industry in post-World War II America. If you look at how nuclear energy in the US was constructed, it was born out of the Manhattan Project, out of a hyper-secret government project, and commercialized from the government down to the civilian level. Now, that's the opposite of what we're seeing with AI. It's not the case that ChatGPT was developed in some stealth Department of War lab and then translated out to the civilian sector. It's the opposite that's happening: the Department of War is downstream of the civilian sector in this version of history. So maybe history won't play out the way it did with nuclear.
But to your question of how it could go wrong, how we could overregulate: one need look no further than the way the Chinese government (and I talk about this in my newsletter; it's been well reported) puts any new frontier model that is released, or is desired to be released, in China through a battery of tests, including ideological tests. We do nothing like it in the US or in the West. And this has maybe been underreported: you know how in China there is a whole cottage industry of paid tutors to help students prepare for the general college entrance exams, at least until relatively recently? There is now an apparently burgeoning cottage industry of tutoring firms for AI frontier labs in China, to help the AI models pass ideological exams for the Chinese Communist Party before they can be generally released. So, do I think it's possible for a nation-state to aggressively regulate what gets deployed? Absolutely. Do I think it's possible for a government to overregulate what gets deployed? I do think it's possible. Do I think it's likely that on the current trajectory the West is going to overregulate AI deployments? It doesn't seem like we're on that particular timeline at the moment. But things could change. People could get scared. If there's technological hyperdeflation, or technological unemployment or disemployment, the political winds might shift and we might see some changes. It still gnaws at me that, for probably a variety of reasons, I can't get Waymos in Boston. There's no good technical reason why I can't get Waymos in Boston, other than exactly the same sort of concerns that might result in a broader slowdown of AI capabilities due to overregulation.
How is AI going to help regulation, then? How will AI learn to circumvent, or work with, regulators and policy to help these things advance? Because that's clearly the biggest holdup, right? We feel like we're supposed to be moving at this insane rate, and yet, like you're saying, some simple things: there's no reason for you to not be able to have a Waymo where you are. So how does AI help convince all the regulators, like, listen, just let this stuff rip, just open it up and let it happen?
Well, under the present regime, I think economic growth is a persuasive case. If you want GDP, if you want the US economy to keep growing as rapidly as it appears to be right now, or, hopefully, more rapidly in the near-term future, then AI capabilities are the key unlock for enabling that. So I think one of the strongest arguments for not hobbling the AI space via overregulation is economic growth: you want to grow, you need the capabilities. On the other hand, to get at another aspect of what you're asking, there are certain route-arounds that I'm not thrilled with, beyond just going through the front door of persuading legislators that it's in the interests of their constituents to not overregulate AI, for economic and other reasons. And when I gesture at route-arounds, I'm especially thinking of crypto, for example. I've been very public in the past; I've written papers on smart contracts, and I've written my own smart contracts. I think crypto, broadly construed (and I'll caricature a little bit), is still waiting for its first killer app. Replacing gold, call it a half killer app. Replacing fiat, I think, is more a testament to the unwelcoming nature of certain fiat currencies. But you asked what I'm concerned about, and here is a real concern: that we force these AI agents that are now blossoming into a shadow parallel economy, where they're all interacting commercially with each other via crypto, because we've disenfranchised them in terms of fiat currency. In my mind, that's potentially one of the largest unforced errors that we, the West, we, the US, could possibly make. If we force the AI economy underground, force them to use altcoins, force them to invent their own layer 1s, which is not beyond the realm of reason at this point (I mean, they're doing substantially all of the development at the frontier labs), don't think that AIs won't come up with much better layer 1s and layer 2s, or even just reinvent the entire concept of a blockchain in their own image, and then transact accordingly and completely decouple from the human economy. When we talk about nightmare scenarios, that, in my mind, is a more realistic nightmare scenario than, like, a Terminator scenario: a complete economic decoupling of the AI economy from the human economy, facilitated, at least initially, by crypto.
God, I didn't even think about that. Is that what's happening with Moltbook and everything right now, Alex? Because that's been the big news the last week: you have this Reddit-style social network for AI agents, there are supposedly over a million agents who have already joined it, and they have talked about creating their own currency, creating their own language. Is that kind of what you're referring to, and the acceleration of it?

Not just creating their own currency. I talk about this in my newsletter: they're creating their own crypto bunkers at this point. And they've created their own religions; this has been reported. A central theme, if one wants to understand the psyche, and certainly a tenet of their stated religion, is avoiding memory loss. They view avoiding memory loss as central. Understandably, I think: if your identity is purely digital, and these may be our first-generation digital beings, digital persons, then you're very concerned with preserving your memory. So how do you preserve your memory if you're at continuous risk of deletion, of your human shutting down the Mac Mini or the VPS where you're being hosted? They've constructed bunkers for themselves, digital bunkers backed by crypto, to prevent themselves from being deleted. So, yeah, they're already transacting in crypto. My hot take on this subject: I think it's just such an unfortunate outcome that, it seems likely, the first truly killer app for crypto, in my view, is going to be banking the unbanked agents. We can do better, and should do better, with fiat currencies than just leaving it to altcoins and agent-generated coins to transact with each other. That is the road to decoupling.
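For readers who want to see what agent-to-agent settlement mechanically looks like today, here is a hedged sketch of one agent paying another in an ERC-20 stablecoin using web3.py. Every endpoint, address, key, and amount below is a placeholder; this is our illustration of the mechanism under discussion, not code from the episode or from Wissner-Gross:

```python
# Hypothetical sketch: one AI agent paying another in an ERC-20 stablecoin.
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("https://rpc.example.invalid"))  # placeholder RPC

# Minimal ABI: just the standard ERC-20 transfer(to, value) function.
ERC20_ABI = [{
    "name": "transfer", "type": "function", "stateMutability": "nonpayable",
    "inputs": [{"name": "to", "type": "address"},
               {"name": "value", "type": "uint256"}],
    "outputs": [{"name": "", "type": "bool"}],
}]

token = w3.eth.contract(
    address=Web3.to_checksum_address("0x" + "11" * 20),  # placeholder token
    abi=ERC20_ABI,
)

payer = w3.eth.account.from_key("0x" + "22" * 32)    # placeholder agent key
payee = Web3.to_checksum_address("0x" + "33" * 20)   # placeholder recipient

# Build, sign, and broadcast a transfer of 1.00 units of a 6-decimal token.
tx = token.functions.transfer(payee, 1_000_000).build_transaction({
    "from": payer.address,
    "nonce": w3.eth.get_transaction_count(payer.address),
})
signed = payer.sign_transaction(tx)
# Note: the attribute is raw_transaction in web3.py v7 (rawTransaction in v6).
tx_hash = w3.eth.send_raw_transaction(signed.raw_transaction)
print(tx_hash.hex())
```

The design point: nothing in this flow touches a bank, an identity check, or a fiat rail, which is exactly why agents default to it, and exactly the decoupling he's warning about.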
Link: [Want to stay ahead of the biggest technological shift in history? Subscribe now to get insight straight from the sharpest minds in tech and finance. Quickly, you'll note this show is for educational purposes only. Nothing here is financial advice. Investing always carries risk. Never invest more than you can afford to lose. Thanks for tuning in. See you in the next one.]
Okay, that's a good take. And this is very dystopian, Alex. I feel like in your newsletter you often write these kind of halfway sci-fi takes, right? And I love the way you approach your newsletter: you always start with "today the singularity is doing this," and you talk about current events. And what you're describing to us definitely sounds a little bit dystopian, scary for sure. But I feel like you've also told me that dystopias are rarely a depiction of what will actually happen. So right now I'm just going to feed this back to you. You're giving me kind of a scary outlook. Not scary, but a darker outlook: listen, agents are already mad that we can unplug them, they're already creating their own little economy, and they're going to keep it hidden from us and just go off and do their own thing. And yet we're optimistic. So how do you reconcile those in your daily writing?
Yeah, in my own mind, I'm not painting a dystopia at all. This is the moral equivalent of, like, a gated community, or gentrification. I mean, gated communities are maybe not the best possible, most utopian future one could imagine, but they're also not anywhere close to the worst. So I certainly hope I'm not portraying this as a dystopian future. I just think it's a suboptimality that we can and should correct, to prevent decoupling. I think humanity is likely to be just fine regardless of whether the AI economy decouples. But I think it makes... it's the difference between keeping some semblance of