
By Milk Road AI
Date: January 2026
Quick Insight: This summary cuts through the noise of the AI race, detailing Anthropic's strategic gains, big tech's divergent AI investment returns, and the accelerating robotics frontier. It's for anyone building or investing in AI who needs to understand the shifting competitive landscape and its profound economic implications.
The AI narrative is flipping. As Anthropic secures a staggering $20 billion and reportedly wins Apple's internal engineers, OpenAI finds itself on defense. Patrick and Duncan unpack this competitive showdown, big tech earnings, and Tesla's bold bet on robotics.
"Anthropic in general just kind of has some swag to them and you find a lot less people who really hate Anthropic in the way that people hate OpenAI or hate Sam Altman."
"I personally find GP 5.2 Codeex to be a smarter software engineering model... But the Codeex product is so much less fun to use that I go back and forth and sometimes I have something more complex that I want to get done... But if I want to collaborate, I go to Claude every single time because it's so much more fun to use."
"At a certain point even the worst model is good enough for this for for bad stuff to happen."
Podcast Link: Click here to listen

as, like, a pure model, it's the fourth best model in the world. And it's entirely open source. What's up everybody? Welcome to the Milk Road AI podcast, the AI podcast that knows we're all doomed but wants you to get excited about it.
Anyways, today is January 30th, 2026. The narrative of the AI race is flipping. There is a massive vibe shift towards Anthropic as they raise a staggering $20 billion and allegedly win over Apple's internal engineers, leaving OpenAI playing defense.
On today's show, Patrick and Duncan unpack that showdown, the brutal earnings split that sent Meta soaring while Microsoft crashed, and the shocking report that Tesla is killing its flagship cars to bet the company's entire future on robotics. It's a great show we have lined up. I won't be joining, but I hope you enjoy it.
Anyways, today's episode is brought to you by Bridge, which sends stablecoin payments instantly: simple, global, friction-free. And by Yield Seeker, an AI that hunts stablecoin yields 24/7.
Hey everyone, welcome back to another week of the AI rollup. We're recording this Thursday, January 29th. There's been a lot of news this past week. I think we're just going to run through some of the latest news from the top AI labs.
Get into some of the big tech earnings. We had Meta and Microsoft report this week: Meta up like 10% after earnings, Microsoft down 10% after earnings. So, tale of two cities there. And then, if we have time, get into some interesting developments in the robotics space.
All right. So the latest news: Anthropic is upsizing their financing to $20 billion. We were talking last week about them raising $10 billion at a $350 billion valuation, and it's getting upsized to $20 billion. I feel like Dario got a lot of good press, or just a lot of attention, from his speeches at Davos and then also the article he just posted.
And also just the more conservative approach Anthropic is taking on the spending commitment side; they're not going as aggressive as OpenAI and are focusing more on scaling revenue with the enterprise side. So it definitely feels like, amongst the Twittersphere, there's a bit of a vibe shift towards Anthropic being the leader in the space, or at least close on OpenAI's heels.
Yeah, I agree. I think Davos did a lot of good for Anthropic. At the same time, you had the Wall Street Journal posting about Claude Code and about Claude Cowork and how it's taking over the world and taking over software.
So, a lot of great press. Anthropic in general just kind of has some swag to them, and you find a lot fewer people who really hate Anthropic in the way that people hate OpenAI or hate Sam Altman. There's a lot to be said about that.
They've been so much more conservative in their fundraising and their buildout. And even though they've been conservative, as we talked about a couple weeks ago, if you look at the chart of projected data center buildouts, they're going to have the lead and have the most compute for a good chunk of 2026.
So they seem to be in a great spot. The $350 billion valuation they're raising at is less than half of what OpenAI is about to try to raise at. So there's a lot of good stuff coming for Anthropic so far. People love the models, and they're going to keep getting better.
Yeah, this is from The Information, basically their revised earnings projections, and it seems like they're just scaling so quickly. And then also, after Davos, I think they said that as they're going through this fundraise they've added a billion dollars in ARR on their API usage.
So, we're really just starting to see these revenue curves massively tick up, and obviously this is the whole big question around AI: just how quickly does penetration happen here, and do things tick up?
An interesting point from Microsoft's earnings that I saw was that they have 450 million seats for their 365 products, you know, Excel and Word, but only 15 million seats for Copilot. So that's very small penetration, just in the low single digits, for Copilot versus Office, their 365 suite, which probably every single company has.
So yeah, that is kind of an interesting sign for how early the penetration of these AI products might still be, and that can help square the circle on some of these aggressive revenue ramps that OpenAI and Anthropic and others are projecting.
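To make that penetration point concrete, here's a minimal back-of-envelope sketch using the seat counts quoted in the episode (450 million Microsoft 365 seats, 15 million Copilot seats); these are the numbers as stated on the show, not audited disclosures:

```python
# Back-of-envelope Copilot penetration, using the figures quoted on the show.
# These are illustrative numbers from the conversation, not audited disclosures.
m365_seats = 450_000_000     # Microsoft 365 (Office) seats mentioned above
copilot_seats = 15_000_000   # Copilot seats mentioned above

penetration = copilot_seats / m365_seats
print(f"Copilot penetration of the 365 base: {penetration:.1%}")  # ~3.3%, i.e. low single digits
```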
Definitely. I mean, the penetration into enterprise is going to continue to grow. Enterprises right now love Claude. That's been the bulk of Anthropic's success: enterprise adoption, a lot of software.
I was reading this morning that even though Apple has picked Gemini for the upcoming Siri, all of Apple's internal operations use Claude. Allegedly, this is what the story was, internally everything is powered by Claude, all of their tooling.
They wanted to get a deal done with Anthropic, but Anthropic was going to get a lot of money out of them and was negotiating harshly, so they ended up going with Google. It just goes to show that engineers love to use Claude. People love to interact with that model, and it'll be interesting to see what happens as OpenAI sets their sights more on the enterprise side, whether they can get a good chunk of that back from Anthropic.
Guys, if you're still listening to the show, it means you have a huge appetite for AI and for learning about this kind of stuff. And our new newsletter for AI just came out. It comes out Mondays and Wednesdays, too. It's free and it's on our website. So, make sure you go and check it out right now.
Mhm. And I guess on OpenAI's Codex model versus Claude Code, you've used both. Do you think that Claude is a lot better? I know Elon was saying, when they got their Claude Code cut off at xAI, he was like, damn, these guys have done something pretty special with Claude Code. But just from your first-person experience.
Yeah, there are two pieces. I've used both extensively. When I said that Anthropic as a company kind of has some swag, Claude the model and Claude Code the product also have some swag to them. It's just such a pleasure to use.
It's so fun to work and collaborate with Claude and go back and forth. It has such an enjoyable personality, and it's also very smart. I personally find GPT-5.2 Codex to be a smarter software engineering model.
I think it's better at executing the task if you've defined it properly. But the Codex product is so much less fun to use that I go back and forth. Sometimes I have something more complex that I want to get done and I know I have a plan for it; I can put Codex on it and just let it do its thing. But if I want to collaborate, I go to Claude every single time because it's so much more fun to use.
That's going to continue to change. I think OpenAI has gotten a lot of flak recently for maybe over-prioritizing coding at the expense of some soul in the model: how enjoyable the personality is, the ability to do creative writing.
And OpenAI had a fireside chat two days ago where Sam said that they may have overshot on the coding and now they're going to look at some of the soft skills a bit more. All of this iterates so fast. We're going to see a GPT-5.3 Codex and so on soon, which I imagine will have a bit of a better personality.
We spoke before about their partnership with Cerebras, that big $10 billion deal. That's going to make Codex really fast, which makes the product feel really good; you collaborate faster. So, I use both. I think Codex is a little bit better on raw intelligence, but Claude has it beat by a mile in terms of joy.
Interesting. All right. And so, yeah, I guess talking more about OpenAI: we've been talking about this $730 billion valuation. It seems like that's coming together; we're getting a slow trickle of news out about it.
There are reports that SoftBank would do up to another $30 billion, which is crazy given their $40 billion from the last round. I don't know where they're getting all this money. And then Nvidia another $30 billion, Amazon maybe $20 billion, and Microsoft somewhere around the $10 billion mark.
So just adding those up is nearly $90 billion, and then you have some other investors come in. I know Sam was in the Middle East looking for maybe some sovereign wealth money a couple weeks ago. So, if that comes together, raising that $100 billion, like we said, should help calm the market down on the immediate needs of the spending commitments.
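As a quick sanity check on that "nearly $90 billion" figure, here's a minimal sketch summing the rumored commitments mentioned above; the per-investor amounts are the reported figures discussed in the episode, not confirmed allocations:

```python
# Summing the rumored OpenAI round commitments discussed on the show (USD billions).
# These are reported/rumored figures, not confirmed terms.
rumored_commitments = {
    "SoftBank": 30,
    "Nvidia": 30,
    "Amazon": 20,
    "Microsoft": 10,
}

total = sum(rumored_commitments.values())
print(f"Rumored anchor commitments: ~${total}B toward a ~$100B round")  # ~$90B
```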
Definitely feels like that's coming more into the mainstream zeitgeist. I know we had to talk about Elizabeth Warren's letter to Sam Altman that she wrote, I think that was this week, that kind of says, "Hey, you know, you're not supposed to be profitable till 2029 and you have, like, $1.4 trillion in spending commitments over the next decade."
I think it's political posturing: are you going to ask for a bailout? Are the taxpayers going to end up funding this AI buildout if things don't go well, and what even are the net benefits to the taxpayers? I think the average person is finding it a little bit harder to see; there's a lot of fear around what's going to happen with AI.
So, that's definitely coming more into the mainstream zeitgeist, outside of the world of finance and markets.
Yeah. And we've talked about this extensively. You know, you're just a normal guy reading the news feed, seeing these trillion-dollar spending commitments, these massive deals coming out. You've used a little bit of AI, but you maybe don't really reap all the benefits of it. Obviously, it's scary. People like Sam Altman and Dario are talking about how this is going to take your job.
It's been interesting to see politically how the two parties in the US have kind of split along the AI line: you have the Republicans under Trump taking a much more pro-AI stance, and then on the other side the Dems seem to be much more anti-AI, more pro-human-labor.
The Elizabeth Warren letter specifically is largely around some comments that the CFO of OpenAI, Sarah Friar, had made months ago, kind of alluding to potentially a government backstop. She immediately retracted those comments, and I honestly don't give too much credence to those.
But the point stands that the Dems are positioning themselves as the anti-AI party. We're going to be going into midterms, we're going to be going into elections in 2028, and it's going to be a huge topic.
So, it's interesting to see that split, and it's only accelerating. By the time we get to elections in 2028, or even an election sooner potentially, what do the models look like? Are people starting to feel more of the pain economically? This week there have been huge layoffs, like at Amazon.
That's probably going to continue to happen. So it's going to be a huge political topic.
Yeah. And we've talked about that a bunch before. Definitely going to be the biggest topic for the next election, maybe even at midterms if things keep moving as quickly as they have been.
And okay, well, let's go through a couple more points on the AI labs, and then we can circle back, because Dario had that article which kind of brings all of this back together. So, quickly going through some of the other top AI lab news.
Apparently, Reuters was reporting that Elon is looking to merge, or is considering merging, xAI and SpaceX ahead of the SpaceX IPO. My initial read on that: it would be really hard to compete with all the crazy spending commitments and ramp-up that's happening, and it's extremely capital intensive on the data center side.
So, bundling that into SpaceX, maybe there's a bit of a subsidy for AI there. He's playing on the broader narrative of data centers in space, which I would guess is still pretty far away, but is definitely something that's starting to get talked about more.
What are your thoughts on that? We're kind of entering IPO season for all these labs. OpenAI has been talking about it for a while, and Anthropic's been talking about it. xAI raises at a significantly lower valuation than the others, and there's a lot of personal conflict between these guys.
Elon is suing OpenAI, and that is going to trial at the end of March, early April. That'll be an interesting one to watch. But it could also be that you merge these two things: SpaceX was talking about IPOing at like a $900 billion valuation. Slap the valuation of xAI on top of that, integrated, and then you IPO above OpenAI and you get the whole space narrative, you get Starlink, you get all of this stuff under one umbrella. It becomes like the future company.
But we'll see. I'm excited to follow the trial in March.
And Elon's looking for over a hundred billion dollars in damages from OpenAI, which would be like the end of the company. Would that just be his original investment? Like, in the early days he claims he was funding a nonprofit, but if that had actually been in the for-profit structure, that's how much his ownership stake would be worth. Is that the gist?
Yeah, it's close to that. And then, I mean, they're being sued in federal court and there are, like, criminal charges, so there's a lot going on here. It'll be interesting to see what happens.
It's a federal suit, but it's being litigated in San Francisco. So, I don't know how you find a jury that doesn't know who any of these guys are in San Francisco, but that's going to happen in the next three months.
Hunting for the best stablecoin yields is exhausting. Rates change constantly, protocols come and go, and you're stuck manually checking dozens of platforms just to squeeze out a few extra basis points. Enter Yield Seeker, your personal AI agent that hunts down the best stablecoin yields 24/7.
Here's how it works. Sign up and get your personal AI agent. Deposit as little as $10 USDC on Base and your AI agent handles the rest. No more endless scrolling. No more FOMO on better rates. Just set it and forget it. Plus, Milk Road listeners who deposit over $100 get 10,000 bonus points. Head to milkroad.com/yieldseker to get started.
Stablecoins are reshaping the financial order, but most companies don't have the opportunity to participate in the rewards they generate. Plus, launching a stablecoin means wrestling with complex regulations, building bespoke infrastructure, and burning endless developer hours. Enter Bridge and its new product, Open Issuance.
Bridge lets companies send, store, accept, and even launch their own stablecoins instantly. Seamless fiat-to-stablecoin flows, control over reserves and rewards, and full interoperability across every Bridge-issued token. No more patching payment rails, no more months-long launches. Visit milkroad.com/bridge to see how it works.
Yeah, Elon is pissed. So we will see what happens there. DeepMind, obviously the other top lab, continues to just ship a lot of great products. They put out a prototype that lets regular users interact with their latest world model. We can maybe play a quick clip while we talk.
So, just for context on Genie: this is Genie 3, which was just released this morning. It's the first Genie world model that you, as a paying customer, can go and play with, and it generates entire 3D digital worlds that you can navigate through and modify stuff in.
Yeah. So in terms of where we are: obviously there are the text-based models, and then this is kind of a whole other frontier, the video models, the image models. I guess, do you have any sense of who Google's biggest competitor is on the world model side? Obviously Grok, ChatGPT, Claude, and Gemini are all kind of head-to-head on text, and there are some multimodal features, but on the world model side, have we seen anything come out of the other big labs? I guess not super publicly.
Right. Yeah. Nothing super publicly. I mean, you have Sora, which kind of has the beginnings of a world model in generating videos. There's another company, World Labs, which was started by, I think, Fei-Fei Li, who is a professor at Stanford. They have been pretty early on world models and they have some good stuff, but it hasn't really been super accessible to the public.
We're in the very early stages of these. This is like the GPT-3 era of world models, potentially even earlier. So you can expect to see these follow the same sort of trajectory. This video... everything going on is just so crazy.
Yeah. So it'll be really interesting to see what happens there and what gets unlocked as we scale up these physics and video models, and also on the robotics side. I think this is going to create a lot of breakthroughs.
So, that's a whole other thing unfolding that I think not that many people are paying attention to, just because it's a bit more in its infancy outside of the robotics side. So, we'll have to see.
On robotics specifically, you have all these different things happening at the same time that individually look like, I mean, it's just a world model, how is this going to apply to my day-to-day life? But alongside the stuff we've talked about in robotics, like that Physical Intelligence paper on generalization of data, you can imagine a world where these world models are used to power robots, and all of a sudden it's no longer just making a video game at home. It's now being used in the training pipeline for humanoids.
Definitely. So, we will see what happens. But Google, as Demis says, he's pretty confident in them. They have a very deep research bench. I think they're not as resource constrained as the other labs, and they're pursuing a whole bunch of AI verticals to make breakthroughs in.
So it definitely feels like Google continues to be in a really amazing position long term, because they're not worried nearly as much on the funding side, and they have the breadth of team to keep shipping state-of-the-art products.
I guess shifting away from America: Kimi K2.5, the Chinese open-source model, came out. Kimi we've talked about a bit in the past. When Kimi K2 came out it was the best open-source model. Then they had K2 Thinking, which was again the best open-source model. Now they've got Kimi K2.5, which is just an absolute beast, and it's entirely open source.
So this chart is done by a company called Artificial Analysis, which aggregates benchmarks for different models. You can see it's fifth on the leaderboard here, but two of these are GPT-5.2 just with different thinking modes. So, as a pure model, it's the fourth best model in the world. And it's entirely open source.
You can use it... I've seen people running it on home hardware if they have a bunch of Mac Studios, but in the cloud it's like 10 times cheaper than competing models, and it's really good. I've used it. It only came out a couple days ago, but I've been using it a bit and it's quite good. It feels a lot like Claude.
So, there's some speculation that it's been heavily distilled from Claude and that they essentially trained on Claude outputs, because it has some of the same personality traits and patterns that you see with Claude models.
Yeah. But I guess the craziest part here: okay, so it's the fourth best model, and then you're talking about one-fifth to one-tenth the cost of the other top-tier models, which is pretty scary. I guess this kind of ties into what we've been talking about a lot, the geopolitical race here.
Dario's comments at Davos that kind of blew up, and within his article, were that America should not be selling the latest GPUs to China, that that's the same as selling nuclear weapons to North Korea, which is obviously a very extreme take. But I guess this would be a good point to talk about his article, because he's essentially claiming that in the next one to two years there's a real chance that this kind of recursive loop gets solved, where the AI can make itself better, and that's going to really accelerate things on the intelligence and model side.
And it could enable some functionality that is dangerous. So I think this is a pretty scary data point, given how close behind the Americans the Chinese labs are, if you're worried about geopolitical tension.
Yeah. Do you have any thoughts on that, or maybe we can jump more into the article and give some more context?
Yeah. So in terms of the performance of this model, it doesn't really break any of the trends that we've already seen. Historically, a Western frontier model comes out, people love it, and then four to six months later that performance has been reached in open source.
The time to parity performance has been shortening a bit, but this is fairly on trend, and I think we can expect to see it continue to happen. We're getting close to another release cycle of models from American companies. GPT-5.3 is coming very soon, there are rumors of Claude Sonnet 5 coming in early February, and I'm sure we're going to get some more Google models in the mix.
So there's a lot to come on the Western frontier, and this chart will likely look different then, where you have the next generation of models a sufficient step above Kimi K2.5. Then come summer we'll see some more Chinese models come out that get closer to that new benchmark, and then we repeat the whole cycle.
The key point, and we've talked about this extensively in the past, is that you reach a saturation point where you're no longer performance sensitive on the task you care about. Software, for example: at a certain point you may not need the next generation of models, because the current ones are sufficiently good software engineers for your workflow.
The same thing you could imagine, from a risk perspective, happening in the creation of bioweapons or other forms of misuse: okay, Kimi 3.5 comes out and it's sufficiently good at massively disruptive tasks that you don't need further frontier models for the downside of this to be out in the open. That gets you closer to Dario's framing of giving nuclear weapons to North Korea.
We can talk all we want about what the exact gap in performance is, but at a certain point even the worst model is good enough for bad stuff to happen. Dario's essay is definitely worth a read for people listening. It's called "The Adolescence of Technology"; look it up and check it out. It's quite long, and quite depressing, I would say.
But this is being written by one of the guys who has the most insider knowledge on where we're going and the state of the current frontier, and he talks a lot about the downside risks. He broke it into five categories of risk. There's autonomy risk, where the models, in classic sci-fi fashion, decide that they're going to act maliciously. That's one way this could go down.
Then there's misuse for destruction: you imagine terrorists getting access to sufficiently powerful AI and using it for bad stuff. You have misuse for seizing power: you imagine China, for example, having access to powerful AI and using it to expand a non-democratic regime. Then you have economic disruption, which I think is what we're going to see soonest in the West, where all these guys are saying 50% of entry-level white-collar jobs get disrupted. There's huge risk there; it really breaks the social contract of democracy, and there doesn't seem to be a great plan in place for how that gets resolved.
And then there are indirect and unknown risks: people becoming addicted to AI, loss of purpose, having unhealthy relationships with AI. So, definitely check out the essay.
Yeah, I definitely think he highlights a lot of things that'll come up as big talking points in politics in the future, especially if some of his predictions are right, because you have this massive labor market displacement and then, on the other side, an immense concentration of wealth, which has already been happening. There's a long-standing trend of money flowing from people who don't have assets to people who do have assets, and this could further supercharge that.
Wealth inequality is already at very bad levels, and this is just going to supercharge it. So it is definitely worth reading, along with the other side of the coin on all the potential benefits AI could bring, and it's definitely going to produce some big political talking points in the future.
Yeah. One more thing on this. After this blog post went out, the tech scene kind of went crazy over it, and Anthropic did a follow-up. I think it's helpful to understand this just to put it in context, because it's one thing to hear on a podcast about bioweapons and massive economic disruption and all this stuff, but what does that actually look like?
Anthropic has been floating internally the idea of guaranteed lifetime employment for their employees. So you reach a point where your human services are no longer of value to the company because the agents are doing a better job, but you have stock options and equity in the company. You keep your job as a form of company-distributed UBI, where they've made a social pact with their team and their employees. They're saying: we're building this thing, we already see that it's impacting your jobs, but we've got you; you won't lose your job to your own creation.
I have a hard time believing that something like that will be adopted by other companies. I think Anthropic is uniquely mission-driven in that sense, so I don't think we're going to see KPMG and Deloitte start to roll out similar things in their organizations.
Yeah, that's pretty wild. So, we'll see what happens. I mean, out of all the heads of the labs, the opinion I trust the most is probably Demis's, because I think he has the least incentive to need to raise money and create some sense of urgency. So his timelines are a little bit further out than Dario's, but on the scale of our lives, not by anything that meaningful. They're in the same order of magnitude, basically: it's one to two years versus five years.
Yeah. But anyways, one to two years is just so close. So I guess we'll just have to see how that plays out. I think it's hard to think about it too much because it can get overwhelming for sure.
All right, so let's go on to the big tech earnings this week. Interestingly, we had some pretty big moves: Meta up 10% after earnings, Microsoft sold off more than 12% at some point today. Really quickly, the story there is that they're both essentially spending all the money they can possibly spend on the AI buildout in capex.
So Meta is expected to spend $115 to $135 billion next year on capex, which is up from about $72 billion in 2025 and is basically all of its cash flow for next year. And it was up on this news because revenue is up 24%, earnings came in around 10% higher than expected, and guidance was higher. I think what the market is seeing here is that Meta has been able to monetize its investment in AI through higher engagement and higher ad revenue, on a much shorter cycle than some other companies.
So the investment community is like, okay, if it's paying off now, we'll let you continue to bet the farm on AI. And I think a lot of these companies, essentially all of them, are spending pretty much close to all of their cash flow, or a very large percentage of it, on the buildout.
And they feel like it's existential to their future. But in this case, the market I guess is saying, if you're able to monetize it, then keep going. Then on Microsoft, it's kind of a similar thing. Their capex came in a little bit higher than expected, and they said they're going to be cutting back slightly in the future, but it's still essentially the vast majority of their 2026 cash flow, and their cloud services didn't grow quite as quickly as the market was expecting.
So I think it's that slower conversion of investment into returns. And then they disclosed a bunch of stuff around OpenAI: 45% of their backlog for cloud revenue is OpenAI, which, obviously they're quite enmeshed, but that is lower quality earnings than some other players. I think there are just some general fears here: the numbers going into this whole AI buildout are so extremely large, on the order of hundreds of billions, trillions of dollars. Is it going to create the value on the other end that keeps investors happy and makes this look like a good use of capital? Whereas I think a lot of these companies are just making the bet that AI is existential to their business: if we don't spend this money, then we might not even have a business. Rather than asking whether this is a good payoff, they're kind of forced into it. It's a good payoff because if they don't do it, their whole business could be at risk.
So that was interesting, because on the surface it seemed like the market was rewarding Meta for spending more money, but that's because they were able to convert it, and punishing Microsoft because they just weren't able to convert as quickly.
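For a rough sense of scale on Meta's capex ramp, here's a minimal sketch computing the year-over-year growth implied by the figures mentioned above ($72 billion in 2025, $115 to $135 billion expected next year); the inputs are the round numbers from the conversation, not Meta's exact reported guidance:

```python
# Rough year-over-year growth implied by the capex figures mentioned on the show (USD billions).
# Inputs are the round numbers from the conversation, not exact company guidance.
capex_2025 = 72
capex_2026_low, capex_2026_high = 115, 135

growth_low = capex_2026_low / capex_2025 - 1
growth_high = capex_2026_high / capex_2025 - 1
print(f"Implied capex growth: {growth_low:.0%} to {growth_high:.0%}")  # roughly +60% to +88%
```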
Yeah, we talked about this a few months ago, I think it was Meta's last earnings call. Zuck was saying how they've seen a direct correlation between the amount of compute used and the engagement on their ads. That's a pretty great position to be in, especially while we're in the middle of the biggest compute buildout ever.
As compute continues to go up, they make bigger ad recommendation models and engagement goes up. For clients, it becomes more lucrative to use Meta ads, and consumers also benefit from this: you get more relevant ads. It'll be interesting to see what happens with OpenAI as they start to roll in ads; a lot of their team is from the ex-Meta advertising team. But for Meta specifically, they're in a great position here, where they're building out a ton of compute and it's going to lead directly to, or at least they've seen a strong correlation with, the value of their primary ads offering.
And then bringing this back to a theme we've talked about a lot on the podcast: Michael Burry's thesis on GPU depreciation. I don't know if you want to pull up either the Rittenhouse Research piece or the ZeroHedge chart, but in the last few months we're seeing H100 rental prices and demand going up significantly. The significance of that is that these are last-gen GPUs, a couple of years old, and we're seeing increases in demand for them. Microsoft and CoreWeave were saying that their clients, which are super high quality clients, are signing contracts to run these GPUs in five-year chunks.
And we had Burry a few months ago saying that the useful life of these things is actually only two to three years. Those seem directly at odds, and we're seeing through the market, in the H100 rental prices, that people still have significant demand for these things.
Yeah. So it seems like this is up maybe 15% off the lows. It was kind of stagnating at the end of last year, or maybe Q2 of last year, and has jumped up. But yeah, I think that