
By Semi Doped
Date: October 2023
This summary cuts through the noise of hyperscaler earnings calls, revealing how unprecedented AI infrastructure spending is creating a memory supply crunch and reshaping the semiconductor industry. It's for investors and builders tracking the real economic impact of AI's insatiable demand for compute and storage.
"The memory aspect of semiconductors today has gotten so extreme. Stuff is so expensive that people are simply not able to make lower-end equipment or like devices anymore. And this is like killing everything, right?"
"AI chips make like 65% operating margins and gaming does like 40%. And the 8% is of the revenue only comes from GPUs now. So obviously from a business perspective it doesn't really make sense to put too much effort into GPUs."
"We're in an era of finding a use case for something that just requires so much memory. This I don't see it changing in the immediate future."
Podcast Link: Click here to listen

The memory aspect of semiconductors today has gotten so extreme. Stuff is so expensive that people are simply not able to make lower-end equipment or devices anymore, and this is killing everything.
Hello listeners. Welcome to another Semi Doped podcast. I'm Austin Lines of Chipstrat, and with me is Vic Shaker from Vic's newsletter.
Hey Vic, what should we talk about today?
I don't know, there's so much going on in the semi world. My head was reeling trying to think of what to cover here. We don't want to sit here all evening chatting about this stuff, but one of the big things that has happened is that memory is continuing to get out of hand. I think we should talk about that at least, because it's pretty well known by now.
Totally. Maybe it's just because we're paying more attention than ever, or maybe things really are as crazy as ever. Memory, optics, CPUs, logic: everything feels crazy. But we haven't talked a ton about memory, so let's dive in. Let's talk memory.
Yeah, let's do memory. In a sense, the memory aspect of semiconductors today has gotten so extreme. Stuff is so expensive that people are simply not able to make lower-end equipment or devices anymore, and this is killing everything.
One of the things that has been in the news, and I'm not entirely sure if it's true, so I'll preface this by saying it could be a rumor, is that Nvidia might not announce a new gaming GPU generation in 2026. We may not see an RTX 60 series; we may stick with a variation of the 5090 Ti instead. Somebody online said it's maybe coming in Q3.
This is kind of a big deal, because as I was reading, in the last 30 years of Nvidia's history they have never not announced something GPU-related in a given year. They are fundamentally a GPU company and have always had something to offer consumers. When we spoke about Nvidia at CES, we were joking: where's the consumer stuff here? I can't buy any of the stuff Jensen's showing us, right?
That's really interesting. And to be clear for listeners, when Vic talks about GPUs here, he means graphics cards, the consumer-oriented stuff. Zooming out, it really feels like there's this tension, and we'll get into its impact on memory, between AI data center spending and consumer spending, on both GPUs and CPUs, right?
We see this from a memory perspective, and we can jump into it. A lot of memory companies, Micron for example, want to invest more and more of their capacity and money into HBM DRAM, at the cost of making consumer memory.
And just for context, if anybody isn't aware of why HBM and DRAM, or why GPUs and AI accelerators, are related: both use DRAM, and AI accelerators use a high-performance version of DRAM called high bandwidth memory, where they stack DRAM dies and sell those stacked chips, called HBM, for a lot more money.
Of course, HBM is a lot harder to make, which is why only SK Hynix, Samsung, and Micron can even make it. There are only three companies. So all these memory manufacturers take their wafer supply and make HBM for AI accelerators, and then nothing is left in terms of capacity for regular consumer gamers who just want to play a game when they get home from a hard day's work.
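To make the "high bandwidth" part concrete, here's a back-of-the-envelope sketch. The bus width, per-pin rate, and stack count are representative HBM3E-class numbers assumed for illustration; they aren't figures from the conversation:

```python
# Back-of-the-envelope HBM bandwidth with representative (assumed) numbers.
BUS_WIDTH_BITS = 1024    # one HBM stack exposes a very wide interface
PIN_RATE_GBPS = 9.2      # per-pin data rate, roughly HBM3E-class (assumed)
STACKS_PER_GPU = 8       # high-end accelerators carry several stacks

per_stack_tb_s = BUS_WIDTH_BITS * PIN_RATE_GBPS / 8 / 1000  # bits -> bytes, GB -> TB
total_tb_s = per_stack_tb_s * STACKS_PER_GPU

print(f"~{per_stack_tb_s:.2f} TB/s per stack, ~{total_tb_s:.1f} TB/s per GPU")
# -> ~1.18 TB/s per stack, ~9.4 TB/s per GPU
```

Stacking is what makes that 1024-bit interface feasible; no socketed DIMM comes close, which is also why HBM commands the price premium discussed here.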
Totally. And in the same breath, Nvidia is naturally incentivized to sell more AI data center chips. I think you had a note here that gaming GPUs are 8% of Nvidia's revenue, down from 35% in 2022, and that really illustrates it. Yes, obviously Nvidia loves gamers; gaming has been the core of the company for the last 30 years. But if you're running a business, you're naturally incentivized to chase the biggest markets, and unfortunately for us consumers, and all the gamers who helped build Nvidia into who they are today, we're just not the biggest market anymore.
And AI chips make something like 65% operating margins while gaming does about 40%. And only 8% of revenue comes from gaming GPUs now. So obviously, from a business perspective, it doesn't really make sense to put too much effort into gaming GPUs, which is kind of sad. What happened to the rest of us? Everything is AI.
The AI chips have the better operating margins, and the same thing trickles down to the memory companies. To your point, with HBM, the AI memory, you're just going to make more selling wafers full of that than wafers full of the memory needed for gaming GPUs.
Every once in a while, something comes up in the news and everybody goes, oh, that's the HBM killer, that's it, no more HBM, and everybody can get regular DRAM again. One such moment was when context memory storage came online, and everybody wondered whether it would kill HBM, because you wouldn't need to store as many weights, or as much context, on HBM; you could store it all relatively cheaply in a massive NAND storage system built for context memory and KV cache.
However, you can't just get rid of HBM. It's still the prime real estate for holding weights and doing inference and all that. There's no way you can just lose HBM. And if you look at all the newer chips and GPUs coming out, they're all increasing their HBM content. Nobody's putting in less HBM. HBM4 is here now, and all the GPUs are going to use it. They're going to put something like a terabyte of HBM on a single GPU.
So HBM content isn't going down, and I don't see how DRAM supply bounces back in the immediate future if everybody's adding more HBM. There was also some talk about people starting to use a combination of optics and DRAM to pool memory together and bypass HBM. It's a cool idea, I get it, but HBM performance is still unbeatable, so it's always going to be a premium memory. And given how much AI capex spend there's going to be in 2026, which we'll get to in this podcast, everybody's still going to want to buy GPUs and HBM.
Just to give you an idea of how bad this memory thing really is: I pulled some numbers, and it seems the GDDR7 price has roughly 4x'd in the last year. It's gone up four times. It's kind of ridiculous.
I saw an article from Tom's Hardware that estimated the cost of a GPU, the chip itself. The calculation showed it costs something like $300 to make the core GPU die, but the memory around it now costs more than that because of the 4x in GDDR prices. So the so-called video RAM, or VRAM, is now something like 80% of a gaming GPU's bill of materials.
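A minimal sketch of that bill-of-materials shift. The ~$300 die figure echoes the Tom's Hardware estimate above; the VRAM starting cost is an invented assumption, and board, VRM, and cooler costs are ignored, so the exact percentages are illustrative only:

```python
# Rough sketch of how a 4x GDDR7 move flips a gaming GPU's cost structure.
# $300 die echoes the estimate cited above; memory costs are assumptions.
die_cost = 300.0
vram_cost_before = 90.0                   # assumed pre-spike VRAM cost
vram_cost_after = vram_cost_before * 4    # the 4x price increase -> $360

for label, vram in [("before", vram_cost_before), ("after", vram_cost_after)]:
    silicon_total = die_cost + vram       # ignoring board, VRM, cooler
    print(f"{label}: VRAM is {vram / silicon_total:.0%} of die+memory cost")
# before: ~23%; after: ~55%, and the memory now costs more than the die itself
```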
How on earth are you supposed to make money selling GPU cards now?
Oh man, that's painful.
I was looking around a bit more, and TrendForce estimates that DRAM contract prices have doubled in a single quarter. That's a 100% increase in one quarter: from last Q4 to now, it's twice as expensive.
If anybody listening isn't sure what spot versus contract pricing is: the spot market is basically the price on the open market. If you were to go buy a stick of RAM right now, that's the spot price, and it usually varies all over the place based on demand, supply, all kinds of things, day to day.
But contract prices are different. Contract prices are memory makers making a deal with the HPs and the Dells, the companies that use DRAM in products like laptops. They have a volume estimate, so they say, look, we're going to make a million laptops this year and we need this much memory, and the memory companies go, all right, given that volume, here is our contract pricing. The commitments are made beforehand, and if in spite of that they're able to price it 2x in a single quarter, it's seriously getting out of hand.
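A toy model of the spot-versus-contract distinction just described, with all prices invented:

```python
# Toy model of spot vs. contract DRAM pricing (all numbers invented).
from dataclasses import dataclass

@dataclass
class MemoryBuyer:
    name: str
    contract_price: float | None  # $/GB locked in, or None for spot buyers

def price_paid(buyer: MemoryBuyer, spot_price: float) -> float:
    """Contract buyers pay their locked rate until renewal; others pay spot."""
    return buyer.contract_price if buyer.contract_price is not None else spot_price

spot = 3.0                     # $/GB on the open market today
spot_after_spike = spot * 2.0  # the "doubled in a single quarter" scenario

oem = MemoryBuyer("laptop OEM", contract_price=3.2)
retail = MemoryBuyer("retail buyer", contract_price=None)

for s in (spot, spot_after_spike):
    print(f"spot={s:.1f}: OEM pays {price_paid(oem, s):.1f}, "
          f"retail pays {price_paid(retail, s):.1f}")
# The OEM is insulated at 3.2 $/GB -- until the contract comes up for renewal.
```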
Indeed, and contract pricing is why Apple had a really good quarter. You could tell demand and margins weren't yet impacted by the memory price increases, because they were still on contracts that had been locked in in the past. But that's also the big headwind for Apple going forward: how repeatable is that? We know it's going to catch up with them. They're going to have to renegotiate new contracts, and memory prices will bite even at the volume they buy.
Right now is a good time for the memory makers. Memory is very cyclical and a hard business to be in, but now is a great time to walk into your customers and say, we're raising prices 100%.
And that's the thing. Apple is actually a good example, because I think they got into contract pricing with Kioxia well in advance. I don't know how they saw it coming. It's kind of amazing that they were able to sign a deal at lower prices than anybody else, and that really saved a few quarters of business for them. But like you say, it's going to come back soon enough, and they'll try to outrun it. No, nobody can outrun memory.
Okay, we always talk about the big memory players, right? Because of HBM, you see the same three big companies come up. But now I saw that Nanya Technology, a Taiwanese memory maker, posted 600% year-on-year DRAM revenue growth. And they don't make HBM or anything fancy. They make standard DDR4; they don't even make much DDR5, which is a small fraction, about 10% of their overall revenue, I think. But even they are making money.
So if you have spare cash lying around, I think we should just start a memory fab. Anybody will buy our stuff right now if we make memory. It's gotten that extreme.
I think I'll have a hard time convincing my wife to invest our money into a memory fab. But it's compelling, right? And this is why the memory market is cyclical: everyone looks at it and asks, how do you get 600% year-on-year revenue growth? You can increase your prices 600%, you can try to sell 600% more chips at the same price, or some combination thereof.
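Since 600% growth means 7x revenue, the decomposition is just multiplication; a quick sketch, with the splits invented:

```python
# Revenue growth factors multiply: 600% growth (7x revenue) can come from any
# price/volume combination whose product is 7 (splits below are invented).
for price_mult, volume_mult in [(7.0, 1.0), (3.5, 2.0), (1.0, 7.0)]:
    print(f"price x{price_mult} * volume x{volume_mult} "
          f"= revenue x{price_mult * volume_mult:g}")
```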
So obviously there's a temptation for everyone to say: supply is low, demand is massive, we should increase capacity, we should build more fabs. Then of course you get to the point where lots of capacity comes online, you can't raise prices as much anymore, and the whole cycle starts again from there.
You know, Nanya's price increases are the equivalent of selling a can of baked beans for $100. This is baked beans; why are we selling it for a hundred bucks? It's standard DDR4, not even the latest generation. It's ridiculous.
And it's not only about DRAM; we should talk about NAND flash too, because ever since context memory came online, there's been this whole rush toward NAND and storage systems. As Jensen announced, the Vera Rubin platform has 1,152 terabytes of NAND per storage rack, and Morgan Stanley estimates that in 2026-27 they're going to ship 70,000 Rubin racks, each with roughly 1,000 terabytes of NAND. So you can imagine how much NAND supply that's going to consume in 2027. Rubin alone, according to Morgan Stanley, is going to take up 13% of the global NAND supply.
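Running the figures quoted above through as a sanity check (the 70,000-rack and 13% numbers are as cited on the pod, not independently verified):

```python
# Sanity-check the Rubin NAND arithmetic quoted above.
racks = 70_000
nand_tb_per_rack = 1_000           # ~1,152 TB per rack per the Vera Rubin spec

total_eb = racks * nand_tb_per_rack / 1_000_000   # 1 EB = 1,000,000 TB
print(f"~{total_eb:.0f} exabytes of NAND for Rubin racks alone")

# If that really is 13% of global supply, the implied global NAND market:
print(f"implied global supply: ~{total_eb / 0.13:.0f} EB")
```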
Talk about AI driving everything.
It's ridiculous. But I think context memory is fascinating. We had a podcast with the WEKA Chief AI Officer, Val Bercovici, in one of the previous episodes, if anybody wants to check that out. And I also wrote a Substack article on why this is going to drive the price of inferencing down, although it doesn't seem like it right now. Opus 4.6 in fast mode is burning tokens and people's bank accounts at the same time. But overall, I think that eventually, once KV cache can be stored on mass storage, cache hits will improve and inference costs should go down. That's the long-term trend I see, but I don't think we're there yet.
No, I agree with you. My takeaway from listening to that pod was, to your point: if we can intelligently store the KV cache so that we get cache hits and not cache misses, then we can let the GPUs be lazy and not have to recompute things. So we can increase GPU utilization and improve the token economics. Yet at the same time, it says to me we're going to need more memory.
At every step of the hierarchy, we're just going to continue to want more. The cheaper this gets, and the smarter we are about offloading as much context as we can, the more Jevons paradox is going to drive people to put more and more context into their prompts. I still struggle today where I'll work with Claude Code or something, feed it a bunch of PDFs, and say, hey, analyze all these earnings calls, and it'll still go: that's too much, it's too big, or it takes too long. You can still see we're hitting the limits today.
So the more we do, the more the desire will grow, because at some point I'm going to say: great, now I want my agents to analyze these ten earnings calls that all happened in the last two weeks, and I want to give every agent six quarters' worth of context. Go do all this. There will be a point where it can, but only with innovations in where and how we store the KV cache, in such a way that it doesn't cost me a thousand dollars to do it, but maybe $10 of API credits.
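A minimal sketch of the mechanism being described: keep hot KV-cache entries in a fast tier (standing in for HBM), spill cold ones to a cheap tier (standing in for NAND-backed context memory), and count hits so you can see when the GPU skips recompute. All names and sizes are invented for illustration:

```python
# Toy two-tier KV cache: hot entries in "HBM", cold entries spilled to "NAND".
from collections import OrderedDict

class TieredKVCache:
    def __init__(self, hot_capacity: int):
        self.hot = OrderedDict()   # fast tier (stand-in for HBM), LRU-ordered
        self.cold = {}             # slow tier (stand-in for NAND context memory)
        self.hot_capacity = hot_capacity
        self.hits = self.misses = 0

    def get(self, prompt_prefix: str):
        if prompt_prefix in self.hot:
            self.hot.move_to_end(prompt_prefix)   # refresh LRU position
            self.hits += 1
            return self.hot[prompt_prefix]
        if prompt_prefix in self.cold:            # slower, but no recompute
            self.hits += 1
            self.put(prompt_prefix, self.cold.pop(prompt_prefix))
            return self.hot[prompt_prefix]
        self.misses += 1                          # GPU must recompute the prefix
        return None

    def put(self, prompt_prefix: str, kv_tensors: object):
        self.hot[prompt_prefix] = kv_tensors
        if len(self.hot) > self.hot_capacity:     # evict LRU entry to cold tier
            evicted, value = self.hot.popitem(last=False)
            self.cold[evicted] = value

cache = TieredKVCache(hot_capacity=2)
for prefix in ["docA", "docB", "docC", "docA"]:   # docA spills, then returns
    if cache.get(prefix) is None:
        cache.put(prefix, f"kv({prefix})")
print(f"hits={cache.hits}, misses={cache.misses}")  # hits=1, misses=3
```

The economics follow from the hit counter: every hit is prefill compute the GPU didn't redo, which is the cache-hit-driven cost decline described above.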
Not to throw a spanner in the works here, but there's the whole agentic AI, OpenClaw, hype cycle that we're in. And it's not just a hype cycle; I think the agentic AI era is about to become the dominant way we use inference, because the whole idea of typing stuff into a chat window and getting responses back is not sustainable. You want to give broad instructions, and then it goes off, does its thing, and gives you exactly what you want. That's agentic AI in a nutshell, right? But for all this to actually work, we need CPUs now, and you mentioned that earlier. These CPUs, if they have to operate agentically, need DRAM again. So memory is not going away.
Everywhere you look: the GPUs, or the XPUs, want tons of HBM and even more SRAM. The CPUs want more DRAM. We're going to do more compute in the network, so the DPUs are going to need more memory too. Just everywhere you look.
The Vera Rubin platform is supporting over a terabyte of DRAM, 1 to 1.5 terabytes I would say, in pluggable memory slots; they're not soldered on anymore. They want to fit as much DRAM as possible into these servers, because workloads are becoming more CPU-heavy. They're not just GPU-heavy and HBM-heavy; they're also CPU- and DRAM-heavy because of all the orchestration that has to happen between the agents. And then, like you were saying, everybody wants a lot of context. They want to put in big PDFs, put in all their Google Drive documents, and ask for a summary of everything.
All of this takes large-scale storage. So we're in an era where every use case we find just requires so much memory, and I don't see that changing in the immediate future. And there's some proof to what I'm saying, too, because Western Digital is booked up with long-term agreements all the way out to 2028-29. Some people are even talking about long-term agreements into 2030. Think about that: we're talking almost half a decade into the future, and people are signing supply agreements for storage. It's just ridiculous, right?
Totally. It feels unheard of. And on the other hand, you could say it feels like a bubble, right? People are just grabbing everything they can. But when I think about the value agents create even in my own life, where I'll spend 20 minutes planning and then let the machines go do the work instead of doing it all manually, it's a step change in my user experience, and I wouldn't go back.
So even if it feels like a bubble, and there's a mad rush, and more capacity will come online, will people always want to buy supply for the next three, four, five, six years? Maybe, maybe not. But the fundamental value I'm getting isn't going away, and I'm not going to stop. And a lot of the world has yet to experience that value. A lot of people in their day-to-day jobs haven't, because we're on the bleeding edge, early adopters tinkering with this stuff. Most workplaces have yet to realize how much of their workflow they could automate, how they could work differently, think differently, delegate.
So even if there is a bubble, and even if there's an overbuild, fundamentally it's just hard to see it unwinding. We live in a different world now, and we're on the forefront of it. Maybe we're sort of two years in the future, if you will, but it's going to come to the rest of the world. So it feels crazy to say there won't be sustained demand for memory.
We'll revisit this conversation at future checkpoints, maybe next year or two years down the line, and find out whether this whole podcast aged like wine or aged like milk.
True. Actually, this is a good jumping-off point for what you're saying: we are not done building out this AI infrastructure. It's still happening; people are still getting into actually using it. I just got my mom using Perplexity. I told her, ask it whatever you want, and she was amazed it could do all this answering and planning. I showed her some of my Claude stuff, like, look at how it plans for me, and she went, oh my god, are people really believing this thing and doing things in real life based on what it says? I said, yeah, I don't think it says terrible stuff. It could act weird in some cases, I guess, but for the most part it's useful. It's pretty amazing.
In that sense, I know you and I track this stuff, but if people go look at the recent earnings calls of the big hyperscalers, Meta, Google, Amazon, Microsoft, and see how much each company plans to spend on capital expenditure in 2026, it's on average 70% higher. Some of them have actually doubled their spending. People expected Google would spend 120 billion in 2026; they announced they're spending 180 billion. That's 50% higher than expected. And Amazon announced 200 billion. These four companies together are over 600 billion.
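The arithmetic behind those percentages, using only the figures quoted here:

```python
# Capex figures as quoted in the conversation (2026 plans, in $B).
google_expected, google_announced = 120, 180
amazon_announced = 200

uplift = google_announced / google_expected - 1
print(f"Google came in {uplift:.0%} above the expected $120B")   # -> 50%

# Google + Amazon alone are $380B; Meta and Microsoft take the quoted
# four-company total past $600B.
print(f"Google + Amazon = ${google_announced + amazon_announced}B")
```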
What are they doing with all this money?
I mean, clearly they're investing most of it in data centers: in real estate, in power, in compute, in memory, in networking, in storage. And it's not only that they're investing more each year; they're actually accelerating the amount they invest. Now, you can't accelerate forever, and we might be getting close to the amount they can commit to per year, because these companies' free cash flow is also going to dwindle, right? We're talking about companies making $200 billion a year that are now committing to spending $200 billion a year.
But again, like I said, it's not going to go back down to zero. The acceleration will slow, the growth will slow. But we're in a new world where these hyperscalers especially are committing to building the infrastructure needed to power all of this artificial intelligence as it expands to the whole world.
So I think we could go through it company by company. Let's start with Google: what is Google spending all this capex on? A reminder to everyone that at the end of the day, Google is an advertising business. Search is how they monetize; YouTube is a fantastic way they monetize. But at the end of the day, their revenue is from advertising. I know a lot of people may not be terribly excited about an advertising revenue model; on the other hand, it has fantastic margins, and we could have a long conversation over a beer about its value. If I'm a user who wants to buy things and doesn't even know the products exist, there's actually a lot of value in putting something in front of me where I go, wow, I didn't realize I wanted that, but I actually do. Conversation for another time.
When you're Google, a lot of your top-line revenue is from advertising, but some of it of course is from the cloud business, Google Cloud. So break it down: think about advertising and think about cloud. Interestingly, when Google invests in capex and buys Nvidia GPUs, for example, most of the time those are meant to be rented out to customers as part of the cloud business. This is why investors generally feel good about cloud businesses, at least early in this AI capex boom: there's a shorter path to return on investment. It's very clear. If Google, or any cloud, can get its hands on Nvidia GPUs, there are definitely customers waiting to rent them.
The trade-off is lower margins. If you're an Oracle or a NeoCloud, it might be 10 to 20% margins; if you're a Google or a Microsoft, maybe you can get 30% margins or so, but it's not that great. You take the lower margins, but you have confidence you can invest that money and get a return on invested capital, ROIC, pretty quickly. Now, of course, Google also spends money on TPUs, which its cloud business can rent out at, say, 40 or 50% margins: higher, because there's no merchant silicon vendor in the middle. Of course, there's still Broadcom, and Hock Tan, and that's a conversation for another time. So these things aren't cheap per se, but they're cheaper than merchant silicon.
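A toy payback calculation shows why those margin tiers matter; every number below is invented, and real ROIC math would include depreciation, utilization, and power:

```python
# Toy payback-period sketch for rented-out accelerators (all numbers invented).
def payback_years(capex_m: float, annual_revenue_m: float, op_margin: float) -> float:
    """Years to recoup capex from operating profit (no depreciation/discounting)."""
    return capex_m / (annual_revenue_m * op_margin)

capex, revenue = 1_000.0, 600.0   # $M spent, $M/year of rental demand (assumed)
tiers = [("NeoCloud", 0.15), ("hyperscaler cloud", 0.30), ("in-house TPU", 0.45)]
for label, margin in tiers:
    print(f"{label} @ {margin:.0%}: {payback_years(capex, revenue, margin):.1f} years")
# Same hardware spend, very different payback clocks depending on margin.
```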
Where the real moneymaker is in all of this today is Google building out TPUs to drive its core advertising business, using generative AI to make advertising better. I've written about this a bit on Chipstrat, but just think about how LLMs can understand intent. They understand what you're saying much better; it's not just keyword matching anymore. All those decades of SEO, where people wrote articles stuffed with keywords just so a search would find a match: LLMs instead say, no, I know what Austin meant. I know a lot about Austin, and when he searched for this, I know what he meant, so let's understand his intent better and surface a better ad. And Google can even go so far as to use LLMs to rewrite the ad copy on the fly to better match me.
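A stylized contrast between keyword matching and intent matching. The synonym table below is a toy stand-in for what an LLM does when it resolves what a user actually means; nothing here reflects Google's real systems:

```python
# Keyword match vs. a crude intent match (the synonym table is invented).
SYNONYMS = {"sneakers": {"sneakers", "trainers", "running shoes"},
            "laptop": {"laptop", "notebook", "ultrabook"}}

def keyword_match(query: str, ad_keywords: set[str]) -> bool:
    """Literal word overlap, the old SEO-era mechanism."""
    return any(word in ad_keywords for word in query.lower().split())

def intent_match(query: str, ad_keywords: set[str]) -> bool:
    """Expand the query through synonyms, a toy stand-in for LLM intent."""
    expanded = set(query.lower().split())
    for canonical, synonyms in SYNONYMS.items():
        if any(s in query.lower() for s in synonyms):
            expanded.add(canonical)
    return bool(expanded & ad_keywords)

ad = {"sneakers"}
query = "lightweight trainers for marathon season"
print(keyword_match(query, ad))  # False -- no literal keyword overlap
print(intent_match(query, ad))   # True  -- "trainers" resolves to "sneakers"
```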
You can see how generative AI can make a huge impact on the core advertising business, making ads better, which ultimately means people are willing to pay more for those ads. But then, and we've seen this with Meta, Google's doing it too: you can use generative AI to actually generate the content. So you can have almost infinitely more ads, because now even a small business doesn't need a designer. Ideally they just push buttons, or in the most north-star use case imaginable, they just hand over access to their bank account and say, I am this type of business: please generate the ads for me, generate the videos for me, do it all.
So for Google, when we're talking about spending $180 billion of capex or whatever: you want to buy Nvidia GPUs to rent out, and you want to build a ton of TPUs, obviously to train your own Gemini models, but in the very near term to drive the core business, which has something like 70% margins. Any extra dollar you can make there is a really good return on investment.
I'll pause there. But what do you think? Do you think those are all pretty defensible reasons to spend?
Compared to the other hyperscalers we'll talk about, speaking as someone who doesn't spend much time thinking about the advertising business model and its revenues: from what I hear you explaining, Google has an almost unfair advantage in this scenario. They can feed their AI expertise back into their advertising business, which is a high-margin business, maybe a little lower margin if they're running it with AI now, but still high enough. And they still have a massive business, and the money from that can fund more AI investment. So they seem well positioned.
Yes, they're fully vertically integrated. Not only do they make their own TPUs, they also do their own AI, and then they have the channels to take that AI to market: selling it directly, whether Gemini or AI-as-a-service at the cloud level, all the way to just putting it into Gmail to make it better for you, which they've done for a long time, or into YouTube to make it better for you. So they do sort of have an unfair advantage: they can optimize everything and they can monetize quickly. They don't have to convince someone else to use the AI; they can immediately take it and make their ads better, or make their content better.
I'm thinking about the point you made, though: doesn't generative AI cost more? So maybe it's not quite as profitable to use it. Google has always used AI in their advertising business; historically it ran on CPUs, and it has moved to GPUs and TPUs. And generative AI is obviously computationally expensive. You're generating a lot, and it requires memory and everything. On the other hand, on their earnings calls over the last several quarters, they've actually shown revenue growing at a pace matching the growth in expenses. It's not definitive, but they're trying to point at that and say: actually, we're generating as much new revenue as new expense, so maybe it is paying for itself.
That's good to know, actually, because from a Google investor's point of view, that's a really good sign to have.
Totally. Now, of course, these will be data points to track over time. Can they always keep squeezing? Can they make their ads so much better that even though they cost more to serve, they generate even more value? Or does it taper off?
Between Nvidia GPUs and Google's own TPUs, do you see Google heading toward using only their own TPUs for everything they do and not using any Nvidia GPUs in their cloud business? What do you think happens there?
Their cloud business will always buy Nvidia GPUs. As a cloud business, think of yourself as a property owner: you need to own different types of properties to rent out to different customers, and you never want to own all the same type of commercial property, because only a certain set of customers wants that. So if you're a cloud business, you want choice. You want Nvidia GPUs, you want AMD GPUs, you want TPUs, because your customers will want choice; they all have different trade-offs. Some enterprises might be optimizing for TCO, while some model-training startups, say you're ElevenLabs or someone, maybe just want access to the highest-end GPUs. So Google will always buy Nvidia GPUs to rent out to customers.
For their own workloads, you have to ask: will Google, and will Amazon, increasingly use their own custom chips, and where do Nvidia or AMD GPUs come into play internally? GPUs are of course very flexible, and when you buy them for internal workloads, the nice thing is you can move them around as needed. These are huge companies with departments and teams all grabbing for resources, saying, I need a GPU, I need a GPU, I have a feature I want to build. GPUs can move between departments and serve different training workloads, different inference workloads, whatever.
Internally, though, for the big workloads these companies have and know they'll have for a long time, I think those will continue to shift to their own custom silicon, optimized for those workloads, because at hyperscaler scale it just makes sense. Look all the way back to Google's VCU, the video coding unit ASIC. Every time you upload a video like the one we're recording to YouTube, you upload it at one resolution and frame rate, and they want to encode it into 720p and 1440p and 640 and whatever, so that whether people have a high-bandwidth connection or a spotty one, they get the best experience by being served different sized videos. It's a task they do over and over, all the time, because gazillions of hours of video are uploaded every minute. For that use case, it made a ton of sense for Google to say: we could run this on CPUs, we could run it on GPUs, but we should actually make a custom chip designed for this workload.
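The workload described boils down to a small, endlessly repeated mapping, which is what makes a fixed-function ASIC attractive at that scale. A sketch of the serving side, with an invented encoding ladder:

```python
# Toy adaptive-rendition picker for an encoding ladder (numbers invented).
# The fixed, repetitive nature of this job is what justified a custom ASIC
# at YouTube scale.
ENCODING_LADDER = [   # (height, approx. required bandwidth in Mbps)
    (1440, 16.0),
    (1080, 8.0),
    (720, 5.0),
    (480, 2.5),
    (360, 1.0),
]

def pick_rendition(user_bandwidth_mbps: float) -> int:
    """Return the tallest rendition the user's connection can sustain."""
    for height, required_mbps in ENCODING_LADDER:
        if user_bandwidth_mbps >= required_mbps:
            return height
    return ENCODING_LADDER[-1][0]   # fall back to the smallest rendition

print(pick_rendition(12.0))  # 1080 -- decent connection
print(pick_rendition(0.5))   # 360  -- spotty connection
```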
Generative AI inference workloads aren't as tightly constrained, but as we discussed with Sarab from Microsoft about Maia, there are still lots of trade-offs they want to make for their particular workloads, like: hey, we want lots of FP4 and FP8 precision. And think about Microsoft's big workloads: they're serving a ton of OpenAI, a ton of Copilot. They have insight into these internal roadmaps for years out, and they can make silicon decisions to support those roadmaps.
I'm spending a lot of time on this, but at Google's scale of workloads, they have enough runway and enough money that it will make sense to keep optimizing TPUs for their internal workloads, essentially co-designing silicon for them. Their cloud business will always offer GPUs, but internally, I think all of these hyperscalers will be incentivized to have as much co-designed silicon as they can for their biggest workloads.
Now that you mention Microsoft: I don't think Wall Street was very happy with how much money they're spending. What do you think of that? Why are they not happy?
Meta went up on their capex news and Microsoft went down. So let's compare Microsoft and Google. Microsoft is building its own silicon, and Google is building its own AI silicon; of course, Google is way further ahead, ten years in, while Microsoft is a few years in. Moving up the stack, both companies are training their own AI models. Microsoft is more dependent on OpenAI, which of course was a very good bet early on, but Microsoft is also trying to create its own models. So again, Google sort of has an advantage here: they have internally trained models, they've been doing it for a long time, they're really