
Author: Weights & Biases | Date: October 2023
Quick Insight: This summary is for builders tired of GPU scarcity and researchers needing specialized infrastructure. It explains why the "jack-of-all-trades" public cloud model fails for massive AI training.
Corey Sanders spent 20 years building Azure before joining CoreWeave. He argues that AI workloads are so unique they require a complete departure from the general-purpose cloud model.
"I don't care if the APIs are consistent and commoditized."
"GPU is the most expensive asset... you need a lot of storage to deliver AI workloads."
"The level of quality, performance, and capability... will not win workloads in 2 years."
Podcast Link: Click here to listen

I don't care if the APIs are consistent and commoditized. The level of quality, performance, capability, and experience that we deliver today will not win workloads in two years. For anyone who's deployed on a public cloud, especially with GPUs, when you suddenly have a burst of capacity need, you're always sweating. And so what I like to think about for us is: how do we make all of that complexity go away? I think CoreWeave already got that. CoreWeave is the best place to run AI training workloads, and it's why people use us. It's focused on basically bringing as much data and throughput into the GPU as you can, and that's a very unique requirement that AI workloads have, because the GPU is the most expensive asset across all of that componentry. And so that allows us to make these assumptions that are simplifying for us and daunting for the public clouds, such that even the public clouds will come in and talk to us about enabling us for some of their workloads or customers.
You're listening to Gradient Descent, a show about making machine learning work in the real world. I'm your host, Lukas Biewald. Today I'm talking with Corey Sanders, who is currently the SVP of product at CoreWeave, and prior to that a longtime executive at Azure, where he worked on compute projects and a whole bunch of other things. It might seem like maybe this is a conversation we've sponsored somehow, but this is actually a conversation I would be excited to have regardless of where I was working. You should be aware, though, that Corey is one of my colleagues now that Weights & Biases has been bought by CoreWeave. But we try to stay as objective as possible and keep it interesting. I hope you enjoy it.
All right, Corey, do you want to sing us in? Should we do this? [laughter] "Can I be on your podcast?" This is a little beat that I created begging Lukas to allow me to join this session. So yeah, we'll put a lot of disclaimers in here, but we're now co-workers.
All right. So this certainly is not an unbiased interview. And I guess I can't really hide my biases, Lukas. We're all about biases here. I haven't mentioned this in the past, but since Weights & Biases was bought by CoreWeave, we actually run all these podcasts through CoreWeave compliance. And I'm a little disappointed that we've never triggered anything, including the episode with Martin Shkreli, who's a little controversial. I try to make this interesting, but we've never had compliance complain about anything. So maybe this could be our first time. Let's go for it. Let's go for the win here, Lukas.
There are going to be big quiet blocks. We'll just chop it up. Or really jumpy cuts. [laughter] Yeah. Look out for those. "I won't answer that stupid question."
It's also funny researching you. Here's what I noticed as I tried to Google you and find out about your past: you did this amazing series of videos for Azure where you were taking, can I say, boring topics? Even as a technical guy, I'm like, man, this video is on backup [laughter], and not just backup, it's low-cost backup alternatives for this one situation. I'm an Azure customer and I'm not really that interested in this, but then you sell it so hard. You make it actually interesting. I'm like, why am I watching this? And then something happened, like maybe you got a promotion, and suddenly you're flying around the world, going from terrible production value, where it looks like you just got your camera out, to unbelievable production value [laughter], and in fancy places. So did the marketing team notice they had a real talent here?
It was probably more that they were trying to stop me from doing my show [laughter], so they said, we'd be willing to do this other thing if you promise to stop making that POS show, please, it's embarrassing the company. But no, those trips for Cloud Cultures were amazing, actually. That was fun, man. That's great. And you were doing all this wild stuff. How did you even set it up?
We were having a conversation about a data center expansion, I think it was in London actually, and we were facing this first wave of public sentiment of: we don't want these Western companies building data centers right in our backyard without a lot of value to us. And I think the realization was, gosh, the services running in there are from that guy down the street and that big business down the road; they're all local businesses. And so this was the concept: how do we translate that local culture into an interesting storyline? The culture feeds the technology needs, which then feeds the demand.
And some of the interesting things we did with the series that I think were very foreign for Microsoft at the time: we refused to say Microsoft, or name Microsoft products, in the series. It was all about focusing on those end customers and what they were doing. And that made it a lot of fun, man, because it was just talking with people about their stuff, which I guess is kind of what you do on this show.
I guess the good news is you didn't need to budget for it. It's just Corey walking around with a phone. Oh, that's 100%. There was zero budget; I think it was my cell phone I was recording on. So the ROI was through the roof. And no studio, you're in the cafeteria. Did anyone complain, like, why are you filming while we're trying to eat? I was bringing guests in and we were just sitting in the cafeteria. [laughter] Yeah. No, man. Why would we not? Pretty cool. We really upgraded when we brought microphones in. That was the big thing: wow, we have microphones now. [laughter] Versus, again, the later production value of 16 cameras everywhere I went, just tracking me. So that was fun.
All right. So you were at Microsoft, working on Azure most of the time, for about 20 years, and then you came to CoreWeave. One question I wanted to ask, and this is maybe also personal interest: what did you notice? Was it a jarring experience to go from 20 years at one of the biggest companies in the world to a startup? I always kind of worry about execs when I hire them if they've spent the last five years at a big company. How are they going to feel? Are they going to feel like it's total chaos?
Well, you know, it's funny, because when you've had 20 years at any company, you get to experience a lot of different parts of it, a lot of different groups of people, and so on. In the early days at Microsoft, I had the opportunity to work on the very beginnings of infrastructure as a service. I wrote the first spec, worked through the beginnings of containers and the Kubernetes partnership, and I was part of the first Red Hat partnership with Microsoft. CoreWeave actually reminds me a lot of those early days, where, a little bit to your point, there's a sort of beauty to the haphazard. You know what I mean? If you take a far enough step back, it makes perfect sense, but if you're real close, it looks like insanity.
And I think that can exist inside big companies as well as in startups, and it's probably in pretty much all startups. The fun part of the job is to find that beauty, find that harmony, in everything the company is doing. And the company is doing amazing, groundbreaking work in almost every facet it's involved in. So it's an exciting and thrilling place to be. But it is fast-paced, that's for sure.
So, okay, but here's really the question I want to ask you. I really just have one core question that I'm curious about: what do you think gives CoreWeave the right to exist? There were many years of these clouds taking off, and it was sort of chaos back in the aughts, and now there are really about three clouds. And then you see these new neoclouds. What's going on that there are new options here?
Yeah, I think there are a few things at play. I think the first is that this AI revolution, this pivot toward massive amounts of new innovation and new ideation surrounding AI, has created a new tier of business-critical workload. I compare it to the analytics wave of maybe 10 years ago, where there was this realization that analytics could drive and fundamentally change your business. We're seeing that again now with AI, and the reason that's important is that it sets the framework for customer need for best-in-class solutions.
Right? When something is business-critical and has a relatively high cost associated with it, that is when customers are going to be willing to look for the very best there is, and be willing to separate from existing contracts and existing environments they're comfortable with to get best-in-class. We saw it with analytics. I would argue companies like Snowflake and Databricks, as examples, ended up existing outside of the jack-of-all-trades public cloud environment because they enabled a best-in-class offering versus just a best-in-suite solution.
And I think the AI revolution is now seeing that same demand, and the same opportunity for a company like CoreWeave to deliver best-in-class services and capabilities for this business-critical and very high-cost requirement. And I'm excited to be a part of that momentum. Although, I guess one difference, and it's an interesting analogy with Snowflake and Databricks... You could just say you agree with me, Lukas, that's fine. Oh yeah. Yeah, it is very exciting to be a part of. [laughter] No, you know, forget it. Go on with your question. All right. I mean, I think it's an interesting... gotcha journalism edition. Yeah. Hard-hitting. Hard-hitting. [laughter]
So Snowflake and Databricks run on top of the clouds, and they actually pay money to the clouds. One difference here that's maybe super surprising is that Microsoft and Google are actually customers of CoreWeave. How does that happen?
Yeah, it's interesting. Although I will say, I suspect many of them use components from Snowflake and Databricks as well. But yeah, they're both customers, and in some ways partners, in some situations and circumstances. I think part of that is that there are elements of what makes an AI cloud, or a neocloud, special, in decisions and designs and implementations, that make them not fungible, right? My two favorite examples: one of the things that I think makes CoreWeave super unique on the software side is our object storage, our implementation of object storage, which for many people, like backup, is a snoozefest: you're talking about object storage. And that's coupled with a caching solution that we call our LOTA cache, because you need a lot of storage to deliver AI workloads.
Look, it's focused on basically bringing as much data and throughput into the GPU as you can. And that's a very unique requirement that AI workloads, particularly training workloads, but certainly inferencing as well, have: you have to feed the GPU so that the GPU is not idle, because the GPU is the most expensive asset across all of the componentry. So would I say, gosh, you should use this to run your e-commerce website? No, probably not. I don't think our LOTA cache with our CAIOS storage would be the right thing to run an e-commerce website. But we don't need to design for an e-commerce website, right? We can design for what we're built for, which is AI.
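Corey's "feed the GPU so it's not idle" point is essentially a prefetching problem: stage data ahead of compute so the expensive device never waits. Here is a minimal, hedged sketch of that idea in plain Python, with a bounded queue standing in for the caching tier and a sleep standing in for GPU compute; the timings and queue depth are invented for illustration, and this is not CoreWeave's implementation:

```python
import threading
import queue
import time

def producer(batches, q):
    """Simulate the storage/cache tier: fetch batches ahead of compute."""
    for b in batches:
        time.sleep(0.01)  # pretend per-batch I/O latency
        q.put(b)
    q.put(None)  # sentinel: no more data

def train(batches, prefetch_depth=4):
    """Overlap data loading with 'GPU' compute via a bounded queue."""
    q = queue.Queue(maxsize=prefetch_depth)
    t = threading.Thread(target=producer, args=(batches, q))
    t.start()
    processed = 0
    while True:
        b = q.get()       # ideally already waiting, so compute never stalls
        if b is None:
            break
        time.sleep(0.01)  # pretend per-batch GPU compute
        processed += 1
    t.join()
    return processed

print(train(list(range(8))))  # -> 8
```

Because the I/O and the compute overlap, total wall time approaches the larger of the two costs rather than their sum, which is the whole point of keeping the GPU fed.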
And so that allows us to make assumptions that are simplifying for us and daunting for the public clouds, because how do they make those types of assumptions based on workload? It goes all the way to the physical side when you think of something like liquid cooling. Liquid cooling is a hard thing for fungible data centers to take advantage of. You've got to plan ahead, you've got to deliver all the componentry, you've got to build the right piping, and so on. For us, we can make those assumptions because we know we're going to need that amount of liquid-cooled capacity. I think the public clouds have a harder time setting that space aside. So again, making the simplifying assumption that we're delivering AI exclusively allows us to build better services, solutions, and experiences, such that even the public clouds will come in and talk to us about enabling us for some of their workloads or customers.
So, okay, liquid cooling, obviously badass. I think we're about the same age, clearly, based on the fact that I understand your obscure musical references, and I always wanted a liquid-cooled desktop in my youth. But like, [laughter] with green liquid that you can see going through it, right? Yeah. To actually cool things, you just need to see the liquid going through it. Anyway, go on. I just want to trade off reliability for slightly faster. Absolutely. I want that. That's right. And that trade-off always wins. Yeah. [laughter] Yeah.
So if I'm a customer of a liquid-cooled data center, if I'm actually using it, how does that manifest for me? I can buy these GPUs in lots of different data centers. Why would I care if it's liquid cooling?
Well, some of the GPUs require liquid cooling, basically by their nature. Some of the latest, greatest, largest GPUs actually require liquid cooling, so you can't get them unless you have a liquid cooling solution. So that's at least one reason. And for others, look, transparently, I think the verdict's still out on the full extent of the value. It definitely improves efficiency: on our side, it improves the ability to serve those GPUs with lower HVAC costs, lower air conditioning, because we're able to use liquid cooling instead. And the hope is that that translates into cost savings over time. So I do think the stronger case is the GPUs that are so powerful, so strong, that they're actually going to require it, and we're able to deliver those much more effectively, more efficiently, and at larger scale than others out there.
That makes sense. And what about storage? That's another one where I've used object stores my whole career. It's a super simple API, and I haven't really thought deeply about different ways it might work. But are you saying the CoreWeave object store is kind of faster at serving files into a computer? Because if I'm an e-commerce company, I also want fast downloads for my customers. It's not totally different. What's going on?
I think a key component of it is the caching tier and how the caching tier is designed across the GPU framework. It basically makes assumptions around a multi-GPU deployment and serves the cache from the GPU, or from surrounding GPUs, so it lets you optimize a scaled GPU deployment and leverage the cache effectively that way. And the e-commerce example, probably the reason I like to use it, is that with any caching architecture you've got to make assumptions about your read-to-write percentages. If your cache makes the wrong assumptions, you expected this many reads and this many writes, a cache can actually hinder performance, because you end up having more writes than reads. And for a lot of these workloads, outside of a checkpointing process, which probably demands less of back-end storage than local sharing anyway, these types of training workloads don't have as much write-out.
And so, yeah, you're right, some components of an e-commerce site may be exposing content, and I'm maybe not an expert on designing an e-commerce site, but certainly the management of the basket, the management of actually placing an order: these sorts of things take consistent writes to execute. And I think this type of cache is not as optimized for that kind of frequent, high-consistency write pattern, if that makes sense. Totally. Totally.
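The read-to-write point can be made concrete with a toy LRU read cache in which writes invalidate entries. This is a hypothetical illustration of the general caching principle Corey is describing, not CoreWeave's design: a training-style, read-heavy workload gets a high hit rate, while a basket-style, write-before-read workload defeats the cache entirely.

```python
from collections import OrderedDict

class ReadCache:
    """Toy LRU read cache; writes invalidate, so write-heavy traffic
    erodes the hit rate the cache exists to deliver."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()
        self.hits = self.misses = 0

    def read(self, key):
        if key in self.data:
            self.hits += 1
            self.data.move_to_end(key)  # refresh LRU position
        else:
            self.misses += 1
            self.data[key] = True  # fetch from backing store, then cache
            if len(self.data) > self.capacity:
                self.data.popitem(last=False)  # evict least recently used

    def write(self, key):
        self.data.pop(key, None)  # invalidate: the next read is a miss

    def hit_rate(self):
        total = self.hits + self.misses
        return self.hits / total if total else 0.0

def run(ops):
    c = ReadCache(capacity=100)
    for op, key in ops:
        c.read(key) if op == "r" else c.write(key)
    return c.hit_rate()

# Training-style traffic: the same shards read over and over.
read_heavy = [("r", k % 50) for k in range(1000)]
# Basket-style traffic: every read of a key follows a write to it.
write_heavy = [op for k in range(500) for op in (("w", k % 50), ("r", k % 50))]

print(run(read_heavy))   # -> 0.95: only the first pass misses
print(run(write_heavy))  # -> 0.0: invalidations defeat the cache
```

The same cache helps one workload and does nothing for the other, which is why a cache tuned for mostly-read training data is a poor fit for consistent-write order management.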
Are there other things you expect to change when you look forward at the workloads that are coming? People wanting to run agents all over the place, inference constantly changing. Where else do you expect to have purpose-built stuff, where you'd make different decisions? Yeah. Yeah.
Well, one of the areas I think is super interesting with inferencing, and it's potentially questionable whether it's a point in time or a forever thing, is that a huge percentage of the time an overall inferencing call takes is spent inside the GPU, right? The actual inferencing, the response, whether it be an agent or a chatbot, happens inside the GPU, and that dramatically reduces the percentage of overall time-to-serve that sits on the network. Which creates some interesting opportunities around network flexibility, right?
And so one of the things I like to think about a lot as we design, and that I also think puts us in a different position from others, is that flexibility in network positioning and network deployment gives a lot of opportunity to change the way we think about workload requirements. Whether that be availability: suddenly going to five different data centers to make your calls, and being able to use whichever one is available, you can dramatically improve the availability of a given serving app. Or, I would argue, the ability to react to bursts of capacity need suddenly becomes a lot easier. Anyone who's deployed on a public cloud, especially with GPUs, when you suddenly have a burst of capacity need, you're always sweating that you're going to outstrip capacity and it's not going to be there in that location, in that region, in that zone. With these types of workloads, I think you have a lot more flexibility around your original deployment. And so what I like to think about for us is: how do we make all of that complexity go away?
How do we say: you may want to run a given model off the shelf, or you may want to run a deeply customized model with a bunch of custom code you're going to write to set it up; regardless, you shouldn't have to care about how you're going to get your capacity. You should be able to say, in loose terms, I kind of want it here, I kind of want it there, send that to the platform, and the platform takes care of it for you. I feel like that's a pretty novel approach, because these workloads are different from an e-commerce site, where the network time to the actual VM is meaningful. Because so much of the time is spent in the GPU, that suddenly becomes less meaningful. Now, does this change over time? I don't know. This is one of the fun things about working in this business: you've got to be fast and you've got to listen, sometimes more than talk, which I'm not doing well on this podcast, to be clear.
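The argument here is arithmetic: when GPU time dominates an inference call, routing the request to a farther region with free capacity adds only a small fraction to end-to-end latency. A quick back-of-the-envelope sketch, with all numbers invented for illustration rather than measured:

```python
def overhead_fraction(gpu_ms, network_ms, extra_network_ms):
    """Share of end-to-end latency added by re-routing to a farther region."""
    base = gpu_ms + network_ms
    return extra_network_ms / (base + extra_network_ms)

# Illustrative orders of magnitude (assumptions, not measurements):
# a large-model inference call spends seconds inside the GPU, while
# cross-region routing adds tens of milliseconds on the network.
gpu_ms = 2000   # time inside the GPU generating the response
network_ms = 30 # round trip to the nearest region
extra_ms = 60   # penalty for a farther region that has free capacity

print(round(overhead_fraction(gpu_ms, network_ms, extra_ms), 3))  # -> 0.029
```

Under these assumptions the detour costs under 3% of total latency, which is why availability and burst capacity can be traded for placement so cheaply; for an e-commerce request measured in tens of milliseconds, the same 60 ms detour would instead dominate.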
Well, that's your role. I'm listening and you're talking. [laughter] But let me rewind for just a minute, because you started out saying, hey, you take these really boring topics and make them interesting. And on this podcast so far, I've talked about object storage, I've talked about liquid cooling, I've talked about global capacity availability. These are super boring topics. Is there anything... But you're making them good. Really? I feel like you're selling these topics. I'm actually more interested in these topics than the Tuesdays with Corey topics. Well, I don't know, I'm enjoying it. There's no reason to be mean to your host, to your guest, excuse me, but that's fine. [laughter] Really, I actually thought it was pretty interesting.
I guess the thing in the back of my head, though, as you're saying this: you were at Microsoft for a long time, and I've worked with Microsoft for a long time and had folks on the podcast who are super smart about infrastructure and AI. And clearly they must have liquid cooling if they're offering the same GPUs. They must be doing the same stuff and thinking along the same lines. So do you think there's a durable advantage here? Is the idea that CoreWeave gets to a similar scale and just continues to compete, and now there's a fourth cloud? Or is there something fundamentally different happening, in your mind?
Yeah. On those points: fungibility is the lifeblood of a public cloud. It's the way the business model works, it's the way the apps work. We are going to build services that meet the needs of a massive scale of customers, for tons and tons of different workloads. And so, like the liquid cooling example: yes, there is liquid cooling in the public clouds, but it's often an add-on to a given location, which means it can't work in every location. It's not easy to add, and it's also not easy to scale in large quantities. That's maybe a very precise example, but I feel it's very illustrative, as they say, of the problem.
And so it's very hard for these sorts of special capabilities. I'd say the same for LOTA: certainly that type of technology could be built in the public clouds. And as you go down the stack, another good example is our Slurm integration on CKS, the SUNK capabilities we've got. Yes, these could definitely be built in other environments. But just like in other innovation and revolution, I would argue the Databricks and Snowflake work could also have been built in those public clouds. The opportunity is to be laser-focused on delivering to a specific set of scenarios, shedding any concern about, does it work in these cases? Will it break in those cases? It's like, I don't care, right? Those are not cases I'm going to care about. Does it work, and does it deliver unique value, in this usage case? As long as that remains true, I think we have the opportunity to continue to innovate and continue to differentiate. And then, just like any technology advance, it's all about continuing to stay ahead. So we can't slow down, we won't slow down, and we'll have to see where the rest of the market moves in innovation. But I do believe it's a sustainable advantage to have that focus, and we just have to continue to take advantage of it.
Okay, this is going to sound like a real softball, but it's a really genuine question. Hard-hitting so far. One thing that is actually pretty remarkable about CoreWeave is how much real customer love there is for what, before I joined, I really thought was a commodity product. Honestly, I think most people who aren't deep in this and trying to build big data centers feel like it's a pretty commodity product. But it does seem like people are getting wildly different goodput and TFLOPS out of the same machines, because of technical details in how they're set up. Or maybe, I sort of suspect, it's that the way CoreWeave engages with customers feels friendlier than the other clouds. Having been on the other side of it for a long time, I do think there's a lot of room for customer service to improve, especially technical customer service. But what's your take? Do you notice a difference in the way Azure versus CoreWeave engages with customers, from the inside? Does that feel right?
Absolutely, look. There are maybe two things to say. I feel the product is real. Beyond the examples I gave, there are numerous additional ones. One that I didn't actually spend time on, but is key for customer love, is our observability: the awareness and depth we have into the infrastructure, on what's happening and why it's happening, and then the automated integration of that into Mission Control, into the operations the end customer is running. It simplifies the ability to run these jobs. This is part of the magic of something like SUNK, the Slurm on CKS, with the observability integration: customers are able to manage their workloads, manage their training jobs, in a much simpler way. So I think the product line is a big part of the love, from object storage to the CKS orchestration to Slurm on top of CKS to the observability. But I do also believe, to your point, in that customer engagement. We've got this really amazing customer success, customer enablement team that is so focused on helping customers figure it out.
You know, I think we start with the assumption that this stuff is complex, right? It's hard to get things working, especially with the fast pace of the infrastructure. For many of our customers, getting down to the infrastructure and using it effectively is critical to success, so we enable that by working closely with them. Even the deep engagement of our CTO with the day-to-day activities of our customers is stunning. He is out on the channels; I think he has double the Slack messages of anyone else in the company, because he's just out there working with customers, helping them solve their problems, and making sure they're using the platform in the best way possible. That's a pretty unique outcome. I would argue the big clouds probably have that, but with their top nine customers, or whatever it may be. From an overall percentage of our customer base, we're able to deliver on that for a much greater share. And so I think that delivers a lot of love for the platform. That alone wouldn't work, but coupled with all the product capabilities we talked about, the customer support to get people using those products in the most effective and efficient ways, that's where the connecting glue becomes a lot of love.
Was it the same in the other new things you worked on at Azure, where you made a small number of customers successful at big scale and then disseminated that to lots of people? Because what I'm used to from my Silicon Valley SaaS background is that the journey is typically more like: we make a lot of little customers happy and gradually move upmarket. The CoreWeave approach, and I think most of the neocloud approach, has been actually the opposite of that, and different from the model I'm used to.
Yeah, that's right. That's right. I mean, again, I would say the very early days of infrastructure as a service were closer to that: a small number of customers, and you make them happy. But today, if you look at all the clouds, it's very much the way you were talking about: you've got a scale of customers and you're building things to meet needs across that scale. Whereas, I would argue, CoreWeave gets its ideation for new products by sitting with a customer, and the customer saying, well, I'm stuck. In some ways, LOTA and CAIOS came out of this: some big customers were stuck really feeding their GPUs with the fullest amount of data possible, and the design and implementation came out of that. It was delivered to that customer first, as a let's-work-together-on-it, and then became a product, something we can now enable for everybody, and it's growing like crazy. So yeah, I think that approach is very different and pretty unique. And a lot of that is also the shape of the AI market, right? You've got the big elephants, and then a whole lot of smaller animals, little birds.
So, I guess, do you think the natural life cycle here is that all this stuff kind of becomes a commodity and then you innovate on another dimension, or do you think there's something fundamental about these AI workloads such that they stay non-fungible?
Yeah, I mean, look, I believe that as long as there's value being added to a service in operations, in quality, in performance, and in experience, it will never be truly commodity, right? And I've made this argument about the public cloud for years. Forever, public cloud infrastructure as a service was going to be this commodity thing that anyone could move anywhere. And I made the argument: look, I don't care if the APIs are consistent and commoditized. I don't care if we all have the same APIs. The service we offer, the quality of the experience, the quality of the uptime, the performance that you get, these continue to differentiate forever and continue to make something not a commodity. And so I would argue a very similar position here about the AI world. So many of the things I talked about were about enabling TCO, enabling better throughput, enabling a better ability to serve workloads, enabling better usage of your GPUs, right? All of the services and capabilities we talked about are delivering on that end goal for a customer, and that end goal is quite valuable. So no, I think there will always be value there. Will the bar continue to go up? Yes, right, which is an amazing part of how these types of markets work. The bar will go up; the level of quality, performance, capability, and experience that we deliver today will not win workloads in two years, so we'd better be moving the bar up. But there will always be differentiation, especially when the cost is so high and the business value is so high.
So, you would actually argue that Azure and GCP and AWS are not equivalent? I mean, I realize it is hard to move a workload.

I definitely would argue that they're not equivalent. I would ask you, what would be the

I mean, so it's funny, I feel like I'm always talking to them now. Maybe I can say this. I kind of pretend like, oh yeah, you're special. But I think as a startup guy, you're like, okay, who's going to give me the most credits? I'll use that one first, and oh man, now I'm stuck on the thing that I tried first. I think that's the startup experience of these things. I have this sense that Azure is a little more enterprise. I have an experience that GCP is like