
Author: Jeff Dean, Date: [Insert Date Here]
Quick Insight: Google's Chief AI Scientist, Jeff Dean, unpacks the strategic tension between pushing AI's bleeding edge and making it efficient for billions. This conversation reveals how full-stack innovation, from custom hardware to model distillation, defines the future of AI deployment and research.
"I think what often happens is as the models become more capable, people ask them to do more."
"I'm a big fan of very low precision because I think that gets that saves you a tremendous amount of energy."
"I actually wrote a one-page memo saying we were being stupid by fragmenting our resources... and that was the origin of the Gemini effort."
Podcast Link: Click here to listen

Hey everyone, welcome to the Latent Space podcast. This is Alessio, founder of Kernel Labs, and I'm joined by swyx, editor of Latent Space. Hello. Hello. We're here in the studio with Jeff Dean, chief AI scientist at Google. Welcome.
Thanks for having me.
It's a bit surreal to have you in the studio. I've watched so many of your talks, and obviously your career has been super legendary. So, I mean, congrats. I think the first thing that must be said is congrats on owning the Pareto frontier.
Thank you. Thank you. Pareto Frontiers are good and it's good to be out there.
I mean, I think it's a combination of both: you have to own the Pareto frontier, you have to have frontier capability but also efficiency, and then offer that range of models that people like to use. And some part of this was started because of your hardware work, some part of that is your model work, and I'm sure there's lots of secret sauce that you guys have accumulated, but it's really impressive to see it all come together in this steadily advancing frontier.
Yeah. I mean, I think as you say, it's not just one thing, it's a whole bunch of things up and down the stack, and all of those really combine to enable you to make highly capable large models, as well as software techniques to get those large-model capabilities into much smaller, lighter-weight models that are much more cost-effective and lower latency but still quite capable for their size.
So yeah, how much pressure do you have on having the lower bound of the Pareto frontier too? The new labs are always trying to push the top performance frontier because they need to raise more money and all of that. And you guys have billions of users, and I think initially when you worked on the TPU you were thinking, you know, if everybody that used Google used the voice model for like three minutes a day, you would need to double your CPU count. What's that discussion today at Google? How do you prioritize frontier versus, we actually need to deploy it if we build it?
Yeah, I mean, I think we always want to have models that are at the frontier or pushing the frontier, because I think that's where you see what capabilities now exist that didn't exist in the slightly less capable version from last year or six months ago.
At the same time, you know, we know those are going to be really useful for a bunch of use cases, but they're going to be a bit slower and a bit more expensive than people might like for a bunch of other, broader use cases. So I think what we want to do is always have a highly capable, affordable model that enables a whole bunch of lower-latency use cases. People can use them for agentic coding much more readily.
And then have the high-end you know frontier model that is really useful for you know deep reasoning you know solving really complicated math problems those kinds of things. And it's not that one or the other is useful. They're both useful. So I think we like to do both.
And also, you know, through distillation, which is a key technique for making the smaller models more capable, you have to have the frontier model in order to then distill it into your smaller model. So it's not like an either-or choice. You sort of need that in order to actually get a highly capable, more modest-sized model.
Yeah. And I mean, you and Geoffrey Hinton came out with distillation in 2014.
Don't forget Oriol Vinyals as well.
A long time ago. I'm curious how you think about the cycle of these ideas, even, you know, sparse models, and how do you re-evaluate them? How do you think about, in the next-generation model, what is worth revisiting? You've worked on so many ideas that end up being influential, but in the moment they might not feel that way necessarily.
Yeah, I mean I think distillation was originally motivated because we were seeing that we had a very large image data set at the time, you know, 300 million images that we could train on with, you know, I forget like 20,000 categories or something, so much bigger than ImageNet.
And we were seeing that if you create specialists for different subsets of those image categories, you know, this one's going to be really good at mammals and this one's going to be really good at indoor room scenes or whatever. And you can cluster those categories and train on an enriched stream of data after you do pre-training on a much broader set of images.
You get much better performance if you then treat that whole set of maybe 50 models you've trained as a large ensemble. But that's not a very practical thing to serve, right? So distillation really came about from the idea of okay what if we want to actually serve that and train all these independent sort of expert models and then squish it into something that actually fits in a form factor that you can actually serve.
And that's, you know, not that different from what we're doing today. Often today, instead of having an ensemble of 50 models, we have a much larger-scale model that we then distill into a much smaller-scale model.
Yeah, a part of me also wonders if distillation also has a story with the RL revolution. Let me maybe try to articulate what I mean by that. RL basically spikes models in a certain part of the distribution. Well, you can spike models, but sometimes it might be lossy in other areas, so it's kind of an uneven technique, but you can probably distill it back. I think the general dream is to be able to advance capabilities without regressing on anything else, that whole capability merging without loss. I feel like some part of that should be a distillation process, but I can't quite articulate it. I haven't seen many papers about it.
Yeah. I mean, I tend to think one of the key advantages of distillation is that you can have a much smaller model and a very large training data set, and you can get utility out of making many passes over that data set, because you're now getting the logits from the much larger model in order to coax the right behavior out of the smaller model, which you wouldn't otherwise get with just the hard labels. And so I think that's what we've observed: you can get very close to your largest model's performance with distillation approaches.
And that seems to be a nice sweet spot for a lot of people, because it has enabled us, for multiple Gemini generations now, to make the Flash version of the next generation as good as or even substantially better than the previous generation's Pro. And I think we're going to keep trying to do that, because that seems like a good trend to follow.
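To make the logits point above concrete, here is a minimal sketch of a distillation loss in the spirit of the Hinton, Vinyals, and Dean formulation: the student is trained to match the teacher's temperature-softened logits in addition to the hard labels. This is an illustrative PyTorch snippet, not Google's training code; the temperature and mixing weight are arbitrary assumptions.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, hard_labels,
                      temperature=2.0, alpha=0.5):
    """Blend of a soft-target loss (teacher logits) and a hard-label loss.

    temperature > 1 softens both distributions so the student sees the
    teacher's relative preferences over wrong classes, not just the argmax.
    alpha balances imitating the teacher against the ground-truth labels.
    """
    # KL divergence between temperature-softened teacher and student.
    soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    soft_loss = F.kl_div(soft_student, soft_teacher, reduction="batchmean")
    # Scale by T^2 so gradient magnitudes stay comparable as T changes.
    soft_loss = soft_loss * (temperature ** 2)

    # Standard cross-entropy against the hard labels.
    hard_loss = F.cross_entropy(student_logits, hard_labels)

    return alpha * soft_loss + (1.0 - alpha) * hard_loss

# Toy example: batch of 4 examples, 10 classes.
student_logits = torch.randn(4, 10, requires_grad=True)
teacher_logits = torch.randn(4, 10)   # would come from the frozen large model
labels = torch.tensor([1, 0, 3, 7])
loss = distillation_loss(student_logits, teacher_logits, labels)
loss.backward()
```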
Dare I ask, so the original lineup was Flash, Pro, and Ultra. Are you just sitting on Ultra and distilling from that? Is that like the mother lode?
I mean, we have a lot of different kinds of models. Some are internal ones that are not necessarily meant to be released or served. Some are, you know, our Pro-scale model, and we can distill from that as well into our Flash-scale model. So I think it's an important set of capabilities to have, and also inference-time scaling can be a useful thing to improve the capabilities of a model.
Cool. Yeah. And obviously I think the economics of Flash is what led to the total dominance. I think the latest number is like 50 trillion tokens. I don't know, obviously it's changing every day, but, you know, by market share.
Hopefully up.
No, I mean, just economics-wise: because Flash is so economical, you can use it for everything. It's in Gmail now, it's in YouTube, it's in everything.
We're using it more in our search products, in AI Overviews and AI Mode.
Oh my god, Flash powers AI Mode. Oh my god.
Yeah that's yeah I didn't even think about that.
I mean, I think one of the things that is quite nice about the Flash model is not only is it more affordable, it also has lower latency. And I think latency is actually a pretty important characteristic for these models, because we're going to want models to do much more complicated things that involve generating many more tokens from when you ask the model to do something until it actually finishes, because you're now going to ask not just "write me a for loop," but "write me a whole software package to do X or Y or Z."
And so having low-latency systems that can do that seems really important, and Flash is one way of doing that. You know, obviously our hardware platforms enable a bunch of interesting aspects of our serving stack as well, like TPUs. The interconnect between chips on the TPUs is actually quite high performance and quite amenable to, for example, long-context attention operations, or having sparse models with lots of experts. These kinds of things really matter a lot in terms of how you make them servable at scale.
Does it feel like there's some breaking point for the Pro-to-Flash distillation, kind of one generation delayed? I almost think about it like the capability asymptote on certain tasks: the Pro model today has saturated some sorts of tasks, so next generation that same task will be saturated at the Flash price point. And for most of the things that people use models for, at some point the Flash model in two generations will be able to do basically everything. How do you make it economical to keep pushing the Pro frontier when a lot of the population will be okay with the Flash model? I'm curious how you think about that.
I mean, I think that's true if your distribution of what people are asking the models to do is stationary, right? But I think what often happens is, as the models become more capable, people ask them to do more. I mean, I think this happens in my own usage: I used to try our models a year ago for some sort of coding task, and it was okay at some simpler things but wouldn't work very well for more complicated things.
And since then we've improved dramatically on the more complicated coding tasks, and now I'll ask it to do much more complicated things. And I think that's true not just of coding but of, you know, "can you analyze all the renewable energy deployments in the world and give me a report on solar panel deployment," or whatever. That's a much more complicated task than people would have asked for a year ago.
And so you are going to want more capable models to push the frontier, in some sense, of what people ask the models to do. And that also then gives us insight into, okay, where do things break down? How can we improve the model in these particular areas in order to make the next generation even better?
Yeah. Are there any benchmarks or test sets that you use internally? Because it's almost like the same benchmarks get reported every time, and it's like, all right, it's 99 instead of 97. How do you keep pushing the team internally toward, this is what we're building towards?
Yeah. I mean, I think benchmarks, particularly external ones that are publicly available, have their utility, but they often have a lifespan of utility where they're introduced and maybe they're quite hard for current models. I like to think the best kinds of benchmarks are ones where the initial scores are like 10 to 20 or 30%, maybe, but not higher.
And then you can work on improving that capability, whatever it is the benchmark is trying to assess, and get it up to like 80 or 90%. I think once it hits 95% or so, you get very diminishing returns from really focusing on that benchmark, because either you've now achieved that capability, or there's the issue of leakage of public data, or very related kinds of data being in your training data.
So we have a bunch of held-out internal benchmarks that we really look at, where we know that wasn't represented in the training data at all. There are capabilities that we want the model to have that it doesn't have now, and then we can work on assessing how we make the model better at these kinds of things. Is it that we need a different kind of data to train on that's more specialized for this particular kind of task? Do we need a bunch of architectural improvements or some sort of model capability improvements? What would help make that better?
Is there such an example where a benchmark inspired an architectural improvement? I'm just kind of jumping on that because you just mentioned it.
I mean, I think some of the long-context capabilities of the Gemini models, which came first in 1.5, I guess, really were about looking at, okay, we want to have...
Immediately everyone jumped to completely green charts. I was like, how did everyone crack this at the same time?
Right. Yeah. I mean, as you say, that single-needle-in-a-haystack benchmark is really saturated, at least for context lengths up to 128k or something. And I think most people don't actually use much larger than 128k these days, or 256k or something.
You know, we're trying to push the frontier of 1 million or 2 million token context lengths.
I think Google's still the leader at 2 million.
Yep. Which is good, because I think there are a lot of use cases where putting a thousand pages of text, or multiple hour-long videos, in the context and then actually being able to make use of that is useful. But the single-needle-in-a-haystack benchmark is sort of saturated.
So you really want more complicated, multi-needle, or more realistic "take all this content and produce this kind of answer from a long context" benchmarks that better assess what it is people really want to do with long context, which is not just, can you tell me the product number for this particular thing.
Yeah, it's retrieval. It's retrieval within machine learning.
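For readers who haven't built one of these, here is a small sketch of what a multi-needle harness might look like, as opposed to the saturated single-needle version. The `call_model` function is a hypothetical stand-in for whatever long-context model you are testing; the filler text and needle format are arbitrary choices, not anything from the conversation.

```python
import random

def build_haystack(needles: dict[str, str], filler_paragraph: str,
                   total_paragraphs: int = 2000, seed: int = 0) -> str:
    """Scatter several key/value 'needles' at random depths in filler text."""
    rng = random.Random(seed)
    paragraphs = [filler_paragraph] * total_paragraphs
    for key, value in needles.items():
        pos = rng.randrange(total_paragraphs)
        paragraphs[pos] += f" The secret code for {key} is {value}."
    return "\n\n".join(paragraphs)

def multi_needle_recall(call_model, needles: dict[str, str], haystack: str) -> float:
    """Ask for every needle in one query and score how many come back."""
    question = ("From the document above, list the secret code for each of: "
                + ", ".join(needles) + ".")
    answer = call_model(haystack + "\n\n" + question)
    found = sum(1 for value in needles.values() if value in answer)
    return found / len(needles)

# Usage sketch (call_model is assumed to wrap some long-context model API):
# needles = {"project-a": "74Q2", "project-b": "XK19", "project-c": "M0NSOON"}
# haystack = build_haystack(needles, filler_paragraph="The weather was mild that day.")
# print(multi_needle_recall(call_model, needles, haystack))
```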
Yeah, it's interesting, because the more meta-level lesson I'm trying to operate at here is: you have a benchmark, you see the architectural thing you need to do in order to go fix it, but should you do it? Because sometimes that's basically an inductive bias. Jason Wei, who used to work at Google, would say exactly that kind of thing: yeah, you're going to win short term; longer term, I don't know if that's going to scale, and you might have to undo it.
I mean, I like to not focus on exactly what solution one should drive toward, but on what capability you would want. And I think we're very convinced that long context is useful, but it's way too short today, right? I think what you would really want is: can I attend to the internet while I answer my question? But that's not going to be solved by purely scaling the existing solutions, which are quadratic, so a million tokens kind of pushes what you can do. You're not going to do that for a billion tokens, let alone a trillion.
But I think if you could give the illusion that you can attend to trillions of tokens, that would be amazing. You'd find all kinds of uses for that. You could attend to the internet. You could attend to the pixels of YouTube, and the sort of deeper representations that we can form for a single video, but across many videos. On a personal Gemini level, you could attend to all of your personal state, with your permission: your emails, your photos, your docs, the plane tickets you have.
I think that would be really really useful. And the question is, how do you get algorithmic improvements and system level improvements that get you to something where you actually can attend to trillions of tokens in some meaningful way?
Yeah. By the way, I did some math, and if you spoke all day, every day, for eight hours a day, you only generate a maximum of like 100k tokens, which very comfortably fits, right? But if you then say, okay, I want to be able to understand everything people are putting on video...
Exactly. Exactly.
Well, also, I think the classic example is you start going beyond language into proteins and whatever else that is extremely information dense.
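That 100k-token figure checks out with a quick back-of-the-envelope calculation; the speaking rate and tokens-per-word ratio below are rough assumptions, not numbers from the conversation.

```python
# Back-of-the-envelope: how many tokens does a full day of speech produce?
# Rough assumptions: ~150 spoken words per minute, ~1.3 tokens per word.
words_per_minute = 150
tokens_per_word = 1.3
hours_speaking = 8

tokens_per_day = words_per_minute * 60 * hours_speaking * tokens_per_word
print(f"{tokens_per_day:,.0f} tokens")   # ~94,000, i.e. roughly 100k tokens
# Comfortably inside a 1M-2M token context window; video and other dense
# modalities blow past this almost immediately.
```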
Yeah. I mean, I think one of the things about Gemini's multimodal aspects is we've always wanted it to be multimodal from the start. And so, you know, that sometimes to people means text and images and video and audio, the human-like modalities. But I think it's also really useful to have Gemini know about non-human modalities, like lidar sensor data from, say, Waymo vehicles or robots, or various kinds of health modalities: X-rays and MRIs and imaging and genomics information.
And I think there's probably hundreds of modalities of data where you'd like the model to at least be exposed to the fact that this is an interesting modality that has certain meaning in the world. Even if you haven't trained on all the lidar data or MRI data you could have, because maybe that doesn't make sense in terms of trade-offs of what you include in your main pre-training data mix, at least including a little bit of it is actually quite useful, because it sort of hints to the model that this is a thing.
Yeah. Do you believe, since we're on this topic and I just get to ask you all the questions I've always wanted to ask, which is fantastic: are there some king modalities, modalities that supersede all the other modalities? A simple example is that vision can encode text at the pixel level, and DeepSeek had the DeepSeek-OCR paper that did that. Vision has also been shown to maybe incorporate audio, because you can do audio spectrograms and that's also a vision-capable thing. So maybe vision is just the king modality?
Yeah, I mean, vision and motion are quite important things, right? Motion video as opposed to static images. I mean, there's a reason evolution has evolved eyes like 23 independent ways: it's such a useful capability for sensing the world around you, which is really what we want these models to be able to do, interpret the things we're seeing or the things we're paying attention to, and then help us in using that information to do things.
Yeah, I think motion... I still want to shout out, I think Gemini is still the only native video understanding model that is out there. So I use it for YouTube all the time.
Yeah. I mean, I think people are not necessarily aware of what the Gemini models can actually do with video. I have an example I've used in one of my talks. It was a YouTube highlight video of 18 memorable sports moments across the last 20 years or something. So it has Michael Jordan hitting some jump shot at the end of the finals, and some soccer goals, and things like that. And you can literally just give it the video and say, "Can you please make me a table of what all these different events are, what the date is when they happened, and a short description of the event?" And you now get an 18-row table of that information extracted from the video, which is not something most people think of as, like, turn video into a SQL-like table.
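For readers who want to try the sports-highlights example themselves, something along these lines with the Gemini Python SDK should be close. This is a sketch, not the exact workflow Jeff describes: the file name, model name, and prompt are placeholders, and the SDK surface may differ by version.

```python
import time
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder

# Upload the highlight video via the File API and wait until it is processed.
video = genai.upload_file("sports_highlights.mp4")
while video.state.name == "PROCESSING":
    time.sleep(5)
    video = genai.get_file(video.name)

model = genai.GenerativeModel("gemini-1.5-pro")  # any video-capable Gemini model
response = model.generate_content([
    video,
    "Make me a table of each memorable sports moment in this video: "
    "the event, the date it happened, and a one-sentence description.",
])
print(response.text)  # a table with one row per moment
```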
Has there been any discussion inside of Google about, you mentioned attending to the whole internet, right? Google is almost built because a human cannot attend to the whole internet, and you need some sort of ranking to find what you need.
Yep.
That ranking is much different for an LLM, because you can expect a person to look at maybe the first five or six links in a Google search, versus for an LLM, should you expect to have 20 links that are highly relevant? How do you internally figure out how to build the AI mode that is maybe much broader in search and span versus the more human one?
Yeah. I mean, I think even in pre-language-model work, our ranking systems would be built to start with a giant number of web pages in our index. Many of them are not relevant, so you identify a subset of them that are relevant with very lightweight kinds of methods. Now you're down to like 30,000 documents or something. And then you gradually refine that, applying more and more sophisticated algorithms and more and more sophisticated signals of various kinds, in order to get down to what you ultimately show, which is the final 10 results, or 10 results plus other kinds of information.
And I think an LLM-based system is not going to be that dissimilar, right? You're going to attend to trillions of tokens, but you're going to want to identify what are the 30,000-ish documents, with maybe 30 million interesting tokens, and then how do you go from that to what are the 117 documents I really should be paying attention to in order to carry out the task that the user has asked me to do.
And I think you can imagine systems where you have a lot of highly parallel processing to identify those initial 30,000 candidates, maybe with very lightweight kinds of models. Then you have some system that helps you narrow down from 30,000 to the 117, with maybe a little bit more sophisticated model or set of models. And then maybe the final model, the thing that looks at the 117 things, might be your most capable model.
So I think it's going to be some system like that that really enables you to give the illusion of attending to trillions of tokens. Sort of the way Google search gives you, not the illusion, you really are searching the internet...
Yeah.
But you're finding a very small subset of things that are relevant.
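A sketch of the funnel Jeff describes, narrowing from a huge corpus down to the handful of documents the most capable model actually reads. The three scoring functions are hypothetical stand-ins (say, an embedding similarity, a Flash-sized reranker, and a Pro-sized model over full documents), not any real Google API.

```python
def retrieval_funnel(query, corpus,
                     cheap_score,         # e.g. keyword/embedding similarity
                     small_model_score,   # e.g. a Flash-sized reranker
                     big_model_answer):   # e.g. a Pro-sized model over full docs
    """Give the illusion of attending to a huge corpus via staged narrowing."""
    # Stage 1: very lightweight filter over everything -> ~30,000 candidates.
    stage1 = sorted(corpus, key=lambda d: cheap_score(query, d),
                    reverse=True)[:30_000]

    # Stage 2: a somewhat more expensive model -> the ~117 documents that matter.
    stage2 = sorted(stage1, key=lambda d: small_model_score(query, d),
                    reverse=True)[:117]

    # Stage 3: the most capable model reads only the survivors in full.
    return big_model_answer(query, stage2)
```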
Yeah. I often tell people who are not steeped in Google search history that BERT was used basically immediately inside of Google search, and that improved results a lot, right? I don't have any numbers off the top of my head, but I'm sure those are obviously the most important numbers to Google.
Yeah, I mean, I think going to an LLM-based representation of text and words and so on enables you to get out of the explicit, hard notion of particular words having to be on the page, and really get at the notion that the topic of this page or this paragraph is highly relevant to this query.
Yeah. I don't think people understand how much LLMs have taken over all these very high-traffic systems.
Very high traffic, yeah.
Like, it's Google, it's YouTube. YouTube has this semantic ID thing where every token in the vocab is a YouTube video or something, and it predicts the video using a codebook, which is absurd to me at YouTube's size. And then most recently Grok as well, for xAI.
I mean, I'll call out that even before LLMs were used extensively in search, we put a lot of emphasis on softening the notion of what the user actually entered into the query.
Do you have, like, a history of what's the...
Yeah, I mean, I actually gave a talk at, I guess, the Web Search and Data Mining conference in 2009. We never actually published any papers about the origins of Google search, sort of, but we went through four or five or six generations of redesigning the search and retrieval system from about 1999 through 2004 or 2005, and that talk is really about that evolution. And one of the things that really happened in 2001 was we were working to scale the system in multiple dimensions. One is we wanted to make our index bigger so we could retrieve from a larger index, which always helps your quality in general, because if you don't have the page in your index, you're not going to do well. And then we also needed to scale our capacity, because our traffic was growing quite extensively. And so we had a sharded system where you have more and more shards as the index grows. You have like 30 shards, and then if you want to double the index size you make it 60 shards, so that you can bound the latency by which you respond to any particular user query. And then as traffic grows you add more and more replicas of each of those. And so we eventually did the math and realized that in a data center where we had, say, 60 shards and 20 copies of each shard, we now had 1,200 machines with disks. And we did the math and we're like, hey, one copy of that index would actually fit in memory across 1,200 machines. So in 2001 we put our entire index in memory. And what that enabled from a quality perspective was amazing, because before, you had to be really careful about how many different terms you looked at for a query, because every one of them would involve a disk seek on every one of the 60 shards. And as you make your index bigger, that becomes even more inefficient. But once you have the whole index in memory, it's totally fine to have 50 terms you throw into the query from the user's original three- or four-word query, because now you can add synonyms like restaurant and restaurants and cafe and bistro and all these things. And you can suddenly start really getting at the meaning of the query as opposed to the exact form the user typed in. And that was 2001, very much pre-LLM, but really it was about softening the strict definition of what the user typed in in order to get at the meaning.
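The arithmetic behind that 2001 switch is worth spelling out. A quick sketch, using the shard and replica counts from the story; the per-machine RAM figure is an illustrative assumption, not a number from the conversation.

```python
# Back-of-the-envelope for the 2001 in-memory index decision.
shards = 60                 # index split 60 ways to bound per-query latency
replicas_per_shard = 20     # replication needed to handle query traffic
machines = shards * replicas_per_shard
print(machines)             # 1,200 machines, each with disks

# Illustrative assumption: a couple of GB of RAM per machine of that era.
ram_per_machine_gb = 2
total_ram_gb = machines * ram_per_machine_gb   # ~2.4 TB of aggregate RAM
# If one full copy of the index fits inside that aggregate RAM, the whole
# index can live in memory across the fleet, and disk seeks (tens of ms each,
# per term, per shard) disappear from the query path. That is what made
# throwing 50 expanded terms at a query affordable instead of three or four.
```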
What are the principles that you use to design these systems, especially when, I mean, in 2001 the internet is doubling or tripling every year in size? And I think today you kind of see that with LLMs too, where every year the jumps in size and capabilities are just so big. Are there any principles that you use to think about this?
Yeah, I mean, I think first, whenever you're designing a system, you want to understand what the design parameters are that are going to be most important. So: how many queries per second do you need to handle? How big is the index you need to handle? How much data do you need to keep for every document in the index? How are you going to look at it when you retrieve things? What happens if traffic were to double or triple, will that system work well? And I think a good design principle is to design a system so that the most important characteristics can scale by factors of five or ten, but probably not beyond that, because often what happens is, if you design a system for X and something suddenly becomes 100X, that would enable a very different point in the design space that would not make sense at X but all of a sudden at 100X makes total sense.
So, like, going from a disk-based index to an in-memory index makes a lot of sense once you have enough traffic, because now you have enough replicas of the state on disk that those machines can actually hold a full copy of the index in memory. And that all of a sudden enables a completely different design that wouldn't have been practical before.
So, I'm a big fan of thinking through designs in your head, just kind of playing with the design space a little before you actually do a lot of writing of code. But, you know, as you said, in the early days of Google we were growing the index quite extensively. We were growing the update rate of the index. The update rate actually is the parameter that changed the most, surprisingly. It used to be once a month.
Yeah.
And then we went to a system that could update any particular page in, like, sub one minute.
Okay. Yeah. Because this is a competitive advantage, right? Because all of a sudden, news-related queries... if you've got last month's news index, it's not actually that useful. Was there any... you could have split it onto a separate system?
Well, we did launch a Google News product, but you also want news-related queries that people type into the main index to also be updated. So, yeah.
Yeah. It's interesting. And then you have to classify... you have to decide which pages should be updated at what frequency.
Oh yeah, there's a whole system behind the scenes that's trying to decide update rates and importance of the pages. So even if the update rate seems low, you might still want to re-crawl important pages quite often, because the likelihood they change might be low, but the value of having them updated is high.
Yeah. This mention of latency and saving things to disk reminds me of one of your classics, which I have to bring up, which is "Latency Numbers Every Programmer Should Know." Was there just a general story behind that? Did you just write it down?
I mean, this has eight or ten different kinds of metrics: how long does a cache miss take, how long does a branch mispredict take, how long does a reference to main memory take, how long does a disk seek take, how long does it take to send a packet from the US to the Netherlands or something.
Why the Netherlands, by the way? Is that because of Chrome?
We had a data center in... um. So, I mean, I think this gets to the point of being able to do these back-of-the-envelope calculations. These are sort of the raw ingredients of those, and you can use them to say, okay, well, if I need to design a system to do image search and thumbnailing of the result page, how might I do that? I could precompute the image thumbnails. I could try to thumbnail them on the fly from the larger images. What would that do? How much disk bandwidth would I need? How many disk seeks would I do? And you can actually do thought experiments in 30 seconds or a minute with the basic numbers at your fingertips. And then as you build software using higher-level libraries, you kind of want to develop the same intuitions for how long it takes to look up something in this particular kind of hash table I use, or how long it will take me to sort a million numbers, or something.
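As a worked example of the kind of thought experiment Jeff describes, here is a sketch using classic "latency numbers" figures. The page size, image sizes, and era-appropriate disk throughput are assumptions chosen for illustration, not numbers from the conversation.

```python
# Thought experiment: thumbnails for a 30-image search results page.
# Rough figures: disk seek ~10 ms; sequential disk read ~30 MB/s for the era,
# i.e. ~33 ms per MB; memory reads are negligible by comparison.
images_per_page = 30
disk_seek_s = 0.010
disk_read_s_per_mb = 0.033

# Option A: thumbnail on the fly from ~1 MB full-size images on disk.
on_the_fly = images_per_page * (disk_seek_s + 1.0 * disk_read_s_per_mb)

# Option B: precomputed ~8 KB thumbnails stored contiguously per result page,
# so one seek plus one small sequential read.
precomputed = disk_seek_s + (images_per_page * 8 / 1024) * disk_read_s_per_mb

print(f"on the fly: ~{on_the_fly*1000:.0f} ms, precomputed: ~{precomputed*1000:.0f} ms")
# ~1,300 ms versus ~18 ms: precomputing wins by roughly two orders of
# magnitude before you even count the CPU cost of resizing.
```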
Yeah. The reason I bring it up, actually, is that for, I think, two years now I've been trying to make "Numbers Every AI Programmer Should Know."
Okay. Yeah.
I don't have a great one, because it's not physical constants like you have physical constants in here. But I do think a simple one would be number of parameters to disk size, if you need to convert that, which is a simple byte conversion, that's nothing interesting. I wonder if you have any, if you were to update yours...
I mean, I think it's really good to think about the calculations you're doing in a model, either for training or inference. Often a good way to view that is: how much state will you need to bring in from memory, either on-chip SRAM, or HBM from the accelerator-attached memory, or DRAM, or over the network? And then how expensive is that data motion relative to the cost of, say, an actual multiply in the matrix multiply unit? And that cost is actually really, really low, right? Because, depending on your precision, I think it's sub-picojoule, one picojoule.
Oh, okay, you measure it by energy.
Yeah, I mean, it's all going to be about energy and how you make the most energy-efficient system. And then moving data from the SRAM on the other side of the chip, not even off-chip, but on the other side of the same chip, can be, you know, a thousand picojoules. And so all of a sudden, this is why your accelerators require batching: because if you move, say, a parameter of a model from SRAM on the chip into the multiplier unit, that's going to cost you a thousand picojoules. So you'd better make use of that thing you moved many, many times. That's where the batch dimension comes in, because all of a sudden, if you have a batch of 256 or something, that's not so bad. But if you have a batch of one, that's really not good, right? Because then you paid a thousand picojoules in order to do your one-picojoule multiply.
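The batching argument in numbers, using the rough per-operation energy figures from the conversation (roughly 1 picojoule per low-precision multiply, roughly 1,000 picojoules to move a parameter across the chip from SRAM); these are order-of-magnitude illustrations, not datasheet values.

```python
# Energy per useful multiply-accumulate as a function of batch size.
E_MAC_PJ = 1.0        # ~1 pJ per low-precision multiply in the matrix unit
E_MOVE_PJ = 1000.0    # ~1,000 pJ to move one parameter across the chip

def energy_per_mac_pj(batch_size: int) -> float:
    # The parameter is fetched once and reused across the whole batch,
    # so the movement cost is amortized over batch_size multiplies.
    return E_MAC_PJ + E_MOVE_PJ / batch_size

for b in (1, 8, 64, 256):
    print(f"batch {b:>3}: ~{energy_per_mac_pj(b):7.1f} pJ per MAC")
# batch   1: ~ 1001.0 pJ per MAC   <- dominated by data movement
# batch 256: ~    4.9 pJ per MAC   <- movement cost mostly amortized away
```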
I have never heard an energy-based analysis of batching.
Yeah. I mean, that's why people batch, right? Ideally you'd like to use batch size one, because the latency would be great, but the energy cost and the compute cost inefficiency that you get is quite large.
So yeah, is there a similar trick, like the one you did with putting everything in memory? Obviously there have been a lot of waves from betting very hard on SRAM, with Groq. I wonder if that's something that you already saw with the TPUs, right? That you had to serve at your scale, you probably saw that coming. What hardware innovations or insights were formed because of what you're seeing there?
Yeah. I mean, I think, you know, TPUs have this nice, regular structure of 2D or 3D meshes with a bunch of chips connected, and each one of those has HBM attached. I think for serving some kinds of models, you pay a lot higher cost and time latency bringing things in from HBM than you do bringing them in from SRAM on the chip. So if you have a small enough model, you can actually do model parallelism, spread it out over lots of chips, and you get quite good throughput improvements and latency improvements from doing that. And so you're now striping your smallish-scale model over, say, 16 or 64 chips. But if you do that and it all fits in SRAM, that can be a big win. So yeah, that's not a surprise, but it is a good technique.
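A quick feasibility check of that striping idea, with explicitly made-up round numbers (the per-chip SRAM capacity, model size, and precision below are illustrative assumptions, not TPU specifications).

```python
# Illustrative only: when does a model fit entirely in on-chip SRAM?
sram_per_chip_mb = 128       # assumed per-chip SRAM budget
chips = 64                   # model-parallel group size from the discussion
params_billions = 4          # assumed "smallish" model
bytes_per_param = 1          # int8/fp8-style low-precision serving

model_bytes = params_billions * 1e9 * bytes_per_param
total_sram_bytes = chips * sram_per_chip_mb * 1024**2

print(model_bytes / 1e9, "GB model vs", total_sram_bytes / 1e9, "GB aggregate SRAM")
# ~4.0 GB of weights vs ~8.6 GB of aggregate SRAM: striped over 64 chips,
# every weight can live in SRAM, so serving never touches HBM for parameters.
```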
Yeah. What about the TPU design? How do you decide where the improvements have to go? This is a good example: is there a way to bring the thousand picojoules down to 50, and is it worth designing a new chip to do that? The extreme is when people say, oh, you should burn the model onto an ASIC, and that's kind of the most extreme thing. How much of it is worth doing in hardware when things change so quickly? What's the internal discussion?
Yeah, I mean, we have a lot of interaction between, say, the TPU chip design architecture team and the sort of