AI Engineer
January 12, 2026

Your MCP Server is Bad (and you should feel bad) - Jeremiah Lowin, Prefect

The Agentic Interface: Why Your MCP Server is a Bad Product

Jeremiah Lowin, Prefect


Quick Insight: This summary is for builders moving beyond simple chatbots to autonomous agents. It reveals why treating an LLM like a human developer leads to bloated context and broken workflows.

  • 💡 Why is discovery the most expensive part of an agentic handshake?
  • 💡 How does the "small brain" reality of LLMs change API design?
  • 💡 Why is a REST API wrapper the worst way to build an MCP server?

Jeremiah Lowin, CEO of Prefect and creator of FastMCP, argues that we are building AI tools with a flawed mental model. We treat LLMs like perfect oracles when they are actually resource-constrained agents that need curated interfaces. The transition from raw infrastructure to opinionated context products defines the next phase of the agentic web.

Top 3 Ideas

🏗️ Outcomes Over Operations

  • Target Specific Outcomes: Stop exposing atomic operations like "get user" or "filter orders." This reduces round trips and prevents the agent from failing at complex orchestration logic.
  • Minimize Agent Orchestration: Agents are expensive and inconsistent glue. Moving the logic into the tool itself ensures reliability while lowering execution costs.
  • Name for Agents: Use explanatory names that help the model pick the right tool. Clear naming conventions act as a primary navigation system for the LLM.

🏗️ The Token Tax

  • Respect Token Budgets: Handshakes often download every tool description at once. Bloated servers consume the context window before the agent even begins its task.
  • Curate Ruthlessly: Aim for fewer than 50 tools per agent. High tool density causes performance issues and increases the probability of the model selecting the wrong instrument.

🏗️ Interfaces for Agents

  • Flatten All Arguments: Avoid complex nested dictionaries. Simple primitives ensure the LLM does not struggle with serialization errors.
  • Errors are Prompts: Treat every failure message as a hint for the next turn. Helpful error messages allow the agent to self-correct without human intervention.

Actionable Takeaways

  • 🌐 The Macro Pivot: Context as Product. We are moving from raw data transport to opinionated context delivery.
  • ⚡ The Tactical Edge: Prune your endpoints. Remove any tool that requires more than one step to achieve a business result.
  • 🎯 The Bottom Line: The winners of the agentic era will be those who build the best Agentic Interface Guidelines.


I really do appreciate that you're all here. I'm going to try to make this as painless as possible. We're not going to do an interactive part; we're going to talk through stuff. I'm happy to go off script, and I'm happy to take questions if there's stuff we want to explore at any moment. My goal is to share with you a lot of things that I've learned, and I'm going to try to make them as actionable as possible. So there is real stuff to do here, more than there might be in a more high-level talk.

But let's be very honest: it is late, it is a lot, it is long. Let's talk about MCP. I'm hoping that folks here are interested in MCP and that's why you came to this talk. If you're here to learn what MCP is, this might be a little bit of a different bent. Just a show of hands: heard of MCP? Used MCP? Written an MCP server? Okay.

Anyone feel uncomfortable with MCP? Which is 100% fine, we can tailor. Okay, then I would say let's just dive in. This is who I am. I'm the founder and CEO of a company called Prefect Technologies. For the last seven or eight years, we've been building data automation and orchestration software.

Before that, I was a member of the Apache Airflow PMC. I originally started Prefect to graduate those same orchestration ideas into data science. Today, we operate the full stack. And then a few years ago, I developed an agent framework called Marvin, which I would not describe as wildly popular, but it was my way into the world of AI, at least from a developer experience standpoint, and I learned a lot from that.

And then more recently, I introduced a piece of software called FastMCP, which is wildly popular, maybe even too popular. Hence my status today: I'm a little overwhelmed. I find myself back in an open source maintenance seat, which I haven't been in for a few years, and which has been a hell of a lot of fun.

But the most important thing is that fastmcp has given me a very specific vantage point that is really the basis for this talk today. This is our downloads. I've never seen anything like this. I've never worked on a project like this. It was downloaded a million and a half times yesterday.

There are a lot of MCP servers out there, and FastMCP has become the de facto standard way to build MCP servers. I introduced it almost exactly a year ago. As many of you are probably aware, MCP itself was introduced almost exactly a year ago, and a few days later I introduced the first version of FastMCP.

David at Anthropic called me up and said, I think this is great, I think this is how people should build servers. We put a version of it into the official SDK, which was amazing. And then as MCP has gone crazy in the last year, we found it constructive to position FastMCP, as I maintain it, as the high-level interface to the MCP ecosystem, while the SDK focuses on the low-level primitives. In fact, we're going to remove the FastMCP vocabulary from the low-level SDK in a couple of months.

It's become a little too confusing that there are these two things called FastMCP. So FastMCP will be the high-level interface to the world, and as a result we see a lot of not-great MCP servers.

I named the talk after this meme, and then it occurred to me: do people even know what this meme is anymore? This, to me, is very funny and very topical, and it's from a 1999 episode of Futurama. So if you haven't seen it, my talk's title is not meant to be mean. I'm sort of an optimist. I choose to interpret this as: but you can do better. And so we're going to find ways to do better. That is the goal of today's talk.

In fact, to be more precise, what I want to do today is build an intuition for agentic product design. I don't see this talked about nearly as much as it should be, given how many agents are using how many products today. And what I mean by this is the exact analog of what it would be if I were giving a talk on how to build a good product for a user, for a human.

And we would talk about human interface guidelines, and user experience, and user stories. I found it really instructive to start talking about those things from an agentic perspective, because what else is an MCP server but an interface for an agent? And we should design it for the strengths and weaknesses of those agents in the same way that we do everything else.

Now, when I put this thought out in the world, I very frequently get this pushback: but if a human can use an API, why can't an AI? There are so many things wrong with this question. The number one thing wrong with it is that it carries an assumption I see in so much of AI product design, and it drives me nuts: that AIs are perfect, or they're oracles, or they're good at everything. They are very, very powerful tools, but I'm assuming, based on your responses before, that you know otherwise.

I think everyone in this room has some scars from the fact that they are fallible, or limited, or imperfect. And so I don't like this question, because it presumes that they're magically amazing at everything. But I really don't like this question for another reason. This is a literal question I've gotten, and I didn't paraphrase it. I really don't like this question because humans don't use APIs.

Very, very rarely do humans use APIs. Humans use products. We do anything we can to put something between us and an API. We put a website, we put an SDK, we put a client, we put a mobile app. We do not like to use APIs unless we have to, or we are the person responsible for building that interface.

And so one of my core arguments and why I love MCP so much is that I believe that agents deserve their own interface that is optimized for them and their own use case. And in order to design that interface, which is what I want to motivate today, we have to think a little bit about what is the difference between a human and an AI.

And it's one of those questions that sounds really stupid when you say it out loud, but it's instructive to actually go through. And I'd like to make the argument to you that it exists on these three dimensions: discovery, iteration, and context.

And so, just to begin: humans find discovery really cheap. We tend to do it once. If any of you have had to implement something against a REST API, what do you do? You call up the docs, or you go into Swagger, whatever it is. You look at it one time, you figure out what you need, and you're never going to do that again.

And so, while it may take you some time to do the discovery, it is cheap in the lifetime of the application you are building. AIs, not so much. Every single time that thing turns on, it shakes hands with the server. It learns about the server. It enumerates every single tool and every single description on that server. So discovery is actually really expensive for agents. It consumes a lot of tokens.

Next, iteration. Same idea. If you're a human developer and you're writing code against an API, you can iterate really quickly. Why? Because you do your one-time discovery. You figure out the three routes you're going to call and then you write a script that calls them one after another as fast as your language allows. So iteration is really cheap.

And if that doesn't work, you just run it again until it does. Iteration is cheap, it is fast. For agents, I think we all know iteration is slow. Iteration is the enemy. Every additional call, subject to your caching setup, also sends the entire history of all previous tool calls over the wire. You just do not want to iterate if you can avoid it. And so that's going to be an important thing that we take into consideration.

And the last thing is context. This is a little bit hand-wavy, but it is important. As humans, in this conversation, I'm talking, you're hearing me, and you're comparing this to different memories and different experiences you have on different time scales, and it's all doing wonderful, amazing things in your brain.

And when you plug an LLM into any given use case, it remembers the last 200,000 tokens it saw. That's the extent of its memory, plus whatever is embedded somewhere in its weights, and that's it. And so we need to be very, very conscious of the fact that it has a very small brain at this moment.

I think it is a lot closer to when people talk about sending Apollo 11 to the moon with, like, 1 kilobyte of RAM, whatever it was. I think that's actually how we need to think about these things that frankly feel quite magical, because they go and open my PRs for me or whatever it is that they do.

So these are, in my mind, the three key dimensions of what is different, and we should not build APIs that are good for humans on any of these dimensions and pretend that they are also good for agents. One way that I've started talking about this is this idea: an agent can find a needle in a haystack. The problem is it's going to look at every piece of hay and decide if it's a needle.

And that's not literally true, but in an intuitive sense it is how we should think about what we're putting in front of the agents and how we're posing a problem. And an MCP server is nothing but an interface to that problem and/or solution.

And so finally, to go back to our product intuition statement, I'd argue to you that the most important word in the universe for MCP developers is curate. How do you curate, from a huge amount of information that might be perfectly workable for a human developer, an interface that is appropriate for one of these extremely limited AI agents, at least on the dimensions that we just went through?

And that sort of brings us to this slide: why MCP? And I almost made this the Derek Zoolander slide: "But why MCP?" I just told you why MCP, Derek. It's because it does all of these things. It gives us a standard way of communicating information to agents in a way that's controllable, where we can control not only how it's discovered but also how it is acted on.

There's a big asterisk on that, because client implementations in the MCP space right now are not amazing, and they do some things that are themselves not compliant with the MCP spec. Maybe at the end we'll get into that. It's not directly relevant now, except that all we can do is try to build the best servers we can, subject to the limitations of the clients that will use them.

And again, I put this in here, but I think we don't need to go through what MCP is for this audience, so we're going to move quickly through this. But, for the sake of the transcript, the cliché is that it's USB-C for the internet. It is a standard way to connect LLMs and either tools or data.

And if you haven't seen FastMCP, this is what it looks like to build a fully functional MCP server. This one: I live in Washington, DC, where the subway is often on fire, and so this checks whether or not the subway is on fire. And indeed it is.
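
The slide itself isn't reproduced in this transcript, but a minimal sketch of the kind of server he's describing looks roughly like this with FastMCP; the hard-coded answer stands in for whatever live metro-status check the demo actually used.

```python
# Minimal FastMCP server sketch: the decorator marks a plain function as a tool.
# The hard-coded answer is a stand-in for whatever live check the demo used.
from fastmcp import FastMCP

mcp = FastMCP("DC Metro Status")

@mcp.tool()
def is_the_metro_on_fire() -> str:
    """Report whether the Washington, DC metro is currently on fire."""
    return "Yes. It is."

if __name__ == "__main__":
    mcp.run()  # defaults to the stdio transport
```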

Now, the question we are here to actually explore is: why are there so many bad MCP servers? Maybe a better question is: do you all agree with me that there are many bad MCP servers? I sort of declare this as if it's true. I'm not trying to make a controversial statement. There are many bad MCP servers in the world. I see a lot of them because people are using my framework to build them.

Does that surprise anyone, that I'm sort of declaring that? I'm genuinely curious whether I've made an assumption. [Audience] In my experience, I won't say every MCP server I come across is like that, but a lot of them are just API wrappers. They stringify the content of the API and that's it, and they call it an MCP server. Yeah.

And I'll make the argument, going a little off script here, that a lot of them, even when they're not wrappers, are just bad products, because no thought was put into them. One comparison that I talk about sometimes with my team: if you go to a bad website, you know it's a bad website.

We don't need to sit there and figure out why. It's ugly, or it's hard to use, or it's hard to find what you're looking for, or it's all flash. I don't know what makes a bad website exactly, but you know one when you go to it. We don't like to point out all the things, because there's an infinite number of them. Instead, we try to find great examples of good websites.

And so, what I think we need more than anything else are MCP best practices. A big push of mine right now, and part of where this talk came from, is that I want to make sure we have as many best practices in the world, documented, as possible. And I do want to applaud a few firms; these are screenshots. Block has an amazing playbook, which, if you hate this talk, go read their blog post instead; it's like a better version of what I'm doing right now. GitHub recently put one out, and many other companies have as well.

I could have put a lot here, but these are two that I've referred to quite frequently, and so I recommend them to you. The Block team in particular is phenomenal in what they're doing on MCP. By coincidence, the same team has been my customer for six years on the data side, and I really love the work that they do. The blog posts they put out are very thoughtful, and I highly recommend them to you.

I want to see more of this, and today is one of my humble efforts to put some of that in the world. And so what I thought we would do today, because I did not want to ask you to open your laptops, set up environments, and actually write code with me (it's 4:25 on a Saturday), is fix a server together through slides, to make this, as I said, hopefully actionable, but a gentle approach.

And so here is the server that you were describing a moment ago. Someone wrote this server. I hope the notation is clear enough: we have a decorator that says a function is a tool, and then we have the tool itself. And forgive me, I didn't bore you with the details, because we think this is a bad server to begin with.
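
The slide isn't shown here, but the kind of server being critiqued looks roughly like this: each REST endpoint exposed one-for-one as a tool. The names, routes, and httpx calls below are illustrative, not the actual example from the talk.

```python
# Illustrative reconstruction of the "thin wrapper" anti-pattern: one tool per
# REST endpoint, leaving the agent to orchestrate the whole sequence itself.
import httpx
from fastmcp import FastMCP

API = "https://api.example.com"  # hypothetical backend
mcp = FastMCP("Orders (thin API wrapper)")

@mcp.tool()
def get_user(email: str) -> dict:
    """Look up a user record by email."""
    return httpx.get(f"{API}/users", params={"email": email}).json()

@mcp.tool()
def get_orders(user_id: str) -> list[dict]:
    """List every order belonging to a user."""
    return httpx.get(f"{API}/users/{user_id}/orders").json()

@mcp.tool()
def get_order_status(order_id: str) -> dict:
    """Fetch the status of a single order."""
    return httpx.get(f"{API}/orders/{order_id}/status").json()
```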

In this server, what's our example? We want to check an order status, and in order to check an order status we need to learn a lot of things about the user and what their orders are, we need to filter them, and we need to actually check the status. If this were a REST API, which presumably it is underneath, we know exactly what we would do: we would make one call to each of the functions in sequence and return that as some user-facing output. It would be easy, it would be observable, it would be fast, it would be testable. Everything would be good.

And instead, if we expose this to an agent: what order is it going to call these in? Does it know what the format of the arguments is? How long is it going to take for the minimum three round trips this is going to require? These are all the problems we're exposing just by looking at this. I don't mean we'll solve them all right now, but those are the problems I see if I were reviewing this as a product-facing effort.

And so the first thing that we are going to think about, and I think this is probably the most important thing when we think about an effective MCP server because it is product thinking, is outcomes, not operations. What do we want to achieve? And this is a little bit annoying for engineers sometimes, because it's forced product thinking.

It's not someone coming along with a user story and mapping it all out and saying this is what we need to implement. We cannot put something in this server unless we know for a fact it's going to be useful and have a good outcome. We have to start there. There's just not enough context for us to be frivolous.

And so here's kind of what this feels like, so we can get a sense for it. When you're falling into the trap, you have a whole bunch of atomic operations. This is amazing if you're building a REST API; it is best practice if you're building a REST API. It is bad if you're building an MCP server. Instead, we want things like "track latest order, given an email." It's hard to screw up, and you know what the outcome is when you call it.

The other version of the trap is agent as glue, or agent as orchestrator. Please believe me, since I've spent my career building orchestration software and automation software, that there are things that are really good at orchestration and things that are really bad at orchestration, and agents are right in the middle: they can do it, but it's expensive and slow and annoying and hard to debug and stochastic.

And so if you can avoid that, please do. If you can't (there are times when you don't know the algorithm, you don't know how to write the code, and it's not programmatic), that's a perfect time to use an LLM as an orchestrator. Finding out an order status? Really bad time, really expensive time, to choose to use an LLM as your orchestration service. So don't. Instead, focus on this "one tool equals one agent story" idea.

And again, even here, we're trying to introduce a new vocabulary. It's not a user story, because with user stories everyone thinks human, even though it is a user. It's an agent story. It's something that a programmatic, autonomous agent with an objective and a limited context window is trying to achieve, and we need to satisfy that as much as we can.

And then this is one of those little tips that feels obvious but I think is important: name the tool for the agent. Don't name it for you. It's not a REST API. It's not supposed to be clear to the future developers who will need to maintain it; you're not writing an API for them. You're writing an API so that the agent picks the right tool at the right time.

Don't be afraid of using silly but explanatory names for your tools. I shouldn't say silly. They might feel a little silly, but they're very user-facing in this moment, even though it feels like a deep API.

This is just in case any of you didn't go read the Block blog post. I found this section of it so important, where they essentially say something very similar: design top down from the workflow, not bottom up from the API endpoints. Two different ways to get to the same place, but they will result in very different forms of product thinking and very different MCP servers.

So again, I really encourage you to go take a look at that blog post. And if we were to go back to that bad code example I showed you a moment ago and start rewriting it, and if we had our laptops (you're welcome to have your laptops out and follow along, the code will essentially run, but there's no need), here's what that could look like.

We did the thing that you would do as a human. We made three calls in sequence to our API, but we buried them in one agent-facing tool. And that's how we went from operations to outcomes. The API calls still have to happen; there's no magic here. But the question is: are we going to ask an agent to figure out the outcome and how to stitch these calls together to achieve it, or are we going to just do it on its behalf, because we know how?
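
A hedged sketch of that rewrite, reusing the hypothetical endpoints from the wrapper example above: the same three calls, buried inside one outcome-oriented tool.

```python
# Outcomes over operations: the agent asks one question, the tool does the
# orchestration. Endpoints and field names are the same hypothetical ones as above.
import httpx
from fastmcp import FastMCP

API = "https://api.example.com"
mcp = FastMCP("Orders")

@mcp.tool()
def track_latest_order(email: str) -> dict:
    """Return the status of the most recent order for the given email address."""
    user = httpx.get(f"{API}/users", params={"email": email}).json()
    orders = httpx.get(f"{API}/users/{user['id']}/orders").json()
    latest = max(orders, key=lambda o: o["created_at"])  # newest order wins
    status = httpx.get(f"{API}/orders/{latest['id']}/status").json()
    return {"order_id": latest["id"], "status": status["status"]}
```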

So thing number one is outcomes over operations. Thing number two: a lot of these, frankly, are going to seem kind of silly when I say them out loud. Please just trust me, from the download graph, that these are the most important things I could offer as advice. And if none of them apply to you, think of yourself as in the top 1% of MCP developers.

Flatten your arguments. I see this so often, and I'll confess I do it myself: you say, here's my tool, and one of the inputs is a configuration dictionary. Hopefully, presumably, it's documented somewhere, maybe in the agent's instructions, maybe in the docstring. By the way, I don't remember if I have a point for this later, so I'll say it now: a very frequent trap you can fall into with complex arguments is that you'll put the explanation of how to use them in something like a system prompt or a sub-agent definition, and then you'll change the tool in the server, and now it's almost worse than a poorly documented tool.

You have a doubly documented tool, one version is wrong and one is right, and only error messages will save you. That's really bad. This is the more gentle version of that: just don't ask your LLM to invent complex arguments. Now, you could ask, what if it's a Pydantic model with every field annotated? Fine, that's better than the dictionary, but it's still going to be hard.
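
For concreteness, here's a hypothetical version of that trap: a single nested config argument whose real schema lives somewhere else entirely.

```python
# The trap: one opaque dictionary argument. The model has to guess (or recall
# from a possibly stale system prompt) which keys exist and what they mean.
from fastmcp import FastMCP

mcp = FastMCP("Orders")

@mcp.tool()
def search_orders(config: dict) -> list[dict]:
    """Search orders. See the agent instructions for supported config keys."""
    return []  # placeholder body
```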

Until very recently there was (and there may still be, because no one seems to fix it) a bug in Claude Desktop where all structured arguments, meaning object arguments, would be sent as strings. This created a real problem, because we do not want to support automatic string-to-object conversion, but Claude Desktop is one of the most popular MCP clients, and so we bowed to this as a matter of necessity. FastMCP will now, if you supply a string argument to something that is very clearly a structured object, try to deserialize it.

It will try to do the right thing. I really hate that we have to do that. It feels deeply wrong to me that we have a type schema that says "I need an object" and yet we're doing kludgy stuff like that. So this is an example of an evolving ecosystem; it's a little messy. But what does it look like when you do it right?

Top level primitives. These are the arguments into the function. What's the limit? What is the status? What is the email? Clearly defined. Just like naming your tool for the agent, name the arguments for the agent.

And here's sort of what that looks like when we get it into code. Instead of having config: dict, we have an email, which is a string. We have include_cancelled, which is a flag. And then I highly, highly recommend literals or enums whenever you can; much better than a string if you know what the options are. At this time, very few LLMs know that this kind of syntax is supported, and so they would typically write it differently if you had Claude Code or something write it for you.

It would usually write format: str = "basic", which works; it just doesn't know to do this. And so it's one of those little actionable tips: use Literal, or equivalently use an Enum. When you have a constrained choice, your agent will thank you.
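
Here's roughly what the flattened version looks like; the specific fields are illustrative, not the slide's exact code.

```python
# Flattened arguments: top-level primitives, a Literal for each constrained
# choice, and sensible defaults. Field names are illustrative.
from typing import Literal

from fastmcp import FastMCP

mcp = FastMCP("Orders")

@mcp.tool()
def search_orders(
    email: str,
    status: Literal["pending", "shipped", "delivered"] = "pending",
    include_cancelled: bool = False,
    limit: int = 10,
    format: Literal["basic", "detailed"] = "basic",
) -> list[dict]:
    """Search a customer's orders by email, most recent first."""
    return []  # placeholder body
```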

And I do have "instructions or context" coming up, so I got ahead of myself. I'm sorry, everybody; it is 4:35 on a Saturday. The next thing I want to talk about is the instructions that you give to the agent. This cuts both ways. The most obvious way is when you have none. We mentioned that a moment ago: if you don't tell your agent how to use your MCP server, it will guess. It will try.

It will probably confuse itself, and all of those guesses will show up in its history, and that's not a great outcome. Please document your MCP server. Document the server itself. Document all the tools on it. Give examples. Examples are a little bit of a double-edged sword. On the one hand, they're extremely helpful for showing the agent how it should use a tool.

On the other hand, it will almost always do whatever is in the example. This is just one of those quirks. Perhaps as models improve, it will stop doing that. But in my experience, if you have an example, say a field for tags where you want to collect tags for something: if your example has two tags, you will never get 10 tags. You will get two tags pretty much every time.

They'll be accurate. It's not going to do a bad job, but it really uses those examples for a lot more dimensions than just the fact that they work, if that makes sense. So use examples, but be careful with your examples.
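
To make the "examples are contracts" point concrete, here is a hypothetical docstring: asking for at least ten tags while showing two in the example tends to get you two.

```python
# Examples are contracts: whatever implicit pattern the example carries (here,
# exactly two tags) tends to come back at call time, even against instructions.
from fastmcp import FastMCP

mcp = FastMCP("Notes")

@mcp.tool()
def save_note(title: str, body: str, tags: list[str]) -> str:
    """Save a note with descriptive tags. Provide at least 10 tags.

    Example:
        save_note(
            title="Q3 planning",
            body="Draft agenda for the offsite.",
            tags=["planning", "q3"],  # two tags here tends to mean two tags forever
        )
    """
    return f"Saved {title!r} with {len(tags)} tags."
```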

[Audience] What about giving out-of-distribution examples as a way to solve for that? Have you seen that?

By out-of-distribution, do you mean examples that would not be representative of actual data? That's so interesting. I don't have a strong opinion on that; it seems super reasonable to me. In my experience, the fact that an example has some implicit pattern, like the number of objects in an array, becomes such a strong signal that I almost gave this its own bullet point called "examples are contracts": if you give one, expect to get something like it back. Out-of-distribution examples are a really interesting way to fight against that inertia. I would imagine it is better to do it that way; I would just be careful of falling into the same base-layer trap. So that's completely reasonable and I would endorse it. But the broader point stands: whatever example you put out there, weird quirks of it will show up. On an MCP server that I'm building, I ran into this tag thing just yesterday, and it really confused me. No matter how much I said "use at least 10 tags," it always gave me two. And I finally figured out it was because one of my examples had two tags.

So yes, good strategy; it may or may not be enough to overcome these basic caveats. Oh, I do have "examples are contracts." I'm sorry, it's 4:37. This one, I think, is one of the most interesting things on this slide: errors are prompts. Every response that comes out of the tool, your LLM doesn't know that it's "bad." It's not like it gets a 400 or a 500 or something like that. It gets what it sees as information about the fact that it didn't succeed in what it was attempting to do.

And so if you just allow Python (in FastMCP's case, or whatever your tool of choice is) to raise, for example, an empty ValueError or a cryptic MCP error with an integer code, that's the information that goes back to your LLM. Does it know what to do with it? Probably it knows at least to retry, because it knows it was an error. But you actually have an opportunity to document your API through errors, and this leads to some interesting strategies that I don't want to wholeheartedly endorse, but I will mention. For example, say you do have a complex API, because you can't get away from that.

Then instead of documenting every possibility in the docstring that documents the entire tool, you might actually document how to recover from the most common failures. It's a very weird form of progressive disclosure of information, where you are acknowledging that it is likely this agent will get its first call wrong, but, based on how it gets it wrong, you have an opportunity to send more information back in an error message.

As I said, this is kind of a not-amazing way to think about building software, but it is the ultimate version of what I'm recommending, which is: be as helpful as possible in your error messages. Go overboard. They become part of what the agent considers its next prompt, and so they do matter. If they are too aggressive or too scary, it may avoid the tool permanently. It may decide the tool is inoperable.
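
A sketch of the "errors are prompts" idea, assuming an exception type whose message is passed straight back to the client, like FastMCP's ToolError; the recovery hint, and the hypothetical find_user_email tool it mentions, are the point, not the specific API.

```python
# Errors are prompts: the failure message doubles as the agent's next hint.
# Assumes the exception message reaches the client, as FastMCP's ToolError does.
from fastmcp import FastMCP
from fastmcp.exceptions import ToolError

mcp = FastMCP("Orders")

@mcp.tool()
def track_latest_order(email: str) -> dict:
    """Return the status of the most recent order for an email address."""
    if "@" not in email:
        raise ToolError(
            "That doesn't look like an email address. Call this tool again with "
            "the customer's full email, e.g. 'name@example.com'. If you only have "
            "a user ID, call the (hypothetical) find_user_email tool first."
        )
    return {}  # placeholder body
```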

So errors really matter. And I don't think this needs too much explanation, but this is what it looks like when you have a full docstring and an example, etc. Block, in their blog post, makes a point which I haven't seen used too widely, although ChatGPT does take advantage of it in their developer mode, which is the read-only hint.

So the MCP spec has support for annotations, a restricted set of annotations that you can place on various components. One of them, for tools, is whether or not the tool is read-only. If you supply this, optionally, clients can choose to treat that tool a little bit differently. The motivation behind the read-only hint was basically to help with setting permissions.

And I don't know who here is a fan of --yolo or --dangerously-skip-permissions or whatever they're called in different terminals, but then you don't care about this. For example, though, ChatGPT will ask you for extra permission if a tool does not have this annotation set, because it presumes the tool can take a side effect and have an adverse impact. So use those to your advantage. It is one other form of design that the client can use to provide a better experience.
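
A sketch of the read-only hint, assuming FastMCP's annotations argument, which maps onto the spec's tool annotations; clients are free to use or ignore it.

```python
# The readOnlyHint annotation: an optional signal that this tool has no side
# effects, which clients may use to relax their permission prompts.
from fastmcp import FastMCP

mcp = FastMCP("Orders")

@mcp.tool(annotations={"readOnlyHint": True})
def track_latest_order(email: str) -> dict:
    """Look up (but never modify) the latest order for an email address."""
    return {}  # placeholder body
```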

I've talked about this a bit now: respect the token budget. I think the meme right now is that the GitHub server ships something like 200,000 tokens when you handshake with it. This is a real thing. And I don't think it makes the GitHub server automatically bad. I think it actually makes it incumbent on folks like myself who build frameworks, and folks who build clients, to find ways to solve this problem, because the answer can't always be "do less."

In fact, right now we want to do more. We want an abundance of functionality. And so we'll talk about that maybe a little bit later. But respect for the token budget really matters. It is a very scarce resource and your server is not the only one that the agent is going to talk to.

So, I was on a call with a customer of mine recently who is so excited that they're rolling out MCP, and I met with the engineering team. Just to be clear, this is an incredibly forward-thinking, high-performing, massive company that I respect immensely. I won't say who they are, but I really respect them. They got on the call and they were so excited, and they said, "We're in the process of converting our stuff to MCP so that we can use it." And they had a strong argument for why it actually had to be their API.

That's not even the punch line of the story, which is a whole other story in and of itself, but it fundamentally came down to this: they had 800 endpoints that had to be exposed. To which I had this thought: by the time you finish reading this, this is the token budget for each of those 800 tools, if you assume 200,000 tokens in the context window. So if each of those 800 tools had only this much space to document itself (not even document itself: share its schema, share its name, plus documentation), this is the amount of space you would get.

And when you were done taking up this space, because you were so careful and each tool really fit in it, you would lobotomize the agent on handshake, because it would have no room for anything else. So the token budget really matters. If this agent connected to a server with one more tool that had even a one-word docstring, it would just fail. It would effectively have an overflow, right?
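
The back-of-envelope arithmetic behind that slide:

```python
# Split a 200,000-token context window evenly across 800 tools and each tool
# gets 250 tokens (roughly 180-190 words) to cover its name, schema, and docs,
# leaving the agent nothing to work with afterwards.
context_window = 200_000  # tokens
tool_count = 800

per_tool_budget = context_window // tool_count
print(per_tool_budget)                # 250 tokens per tool
print(round(per_tool_budget * 0.75))  # ~188 words, at roughly 0.75 words per token
```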

So, the token budget matters. There is probably a budget that's appropriate for whatever work you're doing. You may know what it is, you may not. Pretend you know what it is and be mindful of it. In the worst case, try to be parsimonious; try to be as efficient as possible. That's why we do experiments like sending additional instructions in the error message: it's one way to save on the token budget at handshake. And the handshake is painful.

I'm not sure folks know that when an LLM connects to an MCP server, it typically does download all the tool descriptions in one go, so that it knows what's available to it. It's usually not done in a progressively disclosed way; it's done outright.
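
One rough way to see what your own handshake costs, assuming FastMCP's in-memory Client and a crude four-characters-per-token estimate; my_server is a hypothetical module exposing your server object.

```python
# Estimate the handshake cost of a server: list every tool the client would
# download on connect and approximate the token count of that payload.
import asyncio

from fastmcp import Client
from my_server import mcp  # hypothetical module exposing your FastMCP server

async def handshake_cost() -> int:
    async with Client(mcp) as client:      # in-memory transport, no network needed
        tools = await client.list_tools()
    payload = "".join(t.model_dump_json() for t in tools)
    return len(payload) // 4               # ~4 characters per token, very rough

print(asyncio.run(handshake_cost()))
```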

[Audience] Absolutely. We've built progressive disclosure mechanisms where, when it first initializes, there's only a short describe step for each tool. So it's 95% less context window, and it doesn't actually expose the full description to the agent unless it needs it.

That's awesome. Let's talk about this idea for one second, because it's a really interesting design.

There's a debate right now about what you can do that's compliant with the spec versus what's not compliant with the spec. As long as you do things that are compliant with the spec, then by all means do them; who cares? One of the problems is that there are clients that are not compliant with the spec. Claude Desktop is one of them. I'
